User Details
- User Since: Aug 5 2019, 9:56 PM (88 w, 5 d)
- Roles: Administrator
Fri, Apr 16
I think this is good. We might eventually want to fork a process for each evaluation, and then change the timeout based on the min interval? But better to make this change and then think through the ramifications of the other stuff.
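A minimal sketch of what I mean, assuming a per-evaluation entry point (`evaluate_fn` and `minimum_interval_seconds` are placeholders here, not real Dagster APIs):

```python
import multiprocessing

def evaluate_in_subprocess(evaluate_fn, minimum_interval_seconds):
    # Run one evaluation in its own process so a hung evaluation can't
    # block everything else; bound it by that evaluation's own min interval.
    proc = multiprocessing.Process(target=evaluate_fn)
    proc.start()
    proc.join(timeout=minimum_interval_seconds)
    if proc.is_alive():
        proc.terminate()  # overran its interval; abandon this tick
        proc.join()
```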
Tue, Apr 13
You have to wait a little bit longer, but it's nice because it's using normal HTML instead of some popup shenanigans.
@sandyryza, we have titles... do you think we need tooltips? Or were you thinking that they'd have a longer text description?
I think this makes sense. At first it was a little jarring to have the selector return a SkipReason, but I think we can get by if we document this reasonably well.
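I know the diff is about the selector, but SkipReason is the same object sensors return, so for anyone landing here later, a minimal sketch of the pattern (the job name and helper are made up):

```python
from dagster import RunRequest, SkipReason, sensor

@sensor(job_name="my_job")  # hypothetical job name
def my_sensor(context):
    new_keys = find_new_run_keys()  # hypothetical helper
    if not new_keys:
        # Returning a SkipReason, rather than nothing, surfaces *why*
        # the tick requested no runs in the UI.
        return SkipReason("no new run keys since the last tick")
    return [RunRequest(run_key=k) for k in new_keys]
```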
Mon, Apr 12
update
Thu, Apr 8
This is good.
Wed, Apr 7
Consider should_autocreate_tables instead of autocreate_tables?
Tue, Apr 6
- switch from selector to button
Mon, Apr 5
It's definitely a little weird.
Mon, Mar 29
rebase, kick off a new Buildkite build
we need the timestamp, which is only on the event record, not the materialization
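Something like this, assuming a recent DagsterInstance API (the helper name is made up):

```python
from dagster import DagsterEventType, EventRecordsFilter

def latest_materialization_timestamp(instance, asset_key):
    # The materialization itself carries no timestamp; it lives on the
    # enclosing event-log record, so query the event log instead.
    records = instance.get_event_records(
        EventRecordsFilter(
            event_type=DagsterEventType.ASSET_MATERIALIZATION,
            asset_key=asset_key,
        ),
        limit=1,
        ascending=False,
    )
    return records[0].event_log_entry.timestamp if records else None
```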
- add library tests
Fri, Mar 26
add schedule streaming
add backfill partition set config
I think the motivation might be related to commercialization? https://github.com/dagster-io/dagster/issues/3935
Yeah, actually I'm not sure about breaking changes... should I first add a separate endpoint with the streaming response, then switch the client over in a later release?
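Roughly this on the client side once both endpoints exist (the RPC and field names below are hypothetical):

```python
import grpc

def schedule_execution_data(stub, request):
    # Prefer the new streaming endpoint; fall back to the legacy unary
    # call when talking to a server that predates it.
    try:
        chunks = stub.StreamingExternalScheduleExecution(request)
        return b"".join(chunk.serialized_chunk for chunk in chunks)
    except grpc.RpcError as err:
        if err.code() == grpc.StatusCode.UNIMPLEMENTED:
            return stub.ExternalScheduleExecution(request).serialized_response
        raise
```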
Wed, Mar 24
As I was writing this diff up, I was even kind of thinking that there's no reason not to make this streaming call the default implementation (unless I'm missing something) for all grpc calls. Any particular grpc request could plausibly return a problematically large payload, and we'd have to think about how to chop that up.
I kinda viewed this as more of a "how do we get around the max request size" problem than a logical grouping problem. It solves both the large-number-of-run-requests case and the not-that-many-run-requests-but-*really*-large-config case.
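A minimal sketch of the chunking idea (CHUNK_SIZE is arbitrary; grpc's default max message size is 4 MB):

```python
CHUNK_SIZE = 2 * 1024 * 1024  # stay well under grpc's 4 MB default limit

def stream_chunks(serialized_payload: bytes):
    # Server side: split one arbitrarily large serialized response into
    # chunks that each fit under the message-size cap.
    for offset in range(0, len(serialized_payload), CHUNK_SIZE):
        yield serialized_payload[offset : offset + CHUNK_SIZE]

def reassemble(chunks) -> bytes:
    # Client side: the inverse -- concatenate the streamed chunks.
    return b"".join(chunks)
```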
I'm sad we have to think about this... we may need to consider adding disclaimers about precision on some of the numeric graph displays, but that could be a later problem.
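Assuming the concern is float64 rendering (which is what the frontend ultimately gets), this is the kind of thing a disclaimer would cover:

```python
# float64 represents every integer exactly only up to 2**53; past that,
# large counts display approximately.
exact = 2**53 + 1                     # 9007199254740993
assert float(exact) == float(2**53)   # the +1 is silently lost
```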