Jul 13 2021
Jul 12 2021
I think this is a fine place to start.
This sort of dances around the graph -> job coercion bit, which is admittedly hard to thread the needle on. Maybe when Job is introduced you could mention that the graph case works by coercing into one? Maybe something for a follow-up iteration.
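For context, the coercion being discussed can be sketched generically; the Graph/Job classes and the execute helper below are hypothetical stand-ins to illustrate the pattern, not Dagster's actual implementation:

```python
# Hypothetical sketch of graph -> job coercion: an API that accepts a job
# can transparently accept a graph by wrapping it in a default job.
class Graph:
    def __init__(self, name):
        self.name = name

    def to_job(self, name=None):
        # Coerce this graph into a job, keeping the graph's name by default.
        return Job(name or self.name, graph=self)


class Job:
    def __init__(self, name, graph):
        self.name = name
        self.graph = graph


def execute(job_or_graph):
    # Accept either type; coerce a graph into a job on the fly.
    job = job_or_graph.to_job() if isinstance(job_or_graph, Graph) else job_or_graph
    return f"executed {job.name}"
```

The point of the sketch is that callers never need to know the coercion happened; a graph passed where a job is expected just works.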
This behavior should be under test; maybe something in python_modules/dagit/dagit_tests/test_app.py.
Jul 9 2021
Jul 8 2021
Ah, only in the Celery worker? Interesting.
Assuming the test failed before and passes now, I'm good with this.
Why don't we use DagsterKubernetesClient for this k8s stuff?
Jul 7 2021
Any and all copy edits / suggestions welcome - especially *
A concern with this approach is accidentally getting duplicate events if anyone returns an event that was already logged (i.e. it was created with one of the static constructors that logs).
We may want to consider the same approach for D8629.
You can abandon this; I have one going and will steal the good parts from here.
Defer to @prha.
I think this is directionally promising, but it has too many risky externalities to land for tomorrow's release.
Would we also have a job.execute_in_process, or would someone be expected to decompose down to the graph?
Should this and D8629 be merged? That way it's a single migration instead of two.
I faintly remember some weirdness around the statefulness of how pre-commit worked when we had it set up, but I'm struggling to remember the details.
Did you look into setting compression at the gRPC level for requests & responses? That could allow us to make this change at a more appropriate layer, without breaking
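For reference, grpcio does expose channel-level compression (e.g. passing grpc.Compression.Gzip when creating a channel), which would apply to requests and responses transparently. The stdlib-only sketch below just illustrates the payload-size effect gzip can have on a large, repetitive serialized response; the payload contents are made up:

```python
import gzip

# Illustrative only: a large, repetitive payload (like a serialized event
# stream) under gzip, the codec gRPC channel compression can use.
payload = b'{"event": "step_success", "run_id": "abc123"}' * 1000
compressed = gzip.compress(payload)
ratio = len(compressed) / len(payload)
```

Highly repetitive payloads compress dramatically, which is why doing this at the transport layer can obviate ad-hoc compression in application code.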
dagit test flake
Jul 6 2021
I'm cool with this; last pass from @sandyryza for API sign-off.