
Allow grpc servers to be shut down over graphql

Authored by dgibson on Jun 24 2021, 2:53 AM.



For servers that you run yourself, this gives you a way to signal over graphql that they should be shut down (and, in k8s or docker-compose with the right configuration, spun back up).
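As a rough illustration of the shutdown-signal idea (names here are illustrative, not the actual Dagster API): the GraphQL layer flips an event that the server loop watches, the process exits cleanly, and the orchestrator restarts it.

```python
import threading


class GrpcServerStub:
    """Hypothetical stand-in for a user-code gRPC server.

    The GraphQL mutation handler calls shutdown(); the server's main
    loop polls should_exit() and terminates, letting k8s or
    docker-compose spin the container back up.
    """

    def __init__(self):
        self._shutdown_event = threading.Event()

    def shutdown(self):
        # Triggered by the (hypothetical) shutdown mutation over graphql.
        self._shutdown_event.set()

    def should_exit(self):
        return self._shutdown_event.is_set()


server = GrpcServerStub()
assert not server.should_exit()
server.shutdown()  # simulate the GraphQL mutation arriving
assert server.should_exit()
```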

Test Plan

BK (adding coverage now)

Diff Detail

R1 dagster
shutdownapi (branched from master)
Lint Passed
No Test Coverage

Event Timeline


If this happened enough on k8s via CD or something, k8s might go into CrashLoopBackOff. Looks like that's not tunable, but users probably shouldn't hit it...
If they do, it's an exponential backoff capped at 5 min that clears after 10 min of successful running. Seems ok.
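To make the backoff behavior concrete, here is a small sketch of the delay schedule described above (assuming the usual 10s starting delay that Kubernetes uses; the exact base is a k8s implementation detail, not something from this diff):

```python
def crashloop_backoff_delays(restarts, base=10, cap=300):
    """Approximate CrashLoopBackOff restart delays in seconds:
    exponential doubling from `base`, capped at `cap` (5 minutes).
    The counter resets after 10 minutes of successful running."""
    delays = []
    delay = base
    for _ in range(restarts):
        delays.append(min(delay, cap))
        delay *= 2
    return delays


# First six restart delays: 10, 20, 40, 80, 160, then capped at 300
print(crashloop_backoff_delays(6))
```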

It might also be worth mentioning in docs that this is only relevant for non-traditional k8s setups. Normally we expect that users will build a new pipeline image with a new tag, run helm upgrade, and dagit will detect the new server id. This solution enables either pushing new images with the same tag (requires pullPolicy Always) or setups where the pipeline code is mounted into the container, so it can change at any time.


A comment would help me, not sure what's happening here


Is this a safe assumption to make?

This revision is now accepted and ready to land. Jun 24 2021, 5:48 PM

@johann A third use case (which I can call out more explicitly) is when the artifacts are generated from an external DB or something and you still want to reload the server even though the actual Python code hasn't changed.

This revision was landed with ongoing or failed builds. Jun 24 2021, 7:09 PM
This revision was automatically updated to reflect the committed changes.