For servers you run yourself, this gives you a way to signal over GraphQL that they should be shut down (and, in k8s or docker-compose with the right configuration, spun back up)
BK (adding coverage now)
If this happened enough on k8s via CD or something, k8s might go into CrashLoopBackOff. It looks like that's not tunable: https://github.com/kubernetes/kubernetes/issues/57291. But users probably shouldn't hit it...
If they do, it's an exponential backoff capped at 5 min that clears after 10 min of successful running. Seems ok
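To make the worst case concrete, here's a rough sketch of the restart backoff schedule described above: doubling per crash, capped at 5 minutes, and reset after 10 minutes of successful running. The 10s initial delay is an assumption based on kubelet defaults, not something we control.

```python
CAP_SECONDS = 5 * 60  # backoff is capped at 5 minutes

def restart_delay(crash_count, initial=10):
    """Delay (seconds) before the Nth restart, 1-indexed.

    Exponential doubling from an assumed 10s initial delay,
    capped at 5 minutes. Resets after 10 min of clean running.
    """
    return min(initial * 2 ** (crash_count - 1), CAP_SECONDS)

delays = [restart_delay(n) for n in range(1, 8)]
# the cap kicks in around the 6th consecutive crash
```

So even a repeatedly-crashing server is retried within 5 minutes at worst, which seems acceptable for this use case.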
It might also be worth mentioning in the docs that this is only relevant for non-traditional k8s setups. Normally we expect that users will build a new pipeline image with a new tag, run helm upgrade, and dagit will detect the new server id. This solution enables either pushing new images with the same tag (requires pullPolicy Always) or setups where the pipeline code is mounted into the container, so it can change at any time
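For the same-tag workflow, the relevant bit of Helm configuration would look something like this (a hypothetical values fragment; the exact key path depends on the chart, and the image name is made up):

```yaml
# Hypothetical values.yaml fragment.
# pullPolicy: Always is required when pushing new images under the same tag;
# otherwise nodes keep running the locally cached image after a restart.
image:
  repository: my-registry/my-pipeline  # placeholder image name
  tag: latest
  pullPolicy: Always
```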
A comment would help me; I'm not sure what's happening here
Is this a safe assumption to make?