
[1/2] Configure celeryless deploy on helm

Authored by rexledesma on Thu, Oct 8, 12:39 AM.



If celery is disabled in helm values, we default to using the K8sRunLauncher. Resolves
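A minimal values.yaml sketch of the behavior this diff describes; only `celery.enabled` is named in this review, so the surrounding structure is illustrative:

```yaml
# Illustrative values.yaml fragment: with celery disabled, the chart
# falls back to configuring the K8sRunLauncher on the instance.
celery:
  enabled: false
```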

Added docs:

Test Plan

bk, push_to_k8s

Diff Detail

R1 dagster
Automatic diff as part of commit; lint not applicable.
Automatic diff as part of commit; unit tests not applicable.

Event Timeline

Harbormaster returned this revision to the author for changes because remote builds failed. Thu, Oct 8, 2:45 AM
Harbormaster failed remote builds in B19301: Diff 23453!
442 ↗(On Diff #23453)

Why does this need to be added now?

442 ↗(On Diff #23453)

Since we add these two environment variables to the configmap, we need a way to expose them for use

445 ↗(On Diff #23581)


looks reasonable to me! can we add a test that involves spinning up a k8s-only deployment in kind and asserting that we can run a pipeline successfully?

This revision now requires changes to proceed. Mon, Oct 12, 9:50 PM

update integration tests to use new helm chart for celeryless deploy

445 ↗(On Diff #23581)

Yeah - it seems like those two env variables are floating around with the same value. Is there a difference between which to use?

445 ↗(On Diff #23581)

I think DAGSTER_K8S_PG_PASSWORD_SECRET is the name of the secret, DAGSTER_PG_PASSWORD_SECRET_KEY is the key of the secret, and DAGSTER_PG_PASSWORD_ENV_VAR is the value of the secret.
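A sketch of how the three variables described above could fit together in a pod spec; the variable names are from this review, but the secret/key values and the `env` structure are assumptions for illustration:

```yaml
# Illustrative mapping of the three variables discussed above:
#   DAGSTER_K8S_PG_PASSWORD_SECRET -> the name of the k8s Secret
#   DAGSTER_PG_PASSWORD_SECRET_KEY -> the key within that Secret
#   DAGSTER_PG_PASSWORD_ENV_VAR    -> the env var that receives the value
env:
  - name: DAGSTER_PG_PASSWORD_ENV_VAR
    valueFrom:
      secretKeyRef:
        name: dagster-postgresql-secret  # hypothetical secret name
        key: postgresql-password         # hypothetical secret key
```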

is DAGSTER_K8S_PG_PASSWORD_SECRET not set in the k8s jobs? i would expect it to be set already

use configmap instead of passing env vars down

looking good! few things (1) what's the reasoning behind adding "helm/dagster/templates/configmap-env-instance.yaml" instead of using "helm/dagster/templates/configmap-env-pipeline-run.yaml"? (2) in, could we assert that there are no celery worker pods and no step job pods? in a previous migration, there was a bad issue where the tests were no longer using the expected run launcher, so it would be nice to add a sanity check


I think these should be configurable via the helm chart. Also, would be good to document that setting celery.enabled: false will deploy the k8s run launcher. We should probably have a page for this in the docs, before we introduce the celeryk8s deployment here (

looking good! few things (1) what's the reasoning behind adding "helm/dagster/templates/configmap-env-instance.yaml" instead of using "helm/dagster/templates/configmap-env-pipeline-run.yaml"?

It's kinda weird to have the launcher rely on the pipeline run configmap when it might not even be configured if the user decides to go with user deployments. It would work, because the only purpose is to pass down some shared env variables, but the naming is off.

Honestly, I don't think some of these env variables (other than dagster home and the pg secret) should exist - they should just be interpolated via go template rather than set in the configmap.
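A sketch of the interpolation approach suggested above: render values directly into the template rather than routing them through a shared configmap. The values path here is hypothetical:

```yaml
# Hypothetical helm template fragment: interpolate via Go template
# instead of reading from a shared configmap (values path is illustrative).
env:
  - name: DAGSTER_HOME
    value: "{{ .Values.dagsterHome }}"
```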

add assertion that no celery workers exist when not enabled in helm
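A sketch of the "no celery workers" sanity check discussed in this review. The pod listing would come from the kubernetes client in the real integration test; here the assertion is factored into a pure helper (the name-matching heuristic and function name are assumptions, not the actual test code):

```python
# Sanity-check sketch: fail if any pod in the namespace looks like a
# celery worker. In the real test, pod_names would be collected from
# the kubernetes client; here it is passed in for clarity.
from typing import List


def assert_no_celery_workers(pod_names: List[str]) -> None:
    """Raise if any pod name suggests a celery worker (heuristic match)."""
    celery_pods = [name for name in pod_names if "celery" in name.lower()]
    assert not celery_pods, f"Unexpected celery worker pods: {celery_pods}"


# Example usage with a stubbed pod listing (passes: no celery pods present):
assert_no_celery_workers(["dagit-7f9c", "dagster-run-1234"])
```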

add information to celery.enabled in values.yaml

sending back to your queue w/ a few things

  • I think that job_namespace, load_incluster_config, kubeconfig_file for the K8sRunLauncher should be configurable from the helm chart, similar to other system components
  • re "It's kinda weird to have the launcher rely on the pipeline run configmap on when it might not even be configured if the user decides to go with user deployments." -- when a user enables user deployments, the pipeline run confipmap can still be used by the K8sScheduler. I agree that the naming is a bit dated, but I would much prefer either using the env-pipeline-run configmap, renaming the env-pipeline-run configmap, or deprecating the env-pipeline-run configmap (probably in a separate diff?) instead of having two configmaps that are very similar.
  • wrt documentation, i generally prefer adding docs at the same time a feature is released so that users can immediately have a place to learn more; it's also a good time to reflect on the code before committing to see if anything could be improved, and it guarantees that the documentation actually gets written instead of backlogged
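A hypothetical shape for exposing the K8sRunLauncher settings named in the first bullet via the helm chart; the key names and values here are illustrative, not the chart's actual schema:

```yaml
# Illustrative values.yaml fragment making the run launcher configurable,
# mirroring the fields named above (key names are assumptions).
k8sRunLauncher:
  jobNamespace: default
  loadInclusterConfig: true
  kubeconfigFile: ~
```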
This revision now requires changes to proceed. Fri, Oct 16, 8:26 PM

Proposing a slight rewording: The published helm chart also supports a celery-less deployment. In this case, we configure the <PyObject module="dagster_k8s" object="K8sRunLauncher" /> on the dagster instance, which will launch each pipeline (including all of its steps) in one Kubernetes Job. To use this option, <<insert steps here>>
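The instance config the proposed docs passage describes might look like this; `dagster_k8s` and `K8sRunLauncher` are the real module and class names from this review, while the config values are illustrative:

```yaml
# Sketch of a dagster.yaml run_launcher block for the celery-less deploy
# (config values are illustrative).
run_launcher:
  module: dagster_k8s
  class: K8sRunLauncher
  config:
    job_namespace: default
```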

We may want to mention that K8sRunLaunchers are compatible with the K8sScheduler and GRPC, but this is not the right place, since those concepts are not introduced until part 2. On the other hand, maybe that's obvious to the reader.

I would also lean on the side of not specifying the compatible executors, since this run launcher is compatible with all but the celery executors, so it could be misleading?


can we also add k8srunlauncher config into the values.yaml?

rexledesma retitled this revision from Configure celeryless deploy on helm to [1/2] Configure celeryless deploy on helm. Tue, Oct 20, 11:08 PM
rexledesma marked an inline comment as done.


could we add a check for the run launcher being k8s in the integration test


should be updated given D4840

rexledesma added inline comments.

updated the docs in D4840

sweet! not sure if configmap-instance.yaml will merge cleanly, but let's make sure the version in D4840 gets into master


This revision is now accepted and ready to land. Thu, Oct 22, 8:30 PM
This revision was landed with ongoing or failed builds. Thu, Oct 22, 8:44 PM
This revision was automatically updated to reflect the committed changes.
rexledesma marked an inline comment as done.