I'm not sure how much this will decrease flakes, but I think it makes sense to test, because currently we just wait for Postgres to be ready and assume the RabbitMQ broker is probably up by then as well.
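For reference, a minimal sketch of what an explicit broker readiness check could look like in the test setup (the helper name, broker URL, and retry budget here are placeholders, not what's in the repo), leaning on kombu since Celery already depends on it:

```python
from kombu import Connection


def wait_for_rabbitmq(broker_url="pyamqp://guest:guest@localhost:5672//", retries=30):
    # ensure_connection keeps retrying the AMQP handshake until the broker
    # accepts connections or the retry budget runs out, raising if it never
    # comes up. interval_start/interval_step pin the retry interval at 1s.
    with Connection(broker_url, connect_timeout=5) as conn:
        conn.ensure_connection(max_retries=retries, interval_start=1, interval_step=0)
```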
I don't think we check for this in normal deployments either. Celery workers won't start until the broker is up, but Dagit might try to send a message to it and fail. I'm not sure that's necessarily incorrect behavior (as long as the error is surfaced well); it seems a bit odd to conditionally block Dagit's startup on Celery infra.
I originally thought this would solve this flake https://buildkite.com/dagster/dagster-diffs/builds/9191#fe82042a-31f4-45d5-b65e-6131f0eb0ef6/955-1020, but that error is taking place in the Celery core execution loop, so the run launcher was already created by a Celery task. I think this flake may be due to the additional pressure on the Buildkite instance from creating another Celery worker (https://github.com/dagster-io/dagster/issues/2671). A workaround for now is for both runs and steps to use the same queue (rough sketch below), which shouldn't deadlock as long as the worker has more processes than runs submitted.
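Roughly what I mean by the single-queue workaround, as a toy Celery sketch (the task names and queue name are made up for illustration, not the actual dagster-celery task names):

```python
from celery import Celery

app = Celery("dagster_test", broker="pyamqp://guest@localhost//")

# Route both run-level and step-level tasks onto one queue instead of
# splitting them across a run queue and a step queue.
app.conf.task_routes = {
    "execute_run": {"queue": "dagster"},
    "execute_step": {"queue": "dagster"},
}

# Then start a single worker whose pool is larger than the number of runs
# submitted concurrently, e.g.:
#   celery -A dagster_test worker -Q dagster --concurrency=8
```

The key property is just that the pool size exceeds the number of run tasks that can be sitting blocked on their step tasks, so the step tasks always have a free process to land on.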