Remove all use of `PipelineDefinition` / `ReconstructablePipeline` in the CLI path, so that we can use it with gRPC servers too.
I think the last remaining test failure here indicates that something is not working correctly with the `ComputeLogManager` in the BK test environment (it's not picking up stdout/stderr logs during step execution).
Hmm, `dagster pipeline execute` firing off a pipeline execution in an unmanaged gRPC server is a bit worrisome. In our deployed cases the assumption is that the gRPC server is metadata-reads-only and all executions happen via the run launcher. I'm not certain that `dagster pipeline execute` shouldn't boot up directly in a user process context and execute right there. I guess it's a little weird if you're choosing out of a workspace that declares a different Python environment for that pipeline.
We could also add a method on `RunLauncher` that executes the run synchronously? (Instead of on the repo location, that is; I just put it there since that's where it was before. There's even still a lingering broken callsite in dagster-graphql.)
I assumed that's what the semantics were here: launch starts the run asynchronously, execute runs it synchronously, and both happen in the host process.
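To make the distinction concrete, here is a minimal sketch of the split being discussed. This is purely illustrative: the class and method names (`execute_run` in particular) are hypothetical, not actual Dagster APIs, and a real launcher would hand off to a subprocess, container, or k8s job rather than return a string.

```python
from abc import ABC, abstractmethod


class RunLauncher(ABC):
    """Hypothetical host-process launcher carrying both proposed methods."""

    @abstractmethod
    def launch_run(self, run):
        """Start the run asynchronously and return immediately."""

    @abstractmethod
    def execute_run(self, run):
        """Run synchronously, blocking until the run finishes."""


class ToyLauncher(RunLauncher):
    """Toy in-process launcher showing the async/sync distinction."""

    def launch_run(self, run):
        # a real launcher would kick off a subprocess or k8s job here
        return f"launched {run}"

    def execute_run(self, run):
        # blocks until the run is complete before returning
        return f"executed {run} to completion"
```

The point of putting both on `RunLauncher` is that the CLI's synchronous path and the deployed asynchronous path would then share one host-process abstraction, instead of the synchronous path living on the repo location.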
Yeah, in my head `dagster pipeline execute` means "just execute this pipeline right here in my terminal so I can debug it / see what happens", which might not be *right*, but it's what gives me pause here.
This looks very good overall. I just wanted to make sure you saw the feedback about getting rid of the `DagsterInstance.get()` calls. I could be missing something, but it seems possible to eliminate them here.
While you're in here, can we make the Dagster instance required for all of these? I would really, really like to remove most calls to `DagsterInstance.get()` from the codebase (ideally leaving them only at the top of tool entry points). It seems like it shouldn't be too hard to ensure the instance is threaded through by all callsites.
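A minimal sketch of the threading pattern being asked for, with stand-in classes rather than real Dagster code: the global lookup happens exactly once, at the tool entry point, and everything below it takes the instance as a required argument.

```python
class DagsterInstance:
    """Stand-in for the real class; only the get() pattern matters here."""

    _global = None

    @classmethod
    def get(cls):
        # the one sanctioned call site is the top of a tool entry point
        if cls._global is None:
            cls._global = cls()
        return cls._global


def execute_execute_command(instance, pipeline_name):
    # instance is a required argument: no DagsterInstance.get() fallback here
    if instance is None:
        raise ValueError("instance is required")
    return f"executing {pipeline_name}"


def cli_entry_point(pipeline_name):
    # resolve the instance once, then thread it through explicitly
    instance = DagsterInstance.get()
    return execute_execute_command(instance, pipeline_name)
```

The payoff is that library code never reaches for global state, so tests can pass in any instance they like without patching.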
It seems we have the opportunity here to share logic between this and other call sites (for example in dagster-graphql). Not blocking for this diff, but something to investigate.
Can we name this `external_pipeline_subset` or something to distinguish it from the original?
The only reason to pass this in is for tests, but there are other ways to stub in `DagsterInstance` values (like writing to `dagster.yaml` in the test filesystem).
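The `dagster.yaml` approach might look something like the sketch below: point `DAGSTER_HOME` at a temp directory containing the desired instance config, instead of adding a test-only parameter. The helper name is made up, and the yaml keys shown are just an example; the exact schema depends on your Dagster version.

```python
import os
import tempfile
import textwrap


def make_test_dagster_home():
    """Write a dagster.yaml into a temp dir and point DAGSTER_HOME at it."""
    home = tempfile.mkdtemp()
    with open(os.path.join(home, "dagster.yaml"), "w") as f:
        # illustrative instance config; adjust keys to your dagster version
        f.write(textwrap.dedent(
            """\
            telemetry:
              enabled: false
            """
        ))
    # code that builds the instance from the environment will now pick this up
    os.environ["DAGSTER_HOME"] = home
    return home
```

This keeps the production signature clean while still letting tests control which instance gets constructed.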
Feedback re: `DagsterInstance.get()`: my last piece of work for the 0.9 task is a big refactor of the tests that call `execute_execute_command` in a bunch of places, so I'll do that, but roll it into that change coming later today to avoid conflict hell, if that's alright.