Not sure if we want to support both ops and solids config in the same config document, or if we should enforce one or the other to avoid confusion.
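To make the ambiguity concrete, here's a hypothetical run config (names invented, not from the diff) where both spellings appear - if "ops" and "solids" alias the same section, the system has to either pick one or reject the document:

```python
# Hypothetical run config - "my_op" and "threshold" are invented names.
# If "ops" and "solids" alias the same config section, supplying both
# with different values is ambiguous input.
conflicting_run_config = {
    "ops": {"my_op": {"config": {"threshold": 1}}},
    "solids": {"my_op": {"config": {"threshold": 2}}},  # which one wins?
}
```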
I kind of hate it for reasons I can't quite express - but I think it's the right impl if we want to support this.
Having the error paths in the config errors be out of sync is going to feel pretty rough.
I am pretty sure this current approach isn't going to work. We may need to add custom support to the config system to handle this flexibility only at the nodes in the config tree where we should
Lines 280–281 (On Diff #41482):
I think the existing test cases only work because once "ops" is seen as a key we stop the recursion / transform. You should be able to get one of your current tests to fail by getting "ops" out of the way, preventing this early exit. You should also be able to create a failing test with a nested @graph structure, where this early termination would prevent a needed transformation.
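A minimal sketch of the failure mode (all names invented, not the actual impl): a recursive key-renaming transform that stops descending once it rewrites an aliased key will miss occurrences nested under it, e.g. the config for a graph nested inside another graph:

```python
def rename_aliased_keys(config, alias_map):
    """Recursively rewrite aliased keys - with the early-exit bug: once a
    key is rewritten, we copy its value without recursing into it."""
    if not isinstance(config, dict):
        return config
    out = {}
    for key, value in config.items():
        if key in alias_map:
            out[alias_map[key]] = value  # BUG: nested aliases under here survive
        else:
            out[key] = rename_aliased_keys(value, alias_map)
    return out

# Nested-graph-shaped config: the inner "solids" key is never transformed.
nested = {"solids": {"outer_graph": {"solids": {"inner_op": {"config": 1}}}}}
result = rename_aliased_keys(nested, {"solids": "ops"})
# result["ops"]["outer_graph"] still has a "solids" key, not "ops"
```

Recursing into the rewritten value (`rename_aliased_keys(value, alias_map)` in the first branch too) would be the fix this test should force.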
If we want to go back to the previous direction, we could walk / refer to the pipeline structure when transforming the config.
The benefit this current direction has is providing something that we could be sent over graphql to dagit to allow the frontend to more gracefully handle these field aliases.
For what it's worth, I think this direction could get to a better feeling place with some iteration.
we could change the impl for "storage" fallback to this same scheme (if we go in this direction)
Line 147 (On Diff #41482):
If you think we can get to a better feeling place with it, I'm down to explore that. Hadn't considered the graphql/dagit benefits. Given that I guess an implementation such as this can generally help with backcompat for config + has a real use case, it makes me feel a bit less fatalistic. It definitely is a ton of machinery, though.
Did you have any specific ideas for iteration?
- I think field_aliases is better than field_substitutions since we are just creating a second name that the same information is available under
- related - I think you want to just swap the order, since you are flipping it at basically all the call sites
- use this approach to handle storage compat
- add tests at the Shape level in the config package
- add tests for conflicts (error I assume)
- take a minute and look at the context around each change to verify the impl feels right. I think the dict get with default trick you used was good to get it working, but there may be cleaner approaches.
For example, here defined_field_names should just consist of the full set of possible names, instead of threading the aliases through.
Unless you update this to detect conflicts - in that case it makes more sense to pass it.
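Something like this is what I mean (a sketch with hypothetical names, not the actual impl) - fold the aliases into the accepted-name set once, rather than passing the alias map alongside the field list:

```python
def get_defined_field_names(field_names, field_aliases):
    # field_aliases maps canonical name -> alias, e.g. {"ops": "solids"}
    names = set(field_names)
    # accept the alias wherever its canonical name is defined
    names |= {alias for name, alias in field_aliases.items() if name in names}
    return names
```

An unknown-key check downstream then becomes a plain set membership test, with no alias awareness needed at that call site.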
Here I think a small amount of code redundancy leads to something more readable.
This is personal, but I think it's worth trying a second pass:
```python
for field_snap in context.config_type_snap.fields:
    name = field_snap.name
    if name in config_value:
        field_evr = _validate_config(
            context.for_field_snap(field_snap), config_value[name]
        )
        if field_evr.errors:
            field_errors += field_evr.errors
    # check for backcompat field name
    elif name in field_aliases and field_aliases[name] in config_value:
        field_evr = _validate_config(
            context.for_field_snap(field_snap),
            config_value[field_aliases[name]],
        )
        if field_evr.errors:
            field_errors += field_evr.errors
```
Alias collision, probably.
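A sketch of what that check could look like (hypothetical helper, not the actual impl): if a document supplies both a field name and its alias, surface that as a validation error rather than silently picking one:

```python
def find_alias_collisions(config_value, field_aliases):
    """Return the (name, alias) pairs that both appear in config_value."""
    return [
        (name, alias)
        for name, alias in field_aliases.items()
        if name in config_value and alias in config_value
    ]

collisions = find_alias_collisions({"ops": {}, "solids": {}}, {"ops": "solids"})
# any non-empty result here should turn into a config validation error
```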
nit: I think it's hard to discern what's going on here; simpler if blocks may be clearer. I would say at least add comments, but thinking about what those comments would say makes me believe simpler & potentially redundant code is clearer.
Is this right? It feels like a missing field without an alias causes this to short-circuit the wrong way.
Mention the alias in here, probably something like "field name and its alias aliased_name".
Unrelated: should this not have an else: case to just use the original field? Probably need more tests for this.
I don't think it's wise to change these existing test cases in place - it would be better to add new, separate ones (they can re-use the base).
Does a shape with aliases have the same identity as one without? Should it? It feels odd that we ignore the aliases.
Additionally, having trouble getting the snapshot tests to cooperate. pytest --snapshot-update does not seem to be doing the trick.
This points at some bug or bad assumption in the hashing or snapshotting code around this change.