I got frustrated with all the code that was required to define a basic lakehouse. This simplifies the storage interface: a storage now just saves and loads data. For the multi-type case, where Spark DataFrames and Pandas DataFrames need different save/load paths, there's a special storage implementation that dispatches to sub-storages based on the data's type.
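Roughly, the dispatch idea looks like this. This is a hedged sketch, not the actual implementation: the names (`Storage`, `InMemoryStorage`, `DispatchingStorage`) are invented for illustration, and a toy in-memory backend stands in for the real Spark/Pandas sub-storages.

```python
from typing import Any, Dict, Protocol, Type


class Storage(Protocol):
    """Minimal storage interface: a storage just saves and loads."""

    def save(self, data: Any, path: str) -> None: ...
    def load(self, path: str) -> Any: ...


class InMemoryStorage:
    """Toy backend standing in for e.g. a Pandas- or Spark-specific storage."""

    def __init__(self) -> None:
        self._blobs: Dict[str, Any] = {}

    def save(self, data: Any, path: str) -> None:
        self._blobs[path] = data

    def load(self, path: str) -> Any:
        return self._blobs[path]


class DispatchingStorage:
    """Routes save/load calls to a sub-storage chosen by the data's type."""

    def __init__(self, sub_storages: Dict[Type, Storage]) -> None:
        self._subs = sub_storages
        # Remember which sub-storage handled each path so load() can route back.
        self._saved_types: Dict[str, Type] = {}

    def save(self, data: Any, path: str) -> None:
        for typ, storage in self._subs.items():
            if isinstance(data, typ):
                storage.save(data, path)
                self._saved_types[path] = typ
                return
        raise TypeError(f"no sub-storage registered for {type(data)!r}")

    def load(self, path: str) -> Any:
        return self._subs[self._saved_types[path]].load(path)


# Usage: register one sub-storage per data type (imagine pandas.DataFrame
# and pyspark.sql.DataFrame in place of dict and list).
lake = DispatchingStorage({dict: InMemoryStorage(), list: InMemoryStorage()})
lake.save({"a": 1}, "tables/foo")
lake.save([1, 2, 3], "tables/bar")
```

Adding a new type is just another entry in the registry dict, though the linear `isinstance` scan is where the "rough ergonomics at scale" concern shows up.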
Yeah, I think this is a bit better. Scaling to lots of types might have some rough ergonomics, but I'm sure this isn't the last revision of how the lakehouse storage stuff is set up.