I got frustrated with all the code that was required to define a basic lakehouse. This simplifies the storage interface: a storage now just saves and loads objects. For the multi-type case, where we save and load Spark DataFrames differently from pandas DataFrames, a special storage implementation dispatches to sub-storages.
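A minimal sketch of the idea (class and method names here are illustrative, not the actual Dagster lakehouse API): a base storage that only saves and loads, plus a multi-type storage that routes each object to the sub-storage registered for its type.

```python
from typing import Any, Dict, Type


class Storage:
    """Minimal storage interface: a storage just saves and loads objects."""

    def save(self, obj: Any, path: str) -> None:
        raise NotImplementedError

    def load(self, path: str, obj_type: Type) -> Any:
        raise NotImplementedError


class InMemoryStorage(Storage):
    """Trivial dict-backed storage, for illustration only."""

    def __init__(self) -> None:
        self._data: Dict[str, Any] = {}

    def save(self, obj: Any, path: str) -> None:
        self._data[path] = obj

    def load(self, path: str, obj_type: Type) -> Any:
        return self._data[path]


class MultiTypeStorage(Storage):
    """Dispatches to sub-storages keyed by object type, e.g. one
    sub-storage for Spark DataFrames and another for pandas DataFrames."""

    def __init__(self, storages_by_type: Dict[Type, Storage]) -> None:
        self._storages_by_type = storages_by_type

    def _storage_for(self, obj_type: Type) -> Storage:
        for typ, storage in self._storages_by_type.items():
            if issubclass(obj_type, typ):
                return storage
        raise TypeError(f"no sub-storage registered for {obj_type}")

    def save(self, obj: Any, path: str) -> None:
        self._storage_for(type(obj)).save(obj, path)

    def load(self, path: str, obj_type: Type) -> Any:
        return self._storage_for(obj_type).load(path, obj_type)
```

With this shape, callers only ever see `save` and `load`; the per-type plumbing lives in one place instead of being repeated in every lakehouse definition.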
Closed by commit R1:686360086ee2: [lakehouse] RFC: bake policies into storage defs (authored by sandyryza). Jul 21 2020, 9:49 PM (UTC+0)
This revision was automatically updated to reflect the committed changes.