1. Delta Lake: High-Performance ACID Table Storage over Cloud Object Stores
- Authors
- Burak Yavuz, Adrian Ionescu, Matei Zaharia, Pieter Senster, Takuya Ueshin, Shixiong Zhu, Mukul Murthy, Tathagata Das, Herman van Hovell, Reynold Xin, Ali Ghodsi, Liwen Sun, Mostafa Mokhtar, Sameer Paranjpye, Alicja Łuszczak, Peter Boncz, Michał Szafrański, Xiao Li, Michael Armbrust, Michał Świtakowski, and Joseph Torres
- Subjects
- Database, Computer science, Cloud computing, Data warehouse, Metadata, Consistency (database systems), Transaction log, Apache Spark, Object (computer science), Table (database)
- Abstract
Cloud object stores such as Amazon S3 are some of the largest and most cost-effective storage systems on the planet, making them an attractive target to store large data warehouses and data lakes. Unfortunately, their implementation as key-value stores makes it difficult to achieve ACID transactions and high performance: metadata operations such as listing objects are expensive, and consistency guarantees are limited. In this paper, we present Delta Lake, an open source ACID table storage layer over cloud object stores initially developed at Databricks. Delta Lake uses a transaction log that is compacted into Apache Parquet format to provide ACID properties, time travel, and significantly faster metadata operations for large tabular datasets (e.g., the ability to quickly search billions of table partitions for those relevant to a query). It also leverages this design to provide high-level features such as automatic data layout optimization, upserts, caching, and audit logs. Delta Lake tables can be accessed from Apache Spark, Hive, Presto, Redshift and other systems. Delta Lake is deployed at thousands of Databricks customers that process exabytes of data per day, with the largest instances managing exabyte-scale datasets and billions of objects.
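The abstract above describes a concrete API surface: ACID writes, upserts (MERGE), and time travel over tables stored as objects. The sketch below illustrates that workflow using Delta Lake's Python bindings; it is a minimal example assuming `pip install pyspark delta-spark`, and the table path `/tmp/events` and the toy data are illustrative, not taken from the paper.

```python
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip
from delta.tables import DeltaTable

# Enable Delta Lake on a local Spark session (configs per the Delta docs).
builder = (
    SparkSession.builder.appName("delta-lake-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Writing creates the table's transaction log under <path>/_delta_log/.
events = spark.range(0, 1000).withColumnRenamed("id", "event_id")
events.write.format("delta").mode("overwrite").save("/tmp/events")

# Upsert (MERGE): a single ACID transaction recorded as a new log entry.
updates = spark.range(500, 1500).withColumnRenamed("id", "event_id")
table = DeltaTable.forPath(spark, "/tmp/events")
(table.alias("t")
      .merge(updates.alias("u"), "t.event_id = u.event_id")
      .whenMatchedUpdateAll()
      .whenNotMatchedInsertAll()
      .execute())

# Time travel: query the table as it was before the merge.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/events")
print(v0.count())  # 1000 rows at version 0; version 1 holds 1500
```

Each write or merge appends an entry to the `_delta_log/` transaction log, and, as the abstract notes, Delta Lake periodically compacts these entries into Parquet checkpoints, which is what keeps metadata operations and version lookups fast on large tables.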
- Published
- 2020 (Proceedings of the VLDB Endowment, Vol. 13)