
Towards a Benchmark for Learned Systems

Authors:
Andreas Kipf
Ryan Marcus
Tim Kraska
Laurent Bindschaedler
Umar Farooq Minhas
Source:
ICDE Workshops
Publication Year:
2021
Publisher:
IEEE, 2021.

Abstract

This paper aims to initiate a discussion around benchmarking data management systems with machine-learned components. Traditional benchmarks such as TPC or YCSB are insufficient for analyzing and understanding these learned systems because they evaluate performance under a stable workload and data distribution. Learned systems automatically specialize and adapt database components to a changing workload, database, and execution environment, which makes conventional metrics such as average throughput ill-suited to fully characterize their performance. Moreover, standard cost-per-performance metrics fail to account for essential trade-offs related to the training cost of models and the elimination of manual database tuning. We present several ideas for designing new benchmarks that are better suited to evaluating learned systems. The main challenges entail developing new metrics that capture the particularities of learned systems and ensuring that benchmark results remain comparable across many deployments with wide-ranging designs.
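To make the training-cost trade-off mentioned above concrete, the following is a minimal illustrative sketch (not from the paper itself) of how a benchmark metric might amortize one-time model training cost and avoided manual tuning effort into a cost-per-query figure. All function names and parameters here are hypothetical assumptions for illustration.

def amortized_cost_per_query(
    infra_cost_per_hour: float,   # hypothetical: hardware/cloud cost while serving
    training_cost: float,         # one-time cost of training the learned component
    saved_tuning_cost: float,     # manual tuning effort the learned system replaces
    queries_per_hour: float,      # measured steady-state throughput
    benchmark_hours: float,       # length of the evaluation window
) -> float:
    """Illustrative metric: total cost (serving + training - avoided tuning)
    divided by total queries answered over the benchmark window."""
    total_cost = (infra_cost_per_hour * benchmark_hours
                  + training_cost - saved_tuning_cost)
    total_queries = queries_per_hour * benchmark_hours
    return total_cost / total_queries

# Example: over a short window the training cost dominates,
# while a long window amortizes it away.
for hours in (1, 100, 10_000):
    print(hours, amortized_cost_per_query(2.0, 500.0, 300.0, 50_000, hours))

A metric like this makes visible why a fixed-workload benchmark can mislead: the same learned system looks expensive or cheap depending on how long its one-time costs are amortized.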

Details

Database:
OpenAIRE
Journal:
2021 IEEE 37th International Conference on Data Engineering Workshops (ICDEW)
Accession number:
edsair.doi...........4870ca20ca999891fe8048b067e4bf87
Full Text:
https://doi.org/10.1109/icdew53142.2021.00029