Elastic scalable transaction processing in LeanXcale
- Authors
Ricardo Jimenez-Peris, Diego Burgos-Sancho, Francisco Ballesteros, Marta Patiño-Martinez, Patrick Valduriez
Affiliations: LeanXcale [Madrid]; Universidad Politécnica de Madrid (UPM); SENiaLab, Universidad Rey Juan Carlos [Madrid] (URJC); ZENITH (Scientific Data Management), Inria Sophia Antipolis - Méditerranée (CRISAM); LIRMM, CNRS, Université de Montpellier (UM)
- Subjects
[INFO.INFO-DB] Computer Science [cs] / Databases [cs.DB], Hardware and Architecture, NewSQL database system, Scalability, TPC-C, Transaction processing, Cloud, Transaction management, Elasticity, Software, Information Systems
- Abstract
Scaling ACID transactions in a cloud database is hard, and providing elastic scalability is even harder. In this paper, we present our solution for elastic scalable transaction processing in LeanXcale, an industrial-strength NewSQL database system. Unlike previous solutions, it does not require any hardware assistance, yet it scales linearly to hundreds of servers. LeanXcale supports non-intrusive elasticity and can move data partitions without hurting the quality of service of transaction management. We show the correctness of LeanXcale transaction management. Finally, we provide a thorough performance evaluation of our solution on Amazon Web Services (AWS) shared cloud instances. The results show linear scalability, e.g., 5 million TPC-C NewOrder TPM with 200 nodes, which exceeds the 9th highest TPC-C throughput ever recorded, a result obtained on dedicated hardware used exclusively for the benchmark (unlike the shared instances in our evaluation). Furthermore, the efficiency in terms of TPM per core is double that of the two top TPC-C results (which are also the only TPC-C results obtained in a cloud).
- Published
- 2022