Tuning Cassandra through Machine Learning
dc.contributor.author | Eppinger, Florian | |
dc.contributor.author | Störl, Uta | |
dc.contributor.editor | König-Ries, Birgitta | |
dc.contributor.editor | Scherzinger, Stefanie | |
dc.contributor.editor | Lehner, Wolfgang | |
dc.contributor.editor | Vossen, Gottfried | |
dc.date.accessioned | 2023-02-23T13:59:59Z | |
dc.date.available | 2023-02-23T13:59:59Z | |
dc.date.issued | 2023 | |
dc.description.abstract | NoSQL databases have become an important component of many big data and real-time web applications. Their distributed nature and scalability make them an ideal data storage repository for a variety of use cases. While NoSQL databases are delivered with a default “off-the-shelf” configuration, they offer configuration settings to adjust a database’s behavior and performance to a specific use case and environment. The abundance and oftentimes imperceptible inter-dependencies of configuration settings make it difficult to optimize and performance-tune a NoSQL system. There is no one-size-fits-all configuration and therefore the workload, the physical design, and available resources need to be taken into account when optimizing the configuration of a NoSQL database. This work explores Machine Learning as a means to automatically tune a NoSQL database for optimal performance. Using Random Forest and Gradient Boosting Decision Tree Machine Learning algorithms, multiple Machine Learning models were fitted with a training dataset that incorporates properties of the NoSQL physical configuration (replication and sharding). The best models were then employed as surrogate models to optimize the Database Management System’s configuration settings for throughput and latency using various Black-box Optimization algorithms. Using an Apache Cassandra database, multiple experiments were carried out to demonstrate the feasibility of this approach, even across varying physical configurations. The tuned Database Management System (DBMS) configurations yielded throughput improvements of up to 4%, read latency reductions of up to 43%, and write latency reductions of up to 39% when compared to the default configuration settings. | en
dc.identifier.doi | 10.18420/BTW2023-04 | |
dc.identifier.isbn | 978-3-88579-725-8 | |
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/40346 | |
dc.language.iso | en | |
dc.publisher | Gesellschaft für Informatik e.V. | |
dc.relation.ispartof | BTW 2023 | |
dc.relation.ispartofseries | Lecture Notes in Informatics (LNI) - Proceedings, Volume P-331 | |
dc.subject | AI for Database Systems | |
dc.subject | NoSQL | |
dc.subject | Machine Learning | |
dc.subject | Performance Modeling | |
dc.subject | Tuning | |
dc.subject | Black-box Optimization | |
dc.title | Tuning Cassandra through Machine Learning | en |
dc.type | Text/Conference Paper | |
gi.citation.endPage | 104 | |
gi.citation.publisherPlace | Bonn | |
gi.citation.startPage | 93 | |
gi.conference.date | March 6-10, 2023 |
gi.conference.location | Dresden, Germany |
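
To make the approach summarized in the abstract more concrete, the following is a minimal sketch of a surrogate-model tuning loop of the kind the paper describes: a Random Forest regressor is fitted on benchmarked configuration samples and then queried by a simple random-search black-box optimizer. All parameter names, value ranges, and the synthetic training data below are illustrative assumptions and are not taken from the paper.

# Hypothetical sketch of surrogate-model-based configuration tuning.
# Parameter names, ranges, and benchmark data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Assumed feature encoding per sample: DBMS settings (e.g. concurrent_reads,
# concurrent_writes, memtable_flush_writers) plus physical-design properties
# (replication factor, number of nodes).
low, high = [8, 8, 1, 1, 3], [128, 128, 8, 3, 12]
X_train = rng.uniform(low=low, high=high, size=(200, 5))

# Observed throughput (ops/s) from benchmark runs; synthetic here for illustration.
y_train = 1000 + 5 * X_train[:, 0] - 0.02 * X_train[:, 0] ** 2 + rng.normal(0, 20, 200)

# Fit the surrogate model (Random Forest, one of the model families named in the abstract).
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

# Black-box optimization stand-in: random search over candidate configurations,
# scored by the cheap surrogate instead of by costly benchmark runs.
candidates = rng.uniform(low=low, high=high, size=(10_000, 5))
predicted_throughput = surrogate.predict(candidates)
best = candidates[np.argmax(predicted_throughput)]
print("Predicted-best configuration:", best)

In a workflow like the one described, the predicted-best configuration would presumably then be validated by an actual benchmark run against the Cassandra cluster before being adopted.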