Authors: Bächle, Sebastian; Schmidt, Karsten
Editors: Härder, Theo; Lehner, Wolfgang; Mitschang, Bernhard; Schöning, Harald; Schwarz, Holger
Date available: 2019-01-17
Date of publication: 2011
ISBN: 978-3-88579-274-1
URI: https://dl.gi.de/handle/20.500.12116/19577

Abstract: Buffer memory allocation is one of the most important, but also one of the most difficult, tasks of database system administration. Typically, database management systems use several buffers simultaneously for various reasons, e.g., differing disk speeds, page sizes, and access patterns. As a result, the available main memory must be partitioned among all buffers in the system to suit the expected workload, which is a highly complex optimization problem. Even worse, a carefully adjusted configuration can become inefficient very quickly when the workload shifts. Self-tuning techniques address this allocation problem automatically by periodically adjusting buffer sizes. The tuning itself is usually achieved by reallocating memory based on hit/miss ratios, thereby aiming to minimize I/O costs. All techniques proposed so far observe or simulate buffer behavior to forecast whether or not increased buffer sizes would be beneficial. However, database buffers do not scale uniformly (i.e., in a linear fashion), so simple extrapolations of current performance figures can easily lead to wrong conclusions. In this work, we explore the use of lightweight extensions to known buffer algorithms that improve forecast quality by identifying the effects of varying buffer sizes through simulation. Furthermore, a simple cost model is presented to optimize dynamic memory assignments based on these forecast results.

Language: en
Title: Lightweight performance forecasts for buffer algorithms
Type: Text/Conference Paper
ISSN: 1617-5468
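The abstract does not spell out the simulation mechanism the authors add to their buffer algorithms. As a purely illustrative sketch (not the paper's method), a well-known lightweight way to forecast hit ratios for many candidate buffer sizes in a single pass is an LRU stack-distance ("ghost list") simulation; the names lru_hit_ratio_curve, trace, and max_size below are invented for this example.

    from collections import OrderedDict

    def lru_hit_ratio_curve(trace, max_size):
        # Pages kept in LRU order: first entry is least recently used,
        # last entry is most recently used.
        stack = OrderedDict()
        hits_at_depth = [0] * (max_size + 1)
        for page in trace:
            if page in stack:
                # Stack distance: 1 for the MRU page, up to max_size for
                # the coldest tracked page. Any buffer holding at least
                # `depth` pages would have served this reference in memory.
                depth = len(stack) - list(stack).index(page)
                hits_at_depth[depth] += 1
                stack.move_to_end(page)
            else:
                stack[page] = True
                if len(stack) > max_size:
                    # Drop the coldest page once the ghost range is full.
                    stack.popitem(last=False)
        # Cumulative hits: a buffer of size s catches all hits at depth <= s.
        curve, hits = [], 0
        for s in range(1, max_size + 1):
            hits += hits_at_depth[s]
            curve.append(hits / len(trace))
        return curve

For example, lru_hit_ratio_curve(['a', 'b', 'a', 'c', 'b', 'a'], 4) returns [0.0, 0.167, 0.5, 0.5] (rounded), i.e., the forecast hit ratio for buffers of 1 to 4 pages on that trace, illustrating the abstract's point that buffers do not scale linearly with size.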
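The cost model itself is likewise not given in the abstract. One plausible reading, shown here only as an assumption-laden sketch, is a greedy allocator that repeatedly grants the next page of memory to the buffer whose forecast curve promises the largest I/O-cost reduction; curves, costs, and total_pages are hypothetical parameter names.

    def greedy_allocation(curves, costs, total_pages):
        # curves[b][s-1]: forecast hit ratio of buffer b at size s pages
        # costs[b]: relative cost of a miss in buffer b (e.g., device speed)
        sizes = [0] * len(curves)
        for _ in range(total_pages):
            best, best_gain = None, 0.0
            for b, curve in enumerate(curves):
                s = sizes[b]
                if s >= len(curve):
                    continue  # no forecast beyond the simulated range
                prev = curve[s - 1] if s > 0 else 0.0
                # Marginal I/O saving of one more page for this buffer.
                gain = (curve[s] - prev) * costs[b]
                if gain > best_gain:
                    best, best_gain = b, gain
            if best is None:
                break  # no buffer benefits from more memory
            sizes[best] += 1
        return sizes

Such a greedy scheme is only optimal when the hit-ratio curves have diminishing returns, which is precisely why measured or simulated curves, rather than linear extrapolations of current hit/miss figures, are needed as its input.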