Listing by author "Graefe, Goetz"
1 - 10 of 15
- Conference paper: Algorithms for merged indexes (Datenbanksysteme in Business, Technologie und Web (BTW 2007) – 12. Fachtagung des GI-Fachbereichs "Datenbanken und Informationssysteme" (DBIS), 2007). Graefe, Goetz
- Conference paper: Efficient verification of B-tree integrity (Datenbanksysteme in Business, Technologie und Web (BTW) – 13. Fachtagung des GI-Fachbereichs "Datenbanken und Informationssysteme" (DBIS), 2009). Graefe, Goetz; Stonecipher, Ryan. The integrity of B-tree structures can become compromised for many reasons. Since these inconsistencies manifest themselves in unpredictable ways, all commercial database management systems include mechanisms to verify the integrity and trustworthiness of …
- Conference paper: Executing nested queries (BTW 2003 – Datenbanksysteme für Business, Technologie und Web, Tagungsband der 10. BTW Konferenz, 2003). Graefe, Goetz. Optimization of nested queries, in particular finding equivalent "flattened" queries for queries that employ the SQL sub-query construct, has been researched extensively. In contrast, with the exception of nested loops join, execution of nested plans has attracted little interest. Nested execution plans may result from a failure to flatten nested SQL expressions, but just as likely are created by a query optimizer to exploit all available indexes as effectively as possible. In fact, if materialized views and index tuning perform as expected, few queries should require large operations such as parallel scans, sorts, and hash joins, and most actual query plans will rely entirely on navigating indexes on tables and views. Note that only index navigation plans scale truly gracefully, i.e., perform equally well on large and on very large databases, whereas sorting and hashing scale at best linearly. Since a typical index navigation plan employs nested iteration, this paper describes techniques to execute such plans efficiently as well as means to implement these techniques cleanly. Taken together, these techniques can improve query performance by orders of magnitude, giving them obvious practical importance. (A hedged sketch of nested iteration over an index follows after this listing.)
- Conference paper: A generalized join algorithm (Datenbanksysteme für Business, Technologie und Web (BTW), 2011). Graefe, Goetz. Database query processing traditionally relies on three alternative join algorithms: index nested loops join exploits an index on its inner input, merge join exploits sorted inputs, and hash join exploits differences in the sizes of the join inputs. Cost-based query optimization chooses the most appropriate algorithm for each query and for each operation. Unfortunately, mistaken algorithm choices during compile-time query optimization are common yet expensive to investigate and to resolve. Our goal is to end mistaken choices among join algorithms by replacing the three traditional join algorithms with a single one. Like merge join, this new join algorithm exploits sorted inputs. Like hash join, it exploits different input sizes for unsorted inputs. In fact, for unsorted inputs, the cost functions for recursive hash join and for hybrid hash join have guided our search for the new join algorithm. In consequence, the new join algorithm can replace both merge join and hash join in a database management system. The in-memory components of the new join algorithm employ indexes. If the database contains indexes for one (or both) of the inputs, the new join can exploit persistent indexes instead of temporary in-memory indexes. Using database indexes to match input records, the new join algorithm can also replace index nested loops join. Results from an implementation of the core algorithm are reported. (A simplified sketch of the sorted-inputs vs. in-memory-index choice follows after this listing.)
- Conference paper: Hierarchical locking in B-tree indexes (Datenbanksysteme in Business, Technologie und Web (BTW 2007) – 12. Fachtagung des GI-Fachbereichs "Datenbanken und Informationssysteme" (DBIS), 2007). Graefe, Goetz
- Journal article: Instant recovery with write-ahead logging (Datenbank-Spektrum: Vol. 15, No. 3, 2015). Härder, Theo; Sauer, Caetano; Graefe, Goetz; Guy, Wey. Instant recovery improves system availability by reducing the mean time to repair, i.e., the interval during which a database is not available for queries and updates due to recovery activities. Variants of instant recovery pertain to system failures, media failures, node failures, and combinations of multiple failures. After a system failure, instant restart permits new transactions immediately after log analysis, before and concurrent to “redo” and “undo” recovery actions. After a media failure, instant restore permits new transactions immediately after allocation of a replacement device, before and concurrent to restoring backups and replaying the recovery log. Write-ahead logging is already ubiquitous in data management software. The recent definition of single-page failures and techniques for log-based single-page recovery enable immediate, lossless repair after a localized wear-out in novel or traditional storage hardware. In addition, they form the backbone of on-demand “redo” in instant restart, instant restore, and eventually instant failover. Thus, they complement on-demand invocation of traditional single-transaction “undo” or rollback. In addition to these instant recovery techniques, the discussion introduces self-repairing indexes and much faster offline restore operations, which impose no slowdown in backup operations and hardly any slowdown in log archiving operations. The new restore techniques also render differential and incremental backups obsolete, complete backup commands on a database server practically instantly, and even permit taking full up-to-date backups without imposing any load on the database server. (A toy sketch of the write-ahead logging rule follows after this listing.)
- Conference paper: Keynote: the story of instant recovery (Datenbanksysteme für Business, Technologie und Web (BTW) 2013 - Workshopband, 2013). Graefe, Goetz
- Conference paper: Logical recovery from single-page failures (Datenbanksysteme für Business, Technologie und Web (BTW) 2013, 2013). Graefe, Goetz; Seeger, Bernhard. Modern hardware technologies and ever-increasing data sizes increase the probability and frequency of local storage failures, e.g., unrecoverable read errors on individual disk sectors or pages on flash storage. Our prior work has formalized single-page failures and outlined efficient methods for their detection and recovery. These prior techniques rely on old backup copies of individual pages, e.g., as part of a database backup or as old versions retained after a page migration. Those might not be available, however, e.g., after recent index creation in “non-logged” or “allocation-only logging” mode, which industrial database products commonly use. The present paper introduces techniques for single-page recovery without backup copies, e.g., for pages of new indexes created in allocation-only logging mode. By rederiving lost contents of individual pages, these techniques enable efficient recovery of data lost due to damaged storage structures or storage devices. Recovery performance depends on the size of the failure and of the required data sources; it is independent of the sizes of the device, index structure, etc. (A simplified sketch of per-page log replay follows after this listing.)
- Conference paper: Orthogonal key-value locking (Datenbanksysteme für Business, Technologie und Web (BTW 2015), 2015). Graefe, Goetz; Kimura, Hideaki. B-trees have been ubiquitous for decades, and over the past 20 years, record-level locking has been ubiquitous in b-tree indexes. There are multiple designs, each a different tradeoff between (i) high concurrency and a fine granularity of locking for updates, (ii) efficient coarse locks for equality and range queries, (iii) run-time efficiency with the fewest possible invocations of the lock manager, and (iv) conceptual simplicity for efficient development, maintenance, and testing. A new design introduced here is efficient and simple yet supports both fine and coarse granularities of locking. A lock request may cover (i) a gap (open interval) between two (actual) key values, (ii) a key value with its entire list of (actual and possible) row identifiers (in a non-unique secondary index), (iii) a specific pair of key value and row identifier, or (iv) a distinct key value and a fraction of all (actual and possible) row identifiers. Using specific examples such as insertions, deletions, equality queries, and phantom protection, case studies compare four prior b-tree locking techniques and the new one. Experiments show that the new technique reduces the number of lock requests yet increases transactional concurrency, improving transaction throughput for both read-only and read-write transactions. The case studies and the experiments suggest that new b-tree implementations as well as existing ones ought to adopt the new locking techniques. (A toy sketch of these four lock scopes follows after this listing.)
- Conference paper: Panel: “One size fits all”: an idea whose time has come and gone? (Datenbanksysteme für Business, Technologie und Web (BTW), 2011). Dittrich, Jens; Färber, Franz; Graefe, Goetz; Loeser, Henrik; Reimann, Wilfried
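For "Executing nested queries": a minimal sketch of nested iteration over an index, the execution style the abstract describes. The function name, table contents, and the plain-dictionary index are hypothetical illustrations under simplifying assumptions, not the paper's implementation.

```python
# Nested iteration: probe an index once per outer row, so cost tracks
# the outer input rather than the size of the indexed table.
def index_nested_iteration(outer_rows, inner_index, outer_key):
    """Yield joined pairs by probing an in-memory index per outer row."""
    for row in outer_rows:
        for match in inner_index.get(row[outer_key], []):
            yield row, match

# Usage: join orders to customers via an index on customer id.
orders = [{"customer_id": 1, "item": "book"},
          {"customer_id": 2, "item": "pen"}]
customer_index = {1: [{"name": "Ada"}], 2: [{"name": "Grace"}]}
for order, customer in index_nested_iteration(orders, customer_index, "customer_id"):
    print(order["item"], customer["name"])
```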
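For "A generalized join algorithm": a simplified in-memory sketch of the unifying idea, i.e., exploit sorted inputs like merge join, otherwise index the smaller input like hash join. It omits recursion, partitioning, and persistent indexes, and is not the paper's algorithm.

```python
def generalized_join(left, right, key, left_sorted=False, right_sorted=False):
    """Join two lists of dicts on `key`, choosing a strategy by input properties."""
    if left_sorted and right_sorted:
        # Merge-style join over two sorted inputs.
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i][key] < right[j][key]:
                i += 1
            elif left[i][key] > right[j][key]:
                j += 1
            else:
                k = j  # emit all right-side matches for this key value
                while k < len(right) and right[k][key] == left[i][key]:
                    yield left[i], right[k]
                    k += 1
                i += 1
    else:
        # Hash-style: build an in-memory index on the smaller input.
        build, probe = (left, right) if len(left) <= len(right) else (right, left)
        index = {}
        for row in build:
            index.setdefault(row[key], []).append(row)
        for row in probe:
            for match in index.get(row[key], []):
                yield (match, row) if build is left else (row, match)
```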
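For "Instant recovery with write-ahead logging": a toy sketch of the write-ahead logging rule that underpins these recovery techniques, i.e., the log describing a page's changes must be durable before the page itself is written. Class and method names are invented for illustration, not taken from any particular system.

```python
class WalBuffer:
    """Toy log buffer; flush_to stands in for forcing the log to disk."""
    def __init__(self):
        self.records, self.flushed_lsn = [], -1

    def append(self, change):
        self.records.append(change)
        return len(self.records) - 1        # LSN of the new record

    def flush_to(self, lsn):
        self.flushed_lsn = max(self.flushed_lsn, lsn)

def write_page(page, wal, disk):
    # WAL rule: force the log up to the page's last change first.
    if page["page_lsn"] > wal.flushed_lsn:
        wal.flush_to(page["page_lsn"])
    disk[page["id"]] = dict(page)           # only now may the page be written
```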
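For "Logical recovery from single-page failures": a simplified sketch of the baseline that prior single-page recovery relies on, replaying one page's log records onto an older image; the paper's contribution is recovering even without such a backup image, which this sketch does not show. The record layout and field names are assumptions.

```python
def recover_page(page_id, start_image, log_records):
    """Rebuild one page by replaying its log records in LSN order."""
    page = dict(start_image)                # old backup copy of the page
    for rec in sorted(log_records, key=lambda r: r["lsn"]):
        if rec["page_id"] == page_id and rec["lsn"] > page.get("page_lsn", -1):
            page.update(rec["changes"])     # apply the logged "redo" content
            page["page_lsn"] = rec["lsn"]   # record recovery progress
    return page
```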
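For "Orthogonal key-value locking": a toy sketch of the four lock scopes listed in the abstract, modeling the "fraction of all possible row identifiers" as hash partitions. The partition count, hashing, and tuple encoding are illustrative assumptions; lock modes (e.g., shared vs. exclusive) are omitted.

```python
PARTITIONS = 4  # hypothetical number of row-identifier partitions

def lock_scope(key, lock_gap=False, row_id=None, whole_key=False):
    """Return (key, locks-gap-below?, frozenset of locked rid partitions)."""
    if whole_key:
        parts = frozenset(range(PARTITIONS))            # (ii) key + all rids
    elif row_id is not None:
        parts = frozenset({hash(row_id) % PARTITIONS})  # (iii)/(iv) some rids
    else:
        parts = frozenset()                             # (i) gap-only request
    return (key, lock_gap, parts)

def scopes_overlap(a, b):
    """Two requests can conflict only if their scopes overlap."""
    if a[0] != b[0]:
        return False
    return (a[1] and b[1]) or bool(a[2] & b[2])
```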