
Efficient Data-Parallel Cumulative Aggregates for Large-Scale Machine Learning

Document Type

Text Document

Date

2019

Publisher

Gesellschaft für Informatik, Bonn

Abstract

Cumulative aggregates are often overlooked yet important operations in large-scale machine learning (ML) systems. Examples include prefix sums and more complex aggregates, as well as preprocessing techniques such as the removal of empty rows or columns. These operations are challenging to parallelize over distributed, blocked matrices (as commonly used in ML systems) due to recursive data dependencies. However, computing prefix sums is a classic example of a seemingly sequential operation that can be efficiently parallelized via aggregation trees. In this paper, we describe an efficient framework for data-parallel cumulative aggregates over distributed, blocked matrices. The basic idea is a self-similar operator composed of a forward cascade that reduces the data size by orders of magnitude per iteration until the data fits in local memory, a local cumulative aggregate over the partial aggregates, and a backward cascade to produce the final result. We also generalize this framework for complex cumulative aggregates of sum-product expressions, and characterize the class of supported operations. Finally, we describe the end-to-end compiler and runtime integration into SystemML, and the use of cumulative aggregates in other operations. Our experiments show that this framework achieves both high performance for moderate data sizes and good scalability.
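
The three-phase scheme sketched in the abstract (a forward cascade of per-block partial aggregates, a local cumulative aggregate over those partials, and a backward cascade that applies per-block offsets) can be illustrated with a minimal, single-level Python/NumPy sketch. This is an illustration under simplifying assumptions, not SystemML's distributed implementation: plain NumPy arrays stand in for distributed matrix blocks, a single forward/backward level replaces the recursive multi-level cascade, and the function name blocked_cumsum is hypothetical.

    import numpy as np

    def blocked_cumsum(blocks):
        # Phase 1 (forward cascade): each block computes its column sums;
        # this step is independent per block and thus data-parallel, and
        # it shrinks the data to one row per block.
        partials = np.stack([b.sum(axis=0) for b in blocks])

        # Phase 2 (local aggregate): an exclusive prefix sum over the
        # small matrix of partial aggregates, computed in local memory.
        offsets = np.cumsum(partials, axis=0) - partials

        # Phase 3 (backward cascade): each block adds its offset to a
        # block-local cumulative sum, again in parallel per block.
        return [np.cumsum(b, axis=0) + off
                for b, off in zip(blocks, offsets)]

    # Usage: splitting a 6x2 matrix into three row blocks reproduces the
    # result of a sequential column-wise prefix sum over the full matrix.
    X = np.arange(12.0).reshape(6, 2)
    blocks = [X[0:2], X[2:4], X[4:6]]
    assert np.allclose(np.vstack(blocked_cumsum(blocks)),
                       np.cumsum(X, axis=0))

Because phases 1 and 3 touch each block exactly once and phase 2 operates only on the much smaller partial aggregates, the same structure can be applied recursively when the partials themselves exceed local memory, which is the cascading idea the paper describes.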

Citation

Boehm, Matthias; Evfimievski, Alexandre; Reinwald, Berthold (2019): Efficient Data-Parallel Cumulative Aggregates for Large-Scale Machine Learning. BTW 2019 (Wissenschaftliche Beiträge), Rostock, March 4-8, 2019. Gesellschaft für Informatik, Bonn, pp. 267-286. DOI: 10.18420/btw2019-17. PISSN: 1617-5468. ISBN: 978-3-88579-683-1.
