GPU-based regression analysis on sparse grids
dc.contributor.author | Hirschmann, Steffen | |
dc.contributor.editor | Plödereder, E. | |
dc.contributor.editor | Grunske, L. | |
dc.contributor.editor | Schneider, E. | |
dc.contributor.editor | Ull, D. | |
dc.date.accessioned | 2017-07-26T11:00:00Z | |
dc.date.available | 2017-07-26T11:00:00Z | |
dc.date.issued | 2014 | |
dc.description.abstract | Prediction and forecasting have become very important in modern society. Regression analysis enables predictions based on given data. This paper focuses on regression analysis on sparse grids using the existing toolbox Sparse Grid ++ (SG++). The core workload of the regression analysis is implemented on graphics cards using NVIDIA's Compute Unified Device Architecture (CUDA). We give guidance on how to achieve high performance for this particular problem using CUDA-enabled graphics cards. We also address problems where the datasets are larger than the available device memory. Finally, we present test results for real-world and artificial datasets. | en |
dc.identifier.isbn | 978-3-88579-626-8 | |
dc.identifier.pissn | 1617-5468 | |
dc.language.iso | en | |
dc.publisher | Gesellschaft für Informatik e.V. | |
dc.relation.ispartof | Informatik 2014 | |
dc.relation.ispartofseries | Lecture Notes in Informatics (LNI) - Proceedings, Volume P-232 | |
dc.title | GPU-based regression analysis on sparse grids | en |
dc.type | Text/Conference Paper | |
gi.citation.endPage | 2436 | |
gi.citation.publisherPlace | Bonn | |
gi.citation.startPage | 2425 | |
gi.conference.date | 22.-26. September 2014 | |
gi.conference.location | Stuttgart | |