Listing of PARS-Mitteilungen 2012 by date of publication
1 - 10 of 15
- Journal article: A Speed-Up Study for a Parallelized White Light Interferometry Preprocessing Algorithm on a Virtual Embedded Multiprocessor System (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 29, No. 1, 2012). Schoenwetter, Dominik; Schneider, Max; Fey, Dietmar.
  Parallel computing has been a niche for scientific research in academia for decades. However, as common industrial applications become more and more performance-demanding, and raising the clock frequency of conventional single-core systems is hardly an option because technological limits have been reached, efficient use of (embedded) multi-core CPUs and many-core platforms has become imperative. 3D surface analysis of objects using white light interferometry is one such challenging application. The goal of this article is to determine which speed-up is possible for an established, parallelized white light interferometry preprocessing algorithm, called the Contrast Method, on an embedded system that works without any operating system. To this end, we use a virtual environment that can simulate embedded multi-core as well as many-core systems and that enables running real application code on the designed system. The results show that a significant speed-up is possible when using a many-core platform instead of a single-core design, provided the algorithm is parallelized to take full advantage of the many-core design. Furthermore, an acceptable absolute run time is achievable.
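The abstract does not spell out the Contrast Method itself; as an illustration of why such preprocessing parallelizes well, the sketch below uses a hypothetical per-pixel contrast measure (max minus min intensity along the vertical scan) and splits the image rows across worker threads. All names and the contrast formula here are stand-ins, not the paper's actual algorithm.

```python
from concurrent.futures import ThreadPoolExecutor

def row_contrast(stack, y):
    # Per-pixel fringe contrast for row y: max minus min intensity
    # over all frames of the scan (a simplified stand-in for the
    # Contrast Method's actual measure).
    width = len(stack[0][y])
    return [max(f[y][x] for f in stack) - min(f[y][x] for f in stack)
            for x in range(width)]

def contrast_map(stack, workers=4):
    # stack: list of frames, each frame a list of rows of intensities.
    # Rows are independent, so they can be processed in parallel,
    # mirroring a per-core work split on a many-core design.
    height = len(stack[0])
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(lambda y: row_contrast(stack, y), range(height)))

# Tiny synthetic scan: 3 frames of a 2x2 image.
frames = [
    [[1, 2], [3, 4]],
    [[5, 1], [2, 6]],
    [[3, 3], [1, 5]],
]
print(contrast_map(frames))  # [[4, 2], [2, 2]]
```

Because each output row depends only on the input stack, the same split maps naturally onto cores without an operating system's scheduler, which is the setting the study targets.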
- Journal article: Resilient data encoding for fault-prone signal transmission in parallelized signed-digit based arithmetic (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 29, No. 1, 2012). Neuhäuser, David; Zehendner, Eberhard.
  When arithmetic components are parallelized, fault-prone interconnections can corrupt results significantly. Continued technology scaling leads to a steady increase in errors caused by faulty transmission. Resilient data encoding schemes can be used to offset these negative effects. Focusing on parallel signed-digit based arithmetic, frequently used in high-speed systems, we propose suitable data encodings that reduce error rates by 25%. Data encoding should be driven by the occurrence probabilities of digits. We develop a methodology to obtain these probabilities, show an example fault-tolerant encoding, and discuss its impact on communicating parallel arithmetic circuits in an example error scenario.
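The idea of digit occurrence probabilities can be illustrated with a small experiment. The sketch below uses the non-adjacent form (NAF), one standard minimal-weight signed-digit recoding with digits {-1, 0, 1}, and tallies how often each digit value appears over a range of operands. This is only an illustration of the concept, not the paper's methodology or encoding.

```python
from collections import Counter

def naf(n):
    # Non-adjacent form: signed-digit recoding with digits {-1, 0, 1}
    # and no two adjacent non-zero digits; least-significant digit first.
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)   # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

# Tally digit occurrences over a range of operands.
counts = Counter(d for n in range(1, 1024) for d in naf(n))
total = sum(counts.values())
probs = {d: counts[d] / total for d in (-1, 0, 1)}
print(probs)  # zero dominates (about 2/3 of all digits in the limit)
```

A heavily skewed distribution like this is exactly what makes probability-driven encodings attractive: code words can be assigned so that the most frequent digits get the most error-resilient representations.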
- Journal article: 1. Aktuelle und zukünftige Aktivitäten (Bericht des Sprechers) [Current and future activities (chair's report)] (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 29, No. 1, 2012)
- Journal article: The Spectral Relation between the Cube-Connected Cycles and the Shuffle-Exchange Network (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 29, No. 1, 2012). Riess, Christian; Strehl, Volker; Wanka, Rolf.
  We investigate the relation between the spectral sets (i.e., the sets of eigenvalues, disregarding multiplicities) of two d-dimensional networks popular in parallel computing: the Cube-Connected Cycles network CCC(d) and the Shuffle-Exchange network SE(d). We completely characterize their spectral sets. Additionally, it turns out that for any odd d, the set of SE(d)-eigenvalues is precisely the same as the set of CCC(d)-eigenvalues. For any even d, however, the SE(d)-eigenvalues form a proper subset of the set of CCC(d)-eigenvalues.
- Journal article: Parallel coding for storage systems — An OpenMP and OpenCL capable framework (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 29, No. 1, 2012). Sobe, Peter.
  Parallel storage systems distribute data onto several devices. This enables the high access bandwidth needed by parallel computing systems. It also improves storage reliability, provided erasure-tolerant coding is applied and the coding is fast enough. In this paper we assume storage systems that apply data distribution and coding in a combined way. We describe how coding can be done in parallel on multi-core and GPU systems in order to keep pace with the high storage access bandwidth. A framework is introduced that calculates coding equations from parameters and translates them into OpenMP- and OpenCL-based coding modules. These modules encode data that is written to the storage system and decode it in case of failures of storage devices. We report on the performance of the coding modules and identify factors that influence coding performance.
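As a toy illustration of such a coding equation (not the framework's generated code), the simplest erasure-tolerant scheme XORs k data blocks into one parity block; any single lost block can then be recomputed from the survivors:

```python
def xor_blocks(blocks):
    # Bytewise XOR of equally sized blocks.
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def encode(data_blocks):
    # Encoding equation: parity = d0 XOR d1 XOR ... XOR d(k-1),
    # stored on its own device alongside the distributed data blocks.
    return xor_blocks(data_blocks)

def recover(surviving_blocks, parity):
    # Decoding after one device failure: the missing block is the
    # XOR of the parity with all surviving data blocks.
    return xor_blocks(surviving_blocks + [parity])

data = [b"abcd", b"efgh", b"ijkl"]
parity = encode(data)
lost = recover([data[0], data[2]], parity)  # device holding data[1] failed
print(lost)  # b'efgh'
```

Stronger codes tolerate multiple failures with several such equations; since each output byte is independent, the inner loops are exactly the kind of work the paper maps onto OpenMP threads or OpenCL work items.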
- Journal article: 10th Workshop on Parallel Systems and Algorithms PASA 2012 (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 29, No. 1, 2012)
- Journal article: PARS-Mitteilungen (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 29, No. 1, 2012). Gesellschaft für Informatik e.V., Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware.
- Journal article: Preliminary Call For Papers (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 29, No. 1, 2012). 25th PARS Workshop.
- Journal article: Achieving scalability for job centric monitoring in a distributed infrastructure (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 29, No. 1, 2012). Hilbrich, Marcus; Müller-Pfefferkorn, Ralph.
  Job centric monitoring makes it possible to observe jobs on remote computing resources. It may offer visualisation of recorded monitoring data and helps to find faulty or misbehaving jobs. When installations like grids or clouds are observed, monitoring data of many thousands of jobs has to be handled. The challenge for job centric monitoring infrastructures is to store, search and access data collected in huge installations like grids or clouds. We address this challenge with a distributed, layer-based architecture which provides a uniform view of all monitoring data. This paper presents the concept of this infrastructure, called SLAte, and an analysis of its scalability.
- Journal article: I/O-efficient approximation of graph diameters by parallel cluster growing — a first experimental study (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 29, No. 1, 2012). Ajwani, Deepak; Beckmann, Andreas; Meyer, Ulrich; Veith, David.
  A fundamental step in the analysis of a massive graph is to compute its diameter. In the RAM model, the diameter of a connected undirected unweighted graph can be efficiently 2-approximated using a Breadth-First Search (BFS) traversal from an arbitrary node. However, if the graph is stored on disk, even an external-memory BFS traversal is prohibitive, owing to the large number of I/Os it incurs. Meyer [Mey08] proposed a parametrized algorithm to compute an approximation of the graph diameter with fewer I/Os than required for an exact BFS traversal of the graph. The approach is based on growing clusters around randomly chosen vertices `in parallel' until their fringes meet. We present an implementation of this algorithm and compare it with some simple heuristics and external-memory BFS in order to determine the trade-off between approximation ratio and running time achievable in practice. Our experiments show that with carefully chosen parameters, the new approach is indeed capable of producing surprisingly good diameter approximations in a shorter time. We also confirm experimentally that there are graph classes where the parametrized approach runs into bad approximation ratios, just as the theoretical analysis in [Mey08] suggests.
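The RAM-model baseline mentioned in the abstract is easy to state in code: a single BFS from an arbitrary node s yields an eccentricity ecc(s) with ecc(s) <= diameter <= 2*ecc(s). Below is a minimal in-memory sketch of that baseline (the external-memory cluster-growing variant is the paper's actual subject and is not reproduced here).

```python
from collections import deque

def eccentricity(adj, s):
    # BFS from s over an adjacency-list graph; returns the largest
    # shortest-path distance from s (assumes the graph is connected).
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

# Path graph 0-1-2-3-4: true diameter 4.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
ecc = eccentricity(path, 2)   # BFS from the middle node
print(ecc, 2 * ecc)           # 2 4: the diameter lies in [2, 4]
```

The bound follows from the triangle inequality: every pair of nodes is connected through s by a path of length at most 2*ecc(s). On disk-resident graphs, the pointer-chasing access pattern of this BFS is exactly what makes the approach prohibitive and motivates the paper's parallel cluster growing.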