PARS-Mitteilungen
The special interest group Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware (PARS) is a joint group of the Gesellschaft für Informatik e.V. (GI) and the Informationstechnische Gesellschaft (ITG). PARS deals with all forms of parallel processing, in particular the interplay between the hardware and software architectures of parallel systems.
This section contains the group's annual newsletters, which are also available at https://fg-pars.gi.de/publikationen/pars-mitteilungen. The PARS-Mitteilungen usually appear once a year. They cover all aspects of parallel algorithms and their implementation on computing systems in hardware and software. Contributions are not edited and represent the opinions of their authors; their appearance in these newsletters does not restrict publication elsewhere.
Official bibliographic designation for citations:
Mitteilungen - Gesellschaft für Informatik e. V.,
Parallel-Algorithmen und Rechnerstrukturen, ISSN 0177-0454
List of PARS-Mitteilungen by title
- Journal article: 1. Aktuelle und zukünftige Aktivitäten (Bericht des Sprechers) (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 28, No. 1, 2011). The 28th issue of the PARS-Mitteilungen contains the contributions of the 24th PARS workshop and the 4th workshop on grid technology, which constituted the group's main activities in 2011. The 24th PARS workshop took place on 26 and 27 May 2011 at the IBM research laboratory in Rüschlikon. Almost 40 participants attended, and the 14 talks covered a broad range of topics. In 2012, a PASA workshop will again be held jointly with the newly organized special interest group ALGO (formerly PARVA) as part of the ARCS conference in Munich. In 2013, the PARS workshop will celebrate a double anniversary: 30 years of the PARS group and 25 PARS workshops. For this occasion the workshop returns to its 'birthplace', Erlangen. The young researchers' award, offered for the sixth time with an endowment of 500, goes this year, for the second time (after 2009), to a woman: Victoria Caparros Cabezas (IBM Research Zürich). Michael Andersch (TU Berlin) as well as Benjamin Milde and Niklas Büscher (TU Darmstadt) received non-cash prizes. The picture shows the award winners together with the workshop organizer.
- Journal article: 1. Aktuelle und zukünftige Aktivitäten (Bericht des Sprechers) (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 29, No. 1, 2012)
- Journal article: 10th Workshop on Parallel Systems and Algorithms PASA 2012 (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 29, No. 1, 2012)
- Journal article: 24. PARS-Workshop (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 28, No. 1, 2011)
- Journal article: An accelerated Cluster-Architecture for the Exascale (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 28, No. 1, 2011). Eicker, N.; Lippert, Th. Clusters dominate high-performance computing (HPC) today. The success of this architecture rests on the fact that it profits from the improvements in mainstream computing well known under the label of Moore's Law. But reaching Exascale within this decade might require endeavors beyond surfing this technology wave. In order to find a possible direction, we review Amdahl's and Gustafson's thoughts on scalability. Based on this analysis we propose an advanced architecture combining a Cluster with a so-called Booster element comprising accelerators interconnected by a high-performance fabric. We argue that this architecture provides significant advantages over today's accelerated Clusters and might pave the way for Clusters into the era of Exascale computing.
- Journal article: Acceleration of Optical Flow Computations on Tightly-Coupled Processor Arrays (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 30, No. 1, 2013). Sousa, Éricles; Tanase, Alexandru; Lari, Vahid; Hannig, Frank; Teich, Jürgen; Paul, Johny; Stechele, Walter; Kröhnert, Manfred; Asfour, Tamim. Optical flow is widely used in many applications on portable mobile devices and in automotive embedded systems to determine the motion of objects in a visual scene. In robotics, it is also used for motion detection, object segmentation, time-to-contact information, focus-of-expansion calculations, robot navigation, and automatic parking. Like many other image processing algorithms, optical flow applies pixel operations repeatedly over whole image frames. It thus provides a high degree of fine-grained parallelism which can be efficiently exploited on massively parallel processor arrays. In this context, we propose to accelerate the computation of complex motion estimation vectors on programmable tightly-coupled processor arrays, which offer high flexibility enabled by coarse-grained reconfiguration capabilities. A further novelty is that the degree of parallelism may be adapted to the number of processors available to the application. Finally, we present an implementation that (a) is 18 times faster than an FPGA-based soft-processor implementation and (b) may be adapted to different QoS requirements, hence being more flexible than a dedicated hardware implementation.
- Journal article: Achieving scalability for job centric monitoring in a distributed infrastructure (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 29, No. 1, 2012). Hilbrich, Marcus; Müller-Pfefferkorn, Ralph. Job-centric monitoring makes it possible to observe jobs on remote computing resources. It may offer visualisation of recorded monitoring data and helps to find faulty or misbehaving jobs. When installations like grids or clouds are observed, monitoring data of many thousands of jobs have to be handled. The challenge for job-centric monitoring infrastructures is to store, search, and access the data collected in such huge installations. We address this challenge with a distributed, layer-based architecture which provides a uniform view of all monitoring data. This paper presents the concept of this infrastructure, called SLAte, and an analysis of its scalability.
- Journal article: An Architecture Framework for Porting Applications to FPGAs (PARS-Mitteilungen: Vol. 31, Nr. 1, 2014). Nowak, Fabian; Bromberger, Michael; Karl, Wolfgang. High-level language converters help create FPGA-based accelerators and allow developers to rapidly arrive at a working prototype. But the generated state machines often do not perform as well as hand-designed control units, and they require much area. Moreover, the deep pipelines they create are not very efficient for small amounts of data. Our approach is an architecture framework of hand-coded building blocks (BBs). A microprogrammable control unit allows programming the BBs to perform computations in a data-flow style. We accelerate applications further by executing independent tasks in parallel on different BBs. Our microprogram implementation of the Conjugate-Gradient method on our data-driven, microprogrammable, task-parallel architecture framework on the Convey HC-1 is competitive with a 24-thread Intel Westmere system. It is 1.2× faster using only one of four available FPGAs, thereby proving its potential for accelerating numerical applications. Moreover, we show that hardware developers can change the BBs and thereby reduce the iteration count of a numerical algorithm like the Conjugate-Gradient method to less than 0.5× thanks to more precise operations inside the BBs, speeding up execution 2.47×.
- Journal article: ARCS 2012 (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 28, No. 1, 2011)
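The Eicker and Lippert abstract above reviews Amdahl's and Gustafson's thoughts on scalability. As background, both laws can be stated in a few lines; the sketch below is an illustration of the two standard formulas (not code from the paper), contrasting them for a hypothetical 5% serial fraction:

```python
def amdahl_speedup(serial_fraction, n):
    """Amdahl's law: fixed problem size split across n processors."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def gustafson_speedup(serial_fraction, n):
    """Gustafson's law: problem size grows with the machine."""
    return n - serial_fraction * (n - 1)

# With 5% serial work on 1024 processors, Amdahl caps speedup
# near 1/0.05 = 20, while Gustafson predicts speedup close to n.
print(f"Amdahl:    {amdahl_speedup(0.05, 1024):.1f}")
print(f"Gustafson: {gustafson_speedup(0.05, 1024):.1f}")
```

The contrast motivates the abstract's argument: under Amdahl's fixed-size view, adding processors quickly stops paying off, whereas under Gustafson's scaled-size view very large machines remain useful, which is the perspective behind scaling clusters toward Exascale.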
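The optical-flow abstract above notes that the algorithm applies pixel operations repeatedly over whole frames, yielding fine-grained parallelism. As a rough illustration (a hypothetical sketch, unrelated to the paper's tightly-coupled processor array implementation), the derivative stage of optical flow computes each pixel's gradients independently of all others, so the pixels could in principle be distributed across an array of processing elements:

```python
def derivatives(frame0, frame1):
    """Forward-difference gradients (Ix, Iy, It) for each interior pixel
    of two consecutive grayscale frames given as nested lists."""
    h, w = len(frame0), len(frame0[0])
    grads = {}
    for y in range(h - 1):
        for x in range(w - 1):
            ix = frame0[y][x + 1] - frame0[y][x]  # horizontal gradient
            iy = frame0[y + 1][x] - frame0[y][x]  # vertical gradient
            it = frame1[y][x] - frame0[y][x]      # temporal gradient
            # Each entry depends only on a pixel's local neighborhood,
            # so every iteration could run on a separate processor.
            grads[(y, x)] = (ix, iy, it)
    return grads

f0 = [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
f1 = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
print(derivatives(f0, f1)[(0, 0)])  # (1, 1, 1)
```

Because no iteration reads another iteration's output, the loop nest is trivially parallel over pixels, which is exactly the kind of structure a massively parallel processor array can exploit.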