Listing by keyword "Shared Memory"
Results 1–4 of 4
- Journal article: Efficient Synchronization Techniques in a Decentralized Memory Management System Enabling Shared Memory (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 28, No. 1, 2011). Mattes, Oliver; Schindewolf, Martin; Sedler, Roland; Buchty, Rainer; Karl, Wolfgang.
  The rising integration level allows more logic to be combined on a single chip. This is exploited in multiprocessor systems-on-chip (MPSoCs) and in manycore research prototypes such as the Intel SCC. These platforms offer access to shared memory through a limited number of controllers, which can lead to congestion. For memory to scale with the core count, memory management must become more flexible and distributed in nature. In the near future, decentralized systems with multiple self-managing memory components will arise. The problem tackled in this paper is how to realize synchronization mechanisms for concurrent access to shared memory in such a decentralized memory management system. Furthermore, improvements to the distributed synchronization mechanism are integrated and evaluated. To speed up synchronization, additional logic in the form of a lock queue is added. To reduce network traffic, this is combined with an extension of the synchronization protocol by exponential backoff. The evaluation discusses and explains side effects of combining both techniques.
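  The exponential backoff described in the abstract can be sketched in software. The sketch below is illustrative only: it uses a local test-and-set lock in place of the paper's distributed, network-based synchronization protocol, and the delay parameters are assumptions, not values from the paper.

  ```python
  import random
  import threading
  import time

  class BackoffLock:
      """Test-and-set spin lock with exponential backoff (illustrative sketch;
      the paper's mechanism is distributed across memory modules, not local)."""

      def __init__(self, base_delay=1e-4, max_delay=1e-2):
          self._flag = threading.Lock()  # stands in for the remote lock word
          self.base_delay = base_delay   # assumed parameter, not from the paper
          self.max_delay = max_delay     # assumed parameter, not from the paper

      def acquire(self):
          delay = self.base_delay
          while not self._flag.acquire(blocking=False):  # lock request failed
              # back off before retrying, reducing repeated requests
              # (network traffic, in the distributed setting)
              time.sleep(random.uniform(0, delay))
              delay = min(delay * 2, self.max_delay)  # exponential growth, capped

      def release(self):
          self._flag.release()

  # usage: several workers increment a shared counter under the lock
  counter = 0

  def worker(lock, n):
      global counter
      for _ in range(n):
          lock.acquire()
          counter += 1
          lock.release()

  lock = BackoffLock()
  threads = [threading.Thread(target=worker, args=(lock, 1000)) for _ in range(4)]
  for t in threads:
      t.start()
  for t in threads:
      t.join()
  print(counter)  # 4000
  ```

  The randomized sleep spreads out retries from competing requesters; the paper additionally uses a lock queue in hardware, which this local sketch does not model.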
- Journal article: A Hybrid Parallelization Approach for Cloud-enabled Metabolic Flux Analysis Simulation Workflows (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 28, No. 1, 2011). Dalman, Tolga; Weitzel, Michael; Freisleben, Bernd; Wiechert, Wolfgang; Nöh, Katharina.
- Text document: ReProVide: Towards Utilizing Heterogeneous Partially Reconfigurable Architectures for Near-Memory Data Processing (BTW 2019 – Workshopband, 2019). Becher, Andreas; Herrmann, Achim; Wildermann, Stefan; Teich, Jürgen.
  Reconfigurable hardware such as field-programmable gate arrays (FPGAs) is widely used for data processing in databases. Most of the related work focuses on accelerating one or a small set of specific operations such as sort, join, or regular expression matching. A drawback of such approaches is often the assumed static accelerator hardware architecture: rather than adapting the hardware to fit the query, the query plan has to be adapted to fit the hardware. Moreover, operators or data types that are not supported by the accelerator have to be processed in software. As a remedy, approaches exploiting the dynamic partial reconfigurability of FPGAs have been proposed that are able to adapt the datapath at runtime. However, on modern FPGAs this introduces new challenges due to the heterogeneity of the available resources: not only the execution resources may be heterogeneous but also the memory resources. This work focuses on the architectural aspects of database (co-)processing on heterogeneous FPGA-based PSoC (programmable system-on-chip) architectures comprising processors, specialized hardware components, multiple memory types, and dynamically partially reconfigurable areas. We present an approach called ReProVide to support such (co-)processing. In particular, we introduce a model that formalizes the challenging task of placing operators and allocating buffers on such heterogeneous hardware, and we describe the difficulties of finding good placements. Furthermore, a detailed insight into the different memory types and their peculiarities is given in order to exploit the strengths of heterogeneous memory architectures. Here we also highlight the implications of heterogeneous memories for the problem of query placement.
- Journal article: Self-organized Invasive Parallel Optimization with Self-repairing Mechanism (PARS: Parallel-Algorithmen, -Rechnerstrukturen und -Systemsoftware: Vol. 28, No. 1, 2011). Mostaghim, Sanaz; Pfeiffer, Friederike; Schmeck, Hartmut.
  Parallelizing optimization algorithms is very beneficial when the function evaluations of an optimization problem are time-consuming. However, parallelization becomes complicated when dealing with a large number of parallel resources. In this paper, we present a framework called Self-organized Invasive Parallel Optimization (SIPO) in which the resources organize themselves: the optimization starts with a small number of resources, which decide on demand how many further resources are required. This means that resources are added stepwise or eventually released from the platform. We study an undesired effect in such a self-organized system and propose a self-repairing mechanism called Recovering-SIPO. Both frameworks are tested on a series of multi-objective optimization problems.
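  The on-demand growth described in the abstract (start with few resources and add more stepwise) can be sketched as follows. All names (`should_grow`, `evaluate`) and the growth heuristic are illustrative assumptions, not the SIPO framework itself.

  ```python
  from concurrent.futures import ThreadPoolExecutor

  def evaluate(x):
      # stand-in for an expensive objective-function evaluation
      return (x - 3) ** 2

  def should_grow(pending, workers):
      # assumed heuristic: grow while there is more queued work than workers
      return pending > workers

  def run(candidates, max_workers=8):
      workers = 1  # start with a small number of resources
      results = {}
      while candidates:
          if should_grow(len(candidates), workers) and workers < max_workers:
              workers += 1  # "invade" one more resource on demand
          batch, candidates = candidates[:workers], candidates[workers:]
          with ThreadPoolExecutor(max_workers=workers) as pool:
              for x, fx in zip(batch, pool.map(evaluate, batch)):
                  results[x] = fx
      return results, workers

  res, final_workers = run(list(range(10)))
  best = min(res, key=res.get)
  print(best)  # 3
  ```

  In this toy run the pool grows from one worker to several as long as work is queued, then stops growing; the paper's framework additionally releases resources and, in Recovering-SIPO, repairs undesired effects of the self-organization, which this sketch does not model.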