Listing by author "Fey, Dietmar"
1 - 10 of 60
- Conference paper: Ant colony optimization for dynamic traveling salesman problems (ARCS 2004 – Organic and pervasive computing, 2004). Silva, Carlos A.; Runkler, Thomas A. This paper addresses the optimization of a dynamic Traveling Salesman Problem using the Ant Colony Optimization algorithm. Ants are social insects with limited skills that live in colonies able to solve complex problems. The intelligence of the global society arises from self-organization mechanisms based on the indirect communication between individuals through pheromones. The routing problem presented here is a typical case that requires a self-organizing type of algorithm in order to cope with the problem dynamics. The simulation results show how ant colony optimization is able to solve the different possible routing cases.
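The pheromone mechanism summarized above can be illustrated with a small, self-contained sketch. This is not the authors' implementation; the parameter names (`alpha`, `beta`, `rho`) and the simple evaporation/deposit rule are generic ACO conventions rather than values from the paper.

```python
import random

def aco_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.5):
    """Toy Ant Colony Optimization for a (static snapshot of a) TSP instance.

    dist: symmetric matrix dist[i][j] with the current travel costs; in the
    dynamic setting of the paper this matrix changes over time, and the
    pheromone trails carry information across those changes.
    """
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best_tour, best_len = None, float("inf")

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)
            tour, visited = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in visited]
                # choice probability ~ pheromone^alpha * (1/distance)^beta
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
                j = random.choices(cand, weights=w)[0]
                tour.append(j)
                visited.add(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporation plus deposit: indirect communication through pheromones
        tau = [[(1 - rho) * tau[i][j] for j in range(n)] for i in range(n)]
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len
```

In the dynamic case described in the abstract, the same loop would simply be re-run whenever `dist` changes; keeping the pheromone matrix `tau` across changes is what lets the colony adapt instead of restarting from scratch.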
- Conference paper: An approach to molecular electronics by self organization of molecular units (ARCS 2004 – Organic and pervasive computing, 2004). Fritzsche, Wolfgang
- Edited book
- Conference paper: Aufbau- und Strukturkonzepte einer adaptiven multigranularen rekonfigurierbaren Hardwarearchitektur (ARCS 2004 – Organic and pervasive computing, 2004). Thomas, Alexander; Becker, Jürgen. Modern application scenarios from the fields of multimedia and mobile communication call for ever more powerful data processing architectures with immense computing power, which cannot readily be delivered by current approaches such as microprocessors and DSPs. This contribution describes a new architecture concept from the field of reconfigurable architectures [Kr01][Be01][XP02][XI01][AL01][TR02][AT01][QU01][Go01][Ba01][Zh01], developed within the DFG priority programme 1148 "Rekonfigurierbare Rechensysteme" and aimed at investigating and advancing the existing array-based approaches. The experience and results gained are to feed into a new adaptive, dynamically reconfigurable architecture, which is the goal of this project.
- Conference paper: A comparison of parallel programming models of network processors (ARCS 2004 – Organic and pervasive computing, 2004). Albrecht, Carsten; Hagenau, Rainer; Maehle, Erik; Döring, Andreas; Herkersdorf, Andreas. Today's network processors utilize parallel processing in order to cope with the traffic growth and wire speed of current and future network technologies. In this paper, we study two important parallel programming models for network processors: run-to-completion and pipelining. In particular, the packet flow of a standard network application, IPv4 forwarding, through the two examined network processors, IBM PowerNP NP4GS3 and Intel IXP1200, is reviewed and characterized with respect to their programming models. Based on a benchmark for PC-cluster SANs, their application throughput and latency for Gigabit Ethernet are investigated and compared to a commercial, ASIC-based switch. It is shown that in this scenario network processors can compete with hard-wired solutions.
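The two programming models contrasted in this abstract, run-to-completion and pipelining, can be sketched with ordinary queues and threads. The stage names and the `process` stub below are purely illustrative and do not reflect the actual NP4GS3 or IXP1200 data paths.

```python
from queue import Queue
from threading import Thread

# Hypothetical stages of an IPv4 forwarding path (illustrative only).
STAGES = ["parse_header", "route_lookup", "update_ttl_checksum", "enqueue_egress"]

def process(stage, packet):
    packet.setdefault("trace", []).append(stage)   # stand-in for real work
    return packet

def run_to_completion(packets, n_workers=4):
    """Each worker takes a whole packet and runs every stage on it."""
    inq, out = Queue(), []
    for p in packets:
        inq.put(p)
    def worker():
        while True:
            p = inq.get()
            if p is None:
                return
            for s in STAGES:
                p = process(s, p)
            out.append(p)
    threads = [Thread(target=worker) for _ in range(n_workers)]
    for t in threads: t.start()
    for _ in threads: inq.put(None)      # one stop sentinel per worker
    for t in threads: t.join()
    return out

def pipelined(packets):
    """One worker per stage; packets flow through a chain of queues."""
    queues = [Queue() for _ in range(len(STAGES) + 1)]
    def stage_worker(i):
        while True:
            p = queues[i].get()
            if p is None:
                queues[i + 1].put(None)  # propagate the stop sentinel
                return
            queues[i + 1].put(process(STAGES[i], p))
    threads = [Thread(target=stage_worker, args=(i,)) for i in range(len(STAGES))]
    for t in threads: t.start()
    for p in packets:
        queues[0].put(p)
    queues[0].put(None)
    for t in threads: t.join()
    out = []
    while True:
        p = queues[-1].get()
        if p is None:
            break
        out.append(p)
    return out
```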
- Conference paper: Complementary circuits for on-line detection for 1-out-of-3 codes (ARCS 2004 – Organic and pervasive computing, 2004). Morozov, A.; Saposhnikov, V. V.; Saposhnikow, Vl. V.; Gössel, M. In this paper a new method for concurrent checking of an arbitrarily given combinational multi-output circuit f by use of a 1-out-of-3 code is presented. The circuit f is complemented by an additional multi-output circuit g in such a way that the XOR-sums of the corresponding outputs of f, or of the inverted outputs of f, and g are elements of a 1-out-of-3 code. For efficient checking of the 1-out-of-3 encoded outputs, a new self-checking checker based on a three-rail checker cell is also proposed. Index terms: 1-out-of-3 code, on-line error detection, complementary circuit, totally self-checking 1-out-of-3 code checker.
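A possible reading of the checking scheme, reduced to a software sketch: assuming the outputs of f and g are partitioned into three groups whose XOR parities form the check word, an on-line check only has to verify that exactly one of the three resulting bits is set. The grouping and any output inversions are design choices of the paper's synthesis procedure and are not reproduced here.

```python
from functools import reduce
from operator import xor

def is_1_out_of_3(bits):
    """A 1-out-of-3 codeword has exactly one bit set among three."""
    return len(bits) == 3 and sum(bits) == 1

def online_check(f_out, g_out, groups):
    """Sketch of an on-line check in the spirit of the abstract.

    f_out, g_out: output bit vectors of the functional circuit f and the
    complementary circuit g for the same input.
    groups: an assumed partition of the output positions into three groups;
    the XOR parity of f XOR g over each group yields one check bit.
    """
    check_bits = []
    for grp in groups:
        parity = reduce(xor, (f_out[i] ^ g_out[i] for i in grp), 0)
        check_bits.append(parity)
    return is_1_out_of_3(check_bits)
```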
- Conference paper: Deriving dependability measures of measurements recorded in a matrix (ARCS 2004 – Organic and pervasive computing, 2004). Tschäche, Oliver. Dependability benchmarking is meant to measure system characteristics like availability, reliability, data integrity, etc. Today's systems operate at high levels of these characteristics. Evaluating them demands injecting faults that force fault-tolerant mechanisms to exercise their tasks. Observing the response of the system leads to measurements assessing the quality of these mechanisms. Our paper's focus is not on how to create a special dependability benchmark but on how to deduce significant dependability characteristics from fault-injection-based measurements. We disclose which information is needed for a general dependability benchmark, by whom it should be supplied and, finally, how to derive assessments of dependability metrics from this information. Our method is universally applicable to all fault-injection-based dependability benchmarking methods. Using one method for the presentation of dependability has several advantages: for example, benchmarks become comparable to each other, and benchmarkers learn more quickly how to interpret similarly looking results.
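As a rough illustration of what deriving dependability measures from recorded fault-injection outcomes can look like, the following sketch tabulates coverage-style ratios from a list of outcome rows. The outcome categories and metric names are made up for the example; the paper defines its own general scheme.

```python
from collections import Counter

def derive_measures(outcomes):
    """outcomes: one row per injected fault, e.g.
    {"detected": True, "recovered": True, "downtime_s": 0.4}
    (hypothetical fields, not the paper's matrix layout)."""
    n = len(outcomes)
    counts = Counter()
    total_downtime = 0.0
    for row in outcomes:
        counts["detected"] += row["detected"]
        counts["recovered"] += row["recovered"]
        total_downtime += row["downtime_s"]
    return {
        "fault_coverage": counts["detected"] / n,    # detected / injected
        "recovery_ratio": counts["recovered"] / n,   # recovered / injected
        "mean_downtime_s": total_downtime / n,
    }

measures = derive_measures([
    {"detected": True,  "recovered": True,  "downtime_s": 0.4},
    {"detected": True,  "recovered": False, "downtime_s": 12.0},
    {"detected": False, "recovered": False, "downtime_s": 30.0},
])
print(measures)
```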
- Conference paper: DIGORGAO - A digital problem solution concept (ARCS 2004 – Organic and pervasive computing, 2004). Kiesewetter, Helmut. A concept for the solution of problems on the digital processor has been developed, which is based on a four-level approach to attacking a problem. It concerns the abstract, concrete, active and heuristic levels of problem solution. Dialectic pairs of notions have to be defined in order to specify separation and connection relations at each level and from level to level. The precise reference structures of DIGORGAO support the potential for self-organization. A digital organ virtuscope has been implemented on the PC according to the DIGORGAO concept. It offers full functionality for the solution of computer vision problems.
- Conference paper: Distributed and parallel data mining on the grid (ARCS 2004 – Organic and pervasive computing, 2004). Li, Tianchao; Bollinger, Toni. This paper presents the initial design and implementation of a Grid-based distributed and parallel data mining system. The Grid system, named the Business Intelligence Grid or BIGrid, is based on heterogeneous Grid server configurations and a service-oriented Grid architecture. The system follows a layered design whose infrastructure is divided into three tiers: a Grid tier, a service tier and a client/portal tier. Issues of design and implementation, including brokering, task scheduling, adaptive mining script preparation and parallelization, are discussed. The design and implementation of BIGrid help identify the specific requirements of applying Grid-based data mining in the business realm and thus pave the way for the future design and implementation of a truly generic Grid-based data mining system.
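The parallelization aspect mentioned in the abstract can be illustrated with a toy broker that partitions the data, mines each chunk in parallel and merges the partial results. The function names (`mine_chunk`, `parallel_mine`) are illustrative and are not BIGrid's actual service interfaces.

```python
from concurrent.futures import ProcessPoolExecutor
from collections import Counter

def mine_chunk(chunk):
    """Stand-in mining task: count item frequencies in one data partition."""
    counts = Counter()
    for transaction in chunk:
        counts.update(transaction)
    return counts

def parallel_mine(transactions, n_workers=3):
    # round-robin partitioning, one chunk per worker
    chunks = [transactions[i::n_workers] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partial = pool.map(mine_chunk, chunks)
    merged = Counter()
    for c in partial:
        merged.update(c)        # merge the partial models
    return merged

if __name__ == "__main__":
    data = [["bread", "milk"], ["bread", "beer"], ["milk", "beer", "bread"]]
    print(parallel_mine(data).most_common(3))
```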
- Conference paper: A distributed SAT solver for microcontroller (ARCS 2004 – Organic and pervasive computing, 2004). Schubert, Tobias; Becker, Bernd. In this paper we present a parallel prover for the propositional satisfiability problem called PICHAFF. The algorithm is an adaptation of the state-of-the-art solver CHAFF, optimised for our scalable, dynamically reconfigurable multiprocessor system based on Microchip PIC microcontrollers. As is usual in modern SAT solvers, it includes lazy clause evaluation, conflict-driven learning, non-chronological backtracking, and clause deletion. A simple but efficient technique called Dynamic Search Space Partitioning is used to divide the search space into disjoint portions to be treated in parallel by up to 9 processors. Besides explaining how such a complex algorithm can be implemented on simple microcontrollers, we also give experimental results demonstrating the potential of the implemented methods.
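The partitioning idea behind PICHAFF's parallelization can be sketched in a simplified, static form: fix a small set of split variables, generate one subproblem per assignment of those variables, and hand each subproblem to a different processor. The paper's Dynamic Search Space Partitioning re-splits at run time when processors become idle; only the static splitting step is shown here, and the CNF representation (lists of signed integers) is a common convention, not the paper's.

```python
from itertools import product

def simplify(clauses, assignment):
    """Apply a partial assignment {var: bool}; return None on conflict."""
    out = []
    for clause in clauses:
        new = []
        satisfied = False
        for lit in clause:
            var, val = abs(lit), lit > 0
            if var in assignment:
                if assignment[var] == val:
                    satisfied = True
                    break
            else:
                new.append(lit)
        if satisfied:
            continue
        if not new:
            return None          # empty clause: conflict under this cube
        out.append(new)
    return out

def partition_search_space(clauses, split_vars):
    """Split the search space into disjoint subproblems by fixing split_vars.

    With k split variables this yields up to 2**k cubes; each cube (a partial
    assignment) plus the simplified formula can be handed to one processor.
    """
    jobs = []
    for values in product([False, True], repeat=len(split_vars)):
        cube = dict(zip(split_vars, values))
        sub = simplify(clauses, cube)
        if sub is not None:          # prune cubes already refuted
            jobs.append((cube, sub))
    return jobs

# Example: (x1 or x2) and (not x1 or x3), split on x1 -> two disjoint subproblems
for cube, sub in partition_search_space([[1, 2], [-1, 3]], split_vars=[1]):
    print(cube, sub)
```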