
P041 - ARCS 2004 - Organic and Pervasive Computing


Newest publications

1 - 10 of 45
  • Conference paper
    A framework for dependability evaluation of mechatronic units
    (ARCS 2004 – Organic and pervasive computing, 2004) Kochs, Hans-Dieter; Petersen, Jörg
    Mechatronic units are characterized by a complex interaction of functions from mechanics, electronics, communication and computer systems. These different fields of technology, as well as influences from the operating environment, must be reflected in the dependability consideration, for which no comprehensive framework exists so far. In this contribution, a framework for considering the dependability of mechatronic units is proposed, starting from a definition of the term 'dependability of mechatronic units'. Special attention is paid to the influencing factors, including human-machine interfaces and real-world constraints, which have to be thoroughly identified and considered. The following questions are addressed: What is understood by the term dependability of mechatronic units? Which influencing factors have to be considered? How is the dependability assessed? This contribution also aims to initiate a scientific discussion outside the fault tolerance community.
  • Conference paper
    Utilizing fault tolerance for achieving QoS in Ad-hoc networks
    (ARCS 2004 – Organic and pervasive computing, 2004) Trikaliotis, Spiro
    Ad-hoc networks consist of many independent wireless nodes which communicate without the help of an infrastructure. Since this type of network exhibits a dynamic topology, that is, the nodes move very frequently, it is hard to establish quality of service (QoS) in this scenario. This paper presents an approach which accomplishes this. QoS is guaranteed in the sense that the network either can offer the requested parameters, or it informs the source that it can no longer do so. While this guarantee is not so stringent that the network can really uphold it in every case, in most cases a QoS degradation as well as a link break will be announced to the sender beforehand, allowing it and the application to take counter-measures such as putting the system in a safe state before the QoS degrades.
  • Conference paper
    Complementary circuits for on-line detection for 1-out-of-3 codes
    (ARCS 2004 – Organic and pervasive computing, 2004) Morozov, A.; Saposhnikov, V. V.; Saposhnikow, Vl. V.; Gössel, M.
    In this paper a new method for concurrent checking of an arbitrarily given combinational multi-output circuit f by use of a 1-out-of-3 code is presented. The circuit f is completed by an additional multi-output circuit g in such a way that the XOR-sums of the corresponding outputs of f, or of the inverted outputs of f, and g are elements of a 1-out-of-3 code. For efficiently checking the 1-out-of-3 encoded outputs, a new self-checking checker based on a three-rail checker cell is also proposed. Index terms: 1-out-of-3 code, on-line error detection, complementary circuit, totally self-checking 1-out-of-3 code checker.
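A 1-out-of-3 codeword is simply a 3-bit word with exactly one bit set. As a minimal sketch of the checking idea (in Python, modelling the check only, not the authors' gate-level self-checking checker), the XOR-sums of corresponding outputs of f and g can be tested for membership in the code:

```python
# Sketch only: models the membership test an observer performs on the
# XOR-sums of the outputs of circuits f and g; the paper realizes this
# check in hardware with a self-checking three-rail checker cell.

def is_one_out_of_three(bits):
    """Return True iff exactly one of the three bits is set (001, 010, 100)."""
    assert len(bits) == 3
    return sum(bits) == 1

def xor_sums(f_out, g_out):
    """Bitwise XOR of corresponding outputs of circuits f and g."""
    return [a ^ b for a, b in zip(f_out, g_out)]
```

For example, if f emits (1, 0, 1) and g emits (1, 1, 1), the XOR-sums are (0, 1, 0), a valid codeword; a single fault flipping one output line yields a non-codeword and is therefore detected.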
  • Conference paper
    Self-checking carry-select adder with sum-bit duplication
    (ARCS 2004 – Organic and pervasive computing, 2004) Sogomonyan, E. S.; Marienfeld, D.; Ocheretnij, V.; Gössel, M.
    In this paper the first code-disjoint totally self-checking carry-select adder is proposed. The adder blocks are fast ripple adders with a single NAND-gate delay for carry propagation per cell. In every adder block both the sum-bits and the corresponding inverted sum-bits are implemented simultaneously. The parity of the input operands is checked against the XOR-sum of the propagate signals. For 64 bits, area and maximal delay are determined by the SYNOPSYS CAD tool of the EUROCHIP project. Compared to a 64-bit carry-select adder without error detection, the delay of the most significant sum-bit does not increase. Compared to a completely duplicated code-disjoint carry-select adder, 240 XOR-gates are saved.
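The parity check mentioned above rests on the identity that, since each propagate signal is p_i = a_i XOR b_i, the XOR-sum of all propagate signals equals parity(a) XOR parity(b). A small Python sketch of that relation (my own illustration, not the authors' circuit):

```python
# Illustration of the operand-parity check: in a ripple/carry-select adder
# the propagate signal of bit i is p_i = a_i XOR b_i, so the XOR-sum of all
# propagate signals must equal parity(a) XOR parity(b). A fault flipping an
# odd number of propagate signals violates the relation and is detected.

def parity(x, width=64):
    """XOR of all bits of x, restricted to `width` bits."""
    return bin(x & ((1 << width) - 1)).count("1") & 1

def propagate_signals(a, b):
    """Fault-free propagate vector: p_i = a_i XOR b_i."""
    return a ^ b

def parity_check_ok(a, b, p, width=64):
    """Compare operand parity against the XOR-sum of the propagate signals p."""
    return (parity(a, width) ^ parity(b, width)) == parity(p, width)
```

With a fault-free propagate vector the check always passes; corrupting a single propagate bit makes it fail.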
  • Conference paper
    A flexible slotting scheme for TDMA-based protocols
    (ARCS 2004 – Organic and pervasive computing, 2004) Lisner, Jens C.
    Popular fault-tolerant TDMA-based protocols like TT-CAN ([Fu00] and [Mu02]), TTP ([KG93]) and FlexRay ([Fl02]) use a static slotting scheme for bus arbitration. This paper describes a possible solution for more flexible handling of the slotting, by providing extra time of variable length in the cycle for additional slots. This extra time is only provided if it is requested by the controllers in the network. Another possibility is to configure extra slots statically (to arrive at a fixed cycle length) and assign the slots dynamically to different controllers. A new node architecture is introduced to improve protection of the bus in case of a faulty controller. An agreement algorithm for determining a new schedule every cycle in the distributed network is presented. The system is capable of tolerating double faults (one controller and one channel).
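The on-demand extra-slot idea can be sketched as a cycle builder that starts from the static schedule and appends variable-length extra slots only for controllers that requested them (a simplified illustration; the names and data layout are my own, not the protocol specification):

```python
# Simplified sketch of flexible TDMA slotting: the cycle always contains the
# static slots, and extra time is appended only when a controller requested it.

def build_cycle(static_slots, requests):
    """static_slots: list of (controller_id, length) tuples, always present.
    requests: dict mapping controller_id -> requested extra length (0 = none).
    Returns the slot list for the next cycle."""
    cycle = list(static_slots)
    for controller, length in sorted(requests.items()):
        if length > 0:                      # extra time only if requested
            cycle.append((controller, length))
    return cycle

def cycle_length(cycle):
    """Total cycle duration; variable when extra slots are granted."""
    return sum(length for _, length in cycle)
```

In the statically configured variant described in the abstract, the extra slots would instead be fixed in `static_slots` and only their assignment to controllers would change per cycle.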
  • Edited book
  • Conference paper
    On the cache access behavior of OpenMP applications
    (ARCS 2004 – Organic and pervasive computing, 2004) Tao, Jie; Karl, Wolfgang
    The widening gap between memory and processor speed results in increasing requirements to improve cache utilization. This issue is especially critical for OpenMP execution, which usually exploits fine-grained parallelism. The work presented in this paper studies the cache behavior of OpenMP applications in order to detect potential optimizations with respect to cache locality. This study is based on a simulation environment that models the parallel execution of OpenMP programs and provides comprehensive information about the runtime data accesses. This performance data enables a detailed analysis and an easy understanding of the cache operations performed during the execution.
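The kind of per-access hit/miss data such a simulation produces can be illustrated with a toy cache model (my own sketch, far simpler than the paper's simulation environment):

```python
# Toy direct-mapped cache model: replays a memory access trace and collects
# hit/miss statistics, the raw material for cache-locality analysis.

class DirectMappedCache:
    def __init__(self, num_lines=64, line_size=32):
        self.num_lines = num_lines      # number of cache lines
        self.line_size = line_size      # bytes per line
        self.tags = [None] * num_lines  # stored tag per line, None = empty
        self.hits = self.misses = 0

    def access(self, addr):
        """Record one memory access at byte address `addr`."""
        line = addr // self.line_size
        index = line % self.num_lines   # which cache line the address maps to
        tag = line // self.num_lines
        if self.tags[index] == tag:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[index] = tag      # fill the line on a miss
```

Replaying a trace with good spatial locality (neighboring addresses) yields mostly hits; strided or scattered accesses show up directly as a high miss count.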
  • Conference paper
    A comparison of parallel programming models of network processors
    (ARCS 2004 – Organic and pervasive computing, 2004) Albrecht, Carsten; Hagenau, Rainer; Maehle, Erik; Döring, Andreas; Herkersdorf, Andreas
    Today's network processors utilize parallel processing in order to cope with the traffic growth and wire speed of current and future network technologies. In this paper, we study two important parallel programming models for network processors: run-to-completion and pipelining. In particular, the packet flow of a standard network application, IPv4 forwarding, through the two examined network processors, IBM PowerNP NP4GS3 and Intel IXP1200, is reviewed and characterized with respect to their programming models. Based on a benchmark for PC-cluster SANs, their application throughput and latency for Gigabit Ethernet are investigated and compared to a commercial, ASIC-based switch. It is shown that in this scenario network processors can compete with hard-wired solutions.
  • Conference paper
    Modelling of parameters in supercomputer workloads
    (ARCS 2004 – Organic and pervasive computing, 2004) Song, Baiyi; Ernemann, Carsten; Yahyapour, Ramin
    Evaluation methods for parallel computers often require the availability of relevant workload information. To this end, workload traces recorded on real installations are frequently used. Alternatively, workload models are applied. However, often not all necessary information is available for a specific workload. In this paper, a model is presented to recover an estimated job execution time when this information is not available. The quality of the modelled estimated runtime is evaluated by comparing different workload traces for which this information is available.
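To make the recovery task concrete without reproducing the paper's model, one naive baseline for filling in a missing runtime estimate is to average the recorded runtimes of similar jobs in the trace (here "similar" simply means "same user", which is my own simplification, not the authors' criterion):

```python
# Naive illustration only: a missing estimated runtime for a job is recovered
# as the mean runtime of similar jobs in the trace (same user), falling back
# to the global mean when the user has no recorded jobs.

def estimate_runtime(job, trace):
    """job: dict with key 'user'; trace: list of dicts with 'user', 'runtime'."""
    similar = [j["runtime"] for j in trace if j["user"] == job["user"]]
    if not similar:
        similar = [j["runtime"] for j in trace]  # fall back to the global mean
    return sum(similar) / len(similar)
```

A real model would condition on more attributes (requested processors, queue, submit time), which is exactly where the modelling effort of the paper lies.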
  • Conference paper
    Distributed and parallel data mining on the grid
    (ARCS 2004 – Organic and pervasive computing, 2004) Li, Tianchao; Bollinger, Toni
    This paper presents the initial design and implementation of a Grid-based distributed and parallel data mining system. The Grid system, namely the Business Intelligence Grid or BIGrid, is based on heterogeneous Grid server configurations and a service-oriented Grid architecture. The system follows a layered design whose infrastructure is generally divided into three tiers: a Grid tier, a service tier and a client/portal tier. Issues of design and implementation, including brokering, task scheduling, adaptive mining script preparation and parallelization, are discussed. The design and implementation of BIGrid help identify the specific requirements of applying Grid-based data mining in the business realm, thus paving the way for the future design and implementation of a truly generic Grid-based data mining system.