Listing by keyword "reliability"
1 - 5 of 5
- Journal article: Analyzing Code Corpora to Improve the Correctness and Reliability of Programs (Softwaretechnik-Trends Band 42, Heft 2, 2022) Patra, Jibesh. The goal of the dissertation summarized here is to use program analysis and novel learning-based techniques to alleviate some of the challenges developers face while ensuring the correctness and reliability of programs. We focus on dynamically typed languages such as JavaScript and Python because of their popularity and present six approaches that leverage the analysis of code corpora to help solve software engineering problems. We use static analysis to generate new programs, to seed bugs in programs, and to obtain data for training neural models. We present an effective technique, the Generalized Tree Reduction (GTR) algorithm, to reduce arbitrary test inputs that can be represented as a tree, such as program code, PDF files, and XML documents. The efficiency of input reduction is increased by learning transformations from a corpus of example data. (A minimal tree-reduction sketch appears after this list.)
- Conference paper: Anomalies in measuring speed and other dynamic properties with touchscreens and tablets (BIOSIG 2019 - Proceedings of the 18th International Conference of the Biometrics Special Interest Group, 2019) Griechisch, Erika; Ward, Jean Renard; Hanczár, Gergely. Touchscreens and tablets are often used in studies and applications to capture high-resolution drawing, handwriting, or signatures. Many studies analyse properties such as peaks or changes in the time derivatives of the coordinates, e.g. the velocity, angular velocity, acceleration, or jerk of the movements. These are substantial features for analysing drawings, analysing or recognizing handwriting, examining the fluency of handwriting, or verifying signatures. The reliability of such a study strongly depends on the fidelity of the acquired data. We tested several touchscreens and tablets that are widely used in research studies, focusing on the resolution and accuracy of the coordinates and the uniformity of sampling. We found that the vendors' performance specifications (to the extent the vendors give meaningful specifications) may seriously deviate from reality. Even if some of the raw data may look satisfactory at first sight, our examination uncovered several potentially significant misbehaviours, and instances in which the vendors' specifications are, at best, misleading and incompletely informative. Some authors mention that the reliability of tablet data is unclear [Ha13, Fr05], but researchers may underestimate to what extent it could influence their results. This paper uncovers some aspects of the unreliability of the data and emphasizes the importance of understanding and addressing (or at least knowing) the revealed problems prior to any analysis. (A small sampling-uniformity check is sketched after this list.)
- Journal article: Evaluating Architectural Safeguards for Uncertain AI Black-Box Components (Softwaretechnik-Trends Band 44, Heft 2, 2024) Scheerer, Max. There have been enormous achievements in the field of Artificial Intelligence (AI), which have attracted a lot of attention. Their unverifiable nature, however, makes AI components inherently unreliable. For example, there are various reports of incidents in which incorrect predictions of AI components led to serious system malfunctions (some even ended fatally). As a result, various architectural approaches (referred to as architectural safeguards) have been developed to deal with the unreliable and uncertain nature of AI. Software engineers now face the challenge of selecting the architectural safeguard that best satisfies the non-functional requirements (e.g. reliability). However, it is crucial to resolve such design decisions as early as possible, both to avoid changes after the system has been deployed (and thus potentially high costs) and to meet the rigorous quality requirements of the safety-critical systems in which AI is increasingly used. This dissertation presents a model-based approach that supports software engineers in the development of AI-enabled systems by enabling the evaluation of architectural safeguards. More specifically, an approach for reliability prediction of AI-enabled systems (based on established model-based techniques) is presented. Moreover, the approach is generalised to architectural safeguards with self-adaptive capabilities, i.e. self-adaptive systems. The approach has been validated on four case studies. The results show that the approach not only makes it possible to analyse the impact of architectural safeguards on the overall reliability of an AI-enabled system, but also supports software engineers in their decision-making. (A toy reliability calculation for a safeguarded AI component follows the list.)
- Journal article: Fault-tolerant data management in the Gaston peer-to-peer file system (Wirtschaftsinformatik: Vol. 45, No. 3, 2003) Dynda, Vladimír; Rydlo, Pavel. Gaston is a peer-to-peer large-scale file system designed to provide a fault-tolerant and highly available file service for a virtually unlimited number of users. Data management in Gaston disseminates and stores replicas of files on multiple machines to achieve the requested level of data availability and uses a dynamic tree-topology structure to connect the members of a replication schema. We present generic algorithms for replication-schema creation and maintenance according to file users' requirements and autonomous constraints set on individual nodes. We also show the specific data-object structure as well as mechanisms for secure and efficient update propagation among replicas with data-consistency control. Finally, we introduce a scalable and efficient technique that improves the fault tolerance of the tree-topology structure connecting the replicas. (A minimal replica-tree propagation sketch appears after the list.)
- Conference paper: Towards Testing the Performance Influence of Hypervisor Hypercall Interface Behavior (Softwaretechnik-Trends Band 39, Heft 4, 2019) Beierlieb, Lukas; Iffländer, Lukas; Kounev, Samuel; Milenkoski, Aleksandar. With the continuing rise of cloud technology, hypervisors play a vital role in the performance and reliability of current services. Hypervisors offer so-called hypercall interfaces for communication with the hosted virtual machines. These interfaces require thorough robustness to assure performance, security, and reliability. Existing research focuses on finding hypercall-related vulnerabilities. In this work, we discuss open challenges regarding hypercall interfaces. To address these challenges, we propose an extensive framework architecture to perform robustness testing on hypercall interfaces. The framework supports test campaigns and the modeling of hypercall interfaces. (A hypothetical robustness-test harness is sketched below.)
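To make the tree-based input reduction mentioned in the first entry more concrete, here is a minimal sketch of greedy reduction over a tree-shaped test input: each subtree is repeatedly replaced by one of its children while an oracle confirms the failure still reproduces. The `Node` class, the single "lift a child" transformation, and the `contains_bug` oracle are illustrative assumptions, not the GTR implementation from the dissertation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)

def reduce_tree(root: Node, still_fails: Callable[[Node], bool]) -> Node:
    """Greedily replace subtrees by one of their children while the reduced
    tree still triggers the failure of interest (the oracle)."""
    changed = True
    while changed:
        changed = False
        stack = [root]
        while stack:
            node = stack.pop()
            for i, child in enumerate(node.children):
                replaced = False
                for candidate in child.children:
                    original = node.children[i]
                    node.children[i] = candidate      # try lifting a grandchild
                    if still_fails(root):
                        changed, replaced = True, True
                        break
                    node.children[i] = original       # revert otherwise
                if not replaced:
                    stack.append(child)
    return root

def contains_bug(node: Node) -> bool:
    """Toy oracle: the input 'fails' as long as a node labeled 'bug' remains."""
    return node.label == "bug" or any(contains_bug(c) for c in node.children)

tree = Node("program", [Node("stmt", [Node("ok")]), Node("block", [Node("bug")])])
minimal = reduce_tree(tree, contains_bug)   # keeps only what is needed to fail
```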
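The touchscreen study above hinges on sampling uniformity and on time derivatives such as velocity. The following small check, under assumed column meanings and an assumed jitter tolerance, illustrates the kind of fidelity inspection a researcher could run on raw (t, x, y) samples before analysing speed-based features; it is not the authors' measurement procedure.

```python
import numpy as np

def sampling_report(t, x, y, jitter_tolerance=0.25):
    """t in seconds, x/y in device units; flags non-uniform sampling intervals."""
    t, x, y = map(np.asarray, (t, x, y))
    dt = np.diff(t)
    vx, vy = np.gradient(x, t), np.gradient(y, t)   # handles non-uniform spacing
    speed = np.hypot(vx, vy)
    median_dt = np.median(dt)
    irregular = np.abs(dt - median_dt) > jitter_tolerance * median_dt
    return {
        "median_sampling_interval_s": float(median_dt),
        "irregular_interval_fraction": float(irregular.mean()),
        "max_speed": float(speed.max()),
    }

# Example with a deliberately skipped sample between the 3rd and 4th points:
t = [0.00, 0.01, 0.02, 0.04, 0.05]
x = [0, 1, 2, 4, 5]
y = [0, 0, 1, 1, 2]
print(sampling_report(t, x, y))
```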
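As a toy illustration of why an architectural safeguard affects the reliability of an AI-enabled system (third entry), the calculation below combines assumed probabilities for a wrong AI prediction, for the safeguard detecting it, and for a fallback failing. This is a back-of-the-envelope sketch, not the model-based prediction approach of the dissertation.

```python
def failure_probability(p_ai_wrong: float,
                        p_safeguard_detects: float,
                        p_fallback_fails: float) -> float:
    """System fails if the AI is wrong and either the safeguard misses it,
    or the error is detected but the fallback also fails."""
    missed = p_ai_wrong * (1.0 - p_safeguard_detects)
    caught_but_fallback_fails = p_ai_wrong * p_safeguard_detects * p_fallback_fails
    return missed + caught_but_fallback_fails

# Without a safeguard (detection probability 0):
print(failure_probability(0.05, 0.0, 1.0))   # 0.05
# With a safeguard catching 90% of wrong outputs and a fallback failing 10% of the time:
print(failure_probability(0.05, 0.9, 0.1))   # 0.0095
```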
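For the Gaston entry, here is a minimal sketch of a tree of replicas with version-based update propagation down the topology. Class and method names are assumptions for illustration only; the consistency control, security, and fault-tolerance mechanisms described in the paper are omitted.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Replica:
    node_id: str
    version: int = 0
    data: Optional[bytes] = None
    children: List["Replica"] = field(default_factory=list)

    def apply_update(self, data: bytes, version: int) -> None:
        """Accept an update if it is newer, then push it to all children."""
        if version <= self.version:
            return                      # stale update, ignore
        self.version, self.data = version, data
        for child in self.children:
            child.apply_update(data, version)

# Three replicas connected as root -> {a, b}; an update at the root reaches all.
root = Replica("root", children=[Replica("a"), Replica("b")])
root.apply_update(b"file contents v1", version=1)
assert all(r.version == 1 for r in [root, *root.children])
```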
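Finally, a hypothetical harness in the spirit of the hypercall robustness-testing framework from the last entry: issue a mix of valid and invalid parameters against an interface, record outcomes and timings. `invoke_hypercall` is a stand-in stub invented for this sketch; a real campaign would issue hypercalls from inside a guest VM against the hypervisor under test.

```python
import random
import time

def invoke_hypercall(call_id: int, arg: int) -> int:
    """Stub: pretend that call_id 1 misbehaves on negative arguments."""
    if call_id == 1 and arg < 0:
        raise RuntimeError("unexpected hypervisor error")
    return 0

def run_campaign(call_ids, trials=100, seed=0):
    rng = random.Random(seed)
    results = []
    for call_id in call_ids:
        for _ in range(trials):
            arg = rng.randint(-2**31, 2**31 - 1)   # random, possibly invalid input
            start = time.perf_counter()
            try:
                outcome = f"status={invoke_hypercall(call_id, arg)}"
            except Exception as exc:
                outcome = f"error={exc}"
            results.append((call_id, arg, outcome, time.perf_counter() - start))
    return results

failures = [r for r in run_campaign([0, 1]) if r[2].startswith("error")]
print(f"{len(failures)} robustness violations observed")
```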