Softwaretechnik-Trends 37(3) - 2017

Newest Publications

1 - 10 of 16
  • Journal article
    RadarGun: Toward a Performance Testing Framework
    (Softwaretechnik-Trends Band 37, Heft 3, 2017) Henning, Sören; Wulf, Christian; Hasselbring, Wilhelm
    We present requirements for a performance testing framework that distinguish it from a functional testing framework and from a benchmarking framework. Based on these requirements, we propose such a performance testing framework for Java, called RadarGun. RadarGun can be integrated into a continuous integration server, such as Jenkins, so that performance tests are executed automatically during the build process. We conducted a feasibility evaluation of this approach by applying it to the continuous integration infrastructure of the Pipe-and-Filter framework TeeTime. (A minimal sketch of such a CI performance test follows after this listing.)
  • Journal article
    Refactoring Kieker’s I/O Infrastructure to Improve Scalability and Extensibility
    (Softwaretechnik-Trends Band 37, Heft 3, 2017) Knoche, Holger
    Kieker supports several technologies for transferring monitoring records, including highly scalable messaging solutions. However, Kieker’s current I/O infrastructure is primarily built for point-to-point connections, making it difficult to leverage the scalability of these solutions. In this paper, we report on how we refactored Kieker’s I/O infrastructure to make better use of scalable messaging, improving extensibility along the way. (A simplified sketch of such a writer abstraction follows after this listing.)
  • Journal article
    8th Symposium on Software Performance (SSP) - Karlsruhe, November 09–10, 2017
    (Softwaretechnik-Trends Band 37, Heft 3, 2017) Reussner, Ralf; Hasselbring, Wilhelm; Becker, Steffen
  • Journal article
    Towards Extracting Realistic User Behavior Models
    (Softwaretechnik-Trends Band 37, Heft 3, 2017) Jung, Reiner; Adolf, Marc; Dornieden, Christoph
    Workloads can be characterized by intensity and user behavior. Multiple intensities and behaviors can be combined into workload profiles that are used to evaluate software designs and to support the prediction of system utilization. The central challenge for such workload profiles is how well they fit real workloads and, in particular, how well they match specific behaviors. This is especially relevant for understanding and identifying specific user groups and for supporting workload composition by operators. In this paper, we address the identification of such realistic user behaviors using domain-specific attributes, evaluate the fitness of potential behavior clustering approaches, and discuss our setup for evaluating further clustering approaches. (A simplified clustering sketch follows after this listing.)
  • Journal article
    Vulnerability Recognition by Execution Trace Differentiation
    (Softwaretechnik-Trends Band 37, Heft 3, 2017) Viertel, Fabien Patrick; Karras, Oliver; Schneider, Kurt
    In the context of security, one of the major problems in software development is the difficult and time-consuming task of finding and fixing known vulnerabilities based on the vulnerability documentation resulting from a penetration test. This documentation contains, for example, the location and description of the found vulnerabilities. To be able to find and fix a vulnerability, developers have to check this documentation. We developed a tool-based, semi-automated analysis approach to locate and fix security issues by means of recorded execution traces. To identify the affected source code snippets in the project code, we determine the difference between a regular and a malicious execution trace. This difference is an indicator of a potential vulnerability. As a case study for this analysis, we use vulnerabilities that enable remote code execution. We implemented this approach in a software prototype named FOCUS+. This tool visualizes the traces and their differences in several views, such as a method call graph view. All views provide direct access to the affected code snippets and point to the possible vulnerabilities. Thus, identified security gaps can be fixed immediately in FOCUS+. (A minimal sketch of the trace differencing idea follows after this listing.)
  • Journal article
    The Raspberry Pi: A Platform for Replicable Performance Benchmarks?
    (Softwaretechnik-Trends Band 37, Heft 3, 2017) Knoche, Holger; Eichelberger, Holger
    Replicating the results of performance benchmarks can be difficult. A common problem is that researchers often do not have access to identical hardware and software setups. Modern single-board computers like the Raspberry Pi are standardized, cheap, and powerful enough to run many benchmarks, although probably not at the same performance level as desktop or server hardware. In this paper, we use the MooBench micro-benchmark to investigate to what extent the Raspberry Pi is suited as a platform for replicable performance benchmarks. We report on our approach to setting up and running the experiments as well as on the experience we gained. (A simplified benchmark measurement loop is sketched after this listing.)
  • Journal article
    Providing Model-Extraction-as-a-Service for Architectural Performance Models
    (Softwaretechnik-Trends Band 37, Heft 3, 2017) Walter, Jürgen; Eismann, Simon; Reed, Nikolai; Kounev, Samuel
    Architectural performance models can be leveraged to explore performance properties of software systems at design time and run time. We see a reluctance in industry to adopt model-based analysis approaches due to the required expertise and modeling effort. Building models from scratch in an editor does not scale to medium- and large-scale systems in an industrial context. Existing open-source performance model extraction approaches imply significant initial effort, which can be challenging for non-expert users. To simplify usage, we provide the extraction of architectural performance models from application monitoring traces as a web service. Model-Extraction-as-a-Service (MEaaS) addresses the usability problem and lowers the initial effort of applying model-based analysis approaches. (A hypothetical client call to such a service is sketched after this listing.)
  • Journal article
    Softwaretechnik-Trends Band 37, Heft 3
    (Softwaretechnik-Trends Band 37, Heft 3, 2017) GI-FB Softwaretechnik
  • Journal article
    Lean Testing
    (Softwaretechnik-Trends Band 37, Heft 3, 2017) Spillner, Andreas
    It is the goal of every software developer to write programs with as few defects as possible. As is well known, the more ambitious goal of defect-free software cannot be achieved, except for very small programs. But how do I check my program (or a part of it) for defects, and how large may a justifiable testing effort be? Using a simple example, this article attempts to show what "Lean Testing" is and what it can achieve.
  • Journal article
    Neuer Arbeitskreis Microservices und DevOps in der Fachgruppe Architekturen
    (Softwaretechnik-Trends Band 37, Heft 3, 2017) Hasselbring, Wilhelm
    On May 31, 2017, the founding meeting of the new working group "Microservices und DevOps" within the Fachgruppe Architekturen took place in Hamburg with a good twenty participants. The host was adesso AG's Hamburg branch.
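
The listing above describes several concrete techniques; the code sketches below illustrate some of them. They are illustrative sketches only, written against assumed class names and parameters, not the authors' actual implementations.

The RadarGun abstract describes performance tests that run automatically as part of a continuous integration build. The sketch below shows how such a test could measure an operation's median latency over repeated runs and fail the build when a threshold is exceeded; the class name, threshold, and measurement parameters are illustrative assumptions, not RadarGun's API.

```java
import java.util.Arrays;

/**
 * Minimal sketch of a CI performance test in the spirit of RadarGun
 * (hypothetical code, not RadarGun's actual API): measure the latency
 * of an operation over many runs and fail the build if the median
 * exceeds a fixed threshold.
 */
public class PerformanceTestSketch {

    // Operation under test; stands in for arbitrary application code.
    static long operationUnderTest() {
        long sum = 0;
        for (int i = 0; i < 100_000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        final int warmupRuns = 1_000;          // let the JIT compiler settle
        final int measuredRuns = 5_000;
        final long thresholdNanos = 2_000_000; // 2 ms, assumed budget

        for (int i = 0; i < warmupRuns; i++) {
            operationUnderTest();
        }

        long[] samples = new long[measuredRuns];
        for (int i = 0; i < measuredRuns; i++) {
            long start = System.nanoTime();
            operationUnderTest();
            samples[i] = System.nanoTime() - start;
        }

        Arrays.sort(samples);
        long median = samples[measuredRuns / 2];
        System.out.printf("median latency: %d ns%n", median);

        // A thrown AssertionError (or a non-zero exit code) would mark
        // the CI build as failed, e.g. in a Jenkins job.
        if (median > thresholdNanos) {
            throw new AssertionError("Performance regression: median " + median
                    + " ns exceeds threshold " + thresholdNanos + " ns");
        }
    }
}
```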
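
The Kieker refactoring abstract contrasts point-to-point record transfer with scalable messaging. The following simplified sketch (a hypothetical illustration, not Kieker's actual writer API) shows the underlying design idea: monitoring probes write against an abstraction, so the transport can be swapped from a single connection to a queue that a message broker could drain.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Simplified, hypothetical illustration of the design idea (not Kieker's
 * actual API): probes only see a writer interface, so the transport can be
 * changed from a point-to-point connection to scalable messaging without
 * touching the probes.
 */
public class WriterAbstractionSketch {

    /** A monitoring record reduced to a plain string for this sketch. */
    interface MonitoringRecordWriter {
        void write(String record);
    }

    /** Stand-in for a point-to-point transport (e.g. one TCP connection). */
    static class PointToPointWriter implements MonitoringRecordWriter {
        @Override
        public void write(String record) {
            System.out.println("[p2p] " + record);
        }
    }

    /** Stand-in for a messaging transport: records go into a queue that any
     *  number of consumers could drain (a broker in a real deployment). */
    static class QueueWriter implements MonitoringRecordWriter {
        private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        @Override
        public void write(String record) {
            queue.offer(record);
        }

        String poll() {
            return queue.poll();
        }
    }

    public static void main(String[] args) {
        MonitoringRecordWriter writer = new QueueWriter(); // or new PointToPointWriter()
        writer.write("OperationExecutionRecord;doFilter;42");
        System.out.println("consumed: " + ((QueueWriter) writer).poll());
    }
}
```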
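
The user behavior abstract proposes clustering user sessions by domain-specific attributes. The sketch below is a deliberately simplified, hypothetical illustration (not the authors' tooling): sessions are reduced to feature vectors of request counts and grouped with a small k-means loop.

```java
/**
 * Hypothetical, simplified sketch of behavior clustering: each session is a
 * feature vector of request counts per domain-specific action, grouped by
 * nearest centroid over a few k-means iterations.
 */
public class BehaviorClusteringSketch {

    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        // Feature vectors: counts of (browse, search, checkout) per session.
        double[][] sessions = {
            {12, 3, 0}, {10, 4, 0}, {11, 2, 1},   // e.g. "window shoppers"
            {2, 1, 3}, {1, 2, 4}, {3, 1, 2}       // e.g. "buyers"
        };

        int k = 2;
        double[][] centroids = { sessions[0].clone(), sessions[3].clone() };
        int[] assignment = new int[sessions.length];

        for (int iter = 0; iter < 10; iter++) {
            // Assignment step: nearest centroid per session.
            for (int s = 0; s < sessions.length; s++) {
                assignment[s] = distance(sessions[s], centroids[0])
                        <= distance(sessions[s], centroids[1]) ? 0 : 1;
            }
            // Update step: recompute centroids as cluster means.
            for (int c = 0; c < k; c++) {
                double[] mean = new double[3];
                int count = 0;
                for (int s = 0; s < sessions.length; s++) {
                    if (assignment[s] == c) {
                        for (int d = 0; d < 3; d++) mean[d] += sessions[s][d];
                        count++;
                    }
                }
                if (count > 0) {
                    for (int d = 0; d < 3; d++) mean[d] /= count;
                    centroids[c] = mean;
                }
            }
        }

        for (int s = 0; s < sessions.length; s++) {
            System.out.println("session " + s + " -> behavior cluster " + assignment[s]);
        }
    }
}
```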
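
The FOCUS+ abstract locates vulnerability candidates by differencing a regular and a malicious execution trace. The sketch below (with hypothetical method names, not the FOCUS+ implementation) shows the core set-difference idea: calls that occur only in the malicious trace point to the code to inspect.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

/**
 * Hypothetical sketch of execution trace differentiation: method calls that
 * appear only in the malicious trace are reported as candidates for the
 * vulnerable code location.
 */
public class TraceDiffSketch {

    static Set<String> difference(List<String> malicious, List<String> regular) {
        Set<String> diff = new LinkedHashSet<>(malicious);
        diff.removeAll(regular);
        return diff;
    }

    public static void main(String[] args) {
        List<String> regularTrace = Arrays.asList(
            "LoginController.login", "UserDao.findUser", "View.render");
        List<String> maliciousTrace = Arrays.asList(
            "LoginController.login", "UserDao.findUser",
            "TemplateEngine.eval", "Runtime.exec", "View.render");

        // Calls present only under the attack input are candidate vulnerability sites.
        Set<String> suspicious = difference(maliciousTrace, regularTrace);
        suspicious.forEach(call -> System.out.println("inspect: " + call));
    }
}
```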
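
The Raspberry Pi abstract relies on the MooBench micro-benchmark, which repeatedly times an instrumented method to quantify monitoring overhead. The sketch below is a simplified rendition of such a measurement loop; the recursion depth, iteration count, and busy time are assumed parameters, not MooBench's configuration.

```java
/**
 * Simplified, hypothetical rendition of a MooBench-style measurement loop
 * (not MooBench itself): a recursive method with a fixed busy time is called
 * repeatedly, and per-call response times are recorded so that results can
 * be compared across platforms such as the Raspberry Pi.
 */
public class MicroBenchmarkSketch {

    // Recursively calls itself to a fixed depth, busy-waiting at the bottom.
    static long monitoredMethod(long busyNanos, int recursionDepth) {
        if (recursionDepth > 1) {
            return monitoredMethod(busyNanos, recursionDepth - 1);
        }
        long start = System.nanoTime();
        while (System.nanoTime() - start < busyNanos) {
            // busy wait to simulate work
        }
        return start;
    }

    public static void main(String[] args) {
        final int totalCalls = 200_000;   // assumed iteration count
        final int recursionDepth = 10;    // assumed call-stack depth
        final long busyNanos = 500;       // simulated work per call

        long[] responseTimes = new long[totalCalls];
        for (int i = 0; i < totalCalls; i++) {
            long before = System.nanoTime();
            monitoredMethod(busyNanos, recursionDepth);
            responseTimes[i] = System.nanoTime() - before;
        }

        long sum = 0;
        for (long t : responseTimes) {
            sum += t;
        }
        System.out.printf("mean response time: %.1f ns over %d calls%n",
                (double) sum / totalCalls, totalCalls);
    }
}
```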
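
The MEaaS abstract offers performance model extraction from monitoring traces as a web service. The sketch below is a hypothetical client (the endpoint URL, media type, input file, and response format are assumptions, not the authors' published interface) that uploads a trace file via HTTP POST and prints the service's answer.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

/**
 * Hypothetical client for a model-extraction web service: upload monitoring
 * traces, receive a serialized architectural performance model. Endpoint and
 * file name are placeholders for illustration only.
 */
public class ModelExtractionClient {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Assumed endpoint of the extraction service; the trace file must exist locally.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/extract"))
                .header("Content-Type", "text/csv")
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("kieker-traces.csv")))
                .build();

        // The service would answer with a serialized performance model.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("status: " + response.statusCode());
        System.out.println("model: " + response.body());
    }
}
```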