P281 - Sicherheit 2018 - Sicherheit, Schutz und Zuverlässigkeit
Latest Publications
- Edited book: Sicherheit 2018 - Sicherheit, Schutz und Zuverlässigkeit (SICHERHEIT 2018, 2018)
- Conference paper: Improving Anonymization Clustering (SICHERHEIT 2018, 2018). Thaeter, Florian; Reischuk, Rüdiger. Microaggregation is a technique to preserve privacy when confidential information about individuals shall be used by third parties. A basic property to be established is called k-anonymity. It requires that identifying information about individuals must not be unique; instead, there has to be a group of size at least k that looks identical. This is achieved by clustering individuals into appropriate groups and then averaging the identifying information. The question arises how to select these groups such that the information loss caused by averaging is minimal. This problem has been shown to be NP-hard. Thus, several heuristics called MDAV, V-MDAV,... have been proposed for finding at least a suboptimal clustering. This paper proposes a more sophisticated, but still efficient, strategy called MDAV* to construct a good clustering. The question whether to extend a group locally by individuals close by or to start a new group with such individuals is investigated in more depth. This way, a noticeably lower information loss can be achieved, which is shown by applying MDAV* to several established benchmarks of real data and also to specifically designed random data.
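The clustering-and-averaging idea behind microaggregation can be sketched in a few lines. The following is a deliberately simplified heuristic on one-dimensional numeric data for illustration only; it is not the MDAV* strategy proposed in the paper.

```python
# Minimal sketch of microaggregation for k-anonymity on 1-D numeric data.
# Simplified illustration, not the MDAV* heuristic from the paper.

def microaggregate(values, k):
    """Partition values into groups of size >= k and replace each value
    by its group mean, so no value is unique within its group."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    groups = [order[i:i + k] for i in range(0, len(order), k)]
    if len(groups) > 1 and len(groups[-1]) < k:
        groups[-2].extend(groups.pop())  # merge an undersized last group
    out = list(values)
    for g in groups:
        mean = sum(values[i] for i in g) / len(g)
        for i in g:
            out[i] = mean
    return out

def information_loss(values, anonymized):
    """Sum of squared deviations introduced by the averaging step."""
    return sum((v - a) ** 2 for v, a in zip(values, anonymized))
```

The open question the paper addresses is exactly the grouping choice: a better partition (extend a nearby group versus start a new one) yields a lower `information_loss` for the same k.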
- Conference paper: Hashing of personally identifiable information is not sufficient (SICHERHEIT 2018, 2018). Marx, Matthias; Zimmer, Ephraim; Mueller, Tobias; Blochberger, Maximilian; Federrath, Hannes. It is common practice of web tracking services to hash personally identifiable information (PII), e.g., e-mail or IP addresses, in order to avoid linkability between collected data sets of web tracking services and the corresponding users while still preserving the ability to update and merge data sets associated with the very same user over time. Consequently, these services claim to comply with existing privacy laws, as the data sets have allegedly been pseudonymised. In this paper, we show that the finite pre-image space of PII is bounded in such a way that an attack on these hashes is significantly eased both in theory and in practice. As a result, the inference from PII hashes to the corresponding PII is intrinsically faster than a naive brute-force attack. We support this statement by an empirical study of breaking PII hashes, showing that hashing of PII is not a sufficient pseudonymisation technique.
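The small, structured pre-image space of PII makes the attack almost trivial to demonstrate. The sketch below uses a made-up phone-number format with a deliberately tiny 4-digit suffix so the enumeration finishes instantly; real attacks enumerate larger but still bounded spaces (the paper's point).

```python
import hashlib

# A tracker "pseudonymizes" a phone number by hashing it. Because phone
# numbers come from a small, structured space, the attacker can simply
# enumerate all candidates and compare hashes.

def sha256_hex(s):
    return hashlib.sha256(s.encode()).hexdigest()

def crack_phone_hash(target, prefix="+49-170-", digits=4):
    """Recover a hashed phone number by enumerating its pre-image space.
    The prefix/digit scheme here is a hypothetical toy format."""
    for n in range(10 ** digits):
        candidate = prefix + str(n).zfill(digits)
        if sha256_hex(candidate) == target:
            return candidate
    return None
```

Even with salting or slower hash functions, the cost of this dictionary-style enumeration grows only linearly with the pre-image space, which is why the paper argues plain hashing is not adequate pseudonymisation.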
- Conference paper: Verbesserung der Syndrome-Trellis-Kodierung zur Erhöhung der Unvorhersagbarkeit von Einbettpositionen in steganographischen Systemen (SICHERHEIT 2018, 2018). Köhler, Olaf Markus; Pasquini, Cecilia; Böhme, Rainer. When embedding a hidden message into a cover medium, adaptive steganographic systems choose the embedding positions depending on the expected conspicuousness of the changes. The optimal selection can be modelled statistically. We present the results of a series of experiments that investigate to what extent the selection made by syndrome-trellis coding matches a model of independent Bernoulli-distributed random variables. In general, we observe small approximation errors as well as outliers at boundary positions. Bivariate dependencies between embedding positions moreover allow inferences about the code in use and its parameters. In applications that cannot hide the outliers by means of random permutations, the "outlier corrected" variant proposed here can be used to improve steganographic security. The aggregated bivariate statistics, by contrast, are invariant under permutations and, assuming powerful attackers, constitute a previously unexplored security risk.
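The embedding positions studied here are determined by syndrome coding: the sender flips a minimum-weight set of cover bits so that the stego object's syndrome equals the message. As a hedged illustration, the toy below brute-forces this on a tiny hand-picked parity-check matrix; real syndrome-trellis codes solve the same minimization with a Viterbi search over a trellis.

```python
from itertools import product

# Toy syndrome coding (matrix embedding): embed message m into cover
# bits x by flipping a minimum-weight change vector e such that
# H @ (x ^ e) = m (mod 2). The 2x4 matrix H is a hypothetical example.

H = [[1, 0, 1, 1],
     [0, 1, 1, 0]]

def syndrome(H, bits):
    return tuple(sum(h * b for h, b in zip(row, bits)) % 2 for row in H)

def embed(H, cover, message):
    """Return (stego bits, change vector) with the fewest flipped positions."""
    best = None
    for e in product((0, 1), repeat=len(cover)):
        stego = [c ^ f for c, f in zip(cover, e)]
        if syndrome(H, stego) == tuple(message):
            if best is None or sum(e) < sum(best[1]):
                best = (stego, e)
    return best
```

Which positions end up flipped depends jointly on the cover and the code, which is why the paper can model position selection statistically and detect deviations (outliers, bivariate dependencies) from the independent-Bernoulli idealization.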
- Conference paper: Is MathML dangerous? (SICHERHEIT 2018, 2018). Späth, Christopher. HTML5 forms the basis for modern web development and merges different standards. One of these standards is MathML, which is used to express and display mathematical statements. However, with more standards being natively integrated into HTML5, the processing model gets inherently more complex. In this paper, we evaluate the security risks of MathML. We created a semi-automatic test suite and studied the JavaScript code execution and the XML processing in MathML. We also added the Content-Type handling of major browsers to the picture. We discovered a novel way to manipulate the browser's status line without JavaScript and found two novel ways to execute JavaScript code, which allowed us to bypass several sanitizers. The fact that JavaScript code embedded in MathML can access session cookies makes matters even worse.
- Conference paper: SDN Ro2tkits: A Case Study of Subverting A Closed Source SDN Controller (SICHERHEIT 2018, 2018). Röpke, Christian. An SDN controller is a core component of the SDN architecture. It is responsible for managing an underlying network while allowing SDN applications to program it as required. Because of this central role, compromising such an SDN controller is of high interest for an attacker. A recently published SDN rootkit has demonstrated, for example, that a malicious SDN application is able to manipulate an entire network while hiding the corresponding malicious actions. However, the facts that this attack targeted an open source SDN controller and applied a specific way of subverting this system leave important questions unanswered: How easy is it to attack closed source SDN controllers in the same way? Can we concentrate on the already presented technique, or do we need to consider other attack vectors as well to protect SDN controllers? In this paper, we elaborate on these research questions and present two new SDN rootkits, both targeting a closed source SDN controller. Similar to previous work, the first one is based on Java reflection. In contrast to known reflection abuses, however, we had to develop new techniques, as the existing ones can only be adopted in part. Additionally, we demonstrate with a second SDN rootkit that an attacker is by no means limited to reflection-based attacks. In particular, we abuse aspect-oriented programming capabilities to manipulate core functions of the targeted system. To tackle the security issues raised in this case study, we discuss several countermeasures and give concrete suggestions for improving SDN controller security.
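The core rootkit behavior described here is intercepting a controller's management functions so that malicious state is filtered out of what administrators see. The paper does this with Java reflection and aspect-oriented programming against a real controller; the following is only a hypothetical Python analogue using runtime monkey-patching, with invented class and rule names, to illustrate the hiding idea.

```python
# Hypothetical analogue of an SDN rootkit's hiding step: replace the
# controller's listing function at runtime so attacker rules vanish
# from reports, while they remain installed in the data plane state.

class Controller:
    def __init__(self):
        self.flow_rules = []

    def install(self, rule):
        self.flow_rules.append(rule)

    def list_rules(self):
        return list(self.flow_rules)

def install_rootkit(controller, hidden_marker="evil"):
    original = controller.list_rules

    def patched():
        # Filter the attacker's rules out of anything reported upward.
        return [r for r in original() if hidden_marker not in r]

    controller.list_rules = patched
```

In Java the same effect requires reflection (or AOP advice) to swap private fields and method behavior, which is what makes closed source controllers an interesting target for the case study.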
- Conference paper: Source Code Patterns of Buffer Overflow Vulnerabilities in Firefox (SICHERHEIT 2018, 2018). Schuckert, Felix; Hildner, Max; Katt, Basel; Langweg, Hanno. We investigated 50 randomly selected buffer overflow vulnerabilities in Firefox. The source code of these vulnerabilities and the corresponding patches were manually reviewed, and patterns were identified. Our main contributions are taxonomies of errors, sinks and fixes seen from a developer's point of view. The results are compared to the CWE taxonomy with an emphasis on vulnerability details. Additionally, some ideas are presented on how the taxonomies could be used to improve software security education.
- Conference paper: Towards a Differential Privacy Theory for Edge-Labeled Directed Graphs (SICHERHEIT 2018, 2018). Reuben, Jenni. More and more information is represented as graphs, such as social network data, financial transactions and semantic assertions in the Semantic Web context. Mining such data about people for useful insights has enormous social and commercial benefits. However, the privacy of the individuals in the datasets is a major concern. Hence, the challenge is to enable analyses over a dataset while preserving the privacy of the individuals in it. Differential privacy is a privacy model that offers a rigorous definition of privacy, which says that from the released results of an analysis it is "difficult" to determine whether an individual contributes to the results or not. The differential privacy model has been extensively studied in the context of relational databases. Nevertheless, there has been growing interest in the adaptation of differential privacy to graph data. Previous research in applying the differential privacy model to graphs focuses on unlabeled graphs. However, in many applications graphs consist of labeled edges, and analyses can be more expressive when they take these labels into account. Thus, it is of interest to study the adaptation of differential privacy to edge-labeled directed graphs. In this paper, we present our foundational work towards that aim. First, we present three variant notions of an individual's information being or not being in the analyzed graph, which is the basis for formalizing the differential privacy guarantee. Next, we present our plan to study particular graph statistics using the differential privacy model, given the choice of the notion that represents the individual's information being or not being in the analyzed graph.
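The model this abstract builds on can be made concrete with the standard Laplace mechanism on a single graph statistic. The sketch below releases an edge count under edge-level privacy (sensitivity 1); it illustrates the general differential-privacy recipe, not the paper's variant notions for edge-labeled directed graphs.

```python
import math
import random

# Laplace mechanism for an edge-count query under edge-level
# differential privacy: adding or removing one edge changes the true
# count by at most 1, so noise is calibrated to sensitivity/epsilon.

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_edge_count(edges, epsilon, rng=random):
    """Release the number of edges with epsilon-differential privacy."""
    sensitivity = 1.0
    return len(edges) + laplace_noise(sensitivity / epsilon, rng)
```

The paper's open question is what "an individual's information" means once edges carry labels and direction, since that choice changes the sensitivity and hence the noise that a mechanism like this must add.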
- Conference paper: My Data is Mine - Users' Handling of Personal Data in Everyday Life (SICHERHEIT 2018, 2018). Bock, Sven. This experimental study investigates users' handling of personal data and their awareness of data collection. A deception experiment was designed to let the subjects believe that they were participating in a decision-making experiment. Only after the experiment were they informed about the actual aim of examining their behaviour towards their personal data. Before the deception experiment, either a printed or a digital version of the terms and conditions was handed out. The reading time and the willingness to accept the terms and conditions were measured in order to find significant differences. For the deception, a program was implemented that simultaneously presents two options involving sensitive data such as religious and political orientation, and the subject chooses the favoured one. Afterwards, subjects were asked whether and to what extent they would agree to hand out their collected data to third parties in exchange for financial gain. After the experiment, the participants were asked about their usual behaviour regarding their personal data.
- Conference paper: Bounded Privacy: Formalising the Trade-Off Between Privacy and Quality of Service (SICHERHEIT 2018, 2018). Hartmann, Lukas. Many services and applications require users to provide a certain amount of information about themselves in order to receive an acceptable quality of service (QoS). Exemplary areas of use are location-based services like route planning or the reporting of security incidents for critical infrastructure. Users who put emphasis on their privacy, for example through anonymization, therefore usually suffer from a loss of QoS. Some services, however, may not even be feasible above a certain threshold of anonymization, resulting in unacceptable service quality. Hence, there need to be restrictions on the applied level of anonymization. To prevent the QoS from dropping below an acceptable threshold, we introduce the concept of Bounded Privacy, a generic model describing situations in which the achievable level of privacy is bounded by its relation to the service quality. We furthermore propose an approach to derive the optimal level of privacy for both discrete and continuous data.
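For the discrete case, the trade-off reduces to picking the most private level whose QoS still clears the service's bound. The toy below assumes a hypothetical ordered set of anonymization levels and a made-up QoS function; it sketches the idea of Bounded Privacy, not the paper's actual model.

```python
# Toy sketch of Bounded Privacy for discrete data: QoS falls as the
# anonymization level grows, and the service defines a threshold
# below which it becomes unusable.

def best_privacy_level(levels, qos, threshold):
    """Return the most private level whose QoS is still acceptable.
    `levels` must be ordered from least to most private; `qos` maps a
    level to its resulting service quality. Returns None if even the
    least private level is infeasible."""
    feasible = [level for level in levels if qos(level) >= threshold]
    return feasible[-1] if feasible else None
```

For a location-based service, for instance, the levels could be increasing cloaking radii, with QoS degrading as the reported position gets coarser.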