P281 - Sicherheit 2018 - Sicherheit, Schutz und Zuverlässigkeit
Listing of P281 - Sicherheit 2018 - Sicherheit, Schutz und Zuverlässigkeit by publication date
1 - 10 of 27
- Conference paper: Improving Anonymization Clustering (SICHERHEIT 2018, 2018) Thaeter, Florian; Reischuk, Rüdiger. Microaggregation is a technique to preserve privacy when confidential information about individuals is to be used by third parties. A basic property to be established is called k-anonymity. It requires that identifying information about individuals is not unique; instead, there has to be a group of size at least k that looks identical. This is achieved by clustering individuals into appropriate groups and then averaging the identifying information. The question arises how to select these groups such that the information loss caused by averaging is minimal. This problem has been shown to be NP-hard. Thus, several heuristics called MDAV, V-MDAV,... have been proposed for finding at least a suboptimal clustering. This paper proposes a more sophisticated, but still efficient strategy called MDAV* to construct a good clustering. The question whether to extend a group locally by individuals close by or to start a new group with such individuals is investigated in more depth. This way, a noticeably lower information loss can be achieved, which is shown by applying MDAV* to several established benchmarks of real data and also to specifically designed random data. (An illustrative microaggregation sketch follows after this listing.)
- Conference paper: Towards a Differential Privacy Theory for Edge-Labeled Directed Graphs (SICHERHEIT 2018, 2018) Reuben, Jenni. Increasingly, information is represented as graphs, such as social network data, financial transactions, and semantic assertions in the Semantic Web context. Mining such data about people for useful insights has enormous social and commercial benefits. However, the privacy of the individuals in the datasets is a major concern. Hence, the challenge is to enable analyses over a dataset while preserving the privacy of the individuals in it. Differential privacy is a privacy model that offers a rigorous definition of privacy, which says that from the released results of an analysis it is 'difficult' to determine whether an individual contributes to the results or not. The differential privacy model has been extensively studied in the context of relational databases. Nevertheless, there has been growing interest in adapting differential privacy to graph data. Previous research on applying the differential privacy model to graphs focuses on unlabeled graphs. However, in many applications graphs consist of labeled edges, and analyses that take the labels into account can be more expressive. Thus, it is of interest to study the adaptation of differential privacy to edge-labeled directed graphs. In this paper, we present our foundational work towards that aim. First, we present three variant notions of an individual's information being/not being in the analyzed graph, which is the basis for formalizing the differential privacy guarantee. Next, we present our plan to study particular graph statistics under the differential privacy model, given the choice of the notion that represents the individual's information being/not being in the analyzed graph. (The standard differential privacy guarantee is restated after this listing for reference.)
- Conference paper: Towards Forensic Exploitation of 3-D Lighting Environments in Practice (SICHERHEIT 2018, 2018) Seuffert, Julian; Stamminger, Marc; Riess, Christian. The goal of image forensics is to determine the authenticity and origin of a digital image or video without an embedded security scheme. Among the existing methods, probably the most well-known physics-based approach is to validate the distribution of incident light on objects of interest. Inconsistent lighting environments are considered an indication of image splicing. However, one drawback of this approach is that it is quite challenging to use in practice. In this work, we propose several practical improvements to this approach. First, we propose a new way of comparing lighting environments. Second, we present a factorization of the overall error into its individual contributions, which shows that the biggest error source is incorrect geometric fits. Third, we propose a confidence score that is trained from the results of an actual implementation. The confidence score makes it possible to define an implementation- and problem-specific threshold for the consistency of two lighting environments. (A generic lighting-comparison sketch follows after this listing.)
- Conference paper: My Data is Mine - Users' Handling of Personal Data in Everyday Life (SICHERHEIT 2018, 2018) Bock, Sven. This experimental study investigates users' handling of personal data and their awareness of data collection. A deception experiment was designed to let the subjects believe that they were participating in a decision-making experiment. Only after the experiment were they informed about the actual aim of examining their behaviour towards their personal data. Before the deception experiment, either a printed or a digital version of the terms and conditions was handed out; the reading time and the willingness to accept the terms and conditions were measured in order to find significant differences. For the deception, a program was implemented that simultaneously presents two terms involving sensitive data such as religious and political orientation, and the subject had to choose the favoured term. Afterwards, subjects were asked whether and to what extent they would agree to hand their collected data to third parties in exchange for financial gain. After the experiment, the participants were asked about their usual behaviour regarding their personal data.
- Conference paper: Informationssicherheitskonzept nach IT-Grundschutz für Containervirtualisierung in der Cloud (SICHERHEIT 2018, 2018) Buchmann, Erik; Hartmann, Andreas; Bauer, Stephanie. With the IT-Grundschutz, the German Federal Office for Information Security (BSI) provides a sound and effective safeguard against the constantly growing threats that come with digitalisation. Although the BSI modules are defined in a vendor-neutral way, they nonetheless refer to changing technologies, which makes corresponding adaptation necessary. Against the background of cloud-based IT infrastructures, server technologies and services are currently undergoing a massive shift towards container virtualisation in the cloud. Companies that transform their IT landscapes accordingly must therefore ensure the security of their data more than ever. Using Docker containers as an example, we show how the IT-Grundschutz has to be adapted to these new challenges. In particular, we address the threat analysis, Docker-specific threats, and corresponding safeguards.
- Conference paper: Introducing DINGfest: An architecture for next generation SIEM systems (SICHERHEIT 2018, 2018) Menges, Florian; Böhm, Fabian; Vielberth, Manfred; Puchta, Alexander; Taubmann, Benjamin; Rakotondravony, Noëlle; Latzo, Tobias. Isolated and easily protectable IT systems have developed into fragile and complex structures over the past years. These systems host manifold, flexible and highly connected applications, mainly in virtual environments. To ensure the protection of those infrastructures, Security Incident and Event Management (SIEM) systems have been deployed. Such systems, however, suffer from many shortcomings, such as a lack of mechanisms for forensic readiness. In this extended abstract, we identify these shortcomings and propose an architecture which addresses them. It is developed within the DINGfest project, on which we report and for which we seek initial feedback from the community.
- Conference paper: Bounded Privacy: Formalising the Trade-Off Between Privacy and Quality of Service (SICHERHEIT 2018, 2018) Hartmann, Lukas. Many services and applications require users to provide a certain amount of information about themselves in order to receive an acceptable quality of service (QoS). Exemplary areas of use are location-based services like route planning or the reporting of security incidents for critical infrastructure. Users who put emphasis on their privacy, for example through anonymization, therefore usually suffer from a loss of QoS. Some services, however, may not even be feasible above a certain threshold of anonymization, resulting in unacceptable service quality. Hence, there need to be restrictions on the applied level of anonymization. To prevent the QoS from dropping below an acceptable threshold, we introduce the concept of Bounded Privacy, a generic model to describe situations in which the achievable level of privacy is bounded by its relation to the service quality. We furthermore propose an approach to derive the optimal level of privacy for both discrete and continuous data. (A toy trade-off sketch follows after this listing.)
- Conference paper: Usability von Security-APIs für massiv-skalierbare vernetzte Service-orientierte Systeme (SICHERHEIT 2018, 2018) Gorski, Peter Leo. Contemporary service-oriented systems are highly interconnected and designed to be massively scalable. These characteristics place particular demands on the data security of the users of such systems, and thus primarily on all stakeholders in software development who are responsible for effectively building suitable security mechanisms into software products. The effectiveness of security architectures in service-oriented systems depends largely on the correct use and integration of security APIs by a heterogeneous group of software developers, who cannot in general be assumed to have in-depth background knowledge of complex digital security mechanisms. In practice, the gap between complex, error-prone APIs and a lack of understanding of the underlying security concepts on the part of their users fosters insecure software systems. For this reason, the usability of security APIs is particularly relevant, so that programmers can use the required functionality effectively, efficiently, and satisfactorily. Derived from this problem, the dissertation project focuses on the usable design of security APIs and on the challenges arising from methods for evaluating usability in typical software development environments.
- Conference paper: Comparative Usability Evaluation of Cast-as-Intended Verification Approaches in Internet Voting (SICHERHEIT 2018, 2018) Marky, Karola; Kulyk, Oksana; Volkamer, Melanie. Internet Voting promises benefits such as support for voters from abroad and overall improved accessibility. But it is accompanied by security risks, such as the manipulation of votes by malware. Enabling voters to verify that their voting device casts their intended votes is a possible solution to address such manipulation - the so-called cast-as-intended verifiability. Several different approaches for providing cast-as-intended verifiability have been proposed or put into practice. Each approach makes various assumptions about the voters' capabilities that are required in order to provide cast-as-intended verifiability. In this paper we investigate these assumptions for four chosen cast-as-intended approaches and report the impact if they are violated. Our findings indicate that the assumptions of cast-as-intended approaches (e.g. voters being capable of comparing long strings) have an impact on the security of Internet Voting systems. We discuss this impact, provide recommendations on how to address the identified assumptions, and give important directions for future research on usable and verifiable Internet Voting systems.
- Conference paper: Turning the Table Around: Monitoring App Behavior (SICHERHEIT 2018, 2018) Momen, Nurul. Since Android apps receive carte-blanche access through permissions, users struggle to understand the actual magnitude of app access to their personal data. Due to the unavailability of statistical or other tools that would provide an overview of data access or privilege use, users can hardly assess privacy risks or identify app misbehavior. This is a problem for data subjects. The presented PhD research project aims at creating a transparency-enhancing technology that helps users assess the magnitude of data access of installed apps by monitoring the Android permission access control system. This article presents how apps exercise their permissions, based on a pilot study with an app monitoring tool. It then presents a prototypical implementation of a networked laboratory for crowdsourcing app behavior data. Finally, the article presents and discusses a model that will use the collected data to calculate and visualize risk signals based on individual risk preferences and measured app data access efforts. (An illustrative risk-scoring sketch follows after this listing.)
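To illustrate the microaggregation idea behind MDAV-style heuristics referenced in "Improving Anonymization Clustering", the following is a minimal Python sketch of fixed-size grouping and averaging. It is a generic MDAV-like illustration, not the MDAV* strategy from the paper; the function name, the Euclidean distance, and the greedy seeding rule are assumptions made here for demonstration.

```python
import numpy as np

def microaggregate(records: np.ndarray, k: int) -> np.ndarray:
    """Replace each record by the mean of a group of size >= k (generic MDAV-like sketch).

    Groups are built greedily around the record farthest from the centroid of the
    remaining records and filled with its k-1 nearest neighbours; the leftover
    (< 2k records) forms the last group, so every group has size >= k (assuming n >= k).
    Averaging within groups is what causes the information loss discussed in the paper.
    """
    remaining = list(range(len(records)))
    anonymized = records.astype(float).copy()
    while len(remaining) >= 2 * k:
        pts = records[remaining]
        centroid = pts.mean(axis=0)
        # seed: record farthest from the centroid of the remaining records
        seed = remaining[int(np.argmax(np.linalg.norm(pts - centroid, axis=1)))]
        dists = np.linalg.norm(records[remaining] - records[seed], axis=1)
        group = [remaining[i] for i in np.argsort(dists)[:k]]
        anonymized[group] = records[group].mean(axis=0)
        remaining = [i for i in remaining if i not in group]
    if remaining:
        anonymized[remaining] = records[remaining].mean(axis=0)
    return anonymized
```

The information loss of such a clustering is typically measured as the sum of squared distances between original records and their group means, which is the quantity MDAV-style heuristics try to keep small.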
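For reference alongside "Towards a Differential Privacy Theory for Edge-Labeled Directed Graphs", the standard ε-differential-privacy guarantee can be stated as below. Which graphs count as "neighbouring" would be fixed by one of the three variant notions of an individual's information being/not being in the analyzed graph that the paper proposes; the formula itself is the usual textbook definition, not a result of the paper.

```latex
% Standard \varepsilon-differential privacy: a randomized mechanism M is
% \varepsilon-differentially private if, for all neighbouring inputs D, D'
% (here: graphs differing in one individual's information) and every set S
% of possible outputs,
\[
  \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S].
\]
```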
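As a purely generic illustration of comparing lighting environments in the spirit of "Towards Forensic Exploitation of 3-D Lighting Environments in Practice": lighting is often represented by a vector of spherical-harmonics coefficients, and two such vectors can be compared, for example, by cosine similarity. This is a hypothetical stand-in, not the comparison measure proposed in the paper.

```python
import numpy as np

def lighting_similarity(sh_a: np.ndarray, sh_b: np.ndarray) -> float:
    """Cosine similarity between two lighting environments given as
    spherical-harmonics coefficient vectors (e.g. 9 coefficients for order 2).

    Hypothetical comparison only; the paper proposes its own measure.
    Values near 1 suggest consistent illumination; lower values may hint at
    splicing, subject to an implementation- and problem-specific threshold.
    """
    a = sh_a / (np.linalg.norm(sh_a) + 1e-12)
    b = sh_b / (np.linalg.norm(sh_b) + 1e-12)
    return float(np.dot(a, b))
```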
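The trade-off described in "Bounded Privacy" can be sketched as a simple constrained choice: pick the strongest anonymization whose expected QoS still meets a minimum threshold. The QoS model, parameter names, and the toy numbers below are placeholders, not the paper's formalisation.

```python
def optimal_privacy_level(qos_of_privacy, q_min: float, levels):
    """Return the strongest privacy level whose QoS stays acceptable.

    qos_of_privacy: callable mapping a privacy level to expected QoS
                    (placeholder for a service-specific model).
    q_min:          minimum acceptable quality of service.
    levels:         candidate privacy levels, ordered from weakest to strongest.
    """
    feasible = [p for p in levels if qos_of_privacy(p) >= q_min]
    return max(feasible) if feasible else None  # None: service not feasible at all

# Toy example: QoS of a location-based service degrades linearly with the
# anonymization radius (in km), and at least 0.6 QoS is required.
best = optimal_privacy_level(lambda r: 1.0 - 0.15 * r, q_min=0.6, levels=[0.5, 1, 2, 4, 8])
# best == 2: the largest radius that keeps the service usable in this toy model
```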
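Finally, for "Turning the Table Around: Monitoring App Behavior", the risk signals mentioned in the abstract could, for instance, be computed as a preference-weighted aggregate over observed permission accesses. The weights, permission names, and scoring formula here are illustrative assumptions, not the project's actual model.

```python
from math import log1p

def risk_score(access_counts: dict, user_weights: dict) -> float:
    """Aggregate observed permission accesses into a single risk signal.

    access_counts: permission -> number of observed accesses (from monitoring).
    user_weights:  permission -> individual risk preference in [0, 1].
    Illustrative formula: log-damped counts so one very chatty permission
    does not dominate the score.
    """
    return sum(user_weights.get(perm, 0.5) * log1p(n)
               for perm, n in access_counts.items())

# Hypothetical example data
score = risk_score(
    {"ACCESS_FINE_LOCATION": 120, "READ_CONTACTS": 3},
    {"ACCESS_FINE_LOCATION": 0.9, "READ_CONTACTS": 0.7},
)
```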