Listing by keyword "Fairness"
Results 1–10 of 16
- Conference paper: Debiasing Vandalism Detection Models at Wikidata (INFORMATIK 2019: 50 Jahre Gesellschaft für Informatik – Informatik für Gesellschaft, 2019). Heindorf, Stefan; Scholten, Yan; Engels, Gregor; Potthast, Martin
- Conference paper: Fairness and Privacy in Voice Biometrics: A Study of Gender Influences Using wav2vec 2.0 (BIOSIG 2023, 2023). Chouchane, Oubaida; Panariello, Michele. This study investigates the impact of gender information on utility, privacy, and fairness in voice biometric systems, guided by the General Data Protection Regulation (GDPR) mandates, which underscore the need to minimize the processing and storage of private and sensitive data and to ensure fairness in automated decision-making systems. We adopt an approach that involves fine-tuning the wav2vec 2.0 model for speaker verification tasks, evaluating potential gender-related privacy vulnerabilities in the process. An adversarial technique is implemented during fine-tuning to obscure gender information within the speaker embeddings, thus bolstering privacy. Results on the VoxCeleb datasets indicate that our adversarial model increases privacy against uninformed attacks (AUC of 46.80%), yet slightly diminishes speaker verification performance (EER of 3.89%) compared to the non-adversarial model (EER of 2.37%). The model's efficacy is reduced against informed attacks (AUC of 96.27%). A preliminary analysis of system performance is conducted to identify potential gender bias, highlighting the need for continued research to understand and enhance fairness, and the delicate interplay between utility, privacy, and fairness in voice biometric systems. (An illustrative adversarial training sketch follows the listing below.)
- Conference paper: Generalizability and Application of the Skin Reflectance Estimate Based on Dichromatic Separation (SREDS) (BIOSIG 2023, 2023). Drahos, Joseph A.; Plesh, Richard. Face recognition (FR) systems have become widely used and readily available in recent history. However, differential performance between certain demographics has been identified within popular FR models. Skin tone differences between demographics can be one of the factors contributing to the differential performance observed in face recognition models. Skin tone metrics provide an alternative to self-reported race labels when such labels are lacking or not available at all, e.g., in large-scale face recognition datasets. In this work, we provide a further analysis of the generalizability of the Skin Reflectance Estimate based on Dichromatic Separation (SREDS) against other skin tone metrics and provide a use case for substituting race labels with SREDS scores in a privacy-preserving learning solution. Our findings suggest that SREDS consistently produces a skin tone metric with lower variability within each subject, and that SREDS values can be used as an alternative to self-reported race labels with a minimal drop in performance. Finally, we provide a publicly available, open-source implementation of SREDS to help the research community, available at https://github.com/JosephDrahos/SREDS. (A sketch of the kind of within-subject variability comparison reported here follows the listing below.)
- Journal article: Highly Accurate, But Still Discriminatory (Business & Information Systems Engineering: Vol. 63, No. 1, 2021). Köchling, Alina; Riazy, Shirin; Wehner, Marius Claus; Simbeck, Katharina. The study aims to identify whether algorithmic decision making leads to unfair (i.e., unequal) treatment of certain protected groups in the recruitment context. Firms increasingly implement algorithmic decision making to save costs and increase efficiency. Moreover, algorithmic decision making is considered fairer than human decisions, which are subject to social prejudices. Recent publications, however, imply that the fairness of algorithmic decision making is not necessarily given. To investigate this further, highly accurate algorithms were used to analyze a pre-existing data set of 10,000 video clips of individuals in self-presentation settings. The analysis shows that the under-representation of gender and ethnicity groups in the training data set leads to an unpredictable overestimation and/or underestimation of the likelihood of inviting representatives of these groups to a job interview. Furthermore, algorithms replicate the existing inequalities in the data set. Firms have to be careful when implementing algorithmic video analysis during recruitment, as biases occur if the underlying training data set is unbalanced. (A sketch of a group-wise selection-rate audit follows the listing below.)
- Conference paper: The Influence of Unequal Chatbot Treatment on Users in Group Chat (Mensch und Computer 2022 - Tagungsband, 2022). Goetz, Marie; Wolter, Kathrin; Prilla, Michael. Unfair treatment by artificial intelligence in human-AI interaction has received frequent attention in recent years. However, research in this area tends to target one-on-one interaction; experiments that focus on perceived unfairness in group settings involving an AI are mostly nonexistent. This work provides insight into such settings through a comparative study in a cooking setting that exposes groups of people to AIs which treat some participants differently than others. Our results show significant differences for participants who were treated unfairly by the AI, but also in groups not directly affected by the unfair treatment; the latter also thought worse of the AI if they felt another group partner was treated unfairly. We discuss these results and theorize about possible reasons.
- Workshop contribution: inSIDE Fair Dialogues: Assessing and Maintaining Fairness in Human-Computer-Interaction (Mensch und Computer 2018 - Workshopband, 2018). Janzen, Sabine; Bleymehl, Ralf; Alam, Aftab; Xu, Sascha; Stein, Hannah. For simulating human-like intelligence in dialogue systems, individual and partially conflicting motives of interlocutors have to be processed in dialogue planning. Little attention has been given to this topic in dialogue planning, in contrast to dialogues that are fully aligned with anticipated user motives. When considering dialogues with congruent and incongruent interlocutor motives, such as sales dialogues, dialogue systems need to find a balance between competition and cooperation. As a means for balancing such mixed motives in dialogues, we introduce the concept of fairness, defined as a combination of a fairness state and a fairness maintenance process. Focusing on a dialogue between human and robot in a retailing scenario, we show the application of the SatIsficing Dialogue Engine (inSIDE), a platform for assessing and maintaining fairness in dialogues with mixed motives.
- Conference paper: Learning Analytics und Diskriminierung (DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., 2020). Wehner, Marius; Köchling, Alina. Learning Analytics (LA) is increasingly used as a new source of advice for teachers. Educational institutions employ LA in their grading processes to increase efficiency and objectivity. However, LA can also lead to unfair treatment of certain groups of people and thus to implicit discrimination. The goal of the planned study is to demonstrate this potential for discrimination with an experimental conjoint analysis and thereby raise awareness of the negative consequences of LA.
- Journal article: Non-Discrimination-by-Design: Handlungsempfehlungen für die Entwicklung von vertrauenswürdigen KI-Services (HMD Praxis der Wirtschaftsinformatik: Vol. 59, No. 2, 2022). Rebstadt, Jonas; Kortum, Henrik; Gravemeier, Laura Sophie; Eberhardt, Birgid; Thomas, Oliver. In addition to human-induced discrimination against groups or individuals, more and more AI systems have shown discriminatory behavior in the recent past. Examples include AI systems in recruiting that discriminate against female candidates, chatbots with racist tendencies, or the object recognition used in autonomous vehicles that performs worse at recognizing black people than white people. This behavior arises from the intentional or unintentional reproduction of pre-existing biases in the training data, but also in the development teams. As AI systems increasingly establish themselves as an integral part of both private and economic spheres of life, science and practice must address the ethical framework for their use. This work therefore aims to make an economically and scientifically relevant contribution to this discourse, using the Smart Living ecosystem as an example because of its very private relevance to a diverse population. Requirements for AI systems in the Smart Living ecosystem with respect to non-discrimination were collected both from the literature and through expert interviews in order to derive recommendations for action for the development of AI services. These recommendations are primarily intended to support practitioners in adding ethical factors to their procedural models for the development of AI systems, thus advancing the development of non-discriminatory AI services.
- Workshop contribution: Partizipative & sozialverantwortliche Technikentwicklung (Mensch und Computer 2021 - Workshopband, 2021). Mucha, Henrik; Maas, Franzisca; Draude, Claude; Stilke, Julia; Jarke, Juliane; Bischof, Andreas; Marsden, Nicola; Berger, Arne; Wolf, Sara; Buchmüller, Sandra; Maaß, Susanne. In this workshop, researchers and practitioners meet to exchange ideas and discuss the involvement of users in technology development processes. They explore the question of how participation can live up to its claim of democratization and empowerment in research and practice. The workshop also serves as the annual meeting of the Fachgruppe „Partizipation“ in the Fachbereich Mensch-Computer-Interaktion (MCI) of the Gesellschaft für Informatik (GI).
- Workshop: Partizipative und sozialverantwortliche Technikentwicklung (Mensch und Computer 2024 - Workshopband, 2024). Maas, Franzisca; Volkman, Torben; Jarke, Juliane; Berger, Arne; Bischof, Andreas; Buchmüller, Sandra; Draude, Claude; Gaertner, Wanda; Horn, Viktoria; Maaß, Susanne; Marsden, Nicola; Mucha, Henrik; Struzek, David; Stepczynski, Jan; Wolf, Sara. In this workshop, researchers and practitioners meet to exchange ideas and discuss the involvement of users in technology development processes. They explore the question of how participation can live up to its claim of democratization and empowerment in research and practice. The workshop also serves as the annual meeting of the Fachgruppe „Partizipation“ in the Fachbereich Mensch-Computer-Interaktion (MCI) of the Gesellschaft für Informatik (GI).
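
For the voice biometrics entry above (Chouchane and Panariello, BIOSIG 2023), the following is a minimal sketch of adversarial removal of gender information from speaker embeddings via gradient reversal. It is an illustration under stated assumptions, not the authors' implementation: the small linear encoder stands in for the paper's fine-tuned wav2vec 2.0 model, and the layer sizes, optimizer, and losses are placeholders.

```python
# Minimal sketch: adversarial gender-information removal via gradient reversal.
# The encoder is a hypothetical stand-in for a fine-tuned wav2vec 2.0 backbone.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class AdversarialSpeakerModel(nn.Module):
    def __init__(self, feat_dim=768, emb_dim=256, n_speakers=100, lambd=1.0):
        super().__init__()
        # Placeholder encoder; in the paper this role is played by wav2vec 2.0.
        self.encoder = nn.Sequential(nn.Linear(feat_dim, emb_dim), nn.ReLU())
        self.speaker_head = nn.Linear(emb_dim, n_speakers)  # utility task
        self.gender_head = nn.Linear(emb_dim, 2)            # adversary
        self.lambd = lambd

    def forward(self, feats):
        emb = self.encoder(feats)
        spk_logits = self.speaker_head(emb)
        # Gradient reversal pushes the encoder to hide gender information
        # while the adversary tries to recover it.
        gen_logits = self.gender_head(GradReverse.apply(emb, self.lambd))
        return spk_logits, gen_logits


# One illustrative training step on random data.
model = AdversarialSpeakerModel()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

feats = torch.randn(8, 768)                # pretend pooled acoustic features
spk_labels = torch.randint(0, 100, (8,))
gen_labels = torch.randint(0, 2, (8,))

spk_logits, gen_logits = model(feats)
loss = ce(spk_logits, spk_labels) + ce(gen_logits, gen_labels)
optim.zero_grad()
loss.backward()
optim.step()
```

The design choice illustrated here is that a single joint loss suffices: the reversal layer flips the adversary's gradient inside the encoder, so improving gender prediction in the head simultaneously degrades gender recoverability in the embedding.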
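
For the SREDS entry above (Drahos and Plesh, BIOSIG 2023), here is a small sketch of the kind of within-subject variability comparison the abstract describes, run on synthetic data. The data frame, the noise levels, and the second metric are hypothetical; it does not call the authors' SREDS implementation.

```python
# Sketch: comparing within-subject variability of two skin tone metrics
# on synthetic data (hypothetical values, not the authors' pipeline).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_subjects, imgs_per_subject = 50, 10

rows = []
for subj in range(n_subjects):
    base = rng.uniform(0.2, 0.8)  # pretend "true" skin tone of this subject
    for _ in range(imgs_per_subject):
        rows.append({
            "subject": subj,
            "sreds": base + rng.normal(0, 0.02),         # small per-image spread
            "other_metric": base + rng.normal(0, 0.08),  # larger per-image spread
        })
df = pd.DataFrame(rows)

# Mean within-subject standard deviation per metric: lower values mean the
# metric is more stable across images of the same person.
within_subject_std = df.groupby("subject")[["sreds", "other_metric"]].std().mean()
print(within_subject_std)
```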
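
For the journal article above (Köchling et al., 2021), the following sketch shows a simple group-wise audit of interview-invitation rates, the kind of check such a fairness analysis relies on. The data frame, score threshold, and group labels are hypothetical and do not reflect the study's data set.

```python
# Sketch: group-wise selection-rate audit for a hiring-style classifier
# (synthetic data; threshold and group proportions are illustrative only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "gender": rng.choice(["female", "male"], size=n, p=[0.3, 0.7]),
    "score": rng.uniform(0, 1, size=n),  # pretend model score for "invite"
})
df["invited"] = df["score"] > 0.6

# Selection rate per group, plus the ratio between the lowest and highest rate
# (a disparate-impact style check; ratios far below 1 indicate imbalance).
rates = df.groupby("gender")["invited"].mean()
print(rates)
print("min/max rate ratio:", rates.min() / rates.max())
```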