Listing by keyword "Trustworthy AI"
1 - 5 of 5
- Conference paper: Bridging the Gap: The Role of OWASP AI Exchange in AI Standardization (INFORMATIK 2024, 2024). Bunzel, Niklas; Göller, Nicolas. In the rapidly evolving landscape of artificial intelligence (AI), the challenge of establishing a unified framework for AI regulation and standardization is increasingly critical. Standardization organizations worldwide, while striving to create guidelines for trustworthy AI, often diverge in their approaches and terminologies. This divergence creates significant challenges for legislators enacting comprehensive laws, such as the EU AI Act, and poses even greater challenges for companies expected to comply with these laws and diverse standards. Amidst this complexity, the Open Worldwide Application Security Project (OWASP) AI Exchange emerges as a pivotal solution. This initiative seeks to harmonize AI security standards and practices, thereby providing a much-needed bridge between varying regulatory expectations and practical implementation strategies for AI. This research paper delves into the role of the OWASP AI Exchange in simplifying and standardizing the realm of trustworthy AI, providing a cohesive framework that benefits legislators, industries, and the broader AI community.
- Conference paper: Ensuring trustworthy AI for sensitive infrastructure using Knowledge Representation (INFORMATIK 2024, 2024). Mejri, Oumayma; Waedt, Karl; Yatagha, Romarick; Edeh, Natasha; Sebastiao, Claudia Lemos. Artificial intelligence (AI) has become increasingly integrated into various aspects of society, from healthcare and finance to law enforcement and hiring processes. More recently, sensitive infrastructure such as nuclear plants is engaging AI in aspects of safety. However, these systems are not immune to biases and ethical concerns. This paper explores the role of knowledge representation in addressing ethics and fairness in AI, examining how biased or incomplete representations can lead to unfair outcomes and unreliable decision-making. It proposes strategies to mitigate these risks.
- Text document: GAFAI: Proposal of a Generalized Audit Framework for AI (INFORMATIK 2022, 2022). Markert, Thora; Langer, Fabian; Danos, Vasilios. ML-based AI applications are increasingly used in various fields and domains. Despite the enormous and promising capabilities of ML, the inherent lack of robustness, explainability, and transparency limits the potential use cases of AI systems. In particular, within every safety- or security-critical area, such limitations require risk considerations and audits to be compliant with the prevailing safety and security demands. Unfortunately, existing standards and audit schemes do not completely cover the ML-specific issues and lead to challenging or incomplete mappings of the ML functionality to the existing methodologies. Thus, we propose a generalized audit framework for ML-based AI applications (GAFAI) to anticipate and assist in achieving auditability. This conceptual risk- and requirement-driven approach, based on sets of generalized requirements and their corresponding application-specific refinements, contributes to closing the gaps in auditing AI.
- Text document: Towards the Operationalization of Trustworthy AI: Integrating the EU Assessment List into a Procedure Model for the Development and Operation of AI-Systems (INFORMATIK 2022, 2022). Kortum, Henrik; Rebstadt, Jonas; Böschen, Tula; Meier, Pascal; Thomas, Oliver. Artificial intelligence (AI) is increasingly permeating all areas of life, and it is not changing coexistence in society only for the better. Unfortunately, there is a growing number of examples where AI systems show problematic behavior, such as discrimination, insufficient accuracy, or a lack of data privacy or transparency. To counteract this trend, an EU initiative has drafted a legal framework and recommendations on how AI can be made more trustworthy and comply with people's fundamental rights. However, fundamental rights are currently not reflected in procedure models for the development and operation of AI systems. Our work contributes to closing this gap so that companies, especially SMEs with small IT departments and limited financial resources, are supported in the development process. Within the framework of a structured literature review, we derive a procedure model for the development and operation of AI systems and subsequently integrate concrete recommendations for achieving trustworthiness.
- Conference paper: VERIFAI - A Step Towards Evaluating the Responsibility of AI-Systems (BTW 2023, 2023). Göllner, Sabrina; Tropmann-Frick, Marina. This work represents the first step towards a unified framework for evaluating an AI system's responsibility by building a prototype application. The Python-based web application uses several libraries for testing the fairness, robustness, privacy, and explainability of a machine learning model, as well as the dataset used to train it. The workflow of the prototype is tested and described using images from a healthcare dataset, since healthcare is an area where automated decisions affect human lives, and building responsible AI in this area is therefore indispensable.
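To illustrate the kind of check such a fairness-evaluation pipeline performs, here is a minimal sketch of one common metric, the demographic parity difference. The function name and the toy data are illustrative assumptions, not taken from the VERIFAI prototype, which the abstract says relies on existing libraries rather than hand-rolled metrics.

```python
# Hedged sketch: demographic parity difference between two groups,
# i.e. the absolute gap in positive-prediction rates. All names and
# data below are illustrative, not from the VERIFAI codebase.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates across two groups.

    predictions: iterable of 0/1 model outputs
    groups:      iterable of group labels (exactly two distinct values)
    """
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

# Toy example: group "A" receives positive predictions at rate 3/4,
# group "B" at rate 1/4, so the disparity is 0.5.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 would indicate equal positive rates across groups; production fairness toolkits compute this and many related metrics (equalized odds, predictive parity) over real model outputs.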