Conference Paper
Towards a User-Empowering Architecture for Trustability Analytics
Document Type
Text/Conference Paper
Additional Information
Date
2023
Publisher
Gesellschaft für Informatik e.V.
Abstract
Machine learning (ML) thrives on big data such as huge data sets and streams from IoT devices. These technologies are becoming increasingly commonplace in our day-to-day lives. Learning autonomous intelligent actors (AIAs) already impact our lives in the form of, e.g., chatbots, medical expert systems, and facial recognition systems. As a consequence, doubts concerning the ethical, legal, and social implications of such AIAs are becoming increasingly compelling. Our society now finds itself confronted with decisive questions: Should we trust AI? Is it fair, transparent, and privacy-respecting? An individual psychological threshold for cooperation with AIAs has been postulated; in Shaefer’s words: “No trust, no use”. On the other hand, ignorance of an AIA’s weak points and idiosyncrasies can lead to overreliance. This paper proposes a prototypical microservice architecture for trustability analytics. The architecture is intended to introduce self-awareness concerning trustability into the AI2VIS4BigData reference model for big data analysis and visualization by borrowing the concept of the “looking-glass self” from psychology.
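To make the architectural idea concrete, the following minimal Python sketch illustrates (it is not taken from the paper) how a “looking-glass self” style trustability self-model might be exposed by one microservice in an AI2VIS4BigData-like pipeline: the service collects per-dimension user judgments and mirrors them back as a self-assessment. All identifiers (TrustabilityReport, LookingGlassSelf, record_feedback) are hypothetical illustrations, not the authors’ API.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Dict, List


@dataclass
class TrustabilityReport:
    """Hypothetical self-assessment of an AIA along common trust dimensions."""
    fairness: float
    transparency: float
    privacy: float


@dataclass
class LookingGlassSelf:
    """Sketch of a 'looking-glass self': the AIA models how users judge it.

    Users submit per-dimension ratings in [0, 1]; the service aggregates them
    into a report that other components (e.g. a visualization frontend) could query.
    """
    feedback: Dict[str, List[float]] = field(
        default_factory=lambda: {"fairness": [], "transparency": [], "privacy": []}
    )

    def record_feedback(self, dimension: str, rating: float) -> None:
        """Store one user rating for a known trustability dimension."""
        if dimension not in self.feedback:
            raise ValueError(f"unknown dimension: {dimension}")
        self.feedback[dimension].append(min(max(rating, 0.0), 1.0))

    def report(self) -> TrustabilityReport:
        """Aggregate user feedback; fall back to a neutral 0.5 when no data exists."""
        agg = {k: (mean(v) if v else 0.5) for k, v in self.feedback.items()}
        return TrustabilityReport(**agg)


if __name__ == "__main__":
    self_model = LookingGlassSelf()
    self_model.record_feedback("fairness", 0.8)
    self_model.record_feedback("privacy", 0.4)
    print(self_model.report())
```

In a microservice deployment, record_feedback and report would sit behind HTTP endpoints so that the analytics and visualization services of the reference model can query the AIA’s current self-image of its trustability.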