Title: Towards a User-Empowering Architecture for Trustability Analytics
Authors: Bruchhaus, Sebastian; Reis, Thoralf; Bornschlegl, Marco Xaver; Störl, Uta; Hemmje, Matthias
Editors: König-Ries, Birgitta; Scherzinger, Stefanie; Lehner, Wolfgang; Vossen, Gottfried
Date: 2023-02-23 (published 2023)
ISBN: 978-3-88579-725-8
URI: https://dl.gi.de/handle/20.500.12116/40369
DOI: 10.18420/BTW2023-60
Type: Text/Conference Paper
Language: en
Keywords: Trust; Machine Learning; Digital Humanities; Foundation Model; Transparency; XAI

Abstract: Machine learning (ML) thrives on big data, such as huge data sets and streams from IoT devices. These technologies are becoming increasingly commonplace in our day-to-day lives. Learning autonomous intelligent actors (AIAs) already affect our lives in the form of, e.g., chatbots, medical expert systems, and facial recognition systems. As a consequence, doubts concerning the ethical, legal, and social implications of such AIAs become increasingly compelling. Our society now finds itself confronted with decisive questions: Should we trust AI? Is it fair, transparent, and respectful of privacy? An individual psychological threshold for cooperation with AIAs has been postulated; in Shaefer's words: "No trust, no use". On the other hand, ignorance of an AIA's weak points and idiosyncrasies can lead to overreliance. This paper proposes a prototypical microservice architecture for trustability analytics. It is intended to introduce self-awareness concerning trustability into the AI2VIS4BigData reference model for big data analysis and visualization by borrowing the concept of a "looking-glass self" from psychology.