Listing by keyword "Interactive machine learning"
1 - 2 of 2
- Journal article: Power to the Oracle? Design Principles for Interactive Labeling Systems in Machine Learning (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020) Nadj, Mario; Knaeble, Merlin; Li, Maximilian Xiling; Maedche, Alexander
  Labeling is the process of attaching information to some object. In machine learning, it is required as ground truth to leverage the potential of supervised techniques. A key challenge in labeling is that users are not necessarily eager to behave as simple oracles, that is, to repeatedly answer whether a label is right or wrong. In this respect, scholars acknowledge the design of interactivity in labeling systems as a promising area for further improvement. In recent years, a considerable number of articles focusing on interactive labeling systems have been published. However, there is a lack of consolidated principles for how to design such systems. In this article, we identify and discuss five design principles for interactive labeling systems based on a literature review, and we offer a frame for detecting common ground in the implementation of corresponding solutions. With these guidelines, we strive to contribute design knowledge for the increasingly important class of interactive labeling systems.
- Journal article: XAINES: Explaining AI with Narratives (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022) Hartmann, Mareike; Du, Han; Feldhus, Nils; Kruijff-Korbayová, Ivana; Sonntag, Daniel
  Artificial Intelligence (AI) systems are increasingly pervasive: the Internet of Things, in-car intelligent devices, robots, and virtual assistants. Their large-scale adoption makes it necessary to explain their behaviour, for example to users who are impacted by their decisions, or to developers who need to ensure their functionality. This requires, on the one hand, obtaining an accurate representation of the chain of events that caused the system to behave in a certain way (e.g., to make a specific decision). On the other hand, this causal chain needs to be communicated to users according to their needs and expectations. In this phase of explanation delivery, allowing interaction between user and model has the potential to improve both model quality and user experience. The XAINES project investigates the explanation of AI systems through narratives targeted to the needs of a specific audience, focusing on two aspects that are crucial for successful explanation: generating and selecting appropriate explanation content, i.e. the information to be contained in the explanation, and delivering this information to the user in an appropriate way. In this article, we present the project's roadmap towards enabling the explanation of AI with narratives.