
User-friendly Explanatory Dialogues

dc.contributor.author: Alizadeh, Fatemeh
dc.contributor.author: Pins, Dominik
dc.contributor.author: Stevens, Gunnar
dc.date.accessioned: 2023-08-24T06:24:29Z
dc.date.available: 2023-08-24T06:24:29Z
dc.date.issued: 2023
dc.description.abstract: When dialogues with voice assistants (VAs) fall apart, users often become confused or even frustrated. To address these issues and related privacy concerns, Amazon recently introduced a feature allowing Alexa users to ask why it behaved in a certain way. But how do users perceive this new feature? In this paper, we present preliminary results from research conducted as part of a three-year project involving 33 German households. This project used interviews, fieldwork, and co-design workshops to identify common unexpected behaviors of VAs, as well as users' needs and expectations for explanations. Our findings show that, contrary to its intended purpose, the new feature actually exacerbates user confusion and frustration instead of clarifying Alexa's behavior. We argue that such voice interactions should be characterized as explanatory dialogues that account for a VA's unexpected behavior by providing interpretable information and prompting users to take action to improve their current and future interactions.
dc.identifier.doi: 10.18420/muc2023-mci-ws16-120
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/42137
dc.publisher: GI
dc.relation.ispartof: Mensch und Computer 2023 - Workshopband
dc.relation.ispartofseries: Mensch und Computer
dc.title: User-friendly Explanatory Dialogues
dc.type: Text/Workshop Paper
gi.conference.date: 3.-6. September 2023
gi.conference.location: Rapperswil
gi.conference.sessiontitle: MCI-WS16 - UCAI 2023: Workshop on User-Centered Artificial Intelligence

Files

Original bundle
Name: muc23-mci-ws16-120.pdf
Size: 185.23 KB
Format: Adobe Portable Document Format