User-friendly Explanatory Dialogues
dc.contributor.author | Alizadeh, Fatemeh
dc.contributor.author | Pins, Dominik
dc.contributor.author | Stevens, Gunnar
dc.date.accessioned | 2023-08-24T06:24:29Z
dc.date.available | 2023-08-24T06:24:29Z
dc.date.issued | 2023
dc.description.abstract | When dialogues with voice assistants (VAs) fall apart, users often become confused or even frustrated. To address these issues and related privacy concerns, Amazon recently introduced a feature allowing Alexa users to inquire about why it behaved in a certain way. But how do users perceive this new feature? In this paper, we present preliminary results from research conducted as part of a three-year project involving 33 German households. This project utilized interviews, fieldwork, and co-design workshops to identify common unexpected behaviors of VAs, as well as users’ needs and expectations for explanations. Our findings show that, contrary to its intended purpose, the new feature actually exacerbates user confusion and frustration instead of clarifying Alexa's behavior. We argue that such voice interactions should be characterized as explanatory dialogues that account for VAs' unexpected behavior by providing interpretable information and prompting users to take action to improve their current and future interactions. | en
dc.identifier.doi | 10.18420/muc2023-mci-ws16-120
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/42137
dc.publisher | GI
dc.relation.ispartof | Mensch und Computer 2023 - Workshopband
dc.relation.ispartofseries | Mensch und Computer
dc.title | User-friendly Explanatory Dialogues | en
dc.type | Text/Workshop Paper
gi.conference.date | 3.-6. September 2023
gi.conference.location | Rapperswil
gi.conference.sessiontitle | MCI-WS16 - UCAI 2023: Workshop on User-Centered Artificial Intelligence |
Files
Original bundle
- Name: muc23-mci-ws16-120.pdf
- Size: 185.23 KB
- Format: Adobe Portable Document Format