From ChatGPT to FactGPT: A Participatory Design Study to Mitigate the Effects of Large Language Model Hallucinations on Users

dc.contributor.author: Leiser, Florian
dc.contributor.author: Eckhardt, Sven
dc.contributor.author: Knaeble, Merlin
dc.contributor.author: Maedche, Alexander
dc.contributor.author: Schwabe, Gerhard
dc.contributor.author: Sunyaev, Ali
dc.contributor.editor: Stolze, Markus
dc.contributor.editor: Loch, Frieder
dc.contributor.editor: Baldauf, Matthias
dc.contributor.editor: Alt, Florian
dc.contributor.editor: Schneegass, Christina
dc.contributor.editor: Kosch, Thomas
dc.contributor.editor: Hirzle, Teresa
dc.contributor.editor: Sadeghian, Shadan
dc.contributor.editor: Draxler, Fiona
dc.contributor.editor: Bektas, Kenan
dc.contributor.editor: Lohan, Katrin
dc.contributor.editor: Knierim, Pascal
dc.date.accessioned: 2023-08-24T05:29:13Z
dc.date.available: 2023-08-24T05:29:13Z
dc.date.issued: 2023
dc.description.abstract: Large language models (LLMs) such as ChatGPT have recently gained interest across all walks of life due to the human-like quality of their textual responses. Despite their success in research, healthcare, and education, LLMs frequently include incorrect information, called hallucinations, in their responses. These hallucinations could lead users to trust fake news or change their general beliefs. We therefore investigate mitigation strategies desired by users to enable the identification of LLM hallucinations. To this end, we conduct a participatory design study in which everyday users design interface features that are then assessed for feasibility by machine learning (ML) experts. We find that many of the desired features are well received by ML experts but are also considered difficult to implement. Finally, we provide a list of desired features that should serve as a basis for mitigating the effects of LLM hallucinations on users.
dc.description.uri"https://dl.acm.org/doi/"&R8en
dc.identifier.doi: 10.1145/3603555.3603565
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/42041
dc.language.iso: en
dc.publisher: ACM
dc.relation.ispartof: Mensch und Computer 2023 - Tagungsband
dc.relation.ispartofseries: Mensch und Computer
dc.subject: ChatGPT
dc.subject: Large Language Models
dc.subject: Disney Method
dc.subject: Participatory Design
dc.subject: Artificial Hallucinations
dc.title: From ChatGPT to FactGPT: A Participatory Design Study to Mitigate the Effects of Large Language Model Hallucinations on Users
dc.type: Text/Conference Paper
gi.citation.publisherPlace: New York
gi.citation.startPage: 81
gi.citation.endPage: 90
gi.conference.date: 3-6 September 2023
gi.conference.location: Rapperswil
gi.conference.sessiontitle: MCI-SE02: Method Development & Exploration