Workshop contribution
Increasing Large Language Models Context Awareness through Nonverbal Cues
Document type
Text/Workshop Paper
Additional information
Date
2024
Publisher
Gesellschaft für Informatik e.V.
Abstract
Today, interaction with LLM-based agents is based mainly on text or voice. We explore how nonverbal cues and affective information can augment this interaction to create empathic, context-aware agents. To this end, we extend user prompts with input from different modalities at varying levels of abstraction. Specifically, we investigate extending the input to LLMs beyond text or voice, analogous to human-human interaction, in which humans rely not only on the words uttered by a conversation partner but also on nonverbal cues. We envision that cameras can capture the user's facial expressions, which can then be fed into the LLM conversation as an additional input channel, fostering context awareness. In this work, we introduce our application ideas and implementations, present preliminary findings, and discuss arising challenges.
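The core idea of the abstract, extending a user prompt with a nonverbal cue before it reaches the LLM, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the stubbed emotion label are assumptions, and in the envisioned setup `detect_facial_expression` would be a real camera-based classifier rather than a placeholder.

```python
# Sketch: augmenting a text prompt with a nonverbal cue before sending
# it to an LLM. The emotion label would, in the envisioned setup, come
# from a camera-based facial-expression classifier; a stub stands in
# for it here.

def detect_facial_expression(frame=None) -> str:
    """Placeholder for a real facial-expression classifier (e.g. a
    model running over webcam frames). Returns a coarse emotion label."""
    return "frustrated"  # stubbed result for illustration only


def augment_prompt(user_text: str, frame=None) -> str:
    """Prepend the detected nonverbal cue as structured context so the
    LLM can condition its reply on the user's affective state."""
    cue = detect_facial_expression(frame)
    return f"[nonverbal cue: user appears {cue}]\n{user_text}"


prompt = augment_prompt("Why does my build keep failing?")
print(prompt)
# → [nonverbal cue: user appears frustrated]
#   Why does my build keep failing?
```

The cue is injected as an extra line of context rather than merged into the user's wording, so the model can weigh the affective signal separately from the literal request.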