Title: Increasing Large Language Models Context Awareness through Nonverbal Cues
Authors: Schmidmaier, Matthias; Harrich, Cedrik; Mayer, Sven
Date: 2024-08-21
Year: 2024
URI: https://dl.gi.de/handle/20.500.12116/44317
DOI: 10.18420/muc2024-mci-ws09-190
Type: Text/Workshop Paper
Language: en
License: https://creativecommons.org/licenses/by/4.0/

Abstract: Today, interaction with LLM-based agents is mainly based on text or voice. We explore how nonverbal cues and affective information can augment this interaction in order to create empathic, context-aware agents. To this end, we extend user prompts with input from different modalities and at varying levels of abstraction. In detail, we investigate the potential of extending the input to LLMs beyond text or voice, similar to human-human interaction, in which humans rely not only on the words uttered by a conversation partner but also on nonverbal cues. As a result, we envision that cameras can pick up facial expressions from the user, which can then be fed into the LLM communication as an additional input channel, fostering context awareness. In this work, we introduce our application ideas and implementations, present preliminary findings, and discuss arising challenges.
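As a minimal illustration of the idea described in the abstract, the sketch below folds a detected facial-expression label into a user prompt before it would be passed to an LLM. This is not the authors' implementation: the `classify_expression` helper, the `AffectiveContext` structure, and the message format are hypothetical placeholders standing in for a real vision model and a real chat API.

```python
from dataclasses import dataclass


@dataclass
class AffectiveContext:
    """Nonverbal cue extracted from a camera frame (hypothetical)."""
    expression: str    # e.g., "smiling", "frowning", "neutral"
    confidence: float  # classifier confidence in [0, 1]


def classify_expression(frame) -> AffectiveContext:
    """Placeholder for a facial-expression classifier; a real system
    would run a vision model on the camera frame."""
    return AffectiveContext(expression="frowning", confidence=0.82)


def augment_prompt(user_text: str, affect: AffectiveContext) -> list[dict]:
    """Add the nonverbal cue as an extra input channel, here as a
    system-style annotation placed alongside the user's message."""
    annotation = (
        f"Nonverbal context: the user appears to be {affect.expression} "
        f"(confidence {affect.confidence:.2f}). "
        "Take this into account when responding empathically."
    )
    return [
        {"role": "system", "content": annotation},
        {"role": "user", "content": user_text},
    ]


if __name__ == "__main__":
    affect = classify_expression(frame=None)  # stand-in for a camera frame
    messages = augment_prompt("Why did my build fail again?", affect)
    for m in messages:
        print(f"{m['role']}: {m['content']}")
```

In this sketch the affective information is injected at the prompt level; the same structure could carry cues at other levels of abstraction (e.g., raw valence/arousal scores instead of a discrete label), which is one of the variations the abstract alludes to.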