Author: Reinhard, Philipp
Date: 2024-08-21
Year: 2024
URI: https://dl.gi.de/handle/20.500.12116/44242

Abstract: Generative artificial intelligence (GenAI), particularly large language models (LLMs), offers new capabilities in natural language understanding and generation, potentially reducing employee stress and high turnover rates in customer service delivery. However, these systems also present risks, such as generating convincing but erroneous responses, known as hallucinations and confabulations. This study therefore investigates the impact of GenAI on service performance in customer support settings, emphasizing augmentation over automation, and addresses three key questions: identifying patterns of GenAI infusion that alter service routines, assessing the effects of human-AI interaction on cognitive load and task performance, and evaluating the role of explainable AI (XAI) in detecting erroneous responses such as hallucinations. Employing a design science research approach, the study combines literature reviews, expert interviews, and experimental designs to derive implications for designing GenAI-driven augmentation. Preliminary findings reveal three key insights: (1) service employees play a critical role in retaining organizational knowledge and delegating decisions to GenAI agents; (2) using GenAI co-pilots significantly reduces cognitive load during stressful customer interactions; and (3) novice employees struggle to distinguish accurate AI-generated advice from inaccurate suggestions without additional explanatory context.

Language: en
Access rights: http://purl.org/eprint/accessRights/RestrictedAccess
Keywords: Generative AI; Large Language Models; Cognitive Load; Explainable AI
Title: Augmentation through Generative AI: Exploring the Effects of Human-AI Interaction and Explainable AI on Service Performance
Type: Text/Conference Paper
DOI: 10.18420/muc2024-mci-dc-360