Authors: Rudschies, Catharina; Rings, Sebastian; Kruse, Lucie; Schauenburg, Gesche; Marmarshahi, Hamed; Zimmer, Christian-Norbert
Date accessioned: 2023-08-24
Date available: 2023-08-24
Date issued: 2023
URI: https://dl.gi.de/handle/20.500.12116/42085
Abstract: Intelligent virtual agents (IVAs) are currently studied in health-related research for their ability to enhance accessibility and availability and to support clinical treatments. With the rapid improvements in the conversational skills of IVAs triggered by large language models (LLMs) such as ChatGPT, possibilities for using IVAs in mental healthcare are being explored. However, the adoption of IVAs in psychotherapeutic contexts calls for a discussion of the accompanying technical and ethical challenges. Limitations such as bias, confabulation, and a lack of explainability and accountability raise serious concerns in a sensitive context like mental healthcare. In this position paper, we elaborate on these limitations of LLMs when used for IVAs and discuss some of the ethical implications that need to be considered and addressed.
Title: Psychotherapy with the Help of ChatGPT? Current Technical and Ethical Boundaries of Intelligent Virtual Agents
Type: Text/Workshop Paper
DOI: 10.18420/muc2023-mci-ws06-367