Workshop Paper
ChatAnalysis: Can GPT-4 undermine Privacy in Smart Homes with Data Analysis?
Document Type
Text/Workshop Paper
Additional Information
Date
2024
Publisher
Gesellschaft für Informatik e.V.
Abstract
Interacting with Large Language Models (LLMs) has become as easy as sending a text message to a friend, contributing to their widespread use in various fields. We can ask them questions, instruct them to imitate characters, or have them generate images with just a few sentences. Document loaders for LLMs allow us to send tables or raw sensor data, which can then be analyzed to answer any questions we may have. Historically, such data analysis required specialized knowledge of statistics, programming, and data handling, which has been a barrier for many. This study examines the extent to which this barrier is falling due to the capabilities of LLMs, and the extent to which this facilitates the misuse of data to undermine privacy. We examine the ability of GPT-4 to interpret human activity through smart home sensor data. Specifically, we evaluate how effectively GPT-4 can infer daily activities, generalize behavior, and detect anomalies in human behavior. Our methodology utilizes sensor data collected over time from widespread connected devices such as lamps, motion sensors, doors, and thermometers, adapted from the CASAS project dataset. Our key findings show that while GPT-4 can infer certain daily activities and detect anomalies, its ability to generalize behavior patterns over a week is notably limited.
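To illustrate the kind of pipeline the abstract describes, the sketch below formats timestamped smart-home sensor events into a natural-language prompt that could be sent to an LLM such as GPT-4 for activity inference. This is not code from the paper: the event schema (sensor IDs such as `M003`, states such as `ON`/`OPEN`, loosely modeled on CASAS-style logs) and the prompt wording are assumptions for illustration only.

```python
def build_activity_prompt(events):
    """Render timestamped sensor events as a plain-text analysis prompt.

    `events` is a list of (timestamp, sensor_id, state) tuples; the
    schema is an assumption, loosely modeled on CASAS-style logs.
    """
    lines = [f"{ts} {sensor} {state}" for ts, sensor, state in events]
    return (
        "The following are timestamped smart home sensor events. "
        "Infer the resident's likely daily activities and flag any "
        "anomalies in their behavior:\n" + "\n".join(lines)
    )

# Hypothetical sensor events: motion sensor, door contact, lamp.
events = [
    ("2024-03-01 07:02:11", "M003", "ON"),    # bedroom motion
    ("2024-03-01 07:05:40", "D001", "OPEN"),  # bathroom door
    ("2024-03-01 07:30:05", "L002", "ON"),    # kitchen lamp
]
prompt = build_activity_prompt(events)

# The resulting prompt would then be sent to a chat endpoint, e.g. via the
# OpenAI API (call omitted here so the sketch stays self-contained):
# openai.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": prompt}],
# )
```

The point of the sketch is how low the barrier is: no statistics or feature engineering is needed, only concatenating raw sensor logs into a question, which is precisely the ease of misuse the study investigates.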