Conference Paper
LLMs on the Edge: Quality, Latency, and Energy Efficiency
Document Type
Text/Conference Paper
Additional Information
Date
2024
Publisher
Gesellschaft für Informatik e.V.
Abstract
Generative Artificial Intelligence has become integral to many people's lives, with Large Language Models (LLMs) gaining popularity in both science and society. While training these models is known to require significant energy, inference also contributes substantially to their total energy consumption. This study investigates how to use LLMs sustainably by examining the efficiency of inference, particularly on local hardware with limited computing resources. We develop metrics to quantify the efficiency of LLMs on edge devices, focusing on quality, latency, and energy consumption. Our comparison of three state-of-the-art generative models on edge devices shows that they achieve quality scores ranging from 73.3% to 85.9%, generate 1.83 to 3.51 tokens per second, and consume between 0.93 and 1.76 mWh of energy per token on a single-board computer without GPU support. The findings suggest that generative models can produce satisfactory outcomes on edge devices, but thorough efficiency evaluations are recommended before deployment in production environments.
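The abstract reports throughput in tokens per second and energy consumption in mWh per token. A minimal sketch of how such metrics can be computed from raw measurements is shown below; the function names and the example figures are illustrative assumptions based only on the units stated in the abstract, not the paper's actual methodology.

```python
# Hedged sketch: derives the two efficiency metrics named in the abstract
# (tokens/s and mWh/token) from raw measurements. The formulas are
# assumptions inferred from the stated units, not the paper's definitions.

def throughput_tokens_per_s(num_tokens: int, elapsed_s: float) -> float:
    """Generation speed expressed as tokens produced per second."""
    return num_tokens / elapsed_s

def energy_per_token_mwh(energy_mwh: float, num_tokens: int) -> float:
    """Energy consumed per generated token, in milliwatt-hours."""
    return energy_mwh / num_tokens

# Illustrative example (not data from the paper):
# 100 tokens generated in 40 s while drawing 120 mWh in total.
print(throughput_tokens_per_s(100, 40.0))  # 2.5 tokens/s
print(energy_per_token_mwh(120.0, 100))    # 1.2 mWh/token
```

Values in the reported ranges (1.83 to 3.51 tokens/s; 0.93 to 1.76 mWh/token) would arise from exactly such ratios measured over whole inference runs.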