Conference Paper

LLMs on the Edge: Quality, Latency, and Energy Efficiency

Document Type

Text/Conference Paper

Additional Information

Date

2024

Publisher

Gesellschaft für Informatik e.V.

Abstract

Generative Artificial Intelligence has become integral to many people's lives, with Large Language Models (LLMs) gaining popularity in both science and society. While training these models is known to require significant energy, inference also contributes substantially to their total energy consumption. This study investigates how to use LLMs sustainably by examining the efficiency of inference, particularly on local hardware with limited computing resources. We develop metrics to quantify the efficiency of LLMs on edge devices, focusing on quality, latency, and energy consumption. Our comparison of three state-of-the-art generative models on edge devices shows that they achieve quality scores ranging from 73.3% to 85.9%, generate 1.83 to 3.51 tokens per second, and consume between 0.93 and 1.76 mWh of energy per token on a single-board computer without GPU support. The findings suggest that generative models can produce satisfactory outcomes on edge devices, but thorough efficiency evaluations are recommended before deployment in production environments.
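The efficiency metrics named in the abstract (tokens per second and energy per token) can be illustrated with a minimal sketch. The class and function names below are hypothetical, not taken from the paper; the sample values merely fall within the ranges reported above.

```python
from dataclasses import dataclass


@dataclass
class InferenceRun:
    """Hypothetical measurement record for one LLM inference run on an edge device."""
    tokens_generated: int   # number of output tokens produced
    wall_time_s: float      # total wall-clock generation time in seconds
    energy_mwh: float       # total energy drawn during generation, in mWh


def tokens_per_second(run: InferenceRun) -> float:
    """Generation latency metric: output tokens per second of wall-clock time."""
    return run.tokens_generated / run.wall_time_s


def energy_per_token_mwh(run: InferenceRun) -> float:
    """Energy-efficiency metric: mWh of energy consumed per generated token."""
    return run.energy_mwh / run.tokens_generated


# Example with values inside the ranges reported in the abstract
# (1.83-3.51 tokens/s, 0.93-1.76 mWh/token):
run = InferenceRun(tokens_generated=512, wall_time_s=204.8, energy_mwh=614.4)
print(tokens_per_second(run))      # 2.5 tokens/s
print(energy_per_token_mwh(run))   # 1.2 mWh/token
```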

Description

Bast, Sebastian; Begic Fazlic, Lejla; Naumann, Stefan; Dartmann, Guido (2024): LLMs on the Edge: Quality, Latency, and Energy Efficiency. INFORMATIK 2024. DOI: 10.18420/inf2024_104. Bonn: Gesellschaft für Informatik e.V. PISSN: 1617-5468. ISBN: 978-3-88579-746-3. pp. 1183-1192. 5th Workshop "KI in der Umweltinformatik" (KIU-2024). Wiesbaden, 24-26 September 2024.
