
Evaluating Dangerous Capabilities of Large Language Models: An Examination of Situational Awareness


Document type

Text

Additional information

Date

2023

Publisher

Gesellschaft für Informatik e.V.

Abstract

This research proposal examines the inherent risks and potential challenges associated with the use of Large Language Models (LLMs), with particular emphasis on situational awareness: a model's understanding of its environment, its own state, and the implications of its actions. The proposed research aims to design a robust and reliable metric and an accompanying methodology for gauging situational awareness, followed by an in-depth analysis of major LLMs using that metric. The intention is to pinpoint latent hazards and suggest effective mitigation strategies, with the ultimate goal of promoting the responsible and secure advancement of artificial intelligence technologies.

Description

Yadav, Dipendra (2023): Evaluating Dangerous Capabilities of Large Language Models: An Examination of Situational Awareness. In: DC@KI2023: Proceedings of Doctoral Consortium at KI 2023, Berlin. Gesellschaft für Informatik e.V., pp. 88-93. DOI: 10.18420/ki2023-dc-10.
