Evaluating Dangerous Capabilities of Large Language Models: An Examination of Situational Awareness
dc.contributor.author | Yadav, Dipendra
dc.contributor.editor | Stolzenburg, Frieder
dc.date.accessioned | 2023-09-20T04:20:43Z
dc.date.available | 2023-09-20T04:20:43Z
dc.date.issued | 2023
dc.description.abstract | This research proposal examines the inherent risks and potential challenges associated with the use of Large Language Models (LLMs), with emphasis on situational awareness: an attribute signifying a model’s understanding of its environment, its own state, and the implications of its actions. The proposed research aims to design a robust and reliable metric and an accompanying methodology for gauging situational awareness, followed by an in-depth analysis of major LLMs using the developed metric. The intention is to pinpoint latent hazards and suggest effective mitigation strategies, with the ultimate goal of promoting the responsible and secure advancement of artificial intelligence technologies. | en
dc.identifier.doi | 10.18420/ki2023-dc-10
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/42398
dc.language.iso | en
dc.pubPlace | Bonn
dc.publisher | Gesellschaft für Informatik e.V.
dc.relation.ispartof | DC@KI2023: Proceedings of Doctoral Consortium at KI 2023
dc.title | Evaluating Dangerous Capabilities of Large Language Models: An Examination of Situational Awareness | en
dc.type | Text
gi.citation.endPage | 93
gi.citation.startPage | 88
gi.conference.date | 2023-09-26
gi.conference.location | Berlin
gi.conference.sessiontitle | Doctoral Consortium at KI 2023
gi.document.quality | digidoc |