Responsible Intelligent Systems

dc.contributor.author: Broersen, Jan
dc.date.accessioned: 2018-01-08T09:17:18Z
dc.date.available: 2018-01-08T09:17:18Z
dc.date.issued: 2014
dc.description.abstract: The 2013 ERC Consolidator project “Responsible Intelligent Systems” proposes to develop a formal framework for automating responsibility, liability and risk checking for intelligent systems. The goal is to answer three central questions, corresponding to the three sub-projects of the proposal: (1) What are suitable formal logical representation formalisms for knowledge of agentive responsibility in action, interaction and joint action? (2) How can we formally reason about the evaluation of grades of responsibility and risks relative to normative systems? (3) How can we perform computational checks of responsibilities in complex intelligent systems interacting with human agents? To answer the first two questions, we will design logical specification languages for collective responsibilities and for probability-based graded responsibilities, relative to normative systems. To answer the third question, we will design suitable translations to related logical formalisms for which optimised model checkers and theorem provers exist. All three answers will contribute to the central goal of the project as a whole: designing the blueprints for a formal responsibility checking system. To reach that goal, the project will combine insights from three disciplines: philosophy, legal theory and computer science.
dc.identifier.pissn: 1610-1987
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/11421
dc.publisher: Springer
dc.relation.ispartof: KI - Künstliche Intelligenz: Vol. 28, No. 3
dc.relation.ispartofseries: KI - Künstliche Intelligenz
dc.subject: Knowledge representation
dc.subject: Logics of agency
dc.subject: Normative systems
dc.subject: Risk and liability
dc.title: Responsible Intelligent Systems
dc.type: Text/Journal Article
gi.citation.endPage: 214
gi.citation.startPage: 209