Latest publications
- Journal article: Beyond Distributed Artificial Intelligence (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014) Klügl, Franziska
- Journal article: Reconfigurable Autonomy (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014) Dennis, Louise A.; Fisher, Michael; Aitken, Jonathan M.; Veres, Sandor M.; Gao, Yang; Shaukat, Affan; Burroughes, Guy. This position paper describes ongoing work at the Universities of Liverpool, Sheffield and Surrey in the UK on developing hybrid agent architectures for controlling autonomous systems, and specifically for ensuring that agent-controlled dynamic reconfiguration is viable. The work outlined here forms part of the Reconfigurable Autonomy research project.
- Journal article: Special Issue on Multi-Agent Decision Making (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014) Bulling, Nils
- Journal article: Responsible Intelligent Systems (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014) Broersen, Jan. The 2013 ERC Consolidator project “Responsible Intelligent Systems” proposes to develop a formal framework for automating responsibility, liability and risk checking for intelligent systems. The goal is to answer three central questions, corresponding to three sub-projects of the proposal: (1) What are suitable formal logical representation formalisms for knowledge of agentive responsibility in action, interaction and joint action? (2) How can we formally reason about the evaluation of grades of responsibility and risks relative to normative systems? (3) How can we perform computational checks of responsibilities in complex intelligent systems interacting with human agents? To answer the first two questions, we will design logical specification languages for collective responsibilities and for probability-based graded responsibilities, relative to normative systems. To answer the third question, we will design suitable translations to related logical formalisms, for which optimised model checkers and theorem provers exist. All three answers will contribute to the central goal of the project as a whole: designing the blueprints for a formal responsibility checking system. To reach that goal, the project will combine insights from three disciplines: philosophy, legal theory and computer science.
- Journal article: Measuring Inconsistency in Multi-Agent Systems (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014) Hunter, A.; Parsons, S.; Wooldridge, M. We introduce and investigate formal quantitative measures of inconsistency between the beliefs of agents in multi-agent systems. We start by recalling a well-known model of belief in multi-agent systems, and then, using this model, present two classes of inconsistency metrics. First, we consider metrics that attempt to characterise the overall degree of inconsistency of a multi-agent system in a single numeric value, where inconsistency is considered to be individuals within the system having contradictory beliefs. While this metric is useful as a high-level indicator of the degree of inconsistency between the beliefs of members of a multi-agent system, it is of limited value for understanding the structure of inconsistency in a system: it gives no indication of the sources of inconsistency. We therefore introduce metrics that quantify, for a given individual, the extent to which that individual is in conflict with other members of the society. These metrics are based on power indices, which were developed within the cooperative game theory community in order to understand the power that individuals wield in cooperative settings.
- Journal article: Beyond Reinforcement Learning and Local View in Multiagent Systems (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014) Bazzan, Ana L. C. Learning is an important component of an agent’s decision making process. Despite many messages to the contrary, the fact is that, currently, in the multiagent community learning most likely means reinforcement learning. Given this background, this paper has two aims: to revisit the “old days” motivations for multiagent learning, and to describe some of the work addressing the frontiers of multiagent systems and machine learning. The intention of the latter task is to motivate people to address the issues involved in applying techniques from multiagent systems in machine learning and vice versa.
- Journal article: Interview with Professor Sarit Kraus (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014) Bulling, Nils
- Journal article: A Survey of Multi-Agent Decision Making (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014) Bulling, Nils. In this article we give a high-level overview of various aspects relevant to multi-agent decision making. We begin with classical decision theory. Then, we introduce multi-agent decision making, focussing on game theory, complex decision making, and on intelligent agents. Afterwards, we discuss methods for reaching agreements interactively, e.g. by negotiation, bargaining, and argumentation, followed by approaches to coordinate and to control agents’ decision making.
- Journal article: Strategic Argumentation in Multi-Agent Systems (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014) Thimm, Matthias. Argumentation-based negotiation describes the process of decision-making in multi-agent systems through the exchange of arguments. If agents only have partial knowledge about the subject of a dialogue, strategic argumentation can be used to exploit weaknesses in the argumentation of other agents and thus to persuade other agents of a specific opinion and reach a certain outcome. This paper gives an overview of the field of strategic argumentation and surveys recent works and developments. We provide a general discussion of the problem of strategic argumentation in multi-agent settings and discuss approaches to strategic argumentation, in particular strategies based on opponent models.
- Journal article: News (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014)