Listing of Künstliche Intelligenz 28(3) - August 2014, by publication date
Showing items 1 - 10 of 14
- Journal article: Beyond Reinforcement Learning and Local View in Multiagent Systems (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014). Bazzan, Ana L. C. Learning is an important component of an agent’s decision-making process. Despite many messages to the contrary, the fact is that, in the current multiagent community, learning most often means reinforcement learning. Given this background, this paper has two aims: to revisit the “old days” motivations for multiagent learning, and to describe some of the work addressing the frontiers of multiagent systems and machine learning. The intention of the latter is to motivate people to address the issues involved in applying techniques from multiagent systems to machine learning and vice versa. (A minimal multiagent Q-learning sketch follows this listing.)
- Journal article: Interview with Professor Sarit Kraus (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014). Bulling, Nils
- Journal article: A Survey of Multi-Agent Decision Making (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014). Bulling, Nils. In this article we give a high-level overview of various aspects relevant to multi-agent decision making. We start with classical decision theory. Then we introduce multi-agent decision making, focusing on game theory, complex decision making, and intelligent agents. Afterwards, we discuss methods for reaching agreements interactively, e.g. by negotiation, bargaining, and argumentation, followed by approaches to coordinating and controlling agents’ decision making. (A pure-strategy equilibrium sketch follows this listing.)
- Journal article: Strategic Argumentation in Multi-Agent Systems (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014). Thimm, Matthias. Argumentation-based negotiation describes the process of decision making in multi-agent systems through the exchange of arguments. If agents have only partial knowledge about the subject of a dialogue, strategic argumentation can be used to exploit weaknesses in the argumentation of other agents and thus to persuade them of a specific opinion and reach a certain outcome. This paper gives an overview of the field of strategic argumentation and surveys recent work and developments. We provide a general discussion of the problem of strategic argumentation in multi-agent settings and discuss approaches to it, in particular strategies based on opponent models. (A grounded-semantics sketch follows this listing.)
- Journal article: Beyond Distributed Artificial Intelligence (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014). Klügl, Franziska
- Journal article: Reconfigurable Autonomy (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014). Dennis, Louise A.; Fisher, Michael; Aitken, Jonathan M.; Veres, Sandor M.; Gao, Yang; Shaukat, Affan; Burroughes, Guy. This position paper describes ongoing work at the Universities of Liverpool, Sheffield and Surrey in the UK on developing hybrid agent architectures for controlling autonomous systems, and specifically for ensuring that agent-controlled dynamic reconfiguration is viable. The work outlined here forms part of the Reconfigurable Autonomy research project.
- Journal article: News (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014)
- Journal article: Special Issue on Multi-Agent Decision Making (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014). Bulling, Nils
- Journal article: Gerhard Weiss (ed.): Multiagent Systems (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014). Kaźmierczak, Piotr
- Journal article: Responsible Intelligent Systems (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014). Broersen, Jan. The 2013 ERC Consolidator project “Responsible Intelligent Systems” proposes to develop a formal framework for automating responsibility, liability and risk checking for intelligent systems. The goal is to answer three central questions, corresponding to three sub-projects of the proposal: (1) What are suitable formal logical representation formalisms for knowledge of agentive responsibility in action, interaction and joint action? (2) How can we formally reason about the evaluation of grades of responsibility and risks relative to normative systems? (3) How can we perform computational checks of responsibilities in complex intelligent systems interacting with human agents? To answer the first two questions, we will design logical specification languages for collective responsibilities and for probability-based graded responsibilities, relative to normative systems. To answer the third question, we will design suitable translations to related logical formalisms for which optimised model checkers and theorem provers exist. All three answers will contribute to the central goal of the project as a whole: designing the blueprints for a formal responsibility checking system. To reach that goal the project will combine insights from three disciplines: philosophy, legal theory and computer science.
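The Bazzan abstract's premise that multiagent learning currently means reinforcement learning can be made concrete with a minimal sketch: two independent Q-learners repeatedly playing a stateless 2x2 coordination game, each keeping only a local view of its own actions and rewards. The game, payoffs and hyperparameters below are illustrative assumptions, not taken from the article.

```python
import random

ACTIONS = [0, 1]

def payoff(a0, a1):
    # Illustrative coordination game: 1 for matching actions, else 0.
    return (1.0, 1.0) if a0 == a1 else (0.0, 0.0)

class IndependentQLearner:
    """Learns as if alone (local view): the other agent is
    invisible, so the repeated game is treated as stateless."""
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.q = {a: 0.0 for a in ACTIONS}
        self.alpha, self.epsilon = alpha, epsilon

    def act(self):
        if random.random() < self.epsilon:      # explore
            return random.choice(ACTIONS)
        return max(self.q, key=self.q.get)      # exploit

    def update(self, action, reward):
        # Stateless Q-update: Q(a) <- Q(a) + alpha * (r - Q(a))
        self.q[action] += self.alpha * (reward - self.q[action])

a, b = IndependentQLearner(), IndependentQLearner()
for _ in range(5000):
    act_a, act_b = a.act(), b.act()
    r_a, r_b = payoff(act_a, act_b)
    a.update(act_a, r_a)
    b.update(act_b, r_b)

print(a.q, b.q)  # the learners usually settle on the same action
```

The sketch also shows the limitation the article pushes beyond: each learner optimises against a moving target, since the other agent's policy changes underneath it.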
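Bulling's survey starts from classical decision theory and game theory. As a point of reference, the sketch below enumerates the pure-strategy Nash equilibria of a small bimatrix game; the Prisoner's Dilemma payoffs are standard textbook values chosen here for illustration, not taken from the survey.

```python
from itertools import product

# Prisoner's Dilemma (row utility, column utility);
# actions: 0 = cooperate, 1 = defect.
PAYOFFS = {
    (0, 0): (3, 3), (0, 1): (0, 5),
    (1, 0): (5, 0), (1, 1): (1, 1),
}
ACTIONS = (0, 1)

def is_pure_nash(a_row, a_col):
    # A profile is a pure Nash equilibrium if neither player
    # can improve its utility by deviating unilaterally.
    u_row, u_col = PAYOFFS[(a_row, a_col)]
    row_ok = all(PAYOFFS[(d, a_col)][0] <= u_row for d in ACTIONS)
    col_ok = all(PAYOFFS[(a_row, d)][1] <= u_col for d in ACTIONS)
    return row_ok and col_ok

print([p for p in product(ACTIONS, repeat=2) if is_pure_nash(*p)])
# [(1, 1)]: mutual defection is the unique pure equilibrium
```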
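Thimm's overview presupposes an underlying argumentation semantics. The sketch below computes the grounded extension of a Dung-style abstract argumentation framework by iterating the characteristic function to its least fixed point; the example framework is invented for illustration, and the strategic layer the paper surveys (opponent models, argument selection) would sit on top of a computation like this one.

```python
# Abstract argumentation framework: arguments plus an attack relation.
# Invented example: a attacks b, b attacks c, c attacks d.
ARGS = {"a", "b", "c", "d"}
ATTACKS = {("a", "b"), ("b", "c"), ("c", "d")}

def attackers(x):
    return {y for (y, z) in ATTACKS if z == x}

def defended_by(s):
    # Characteristic function F(S): the arguments each of whose
    # attackers is itself attacked by some member of S.
    return {x for x in ARGS
            if all(attackers(y) & s for y in attackers(x))}

# The grounded extension is the least fixed point of F,
# reached by iterating upward from the empty set.
grounded = set()
while (nxt := defended_by(grounded)) != grounded:
    grounded = nxt

print(sorted(grounded))  # ['a', 'c']: a is unattacked and defends c
```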