Browsing by Author "Vitay, Julien"
1 - 2 of 2
- Journal article: "A Neuroscientific View on the Role of Emotions in Behaving Cognitive Agents" (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Vitay, Julien; Hamker, Fred H.
  While classical theories systematically opposed emotion and cognition, suggesting that emotions perturb the normal functioning of rational thought, recent progress in neuroscience highlights, on the contrary, that emotional processes are at the core of cognitive processes: they direct attention to emotionally relevant stimuli, favor the memorization of external events, evaluate the association between an action and its consequences, bias decision making by comparing the motivational values of different goals and, more generally, guide behavior towards fulfilling the needs of the organism. This article first proposes an overview of the brain areas involved in the emotional modulation of behavior and suggests a functional architecture for efficient decision making. It then reviews a series of biologically-inspired computational models of emotion addressing behavioral tasks such as classical conditioning and decision making, which highlight the computational mechanisms involved in emotional behavior (see the first sketch after this list). It underlines the importance of embodied cognition in artificial intelligence, as emotional processing is at the core of the cognitive computations that decide which behavior is most appropriate for the agent.
- Text document: "Training a deep policy gradient-based neural network with asynchronous learners on a simulated robotic problem" (INFORMATIK 2017, 2017). Lötzsch, Winfried; Vitay, Julien; Hamker, Fred
  Recent advances in deep reinforcement learning have attracted a lot of attention because of their ability to use raw signals such as video streams as inputs, instead of pre-processed state variables. However, the most popular methods (value-based methods, e.g. deep Q-networks) focus on discrete action spaces (e.g. the left/right buttons), while realistic robotic applications usually require a continuous action space (for example the joint space). Policy gradient methods, such as stochastic policy gradient or deep deterministic policy gradient, overcome this problem by allowing continuous action spaces (see the second sketch after this list). Despite their promise, they suffer from long training times, as they need huge numbers of interactions to converge. In this paper, we investigate to what extent a recent asynchronously parallel actor-critic approach, initially proposed to speed up discrete RL algorithms, can be used for the continuous control of robotic arms. We demonstrate the capabilities of this end-to-end learning algorithm on a simulated two-degree-of-freedom robotic arm and discuss its application to more realistic scenarios.
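First, a minimal illustration of the prediction-error principle that biologically-inspired models of classical conditioning (as reviewed in the first article) typically build on: the classic Rescorla-Wagner update. This is not a model from the article itself; the learning rate, reward magnitude and trial setup are illustrative assumptions.

```python
# Minimal Rescorla-Wagner sketch of classical conditioning.
# Illustrative only: the models reviewed in the article are more
# detailed, biologically grounded architectures; the learning rate
# and trial setup here are arbitrary assumptions.

def rescorla_wagner(trials, alpha=0.1, reward=1.0):
    """Track the associative strength V of a conditioned stimulus
    that is paired with a reward on every trial."""
    v = 0.0
    history = []
    for _ in range(trials):
        prediction_error = reward - v  # dopamine-like error signal
        v += alpha * prediction_error  # update associative strength
        history.append(v)
    return history

if __name__ == "__main__":
    # V converges towards the reward magnitude over repeated pairings.
    for t, v in enumerate(rescorla_wagner(10), start=1):
        print(f"trial {t}: V = {v:.3f}")
```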
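Second, for the INFORMATIK 2017 entry, a minimal sketch of a stochastic policy gradient (REINFORCE) with a Gaussian policy over a continuous action space, the family of methods that paper builds on. The one-step "reaching" task, the linear policy and all hyperparameters are illustrative assumptions, not the paper's asynchronous actor-critic architecture or its simulated robotic arm.

```python
# Minimal REINFORCE sketch with a Gaussian policy on a continuous
# action space. Toy task: given a target position s, emit an action a;
# the reward is -(a - s)^2, so the optimal policy mean is a = s (w = 1).
# Everything here (task, linear policy, hyperparameters) is an
# illustrative assumption, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
w = 0.0          # weight of the linear policy mean: mu = w * s
sigma = 0.2      # fixed exploration noise (std of the Gaussian policy)
lr = 0.05        # learning rate

for episode in range(3000):
    s = rng.uniform(-1.0, 1.0)    # state: target position
    mu = w * s                    # policy mean
    a = rng.normal(mu, sigma)     # sample a continuous action
    r = -(a - s) ** 2             # reward: closer to the target is better
    # REINFORCE update: step along r * grad_w log pi(a | s),
    # where grad_w log N(a; w*s, sigma^2) = (a - mu) / sigma^2 * s.
    w += lr * r * (a - mu) / sigma**2 * s

print(f"learned weight: {w:.2f} (optimum: 1.0)")
```

In practice, a learned baseline (the critic) is subtracted from the reward to reduce the variance of this estimator; that actor-critic structure is what the paper parallelizes across asynchronous learners.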