
Autonomous Learning of State Representations for Control: An Emerging Field Aims to Autonomously Learn State Representations for Reinforcement Learning Agents from Their Real-World Sensor Observations

dc.contributor.author: Böhmer, Wendelin
dc.contributor.author: Springenberg, Jost Tobias
dc.contributor.author: Boedecker, Joschka
dc.contributor.author: Riedmiller, Martin
dc.contributor.author: Obermayer, Klaus
dc.date.accessioned: 2018-01-08T09:18:05Z
dc.date.available: 2018-01-08T09:18:05Z
dc.date.issued: 2015
dc.description.abstract: This article reviews an emerging field that aims for autonomous reinforcement learning (RL) directly on sensor-observations. Straightforward end-to-end RL has recently shown remarkable success, but relies on large amounts of samples. As this is not feasible in robotics, we review two approaches to learn intermediate state representations from previous experiences: deep auto-encoders and slow-feature analysis. We analyze theoretical properties of the representations and point to potential improvements.
dc.identifier.pissn: 1610-1987
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/11481
dc.publisher: Springer
dc.relation.ispartof: KI - Künstliche Intelligenz: Vol. 29, No. 4
dc.relation.ispartofseries: KI - Künstliche Intelligenz
dc.subject: Autonomous robotics
dc.subject: Deep auto-encoder networks
dc.subject: End-to-end reinforcement learning
dc.subject: Representation learning
dc.subject: Slow feature analysis
dc.title: Autonomous Learning of State Representations for Control: An Emerging Field Aims to Autonomously Learn State Representations for Reinforcement Learning Agents from Their Real-World Sensor Observations
dc.type: Text/Journal Article
gi.citation.endPage: 362
gi.citation.startPage: 353
