Autonomous Learning of State Representations for Control: An Emerging Field Aims to Autonomously Learn State Representations for Reinforcement Learning Agents from Their Real-World Sensor Observations
dc.contributor.author | Böhmer, Wendelin
dc.contributor.author | Springenberg, Jost Tobias
dc.contributor.author | Boedecker, Joschka
dc.contributor.author | Riedmiller, Martin
dc.contributor.author | Obermayer, Klaus
dc.date.accessioned | 2018-01-08T09:18:05Z
dc.date.available | 2018-01-08T09:18:05Z
dc.date.issued | 2015
dc.description.abstract | This article reviews an emerging field that aims for autonomous reinforcement learning (RL) directly from sensor observations. Straightforward end-to-end RL has recently shown remarkable success, but relies on large numbers of samples. As this is not feasible in robotics, we review two approaches that learn intermediate state representations from previous experiences: deep auto-encoders and slow feature analysis. We analyze theoretical properties of the representations and point to potential improvements.
dc.identifier.pissn | 1610-1987
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/11481
dc.publisher | Springer
dc.relation.ispartof | KI - Künstliche Intelligenz: Vol. 29, No. 4
dc.relation.ispartofseries | KI - Künstliche Intelligenz
dc.subject | Autonomous robotics
dc.subject | Deep auto-encoder networks
dc.subject | End-to-end reinforcement learning
dc.subject | Representation learning
dc.subject | Slow feature analysis
dc.title | Autonomous Learning of State Representations for Control: An Emerging Field Aims to Autonomously Learn State Representations for Reinforcement Learning Agents from Their Real-World Sensor Observations
dc.type | Text/Journal Article
gi.citation.endPage | 362
gi.citation.startPage | 353