Show simple item record

dc.contributor.author: Böhmer, Wendelin
dc.contributor.author: Springenberg, Jost Tobias
dc.contributor.author: Boedecker, Joschka
dc.contributor.author: Riedmiller, Martin
dc.contributor.author: Obermayer, Klaus
dc.date: 2015-11-01
dc.date.accessioned: 2018-01-08T09:18:05Z
dc.date.available: 2018-01-08T09:18:05Z
dc.date.issued: 2015
dc.identifier.issn: 1610-1987
dc.identifier.uri: http://dl.gi.de/handle/20.500.12116/11481
dc.description.abstract: This article reviews an emerging field that aims for autonomous reinforcement learning (RL) directly on sensor observations. Straightforward end-to-end RL has recently shown remarkable success, but relies on large amounts of samples. As this is not feasible in robotics, we review two approaches to learn intermediate state representations from previous experiences: deep auto-encoders and slow-feature analysis. We analyze theoretical properties of the representations and point to potential improvements.
dc.publisher: Springer
dc.relation.ispartof: KI - Künstliche Intelligenz: Vol. 29, No. 4
dc.relation.ispartofseries: KI - Künstliche Intelligenz
dc.subject: Autonomous robotics
dc.subject: Deep auto-encoder networks
dc.subject: End-to-end reinforcement learning
dc.subject: Representation learning
dc.subject: Slow feature analysis
dc.title: Autonomous Learning of State Representations for Control: An Emerging Field Aims to Autonomously Learn State Representations for Reinforcement Learning Agents from Their Real-World Sensor Observations
dc.type: Text/Journal Article
mci.reference.pages: 353-362
gi.identifier.doi: 10.1007/s13218-015-0356-1


There are no files associated with this item.