Accounting for Task-Difficulty in Active Multi-Task Robot Control Learning
dc.contributor.author | Fabisch, Alexander | |
dc.contributor.author | Metzen, Jan Hendrik | |
dc.contributor.author | Krell, Mario Michael | |
dc.contributor.author | Kirchner, Frank | |
dc.date.accessioned | 2018-01-08T09:18:05Z | |
dc.date.available | 2018-01-08T09:18:05Z | |
dc.date.issued | 2015 | |
dc.description.abstract | Contextual policy search is a reinforcement learning approach for multi-task learning in the context of robot control learning. It can be used to learn versatile skills that generalize over a range of tasks specified by a context vector. In this work, we combine contextual policy search with ideas from active learning for selecting the task in which the next trial will be performed. Moreover, we use active training set selection to reduce the detrimental effects of exploration in the sampling policy. A core challenge in this approach is that the distribution of the obtained rewards may not be directly comparable between different tasks. We propose the novel approach PUBSVE for estimating a reward baseline and investigate empirically, on benchmark problems and simulated robotic tasks, to what extent this method can remedy the issue of non-comparable rewards. | |
dc.identifier.pissn | 1610-1987 | |
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/11484 | |
dc.publisher | Springer | |
dc.relation.ispartof | KI - Künstliche Intelligenz: Vol. 29, No. 4 | |
dc.relation.ispartofseries | KI - Künstliche Intelligenz | |
dc.subject | Active learning | |
dc.subject | Contextual policy search | |
dc.subject | Multi-task learning | |
dc.title | Accounting for Task-Difficulty in Active Multi-Task Robot Control Learning | |
dc.type | Text/Journal Article | |
gi.citation.endPage | 377 | |
gi.citation.startPage | 369 |