Title: Averaging rewards as a first approach towards Interpolated Experience Replay
Author: Pilar von Pilchau, Wenzel
Editors: Draude, Claude; Lange, Martin; Sick, Bernhard
Date: 2019-08-27
Year: 2019
ISBN: 978-3-88579-689-3
ISSN: 1617-5468
DOI: 10.18420/inf2019_ws53
URI: https://dl.gi.de/handle/20.500.12116/25089
Type: Text/Conference Paper
Language: en
Keywords: Experience Replay; Deep Q-Network; Deep Reinforcement Learning; Interpolation; Machine Learning; Organic Computing

Abstract: Reinforcement learning, and especially deep reinforcement learning, are research areas that are receiving more and more attention. The mathematical method of interpolation is used to infer information about data points in a region where only neighboring samples are known, and thus appears to be a promising extension of the experience replay, which is a major component of a variety of deep reinforcement learning methods. Interpolated experiences stored in the experience replay could speed up learning in the early phase and reduce the overall amount of exploration needed. A first approach that averages rewards in a setting with an unstable transition function and very low exploration is implemented and shows promising results that encourage further investigation.
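The abstract only sketches the idea at a high level, so the following Python snippet is a minimal, hypothetical illustration of storing reward-averaged ("interpolated") experiences alongside real ones in a replay buffer. The buffer layout, the neighborhood definition (k nearest stored start states with the same action), and the real/synthetic mixing ratio are assumptions made for illustration; they are not details taken from the paper.

# Hedged sketch: the abstract only states that rewards of neighboring
# experiences are averaged to create interpolated entries for the experience
# replay; the buffer layout, neighborhood rule, and parameters below
# (k_neighbors, synthetic_ratio, Euclidean state distance) are assumptions,
# not the authors' implementation.
import random
from collections import deque

import numpy as np


class InterpolatedReplayBuffer:
    """FIFO experience replay plus synthetic, reward-averaged entries."""

    def __init__(self, capacity=10_000, k_neighbors=5):
        self.buffer = deque(maxlen=capacity)     # real transitions (s, a, r, s', done)
        self.synthetic = deque(maxlen=capacity)  # interpolated transitions
        self.k = k_neighbors

    def add(self, state, action, reward, next_state, done):
        state = np.asarray(state, dtype=np.float32)
        next_state = np.asarray(next_state, dtype=np.float32)
        self.buffer.append((state, action, reward, next_state, done))
        self._interpolate(state, action, next_state, done)

    def _interpolate(self, state, action, next_state, done):
        # Stored transitions with the same action act as the "neighboring
        # samples" that the interpolation draws on.
        same_action = [t for t in self.buffer if t[1] == action]
        if len(same_action) < 2:
            return
        # Rank neighbors by distance of their start state to the new start state.
        same_action.sort(key=lambda t: float(np.linalg.norm(t[0] - state)))
        neighbors = same_action[: self.k]
        # Averaging rewards: the synthetic experience reuses the new
        # (state, action, next_state) but carries the mean neighbor reward.
        avg_reward = float(np.mean([t[2] for t in neighbors]))
        self.synthetic.append((state, action, avg_reward, next_state, done))

    def sample(self, batch_size, synthetic_ratio=0.25):
        """Mix real and interpolated experiences into one training batch."""
        n_syn = min(int(batch_size * synthetic_ratio), len(self.synthetic))
        n_real = min(batch_size - n_syn, len(self.buffer))
        batch = random.sample(list(self.buffer), n_real) + random.sample(list(self.synthetic), n_syn)
        random.shuffle(batch)
        return batch

In a DQN-style training loop, such a buffer would be filled after every environment step and sampled for each gradient update; the interpolated entries are meant to supply additional, plausible reward information early on, when few real experiences are available.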