Paper abstract

Transferring Instances for Model-Based Reinforcement Learning

Matthew E. Taylor - University of Texas at Austin, USA
Nicholas K. Jong - University of Texas at Austin, USA
Peter Stone - University of Texas at Austin, USA

Session: Reinforcement Learning 1
Springer Link: http://dx.doi.org/10.1007/978-3-540-87481-2_32

Reinforcement learning agents typically require substantial amounts of data before performing well on complex tasks. Transfer learning methods have made progress in reducing sample complexity, but they have primarily been applied to model-free learning methods rather than to the more data-efficient model-based methods. This paper introduces TIMBREL, a novel method for transferring instances effectively into a model-based reinforcement learning algorithm. We demonstrate that TIMBREL can significantly improve both the sample efficiency and the asymptotic performance of a model-based algorithm learning in a continuous state space. Additionally, we conduct experiments to test the limits of TIMBREL's effectiveness.
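
To make the instance-transfer idea concrete, the sketch below shows one way recorded source-task transitions could be translated through inter-task mappings and used to seed a target-task model learner in a continuous state space. This is a minimal illustration of the general approach the abstract describes, not the paper's algorithm: the names `Transition`, `transfer_instances`, `map_state`, `map_action`, and `InstanceModelLearner` are hypothetical, and the nearest-neighbor model is a stand-in for whatever model-based learner is actually used.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

# A recorded transition (instance): state, action, reward, next state.
# States are tuples of floats, matching a continuous state space.
@dataclass
class Transition:
    state: tuple
    action: int
    reward: float
    next_state: tuple

def transfer_instances(
    source_instances: Sequence[Transition],
    map_state: Callable[[tuple], tuple],   # inter-task state mapping (hypothetical)
    map_action: Callable[[int], int],      # inter-task action mapping (hypothetical)
) -> List[Transition]:
    """Translate source-task instances into the target task's state
    and action spaces so a model-based learner can use them."""
    return [
        Transition(
            state=map_state(t.state),
            action=map_action(t.action),
            reward=t.reward,
            next_state=map_state(t.next_state),
        )
        for t in source_instances
    ]

class InstanceModelLearner:
    """Toy instance-based model: predicts the outcome of (state, action)
    from the stored transition with the nearest state under that action."""
    def __init__(self):
        self.instances: List[Transition] = []

    def add(self, transitions: Sequence[Transition]) -> None:
        self.instances.extend(transitions)

    def predict(self, state: tuple, action: int) -> tuple:
        candidates = [t for t in self.instances if t.action == action]
        nearest = min(
            candidates,
            key=lambda t: sum((a - b) ** 2 for a, b in zip(t.state, state)),
        )
        return nearest.next_state

# Usage: seed the target-task model with translated source instances so the
# agent can make model predictions before collecting any target-task data.
source = [Transition((0.1, 0.0), 0, -1.0, (0.12, 0.01))]
learner = InstanceModelLearner()
learner.add(transfer_instances(source, map_state=lambda s: s, map_action=lambda a: a))
print(learner.predict((0.11, 0.0), 0))
```

The intuition this sketch captures is that transferred instances act as a substitute for target-task experience wherever local data is sparse, which is why transfer can improve the sample efficiency of a model-based learner.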