Research Area:  Machine Learning
Just like humans, robots can improve their performance through practice, i.e., by performing the desired behavior many times and updating the underlying skill representation using the newly gathered data. In this paper, we propose to implement robot practicing by applying statistical and reinforcement learning (RL) in a latent space of the selected skill representation. The latent space is computed by a deep autoencoder neural network, with the training data generated in simulation. However, we show that the resulting latent space representation is also useful for learning on a real robot. Our simulation and real-world results demonstrate that by exploiting the latent space of the underlying motor skill representation, the amount of data needed for effective learning by Gaussian Process Regression (GPR) can be significantly reduced. Similarly, the number of RL epochs can be significantly reduced. Finally, our results show that an autoencoder-based latent space is more effective for these purposes than a latent space computed by principal component analysis.
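The abstract's pipeline — encode motor skill data into an autoencoder latent space, then run GPR there and decode the prediction — can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the authors' architecture: the one-hidden-layer autoencoder, the synthetic trajectory family, the latent dimension, and the kernel length-scale are all invented here for illustration.

```python
# Minimal sketch (assumed names/architecture, not the paper's implementation):
# 1) train a tiny one-hidden-layer autoencoder on simulated trajectories,
# 2) fit Gaussian Process Regression from a task parameter to the latent code,
# 3) decode the GPR prediction back to a full trajectory.
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated data: 200 trajectories of 50 points, governed by one task parameter s
s = rng.uniform(-1.0, 1.0, size=(200, 1))                  # task parameters
t = np.linspace(0.0, 1.0, 50)
X = np.sin(np.pi * t[None, :] * (1.0 + s)) + 0.01 * rng.standard_normal((200, 50))

# --- One-hidden-layer autoencoder (tanh encoder, linear decoder), latent dim 3
D, L = X.shape[1], 3
W1 = 0.1 * rng.standard_normal((D, L)); b1 = np.zeros(L)
W2 = 0.1 * rng.standard_normal((L, D)); b2 = np.zeros(D)

losses = []
lr = 0.05
for _ in range(2000):                                      # plain gradient descent
    H = np.tanh(X @ W1 + b1)                               # latent codes
    Xhat = H @ W2 + b2                                     # reconstruction
    E = Xhat - X
    losses.append(float(np.mean(E ** 2)))
    dW2 = H.T @ E / len(X); db2 = E.mean(0)
    dZ = (E @ W2.T) * (1.0 - H ** 2)                       # backprop through tanh
    dW1 = X.T @ dZ / len(X); db1 = dZ.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

Z = np.tanh(X @ W1 + b1)                                   # latent training set

# --- GPR with an RBF kernel: predict the latent code from the task parameter
def rbf(A, B, ell=0.2):
    d2 = (A[:, None, 0] - B[None, :, 0]) ** 2
    return np.exp(-0.5 * d2 / ell ** 2)

K = rbf(s, s) + 1e-4 * np.eye(len(s))                      # kernel + noise term
alpha = np.linalg.solve(K, Z)                              # GP regression weights

s_new = np.array([[0.3]])                                  # unseen task parameter
z_new = rbf(s_new, s) @ alpha                              # predicted latent code
traj = z_new @ W2 + b2                                     # decoded trajectory

print("final AE loss:", losses[-1])
print("decoded trajectory shape:", traj.shape)
```

The point of the sketch is the claimed data efficiency: GPR here regresses onto a 3-dimensional latent code rather than the full 50-dimensional trajectory, so far fewer training examples are needed to cover the output space.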
Author(s) Name:  Rok Pahič, Zvezdan Lončarević, Andrej Gams, Aleš Ude
Journal name:  Robotics and Autonomous Systems
Publisher name:  Elsevier
Volume Information:  Volume 135, January 2021, 103690
Paper Link:   https://www.sciencedirect.com/science/article/pii/S0921889020305303