Model-based Adversarial Meta-Reinforcement Learning - 2020

Model-Based Adversarial Meta-Reinforcement Learning

Research Area:  Machine Learning

Abstract:

Meta-reinforcement learning (meta-RL) aims to learn, from multiple training tasks, the ability to adapt efficiently to unseen test tasks. Despite this success, existing meta-RL algorithms are known to be sensitive to task distribution shift: when the test task distribution differs from the training task distribution, performance may degrade significantly. To address this issue, this paper proposes Model-based Adversarial Meta-Reinforcement Learning (AdMRL), in which we aim to minimize the worst-case sub-optimality gap (the difference between the optimal return and the return the algorithm achieves after adaptation) across all tasks in a family of tasks, with a model-based approach. We propose a minimax objective and optimize it by alternating between learning the dynamics model on a fixed task and finding the adversarial task for the current model, that is, the task for which the policy induced by the model is maximally suboptimal. Assuming the family of tasks is parameterized, we derive a formula for the gradient of the suboptimality with respect to the task parameters via the implicit function theorem, and show how the gradient estimator can be implemented efficiently using the conjugate gradient method and a novel use of the REINFORCE estimator. We evaluate the approach on several continuous control benchmarks and demonstrate its advantages over existing state-of-the-art meta-RL algorithms in worst-case performance across all tasks, in generalization to out-of-distribution tasks, and in training- and test-time sample efficiency.
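To make the minimax objective in the abstract concrete, it can be sketched roughly as follows; the notation below is an illustrative choice made for this summary, not necessarily the paper's own.

$$\min_{\widehat{T}} \; \max_{\psi \in \Psi} \; \Big[ V_{\psi}\big(\pi^{*}_{\psi}\big) - V_{\psi}\big(\pi_{\widehat{T},\psi}\big) \Big]$$

Here $\Psi$ is the parameterized family of tasks, $V_{\psi}(\pi)$ is the expected return of policy $\pi$ on task $\psi$, $\pi^{*}_{\psi}$ is the optimal policy for task $\psi$, and $\pi_{\widehat{T},\psi}$ is the policy induced by adapting with the learned dynamics model $\widehat{T}$ on task $\psi$. The alternation described in the abstract then corresponds to an inner step that updates $\widehat{T}$ (and the induced policy) on a fixed task, and an outer step that performs gradient ascent on $\psi$, using the implicit-function-theorem gradient estimated with conjugate gradient and REINFORCE, to locate the task on which the current model's policy is most suboptimal.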

Keywords:  

Author(s) Name:  Zichuan Lin, Garrett Thomas, Guangwen Yang, Tengyu Ma

Journal name:  Advances in Neural Information Processing Systems

Conference name:  

Publisher name:  arXiv

DOI:  https://doi.org/10.48550/arXiv.2006.08875

Volume Information: