Research Area:  Machine Learning
Safety is critical when applying reinforcement learning (RL) to real-world problems. As a result, safe RL has emerged as a fundamental and powerful paradigm for optimizing an agent's policy while incorporating notions of safety. A prevalent safe RL approach is based on a constrained criterion, which seeks to maximize the expected cumulative reward subject to specific safety constraints. Despite recent efforts to enhance safety in RL, a systematic understanding of the field remains difficult. This challenge stems from the diversity of constraint representations and the limited exploration of their interrelations. To bridge this knowledge gap, we present a comprehensive review of representative constraint formulations, along with a curated selection of algorithms designed specifically for each formulation. In addition, we elucidate the theoretical underpinnings that reveal the mathematical mutual relations among common problem formulations. We conclude with a discussion of the current state and future directions of safe reinforcement learning research.
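For context, the constrained criterion described in the abstract is commonly formalized as a constrained Markov decision process (CMDP). A minimal sketch in standard notation follows; the reward r, safety cost c, discount factor \gamma, and threshold d are generic illustrative symbols, not drawn from the paper itself:

\[
\max_{\pi}\ \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]
\quad \text{s.t.} \quad
\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, c(s_t, a_t)\right] \le d
\]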
Keywords:  
Author(s) Name:  Akifumi Wachi, Xun Shen, Yanan Sui
Journal name:  Artificial Intelligence
Conference name:  
Publisher name:  arXiv
DOI:  10.48550/arXiv.2402.02025
Volume Information:  Volume 221 (2024)
Paper Link:   https://arxiv.org/abs/2402.02025