Research Area:  Machine Learning
Abstract:  This work studies reinforcement learning (RL) in the context of multi-period supply chains subject to constraints, e.g., on inventory. We introduce Distributional Constrained Policy Optimization (DCPO), a novel approach for reliable constraint satisfaction in RL. Our approach is based on Constrained Policy Optimization (CPO), which is subject to approximation errors that in practice lead it to converge to infeasible policies. We address this issue by incorporating aspects of distributional RL. Using a supply chain case study, we show that DCPO improves the rate at which the RL policy converges and ensures reliable constraint satisfaction by the end of training. The proposed method also greatly reduces the variance of returns between runs; this result is significant in the context of policy gradient methods, which intrinsically introduce high variance during training.
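To make the idea in the abstract concrete, below is a minimal, hedged sketch of how a distributional critic can be combined with a constrained policy update. This is not the authors' published implementation: the sketch assumes a quantile-regression critic (a standard distributional RL tool; the paper's distributional component may differ) for the constraint cost, and uses a simple Lagrangian dual-ascent update in place of CPO's trust-region projection. All names (QuantileCostCritic, pessimistic_cost, the cost limit d) are hypothetical.

# Illustrative sketch only -- NOT the paper's published implementation.
# Assumptions: a quantile-regression critic estimates the distribution of
# discounted constraint costs, and a Lagrangian relaxation stands in for
# CPO's exact trust-region update.
import torch
import torch.nn as nn

N_QUANTILES = 32
# Quantile midpoints tau_i = (i + 0.5) / N, as in quantile-regression DQN.
TAUS = (torch.arange(N_QUANTILES, dtype=torch.float32) + 0.5) / N_QUANTILES


class QuantileCostCritic(nn.Module):
    """Predicts N_QUANTILES quantiles of the discounted constraint cost."""

    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, N_QUANTILES),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # (batch, N_QUANTILES)


def quantile_huber_loss(pred: torch.Tensor, target: torch.Tensor,
                        kappa: float = 1.0) -> torch.Tensor:
    """Quantile-regression Huber loss against scalar cost-to-go samples."""
    td = target.unsqueeze(1) - pred                        # (batch, N)
    huber = torch.where(td.abs() <= kappa,
                        0.5 * td.pow(2),
                        kappa * (td.abs() - 0.5 * kappa))
    weight = (TAUS.unsqueeze(0) - (td.detach() < 0).float()).abs()
    return (weight * huber).mean()


def pessimistic_cost(critic: QuantileCostCritic, obs: torch.Tensor,
                     alpha: float = 0.9) -> torch.Tensor:
    """CVaR-style estimate: mean of the upper (1 - alpha) tail of quantiles."""
    q = critic(obs)                                        # (batch, N)
    k = max(1, round((1.0 - alpha) * N_QUANTILES))
    return q.topk(k, dim=1).values.mean(dim=1)             # (batch,)


if __name__ == "__main__":
    torch.manual_seed(0)
    obs_dim, batch = 8, 256
    critic = QuantileCostCritic(obs_dim)
    opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

    # Placeholder rollout data; a real agent would collect these by
    # simulating the supply chain environment.
    obs = torch.randn(batch, obs_dim)
    cost_to_go = torch.rand(batch) * 5.0

    for _ in range(200):
        loss = quantile_huber_loss(critic(obs), cost_to_go)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Dual ascent on the Lagrange multiplier: tighten the penalty whenever
    # the pessimistic cost estimate exceeds the (hypothetical) limit d.
    d, lam, lam_lr = 3.0, 0.0, 0.05
    j_c = pessimistic_cost(critic, obs).mean().item()
    lam = max(0.0, lam + lam_lr * (j_c - d))
    print(f"pessimistic cost estimate {j_c:.3f}, multiplier {lam:.3f}")

The design choice worth noting: checking the constraint against an upper-tail (pessimistic) statistic of the learned cost distribution, rather than its mean, is one plausible way distributional information can buy more reliable constraint satisfaction, at the price of some conservatism.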
Keywords:  
Author(s) Name:  Jaime Sabal Bermúdez, Antonio del Rio Chanona, Calvin Tsay
Journal name:  Computer Aided Chemical Engineering
Conference name:  
Publisher name:  Elsevier (hosted on ScienceDirect)
DOI:  10.1016/B978-0-443-15274-0.50262-6
Volume Information:  Volume 52 (2023)
Paper Link:  https://www.sciencedirect.com/science/article/abs/pii/B9780443152740502626