
Research Topics in Hierarchical Reinforcement Learning

  Hierarchical Reinforcement Learning (HRL) decomposes a reinforcement learning problem into a hierarchy of sub-problems, ranging from high-level tasks down to low-level tasks across multiple levels. HRL plays a vital role in scaling reinforcement learning methods to large, complex, real-world problems. Difficulties in reinforcement learning such as sparse rewards, long task horizons, and the need for intricate skills are mitigated by this hierarchical decomposition. A key benefit of hierarchical reinforcement learning is the reduction in computational complexity.

  Prominent approaches to Hierarchical Reinforcement Learning are Hierarchies of Abstract Machines (HAMs), MAXQ value function decomposition, and the Options framework. Practical applications of hierarchical reinforcement learning include disease diagnosis, industrial robotics, and fleet management for ride-hailing platforms. Recent advances extend HRL, typically formalized as a Semi-Markov Decision Process (SMDP), to concurrent activities, multi-agent domains, and partially observable states. Learning task hierarchies, compact representations, and dynamic abstraction are future research scopes of HRL.
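  The Options framework mentioned above can be sketched minimally: an option bundles an initiation set, an intra-option policy over primitive actions, and a termination condition, and executes as a single temporally extended action. The toy corridor environment and the option definition below are illustrative assumptions, not part of the source.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    """An option in the sense of the Options framework: a triple of
    initiation set I(s), intra-option policy pi(s), and termination beta(s)."""
    name: str
    can_start: Callable[[int], bool]   # initiation set I(s)
    policy: Callable[[int], int]       # pi(s) -> primitive action
    terminates: Callable[[int], bool]  # termination condition beta(s)

def run_option(env_step, state, option, max_steps=50):
    """Execute an option until its termination condition fires."""
    total_reward, steps = 0.0, 0
    while not option.terminates(state) and steps < max_steps:
        action = option.policy(state)
        state, reward = env_step(state, action)
        total_reward += reward
        steps += 1
    return state, total_reward, steps

# Toy 1-D corridor: states 0..10, actions -1/+1, reward 1.0 at state 10.
def env_step(state, action):
    next_state = max(0, min(10, state + action))
    return next_state, (1.0 if next_state == 10 else 0.0)

# A hand-coded option: walk right until the goal state is reached.
go_right = Option(
    name="go-right-to-goal",
    can_start=lambda s: s < 10,
    policy=lambda s: +1,
    terminates=lambda s: s == 10,
)

state, total_reward, steps = run_option(env_step, 0, go_right)
print(state, total_reward, steps)  # -> 10 1.0 10
```

  A higher-level policy then treats whole options like `go_right` as its actions, which is what makes the decision process over options semi-Markov.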

  • Conventional reinforcement learning is bottlenecked by the curse of dimensionality for many practical applications.

  • Hierarchical Reinforcement Learning (HRL) is an elegant solution that directly addresses this bottleneck: it enables autonomous decomposition of challenging long-horizon decision-making tasks into a hierarchy of sub-tasks.

  • Based on the hierarchy, HRL learns a higher-level policy to perform the task by choosing optimal sub-tasks as the higher-level actions.

  • HRL exploits temporal and state abstraction to reduce sequential decision-making dimensionality.

  • HRL research addresses a diverse set of challenges, such as learning the policies in a hierarchy, autonomous discovery of sub-tasks, transfer learning, and multi-agent learning.

  • HRL intuitively resembles how humans solve complex tasks and facilitates resolving large-scale and complex domain problems in real-life scenarios.
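  The bullet points above can be made concrete with a small, hypothetical sketch: a higher-level policy is computed over sub-tasks (options) by dynamic-programming backups on the induced Semi-Markov Decision Process, where an option lasting k primitive steps is discounted by gamma**k. The corridor environment and the two hand-coded options are illustrative assumptions, not from the source.

```python
def env_step(state, action):
    # 1-D corridor with states 0..10; reward 1.0 on reaching the goal state 10.
    nxt = max(0, min(10, state + action))
    return nxt, (1.0 if nxt == 10 else 0.0)

# Two temporally extended sub-tasks (options): walk to state 0, walk to the goal.
OPTIONS = {
    "left":  {"action": -1, "done": lambda s: s == 0},
    "right": {"action": +1, "done": lambda s: s == 10},
}

def run_option(state, name, max_steps=20):
    """Run an option to termination; return (next_state, reward, duration k)."""
    opt, total, k = OPTIONS[name], 0.0, 0
    while not opt["done"](state) and k < max_steps:
        state, r = env_step(state, opt["action"])
        total += r
        k += 1
    return state, total, max(k, 1)

def smdp_q_iteration(sweeps=50, gamma=0.95):
    """Full backups over (state, option) pairs. The gamma**k discount for an
    option lasting k primitive steps is the SMDP signature of HRL."""
    Q = {(s, o): 0.0 for s in range(11) for o in OPTIONS}
    for _ in range(sweeps):
        for s in range(10):  # state 10 is terminal
            for o in OPTIONS:
                s2, r, k = run_option(s, o)
                best = 0.0 if s2 == 10 else max(Q[(s2, n)] for n in OPTIONS)
                Q[(s, o)] = r + gamma ** k * best
    return Q

Q = smdp_q_iteration()
greedy_at_5 = max(OPTIONS, key=lambda n: Q[(5, n)])
print(greedy_at_5)  # -> right
```

  The higher-level policy reasons only at option boundaries, so the sequential decision problem shrinks from ten primitive steps to a single choice among sub-tasks, illustrating the dimensionality reduction claimed above.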