Research Proposal on Deep Reinforcement Learning Methods for Active Decision Making

   In this era of intelligent systems, Deep Reinforcement Learning (DRL) embodies the vision of creating autonomous agents that interact with their environments and learn complex policies in real-world, high-dimensional settings. DRL is poised to revolutionize the field of artificial intelligence (AI) by incorporating neural network function approximation into traditional Reinforcement Learning (RL) algorithms. DRL solves decision-making problems in high-dimensional state spaces in an end-to-end framework and dramatically improves the generalization and scalability of traditional RL algorithms.

   Because DRL addresses cases that are analytically intractable by using approximations and data-driven techniques, it has received significant attention in recent years. This surge of interest follows several successful DRL methods spanning value-based, policy-gradient, and model-based algorithms. Central algorithms in DRL include the deep Q-network (DQN), trust region policy optimization (TRPO), and actor-critic algorithms.

Deep Q-Network (DQN):

   DQN learns rich domain representations: using neural networks to model either a policy or a Q-function removes the need to hand-construct task-specific features and opens the possibility of applying RL algorithms to complex tasks. DQN addresses the fundamental instability of using a non-linear function approximator through two techniques: experience replay and a target network.
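
   The two stabilizing techniques can be made concrete in a few lines of code. The following is a minimal PyTorch sketch, not an implementation from the proposal: the network size, hyperparameters, and transition format are illustrative assumptions.

import random
from collections import deque

import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small fully connected Q-function; sizes are placeholders."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_actions))
    def forward(self, x):
        return self.net(x)

obs_dim, n_actions, gamma = 4, 2, 0.99   # assumed environment shape
q_net = QNet(obs_dim, n_actions)
target_net = QNet(obs_dim, n_actions)    # technique 2: frozen target network
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)            # technique 1: experience replay buffer,
                                         # stores (state, action, reward, next_state, done)

def train_step(batch_size=32):
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)   # random draws break temporal correlation
    s, a, r, s2, done = map(torch.tensor, zip(*batch))
    q = q_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():                       # bootstrap from the frozen target network
        target = r.float() + gamma * (1 - done.float()) * target_net(s2.float()).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Periodically sync: target_net.load_state_dict(q_net.state_dict())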

Trust region policy optimization (TRPO):

   TRPO is a pure policy-gradient algorithm for optimizing control policies with guaranteed monotonic improvement, and it is widely applicable to domains with high-dimensional inputs. It computes an ascent direction that improves on the vanilla policy gradient while ensuring only a small change in the policy distribution.
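
   In standard notation (a hedged restatement of the usual TRPO formulation, not text from the proposal), each update maximizes a surrogate advantage subject to a KL-divergence trust region:

   \max_\theta \; \mathbb{E}_{s,a \sim \pi_{\theta_{\text{old}}}} \left[ \frac{\pi_\theta(a \mid s)}{\pi_{\theta_{\text{old}}}(a \mid s)} \, A^{\pi_{\theta_{\text{old}}}}(s,a) \right] \quad \text{s.t.} \quad \mathbb{E}_s \left[ D_{\mathrm{KL}}\big( \pi_{\theta_{\text{old}}}(\cdot \mid s) \,\|\, \pi_\theta(\cdot \mid s) \big) \right] \le \delta

   The bound \delta limits how far the new policy \pi_\theta may move from \pi_{\theta_{\text{old}}} in one step, which is what enforces the small change in the policy distribution mentioned above.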

Actor-Critic Algorithm:

   The actor-critic method has gained popularity in the last few years, especially for tasks ranging from learning simulated physics to real robotic visual navigation directly from image pixels. It is a sample-efficient policy-gradient method that combines the advantages of Monte Carlo policy gradients and value-based methods. With a replay buffer, it can perform more than one gradient update per piece of sampled experience, and it can be combined with a trust region policy optimization method.
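
   A minimal one-step advantage actor-critic update in PyTorch shows how the two halves fit together; the architectures, learning rate, and one-step bootstrapped return are assumptions for illustration, not details from the proposal.

import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 4, 2, 0.99   # assumed environment shape
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=3e-4)

def update(s, a, r, s2, done):
    """One actor-critic update from a single transition (s, s2: tensors of shape (obs_dim,))."""
    v = critic(s)
    with torch.no_grad():
        td_target = r + gamma * critic(s2) * (1 - done)   # value-based bootstrapped target
    advantage = (td_target - v).detach()                  # critic scores the chosen action
    logp = torch.log_softmax(actor(s), dim=-1)[a]         # policy-gradient term: log pi(a|s)
    loss = (-logp * advantage + nn.functional.mse_loss(v, td_target)).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()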

Applications:

   Over the past years, Deep Reinforcement Learning has proved to be a fruitful approach in several diverse domains and numerous real-life applications. Breakthrough achievements in DRL span robotics, Natural Language Processing, Computer Vision, Healthcare, recommendation systems, and Intelligent Transportation Systems; in particular, it has made significant progress in autonomous driving applications.

Robotics:

   Reinforcement learning is employed across a wide variety of physical systems and control tasks in robotics. Recently, DRL has gained attention for robot control in real-world settings. DRL enables a robot to autonomously discover behavioral control of a complex body through trial-and-error interactions in a simulation environment, producing realistic responses to perturbations and environmental variation. The challenges inherent in robotic problems, such as high-dimensional, continuous states and actions, provide inspiration, impact, and validation for developments in reinforcement learning.

Natural Language Processing:

   Several DRL methods combine embedding-based representation learning with reasoning and optimize for a variety of non-differentiable rewards. These methods have been successfully applied in NLP because many NLP tasks can be formulated as DRL problems involving incremental decision-making. Applications of deep reinforcement learning in NLP include neural machine translation (NMT), dialog systems, semi-supervised text classification, knowledge graph reasoning, text games, information extraction, language and vision, and speech generation.
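
   As a hedged illustration of such incremental decision-making, the sketch below casts token-by-token text generation as a sequential decision problem and applies a REINFORCE-style loss, so a non-differentiable sequence-level reward (e.g., BLEU) can drive learning. The toy model, vocabulary size, and reward value are hypothetical placeholders.

import torch
import torch.nn as nn

vocab_size, hidden, max_len = 100, 64, 10          # assumed sizes
cell = nn.GRUCell(vocab_size, hidden)              # toy generator: one GRU step per token
head = nn.Linear(hidden, vocab_size)

def sample_sequence(h):
    """Roll out a sequence; each emitted token is one 'action'."""
    x, logps, tokens = torch.zeros(1, vocab_size), [], []
    for _ in range(max_len):
        h = cell(x, h)
        dist = torch.distributions.Categorical(logits=head(h).squeeze(0))
        a = dist.sample()                          # the decision at this step
        logps.append(dist.log_prob(a))
        tokens.append(a.item())
        x = nn.functional.one_hot(a, vocab_size).float().unsqueeze(0)
    return tokens, torch.stack(logps)

tokens, logps = sample_sequence(torch.zeros(1, hidden))
reward = 1.0   # placeholder for a non-differentiable score, e.g., BLEU against a reference
loss = -(logps.sum() * reward)                     # REINFORCE: reward credits every decision
loss.backward()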

Autonomous Driving Applications:

   DRL supports a range of autonomous driving tasks:

   • Controller optimization, path planning, and trajectory optimization.

   • Motion planning and dynamic path planning.

   • Development of high-level driving policies for complex navigation tasks.

   • Scenario-based policy learning for highways, intersections, merges, and splits.

   • Reward learning via inverse reinforcement learning from expert data, e.g., for intent prediction of traffic actors such as pedestrians and vehicles.

   • Learning policies that ensure safety and perform risk estimation.

Research Challenges in Deep Reinforcement Learning:

   Although DRL has achieved a wide range of successes in intelligent systems, many challenges and open issues remain to be addressed. General challenges in developing an RL approach include overfitting and instability, the temporal credit assignment problem, environment modeling, the exploration-exploitation trade-off, policy handling, convergence rate, learning speed, and many others. The following list presents some of the research challenges in this area.

   • Hierarchical RL (HRL), which imposes an inductive bias on the final policy by explicitly factorizing it into several levels, remains insufficiently addressed beyond its use in tackling exploration strategies.

   • Model-based RL does not assume specific prior knowledge; however, in practice, prior knowledge needs to be incorporated to speed up learning.

Future Directions in Deep Reinforcement Learning:

   Even though several algorithms have significantly affected various fields, algorithms with better improvement and convergence guarantees are still needed.

   • A large number of samples are required to provide statistical guarantees in learning algorithms.

   • An intrinsic reward or an auxiliary task can increase exploration ability when rewards are sparse (a minimal count-based bonus is sketched after this list).

   • Learning complex skills requires considerable data collection, which in turn requires the ability to keep robots operational with minimal human intervention.
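
   On the exploration point above, a minimal count-based intrinsic-reward sketch is given below; the state discretization (state_key) and the bonus scale beta are assumptions for illustration.

import math
from collections import defaultdict

visit_counts = defaultdict(int)

def shaped_reward(extrinsic_reward, state_key, beta=0.1):
    """Add an exploration bonus that decays as a state is revisited."""
    visit_counts[state_key] += 1
    return extrinsic_reward + beta / math.sqrt(visit_counts[state_key])

   When extrinsic rewards are sparse, the bonus dominates early training and pushes the agent toward rarely visited states.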