Main Reference Paper
  • Sigmoid-weighted linear units for neural network function approximation in reinforcement learning, Neural Networks, 2018 [Python]

Description
  • The paper proposes the sigmoid-weighted linear unit (SiL) and its derivative (dSiL) as activation functions for neural network function approximation in reinforcement learning, which helps enhance the performance of the system (a minimal sketch of both functions follows).

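  For reference, SiL and dSiL can be written in a few lines of NumPy from their formulas, SiL(x) = x * sigmoid(x) and dSiL(x) = sigmoid(x) * (1 + x * (1 - sigmoid(x))). The sketch below is only an illustration; the function names sil and dsil are ours and are not taken from the paper's code.

    import numpy as np

    def sigmoid(x):
        # Logistic sigmoid: 1 / (1 + exp(-x))
        return 1.0 / (1.0 + np.exp(-x))

    def sil(x):
        # Sigmoid-weighted linear unit: SiL(x) = x * sigmoid(x)
        return x * sigmoid(x)

    def dsil(x):
        # Derivative of SiL, itself usable as an activation:
        # dSiL(x) = sigmoid(x) * (1 + x * (1 - sigmoid(x)))
        s = sigmoid(x)
        return s * (1.0 + x * (1.0 - s))

    # Quick check on a few sample inputs
    z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print("SiL :", sil(z))
    print("dSiL:", dsil(z))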

Aim & Objectives
  • To extend the activation function

  • To control the curvature of the neural network function

Contribution
  • To develop the system by fine-tuning the basic reinforcement learning parameters to further improve accuracy (a brief sketch follows).

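  As an illustration of what fine-tuning the basic parameters can look like, the sketch below sweeps the learning rate, discount factor, and exploration rate of a tabular Q-learning agent on a toy chain environment. The environment, the agent, and the parameter grid are all assumptions made for this example; they are not the project's actual setup.

    import itertools
    import numpy as np

    # Toy 5-state chain: start at state 0, reward 1 for reaching state 4.
    N_STATES, N_ACTIONS, GOAL = 5, 2, 4

    def step(state, action):
        # Action 0 = left, 1 = right; the episode ends at the goal state.
        nxt = max(state - 1, 0) if action == 0 else min(state + 1, GOAL)
        return nxt, float(nxt == GOAL), nxt == GOAL

    def greedy(q_row, rng):
        # Argmax with random tie-breaking.
        return int(rng.choice(np.flatnonzero(q_row == q_row.max())))

    def train_and_eval(alpha, gamma, epsilon, episodes=200, seed=0):
        # Tabular Q-learning; returns greedy steps-to-goal (lower is better).
        rng = np.random.default_rng(seed)
        q = np.zeros((N_STATES, N_ACTIONS))
        for _ in range(episodes):
            s = 0
            for _ in range(100):  # cap episode length
                a = int(rng.integers(N_ACTIONS)) if rng.random() < epsilon else greedy(q[s], rng)
                s2, r, done = step(s, a)
                target = r + gamma * np.max(q[s2]) * (not done)
                q[s, a] += alpha * (target - q[s, a])
                s = s2
                if done:
                    break
        s, steps = 0, 0
        while s != GOAL and steps < 50:  # greedy rollout after training
            s, _, _ = step(s, greedy(q[s], rng))
            steps += 1
        return steps

    # Small grid over the basic parameters: learning rate, discount factor, exploration rate.
    grid = itertools.product([0.1, 0.5], [0.9, 0.99], [0.05, 0.2])
    scores = {cfg: train_and_eval(*cfg) for cfg in grid}
    best = min(scores, key=scores.get)
    print("best (alpha, gamma, epsilon):", best, "steps to goal:", scores[best])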

Project Recommended For
  • M.E. / M.Tech. / MS / Ph.D., customized according to the client's requirements.

Order To Delivery
  • No readymade projects; delivery time depends on the complexity of the project and its requirements.

Professional Ethics: We at S-Logix appreciate students who willingly contribute at least a line of thinking of their own while preparing the project with us. The project we provide should be treated only as a model project; apply it with confidence, contribute your own ideas through our expert guidance, and enrich your knowledge.
