
Achieving Consistent Near-Optimal Pattern Recognition Accuracy Using Particle Swarm Optimization to Pre-Train Artificial Neural Networks

Research Area:  Machine Learning

Abstract:

Similar to mammalian brains, Artificial Neural Networks (ANNs) are universal approximators, capable of yielding near-optimal solutions to a wide assortment of problems. ANNs are used in many fields, including medicine, internet security, engineering, retail, robotics, warfare, intelligent control, and finance. However, ANNs tend to become trapped at sub-optimal solutions (local optima), so trial and error is commonly used to select the network topology and train the network, which is prohibitively time-consuming and costly. Recent advances in our understanding of the biological brain, in hardware and algorithms, and in the potential for novel applications have renewed interest in ANNs. Evolutionary Artificial Neural Networks (EANNs) are among the more successful paradigms explored to improve ANN performance. EANNs employ evolutionary computation techniques such as Genetic Algorithms (GA) or Particle Swarm Optimization (PSO) to train ANNs or to generate ANN topologies. Still, these improvements are inconsistent and usually problem-specific. ANN performance depends in part on the number of neurons in the hidden layer(s): the more hidden neurons, the better the network's ability to recognize the specific samples it saw during training; however, the network then becomes less capable of learning general patterns and recognizing them in novel data. Performance on training data improves with training, while performance on testing data (samples the network has not seen previously) degrades (overfitting). This work rigorously investigated using PSO to pre-train ANNs with a varying number of neurons in the hidden layer.
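To make the pre-training idea concrete, the sketch below uses global-best PSO to optimize the weights of a single-hidden-layer network on a toy XOR task. It is a minimal illustration of the general technique, not the thesis's actual method; the topology, PSO hyperparameters, and dataset are all illustrative assumptions.

```python
# Minimal sketch: pre-training a one-hidden-layer ANN with global-best PSO.
# All sizes, hyperparameters, and the XOR task are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic non-linearly-separable pattern.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

N_IN, N_HID, N_OUT = 2, 4, 1                          # assumed topology
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT    # weights + biases

def unpack(w):
    """Slice a flat particle vector into the network's weights and biases."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    return W1, b1, W2, b2

def forward(w, X):
    """Forward pass: tanh hidden layer, sigmoid output."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def mse(w):
    """Fitness: mean squared error of the network encoded by particle w."""
    return float(np.mean((forward(w, X) - y) ** 2))

# Standard global-best PSO with inertia, cognitive, and social terms.
N_PART, ITERS = 30, 500
W_INERTIA, C1, C2 = 0.7, 1.5, 1.5

pos = rng.uniform(-1, 1, (N_PART, DIM))   # each particle is one weight vector
vel = np.zeros((N_PART, DIM))
pbest = pos.copy()
pbest_err = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_err)].copy()

for _ in range(ITERS):
    r1, r2 = rng.random((N_PART, DIM)), rng.random((N_PART, DIM))
    vel = W_INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos += vel
    err = np.array([mse(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[np.argmin(pbest_err)].copy()

print("pre-trained MSE:", mse(gbest))
print("outputs:", forward(gbest, X).ravel().round(2))
# In a pre-training scheme, gbest would then seed gradient-based training
# (e.g. backpropagation) rather than serve as the final network.
```

Because PSO searches many weight vectors in parallel and does not follow the error gradient, it is less prone to stalling in a single local optimum; the pre-trained weights then give a gradient-based fine-tuning phase a better starting point.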

Name of the Researcher:  Dmitry O Nikelshpur

Name of the Supervisor(s):  Charles C. Tappert

Year of Completion:  2014

University:  Pace University

Thesis Link:   Home Page Url