The explosive growth of data and remarkable advances in hardware technologies have led to the emergence of deep learning [1]. Deep learning refers to a sub-field of machine learning techniques that learn several levels of representation and abstraction to make sense of data such as text, sound, and images. Deep learning has consistently been recognized as a potential solution to the stumbling blocks of classical machine learning. Deep learning techniques perform feature extraction automatically, enabling scientific experts to capture discriminating features with minimal human effort and domain knowledge [2]. Thereby, deep learning has come to be perceived as a highly promising decision-making approach. Notably, deep learning performs well in autonomous driving tasks.

This progress is primarily due to the various architectures of deep learning, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRUs), Deep Belief Networks (DBNs), Deep Stacking Networks (DSNs), and so on [3]. Deep learning solutions deliver outstanding results in disparate real-time applications such as Natural Language Processing (NLP), speech recognition, image recognition, and computer vision. Building on their strengths in image and audio processing, deep learning methods have emerged and are contributing in many fields, including the automotive, medical, military, education, and surveillance domains [4].

Deep learning algorithms offer clear benefits and broad applicability across disparate recognition systems. Despite this, many domains in practice remain untouched by deep architectures owing to the unavailability of data to the general public. Imperfect, heterogeneous, large-scale, low-sample, uncertain, and unlabeled datasets also remain open challenges for deep learning techniques. To resolve real-world issues, deep learning demands substantial computational power and time. Furthermore, the operation of deep learning methods is largely opaque to people, rendering them inappropriate for domains in which a verification process is essential.

Deep learning algorithms have yielded impressive results in several research areas. However, deep learning models often suffer from overfitting, which misleads the classification of unseen cases. Another significant obstacle is that they require human expertise and knowledge to select values for hyper-parameters such as the strength of the regularizer, the number of hidden layers, the number of hidden units, the learning rate schedule, and so on. This hyper-parameter tuning is among the most computationally expensive steps in building a deep learning model. A further challenge is reducing dimensionality without losing the information required for the classification task, which adds complexity to prediction.
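To make the cost of hyper-parameter tuning concrete, the following is a minimal sketch of exhaustive grid search over the kinds of hyper-parameters named above (regularizer strength, hidden layers, hidden units, learning rate). The search space, the `evaluate` placeholder, and its dummy score are all illustrative assumptions, not part of any specific system described in this article; a real implementation would train and validate a network for every combination, which is exactly why the process is so expensive.

```python
import itertools
import random

# Hypothetical search space (illustrative values only).
SEARCH_SPACE = {
    "l2_strength": [1e-4, 1e-3, 1e-2],
    "num_hidden_layers": [1, 2, 3],
    "hidden_units": [64, 128, 256],
    "learning_rate": [1e-3, 1e-2],
}


def evaluate(config):
    """Placeholder for training a model and returning validation accuracy.

    A real implementation would train the network under `config` and score
    it on held-out data; here a deterministic dummy score keeps the sketch
    self-contained and runnable.
    """
    random.seed(str(sorted(config.items())))
    return random.random()


def grid_search(space):
    """Try every combination in the space and keep the best-scoring one."""
    keys = list(space)
    best_config, best_score = None, float("-inf")
    for values in itertools.product(*(space[k] for k in keys)):
        config = dict(zip(keys, values))
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score


best, score = grid_search(SEARCH_SPACE)
print(best, score)
```

Even this tiny space already requires 3 × 3 × 3 × 2 = 54 full training runs, which illustrates why exhaustive tuning scales poorly and why practitioners often fall back on random search or expert-guided defaults.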

## References

[1] Hatcher, William Grant, and Wei Yu, "A Survey of Deep Learning: Platforms, Applications, and Emerging Research Trends", IEEE Access, Vol.6, pp.24411-24432, 2018.

[2] Pouyanfar, Samira, Saad Sadiq, Yilin Yan, Haiman Tian, Yudong Tao, Maria Presa Reyes, Mei-Ling Shyu, Shu-Ching Chen, and S. S. Iyengar, "A Survey on Deep Learning: Algorithms, Techniques, and Applications", ACM Computing Surveys (CSUR), Vol.51, No.5, pp.92, 2018.

[3] Patel, Sanskruti, and Atul Patel, "Deep Learning Architectures and its Applications: A Survey", 2018.

[4] Deng, Li, and Dong Yu, "Deep Learning: Methods and Applications", Foundations and Trends® in Signal Processing, Vol.7, No.3-4, pp.197-387, 2014.
