Research Area:  Machine Learning
With the advancement of Artificial Intelligence, deep neural networks (DNNs) are extensively used for decision-making in intelligent systems. However, improved performance and accuracy have been achieved through increasingly complex models, which are difficult for users to understand and trust. This opaque nature of deep learning models, with high accuracy but low interpretability, hinders their adoption in critical domains where the ability to explain the system's decisions is vital. Explainable Artificial Intelligence has emerged as an exciting field for explaining and interpreting machine learning models. Among the data types used in machine learning, image data is considered hard to train on because of factors such as class, scale, viewpoint, and background variations. This paper aims to provide a rounded view of emerging methods for explaining DNN models as a way to boost transparency in image-based deep learning, with an analysis of current and upcoming trends.
Keywords:  
Author(s) Name:  Lav Kumar Gupta, Deepika Koundal & Shweta Mongia
Journal name:  Archives of Computational Methods in Engineering
Conference name:  
Publisher name:  Springer
DOI:  10.1007/s11831-023-09881-5
Volume Information:  Volume 30, Pages 2651-2666, (2023)
Paper Link:  https://link.springer.com/article/10.1007/s11831-023-09881-5