Research Area:  Machine Learning
With the broader and highly successful usage of machine learning (ML) in industry and the sciences, there has been a growing demand for explainable artificial intelligence (XAI). Interpretability and explanation methods for gaining a better understanding of the problem-solving abilities and strategies of nonlinear ML, in particular, deep neural networks, are, therefore, receiving increased attention. In this work, we aim to: 1) provide a timely overview of this active emerging field, with a focus on post hoc explanations, and explain its theoretical foundations; 2) put interpretability algorithms to a test both from a theory and comparative evaluation perspective using extensive simulations; 3) outline best practice aspects, i.e., how to best include interpretation methods into the standard usage of ML; and 4) demonstrate successful usage of XAI in a representative selection of application scenarios. Finally, we discuss challenges and possible future directions of this exciting foundational field of ML.
Keywords:  Explaining Deep Neural Networks; Deep Learning; Machine Learning
Author(s) Name:  Wojciech Samek; Grégoire Montavon; Sebastian Lapuschkin; Christopher J. Anders; Klaus-Robert Müller
Journal name:   Proceedings of the IEEE
Conference name:
Publisher name:  IEEE
DOI:  10.1109/JPROC.2021.3060483
Volume Information:  Volume: 109, Issue: 3, March 2021, Page(s): 247-278
Paper Link:   https://ieeexplore.ieee.org/abstract/document/9369420