Research Area:  Machine Learning
Deep learning algorithms have achieved high accuracy in complex domains such as image classification, face recognition, sentiment analysis, text classification, and speech understanding. Due to the nested non-linear structure of deep learning algorithms, these highly successful models are usually applied in a black-box manner, i.e., no information is provided about what exactly causes them to arrive at their predictions. The effectiveness of these systems is thus limited by the machine’s current inability to explain its decisions and actions to human users. Such a lack of transparency can be a major drawback. For instance, in medical applications the development of methods for visualizing, explaining, and interpreting deep learning models has recently attracted increasing attention. Explainable Artificial Intelligence (XAI) or Interpretable Machine Learning (IML) programs aim to create a suite of machine learning techniques that produce more explainable models while maintaining a high level of accuracy. These programs also enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners. Our work summarizes recent developments in this field, and we present three classification tasks wherein we make use of LIME (Local Interpretable Model-Agnostic Explanations) to explain the predictions of deep learning models. LIME attempts to make these complex models at least partly understandable, and we evaluate it on three classification tasks. Our first evaluation is a natural language processing application, tweet classification; our second is biomedical signal classification for cancer detection; and our third is Windows PC malware classification in the cybersecurity domain. In all three case studies, the interpreted results are presented as visualizations.
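The core idea behind LIME described above can be sketched in plain Python: perturb the instance being explained, query the black-box model on the perturbations, and fit a proximity-weighted linear surrogate whose coefficients serve as per-feature explanations. The `black_box` classifier below is a hypothetical toy stand-in for a deep model, not the paper's models or the `lime` package itself; this is a minimal sketch of the technique, assuming binary presence/absence features.

```python
import math
import random

def black_box(x):
    # Hypothetical "opaque" classifier over 4 binary features: features
    # 0 and 2 push toward the positive class, feature 1 slightly against,
    # feature 3 is irrelevant. Stands in for a deep model.
    score = 2.0 * x[0] - 0.5 * x[1] + 1.0 * x[2]
    return 1.0 / (1.0 + math.exp(-score))  # probability of class 1

def solve(a, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def lime_explain(f, instance, n_samples=2000, width=1.0, seed=0):
    # Fit a locally weighted linear surrogate around `instance` by
    # accumulating the weighted normal equations (X'WX) beta = X'Wy.
    rng = random.Random(seed)
    d = len(instance)
    xtwx = [[0.0] * (d + 1) for _ in range(d + 1)]
    xtwy = [0.0] * (d + 1)
    for _ in range(n_samples):
        mask = [rng.randint(0, 1) for _ in range(d)]
        z = [instance[i] * mask[i] for i in range(d)]  # perturbed sample
        dist2 = sum((instance[i] - z[i]) ** 2 for i in range(d))
        w = math.exp(-dist2 / width ** 2)  # proximity kernel weight
        row = [1.0] + [float(v) for v in z]  # intercept + features
        y = f(z)                             # query the black box
        for i in range(d + 1):
            xtwy[i] += w * row[i] * y
            for j in range(d + 1):
                xtwx[i][j] += w * row[i] * row[j]
    beta = solve(xtwx, xtwy)
    return beta[1:]  # per-feature local weights (intercept dropped)

weights = lime_explain(black_box, [1, 1, 1, 1])
# Feature 0 should receive the largest positive local weight.
```

The real experiments would use the `lime` library's text, tabular, and image explainers against trained deep models; the sketch only shows why the surrogate's coefficients can be read as a local explanation of a single prediction.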
Author(s) Name:  Sherin Mary Mathews
Conference name:  Intelligent Computing - Proceedings of the Computing Conference
Publisher name:  Springer
Paper Link:   https://link.springer.com/chapter/10.1007/978-3-030-22868-2_90