Research Area:  Artificial Intelligence
In the last decade, with the availability of large datasets and greater computing power, machine learning systems have achieved (super)human performance in a wide variety of tasks. Examples of this rapid development can be seen in image recognition, speech analysis, strategic game playing, and many more. The problem with many state-of-the-art models is a lack of transparency and interpretability. This lack is a major drawback in many applications, e.g. healthcare and finance, where a rationale for a model's decision is a requirement for trust. In light of these issues, explainable artificial intelligence (XAI) has become an area of interest in the research community. This paper summarizes recent developments in XAI in supervised learning, starts a discussion on its connection with artificial general intelligence, and gives proposals for further research directions.
Keywords:  
Author(s) Name:  Filip Karlo Dosilovic; Mario Brcic; Nikica Hlupic
Journal name:  
Conference name:  International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO)
Publisher name:  IEEE
DOI:  10.23919/MIPRO.2018.8400040
Volume Information:  
Paper Link:   https://ieeexplore.ieee.org/document/8400040