
Explainable Deep Learning: Methods and Challenges


Onasoga Olukayode Ayodele, Nooraini Yusoff
Abstract

Deep Learning (DL) has recently attracted worldwide attention due to its state-of-the-art performance, in some cases exceeding human-level performance on a range of complex tasks. Such progress has been observed in domains such as image classification and automatic speech recognition. Nevertheless, owing to its nested, non-linear internal structure, this highly successful Machine Learning (ML) approach is treated as a black box, i.e. no explanation is provided of the decision-making process by which its predictions are produced. With the rapid spread of DL across domains, such explanations are vital for critical decision-making processes, e.g. medical applications. There is therefore an urgent need to explain how DL approaches make their predictions. This paper presents a survey of methods aimed at overcoming this critical shortcoming of DL, together with its challenges and possible future research opportunities. We also highlight the insufficiency of current explainable approaches through a classification of their problems, and argue for more explainable DL methods. The classification offers researchers proposals useful for opening up DL models.

Volume 11 | 08-Special Issue

Pages: 1186-1205