Below you can find our recently accepted work in IEEE Access, entitled “Multimodal Explainable Artificial Intelligence: A Comprehensive Review of Methodological Advances and Future Research Directions”, authored by N. Rodis, C. Sardianos, P. Radoglou-Grammatikis, P. Sarigiannidis, I. Varlamis and Georgios Th. Papadopoulos.
This study systematically analyzes recent advances in the area of Multimodal eXplainable Artificial Intelligence (MXAI), which comprises methods that involve multiple modalities in the primary prediction and explanation tasks. In particular, the relevant AI-boosted prediction tasks and the publicly available datasets used for learning/evaluating explanations in multimodal scenarios are initially described. Subsequently, a systematic and comprehensive analysis of the MXAI methods in the literature is provided, taking into account the following key criteria:
- The number of involved modalities (in the employed AI module)
- The processing stage at which explanations are generated
- The type of the adopted methodology (i.e., the actual mechanism and mathematical formalization) for producing explanations
Then, a thorough analysis of the metrics used for evaluating MXAI methods is performed. Finally, an extensive discussion of the current challenges and future research directions is provided.
https://ieeexplore.ieee.org/document/10689601/authors#authors