A Meta Survey of Quality Evaluation Criteria in Explanation Methods
2022 (English). In: Intelligent Information Systems. CAiSE 2022. Lecture Notes in Business Information Processing / [ed] De Weerdt, J., Polyvyanyy, A., 2022, p. 55-63. Conference paper, Published paper (Refereed)
Abstract [en]
The evaluation of explanation methods has become a significant issue in explainable artificial intelligence (XAI) due to the recent surge of opaque AI models in decision support systems (DSS). Explanations are essential for bias detection and control of uncertainty, since the most accurate AI models are opaque, with low transparency and comprehensibility. There are numerous criteria to choose from when evaluating the quality of explanation methods. However, since existing criteria focus on evaluating single explanation methods, it is not obvious how to compare the quality of different methods.
In this paper, we conduct a semi-systematic meta-survey of fifteen literature surveys covering the evaluation of explainability, to identify existing criteria usable for comparative evaluations of explanation methods.
The main contribution of the paper is the suggestion to use appropriate trust as a criterion to measure the outcome of the subjective evaluation criteria and consequently make comparative evaluations possible. We also present a model of explanation quality aspects. In the model, criteria with similar definitions are grouped and related to three identified aspects of quality: model, explanation, and user. We further identify four commonly accepted criteria (groups) in the literature, together covering all aspects of explanation quality: performance, appropriate trust, explanation satisfaction, and fidelity. We suggest using the model as a chart for comparative evaluations, to create more generalisable research in explanation quality.
Place, publisher, year, edition, pages
2022. p. 55-63
Series
Lecture Notes in Business Information Processing ; 452
Keywords [en]
Explanation method, Evaluation metric, Explainable artificial intelligence, Evaluation of explainability, Comparative evaluations
National Category
Computer and Information Sciences
Research subject
Business and IT
Identifiers
URN: urn:nbn:se:hb:diva-28885
DOI: 10.1007/978-3-031-07481-3_7
ISI: 000871754800007
Scopus ID: 2-s2.0-85131293203
ISBN: 978-3-031-07480-6 (print)
ISBN: 978-3-031-07481-3 (electronic)
OAI: oai:DiVA.org:hb-28885
DiVA, id: diva2:1709057
Conference
34th International Conference on Advanced Information Systems Engineering (CAiSE), Leuven, Belgium, 6-10 June, 2022.
Note
This research is partly funded by the Swedish Knowledge Foundation through the Industrial Research School INSiDR.