TLDR: Useful local explanations need to be both correct (or faithful) and easily understandable. While definitions and evaluations of correctness have received much attention, those of understandability are mostly overlooked. We present ExSum, the first framework to formally quantify and evaluate the understandability of local explanations. Experiments on two domains show that this careful analysis pays off: it exposes limitations in current practice, supports more accurate model understanding, and surfaces easily overlooked properties of the model.
Try It Yourself: We provide an easy-to-use package that lets you reproduce the ExSum experiments described in the paper and apply the framework to your own models and datasets. Just pip install exsum and follow the quickstart steps in the documentation to get started.
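As a quick sanity check after installing, something like the snippet below should work. Only the pip distribution name exsum comes from the project itself; the version check is a generic Python snippet and not part of the package's documented quickstart, so refer to the documentation for the actual API.

```python
# Generic post-install sanity check. "exsum" here is the pip distribution name
# from the install command above; nothing in this snippet relies on the
# package's internal API.
import importlib.metadata

try:
    print("exsum version:", importlib.metadata.version("exsum"))
except importlib.metadata.PackageNotFoundError:
    print("exsum is not installed; run: pip install exsum")
```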
Abstract: Interpretability methods are developed to understand the working mechanisms of black-box models, which is crucial to their responsible deployment. Fulfilling this goal requires both that the explanations generated by these methods are correct and that people can easily and reliably understand them. While the former has been addressed in prior work, the latter is often overlooked, resulting in informal model understanding derived from a handful of local explanations. In this paper, we introduce explanation summary (ExSum), a mathematical framework for quantifying model understanding, and propose metrics for its quality assessment. On two domains, ExSum highlights various limitations in the current practice, helps develop accurate model understanding, and reveals easily overlooked properties of the model. We also connect understandability to other properties of explanations such as human alignment, robustness, and counterfactual minimality and plausibility.
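To give a concrete, if simplified, feel for the framework, the sketch below implements my reading of two of ExSum's quality metrics on toy data: a rule pairs an applicability predicate over feature-explanation units with a claimed range of saliency values; coverage is the fraction of units the rule applies to, and validity is the fraction of applicable units whose saliency actually falls in the claimed range. The FEU and Rule classes, the toy sentiment data, and the omission of the third metric (sharpness) are all simplifications of mine rather than the exsum package's API; see the paper for the precise definitions.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# A feature-explanation unit (FEU): a token or feature together with the
# saliency score assigned to it by a local explanation method (e.g., SHAP).
@dataclass
class FEU:
    token: str
    saliency: float

# An ExSum-style rule (simplified): an applicability predicate over FEUs plus
# a claimed range of saliency values for the FEUs it applies to.
@dataclass
class Rule:
    applies: Callable[[FEU], bool]
    saliency_range: Tuple[float, float]  # (low, high), inclusive

def coverage(rule: Rule, feus: List[FEU]) -> float:
    """Fraction of FEUs to which the rule applies."""
    if not feus:
        return 0.0
    return sum(rule.applies(f) for f in feus) / len(feus)

def validity(rule: Rule, feus: List[FEU]) -> float:
    """Among applicable FEUs, fraction whose saliency lies in the claimed range."""
    applicable = [f for f in feus if rule.applies(f)]
    if not applicable:
        return 0.0
    low, high = rule.saliency_range
    return sum(low <= f.saliency <= high for f in applicable) / len(applicable)

if __name__ == "__main__":
    # Toy saliency scores from a hypothetical sentiment classifier.
    feus = [
        FEU("great", 0.8), FEU("terrible", -0.7), FEU("the", 0.02),
        FEU("movie", 0.05), FEU("not", -0.3), FEU("fantastic", 0.6),
    ]
    # Candidate rule: positive sentiment words receive saliency in [0.3, 1.0].
    positive_words = {"great", "fantastic", "good"}
    rule = Rule(applies=lambda f: f.token in positive_words,
                saliency_range=(0.3, 1.0))
    print(f"coverage: {coverage(rule, feus):.2f}")  # 2 of 6 units -> 0.33
    print(f"validity: {validity(rule, feus):.2f}")  # 2 of 2 in range -> 1.00
```

Running this prints a coverage of about 0.33 (the rule applies to 2 of the 6 units) and a validity of 1.0 (both applicable saliencies fall in the claimed range), which mirrors the trade-off ExSum makes explicit between how broadly a rule applies and how reliably its behavior claim holds.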