ExSum: From Local Explanations to Model Understanding

Yilun Zhou1, Marco Tulio Ribeiro2, Julie Shah1
1 MIT CSAIL, 2 Microsoft Research
[Paper]    [GitHub Repo]    [Package Documentation]
NAACL 2022

TLDR: Useful local explanations need to be both correct (i.e., faithful) and easy to understand. While definitions and evaluations of correctness have received much attention, understandability is mostly overlooked. We present ExSum, the first framework to formally quantify and evaluate the understandability of local explanations. Experimental results demonstrate that we indeed stand to benefit from such a careful analysis.

Try It Yourself: We provide an easy-to-use package so you can play with the ExSum experiments described in the paper and apply the framework to your own models and datasets. Just pip install exsum and follow the quickstart steps in the documentation to get started.

Abstract: Interpretability methods are developed to understand the working mechanisms of black-box models, which is crucial to their responsible deployment. Fulfilling this goal requires both that the explanations generated by these methods are correct and that people can easily and reliably understand them. While the former has been addressed in prior work, the latter is often overlooked, resulting in informal model understanding derived from a handful of local explanations. In this paper, we introduce explanation summary (ExSum), a mathematical framework for quantifying model understanding, and propose metrics for its quality assessment. On two domains, ExSum highlights various limitations in the current practice, helps develop accurate model understanding, and reveals easily overlooked properties of the model. We also connect understandability to other properties of explanations such as human alignment, robustness, and counterfactual minimality and plausibility.

@inproceedings{zhou2022exsum,
    title = {ExSum: From Local Explanations to Model Understanding},
    author = {Zhou, Yilun and Ribeiro, Marco Tulio and Shah, Julie},
    booktitle = {Annual Conference of the North American Chapter of the Association for Computational Linguistics},
    year = {2022},
    month = {July},
    publisher = {Association for Computational Linguistics}
}


Yilun Zhou
PhD Student
MIT CSAIL

Marco Tulio Ribeiro
Senior Researcher
Microsoft Research

This research is supported by the National Science Foundation (NSF) under grant IIS-1830282.