Do Feature Attribution Methods Correctly Attribute Features?

Yilun Zhou1, Serena Booth1, Marco Tulio Ribeiro2, Julie Shah1
1 MIT CSAIL, 2 Microsoft Research
[Full Paper]    [GitHub Repo]
AAAI 2022

TLDR: We "unit test" several popular feature attribution algorithms for CV and NLP models to see if they can identify features known to be highly important to model predictions; (un)surprisingly, they mostly can't.

Abstract: Feature attribution methods are popular in interpretable machine learning. These methods compute the attribution of each input feature to represent its importance, but there is no consensus on the definition of "attribution", leading to many competing methods with little systematic evaluation, complicated in particular by the lack of ground truth attribution. To address this, we propose a dataset modification procedure to induce such ground truth. Using this procedure, we evaluate three common methods: saliency maps, rationales, and attentions. We identify several deficiencies and add new perspectives to the growing body of evidence questioning the correctness and reliability of these methods applied on datasets in the wild. We further discuss possible avenues for remedy and recommend new attribution methods to be tested against ground truth before deployment.
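The core idea of inducing ground truth can be illustrated with a toy sketch (this is my own illustrative example, not the paper's actual dataset modification procedure; the synthetic data, the logistic-regression model, and the gradient×input attribution choice are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Plant a single feature that fully determines the label, so we know the
# ground-truth attribution, then check whether an attribution method
# (here, gradient x input on a logistic regression) recovers it.
n, d = 500, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)  # label depends only on feature 0

# Train logistic regression by plain gradient descent.
w = np.zeros(d)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

# Gradient-x-input attribution: for a linear logit, the input gradient is w,
# so the attribution of feature j on an input x is w[j] * x[j].
attr = np.abs(X * w).mean(axis=0)
top_feature = int(np.argmax(attr))
print(top_feature)  # the planted ground-truth feature is 0
```

An attribution method that "fails the unit test" would be one whose highest-attributed feature is not the planted one, despite the model provably relying on it.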

What's Next: Don't stop at local explanations. Ensuring that high-quality model understanding is derived from these local explanations is equally important. Check out our ExSum framework for more details on this aspect.

@inproceedings{zhou2022feature,
    title = {Do Feature Attribution Methods Correctly Attribute Features?},
    author = {Zhou, Yilun and Booth, Serena and Ribeiro, Marco Tulio and Shah, Julie},
    booktitle = {Proceedings of the 36th AAAI Conference on Artificial Intelligence},
    year = {2022},
    month = {Feb},
    publisher = {AAAI}
}


This research is supported by the National Science Foundation (NSF) under grant IIS-1830282.