TLDR: We "unit test" several popular feature attribution algorithms for CV and NLP models to see if they can identify features known to be highly important to model predictions; (un)surprisingly, they mostly can't.
Abstract: Feature attribution methods are popular in interpretable machine learning. These methods compute the attribution of each input feature to represent its importance, but there is no consensus on the definition of "attribution", leading to many competing methods with little systematic evaluation, complicated in particular by the lack of ground truth attribution. To address this, we propose a dataset modification procedure to induce such ground truth. Using this procedure, we evaluate three common methods: saliency maps, rationales, and attentions. We identify several deficiencies and add new perspectives to the growing body of evidence questioning the correctness and reliability of these methods applied to datasets in the wild. We further discuss possible avenues for remedy and recommend that new attribution methods be tested against ground truth before deployment.
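To make the induced-ground-truth idea concrete, here is a minimal sketch, not the paper's actual pipeline: it injects a marker patch whose presence alone determines the label, then scores an attribution map by the fraction of its mass that lands on that patch. The patch size and location, the function names, and the random stand-ins for the dataset and saliency map are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed ground-truth region: a 4x4 patch in the top-left corner.
PATCH_H, PATCH_W = 4, 4

def inject_ground_truth(images, rng):
    """Paint a bright marker into the patch region of a random half of the
    images and relabel so that label 1 holds iff the marker is present.
    By construction, the patch is the only feature that determines the label."""
    images = images.copy()
    labels = rng.integers(0, 2, size=len(images))
    images[labels == 1, :PATCH_H, :PATCH_W] = 1.0
    return images, labels

def ground_truth_mass(attribution):
    """Fraction of total absolute attribution that falls inside the patch.
    A faithful attribution method should concentrate its mass here."""
    attr = np.abs(attribution)
    return attr[:PATCH_H, :PATCH_W].sum() / (attr.sum() + 1e-12)

# Toy usage: random arrays stand in for a real dataset and for a saliency map
# produced by a model trained on the modified data.
images = rng.random((8, 28, 28))
images, labels = inject_ground_truth(images, rng)
fake_saliency = rng.random((28, 28))
print(f"Attribution mass on ground-truth patch: {ground_truth_mass(fake_saliency):.3f}")
```

For a random attribution map, the reported fraction is roughly the patch's share of the image area (about 2% here), which gives a natural chance baseline against which real attribution methods can be compared.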
What's Next: Don't stop at local explanations. It is equally important to ensure that high-quality model understanding is actually derived from these local explanations. Check out our ExSum framework for more on this aspect.