Conference paper, Year: 2022

Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis

Abstract

We describe a novel attribution method grounded in Sensitivity Analysis and based on Sobol indices. Beyond modeling the individual contributions of image regions, Sobol indices provide an efficient way to capture higher-order interactions between image regions and their contributions to a neural network's prediction through the lens of variance. We describe an approach that makes the computation of these indices tractable for high-dimensional problems such as images by coupling perturbation masks with efficient estimators. Importantly, we show that the proposed method leads to favorable scores on standard benchmarks for vision (and language models) while drastically reducing the computing time compared to other black-box methods, even surpassing the accuracy of state-of-the-art white-box methods that require access to internal representations. Our code is freely available: github.com/fel-thomas/Sobol-Attribution-Method.
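
The pipeline sketched in the abstract (sample low-resolution perturbation masks, query the black-box model on the perturbed images, then estimate a Sobol index per mask region) can be illustrated with a short sketch. The snippet below is an illustrative reading of that idea, not the authors' implementation (see the linked repository for that): it uses SciPy's quasi-Monte Carlo Sobol sampler, a blur-based perturbation, and the Jansen total-order estimator; the grid size, the number of designs, and the `predict` scoring function (any callable mapping a batch of images to target-class scores) are assumptions.

import numpy as np
from scipy.stats import qmc                      # quasi-Monte Carlo Sobol sampler
from scipy.ndimage import zoom, gaussian_filter

def sobol_attribution(image, predict, grid=8, n_designs=32, batch=64):
    """Estimate a (grid x grid) map of total-order Sobol indices for `image`.

    `predict` is a hypothetical black-box scoring function taking a batch of
    images of shape (n, h, w, c) and returning one score per image.
    """
    d = grid * grid
    sampler = qmc.Sobol(d=2 * d, scramble=True)
    ab = sampler.random(n_designs)               # (n, 2d) samples in [0, 1)
    A, B = ab[:, :d], ab[:, d:]                  # two independent designs

    # Perturbation operator (an assumption): blend the image with a blurred
    # copy according to an upsampled low-resolution mask.
    blurred = gaussian_filter(image, sigma=(10, 10, 0))

    def apply_masks(M):                          # M: (n, d) values in [0, 1]
        h, w = image.shape[:2]
        masks = M.reshape(-1, grid, grid)
        up = np.stack([zoom(m, (h / grid, w / grid), order=1) for m in masks])
        up = up[..., None]                       # (n, h, w, 1)
        return up * image[None] + (1.0 - up) * blurred[None]

    def scores(M):                               # query the black box in batches
        out = [predict(apply_masks(M[i:i + batch])) for i in range(0, len(M), batch)]
        return np.concatenate(out)               # (n,)

    fA, fB = scores(A), scores(B)
    var = np.var(np.concatenate([fA, fB])) + 1e-12

    # Jansen estimator of the total-effect index of region i:
    # S_Ti = E[(f(A) - f(A with column i taken from B))^2] / (2 Var f)
    total = np.zeros(d)
    for i in range(d):
        A_Bi = A.copy()
        A_Bi[:, i] = B[:, i]
        total[i] = np.mean((fA - scores(A_Bi)) ** 2) / (2.0 * var)
    return total.reshape(grid, grid)

Each entry of the returned grid is the estimated total Sobol index of the corresponding region, i.e. the share of the prediction's variance (including interactions with other regions) explained by perturbing it; upsampling this coarse map to the image resolution gives the attribution map.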
Main file: 2020_12_XAI_Sobol__NeurIPS_2021___Copy_.pdf (11.77 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03473083, version 1 (09-12-2021)

Identifiers

  • HAL Id: hal-03473083, version 1

Cite

Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, et al.. Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis. Conference on Neural Information Processing Systems (NeurIPS), Dec 2022, Sydney, Australia. ⟨hal-03473083⟩
