Detection Accuracy for Evaluating Compositional Explanations of Units

Sayo M. Makinwa*

Biagio La Rosa*

Roberto Capobianco

* External authors

AIxIA 2021

Abstract

The recent success of deep learning models in solving complex problems across different domains has increased interest in understanding what they learn. As a result, different approaches have been employed to explain these models, one of which uses human-understandable concepts as explanations. Two examples of methods that take this approach are Network Dissection and Compositional Explanations. The former explains units using atomic concepts, while the latter makes explanations more expressive by replacing atomic concepts with logical forms. While logical forms are intuitively more informative than atomic concepts, it is not clear how to quantify this improvement, and their evaluation often relies on the same metric that is optimized during the explanation search and on hyper-parameters that must be tuned. In this paper, we propose Detection Accuracy as an evaluation metric, which measures how consistently units detect their assigned explanations. We show that this metric (1) evaluates explanations of different lengths effectively, (2) can be used as a stopping criterion for the compositional explanation search, eliminating the explanation-length hyper-parameter, and (3) exposes new specialized units whose length-1 explanations are the perceptual abstractions of their longer explanations.
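To make the idea concrete, below is a minimal sketch of how a detection-accuracy-style score could be computed for a single unit and a candidate explanation (an atomic concept or a logical form of concepts). The function name, thresholding scheme, and mask conventions are illustrative assumptions for this sketch, not the paper's exact formulation.

    import numpy as np

    def detection_accuracy(unit_activations, concept_masks, threshold):
        """Illustrative sketch (assumed interface, not the paper's exact definition).

        unit_activations: (N, H, W) array of a single unit's activation maps.
        concept_masks:    (N, H, W) boolean masks marking where the candidate
                          explanation (atomic concept or logical form) holds.
        threshold:        activation value above which the unit is said to fire.

        Returns the fraction of samples containing the explanation in which the
        unit's thresholded activation overlaps the explanation mask, i.e. how
        consistently the unit detects its assigned explanation.
        """
        fired = unit_activations > threshold               # binary detection masks
        concept_present = concept_masks.any(axis=(1, 2))   # samples where the explanation holds
        detected = (fired & concept_masks).any(axis=(1, 2))
        n_present = concept_present.sum()
        if n_present == 0:
            return 0.0
        return float((detected & concept_present).sum() / n_present)

    # Example usage with random data (illustration only).
    acts = np.random.rand(100, 7, 7)
    masks = np.random.rand(100, 7, 7) > 0.8
    print(detection_accuracy(acts, masks, threshold=0.9))

Under this reading, the search over longer logical forms could be stopped once adding a term no longer improves the score, which is the role the abstract describes for Detection Accuracy as a stopping criterion that removes the explanation-length hyper-parameter.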

Related Publications

Towards a fuller understanding of neurons with Clustered Compositional Explanations

NeurIPS, 2023
Biagio La Rosa*, Leilani H. Gilpin*, Roberto Capobianco

Compositional Explanations is a method for identifying logical formulas of concepts that approximate the neurons' behavior. However, these explanations are linked to the small spectrum of neuron activations used to check the alignment (i.e., the highest ones), thus lacking c…

Memory Replay For Continual Learning With Spiking Neural Networks

IEEE MLSP, 2023
Michela Proietti*, Alessio Ragno*, Roberto Capobianco

Two of the most impressive features of biological neural networks are their high energy efficiency and their ability to continuously adapt to varying inputs. On the contrary, the amount of power required to train top-performing deep learning models rises as they become more …

Explainable AI in drug discovery: self-interpretable graph neural network for molecular property prediction using concept whi…

Machine Learning, 2023
Michela Proietti*, Alessio Ragno*, Biagio La Rosa*, Rino Ragno*, Roberto Capobianco

Molecular property prediction is a fundamental task in the field of drug discovery. Several works use graph neural networks to leverage molecular graph representations. Although they have been successfully applied in a variety of applications, their decision process is not t…
