Venue
- AIxIA 2021
Date
- 2021
Detection Accuracy for Evaluating Compositional Explanations of Units
Sayo M. Makinwa*
Biagio La Rosa*
* External authors
Abstract
The recent success of deep learning models in solving complex problems across different domains has increased interest in understanding what they learn. Consequently, different approaches have been employed to explain these models, one of which uses human-understandable concepts as explanations. Two examples of methods that take this approach are Network Dissection and Compositional Explanations. The former explains units using atomic concepts, while the latter makes explanations more expressive by replacing atomic concepts with logical forms. Although logical forms are intuitively more informative than atomic concepts, it is not clear how to quantify this improvement, and their evaluation is often based on the same metric that is optimized during the search process and on hyper-parameters that must be tuned. In this paper, we propose the Detection Accuracy as an evaluation metric, which measures how consistently units detect their assigned explanations. We show that this metric (1) evaluates explanations of different lengths effectively, (2) can be used as a stopping criterion for the compositional explanation search, eliminating the explanation-length hyper-parameter, and (3) exposes new specialized units whose length-1 explanations are the perceptual abstractions of their longer explanations.
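To make the idea concrete, the sketch below illustrates one plausible reading of such a consistency measure: for each sample, a unit's binarized activation mask is compared against the mask of its assigned explanation, and the unit counts as "detecting" the explanation when the overlap (IoU) clears a threshold. The function name, the IoU formulation, and the threshold are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def detection_accuracy(unit_masks, concept_masks, threshold=0.5):
    """Illustrative sketch (not the paper's exact formula): fraction of
    samples on which a unit's binarized activation mask overlaps the mask
    of its assigned explanation with IoU >= threshold."""
    hits = 0
    for act, concept in zip(unit_masks, concept_masks):
        union = (act | concept).sum()
        iou = (act & concept).sum() / max(union, 1)  # avoid division by zero
        hits += iou >= threshold
    return hits / len(unit_masks)
```

Under this reading, a higher score means the unit fires where, and only where, its explanation holds, which is what makes the metric comparable across explanations of different logical lengths.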
Related Publications
Recent advances in protein-protein interaction (PPI) research have harnessed the power of artificial intelligence (AI) to enhance our understanding of protein behaviour. These approaches have become indispensable tools in the field of biology and medicine, enabling scientists …
Non-Markovian Reinforcement Learning (RL) tasks are very hard to solve, because agents must consider the entire history of state-action pairs to act rationally in the environment. Most works use symbolic formalisms (such as Linear Temporal Logic or automata) to specify the temporal l…
Explainable AI seeks to unveil the intricacies of black box models through post-hoc strategies or self-interpretable models. In this paper, we tackle the problem of building layers that are intrinsically explainable through logical rules. In particular, we address current st…