Venue: IEEE TAI
Date: 2022
Prototype-based Interpretable Graph Neural Networks
Alessio Ragno*
Biagio La Rosa*
* External authors
Abstract
Graph neural networks have proven to be a key tool for many problems and domains, such as chemistry, natural language processing, and social networks. Although the structure of their layers is simple, it is difficult to identify the patterns a graph neural network has learned. Several works propose post-hoc methods to explain graph predictions, but few attempt to build interpretable models. Conversely, interpretable models are widely investigated in image recognition. Given the similarity between the image and graph domains, we analyze the adaptability of prototype-based neural networks to graph and node classification. In particular, we investigate the use of two interpretable networks, ProtoPNet and TesNet, in the graph domain. We show that the adapted networks reach comparable or higher accuracy scores than their respective black-box models, and performance on par with state-of-the-art self-explainable models. Showing how to extract ProtoPNet and TesNet explanations from graph neural networks, we further study how to obtain global and local explanations for the trained models. We then evaluate the explanations of the interpretable models by comparing them with post-hoc approaches and self-explainable models. Our findings show that applying TesNet and ProtoPNet to the graph domain yields high-quality predictions while improving their reliability and transparency.
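The core idea behind prototype-based classifiers such as ProtoPNet is to score an input by its similarity to a set of learned prototype vectors and classify from those similarities, so every prediction decomposes into per-prototype contributions. The following is a minimal NumPy sketch of that idea applied to a pooled graph embedding; all dimensions, variable names, and random values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a pooled graph embedding of dimension d,
# m learned prototypes, and c output classes.
d, m, c = 8, 4, 2
graph_embedding = rng.normal(size=d)     # stand-in for a GNN encoder + pooling output
prototypes = rng.normal(size=(m, d))     # learned prototype vectors
class_weights = rng.normal(size=(c, m))  # linear layer over prototype similarities

# ProtoPNet-style similarity activation: log((dist^2 + 1) / (dist^2 + eps)),
# which grows as the embedding approaches a prototype.
eps = 1e-4
sq_dist = ((prototypes - graph_embedding) ** 2).sum(axis=1)
similarity = np.log((sq_dist + 1.0) / (sq_dist + eps))

# Class scores are a linear combination of similarities, so the
# contributions matrix itself serves as a local explanation.
logits = class_weights @ similarity
contributions = class_weights * similarity  # shape (c, m)
print(logits.shape, contributions.shape)
```

Because each logit is just the row sum of `contributions`, inspecting which prototypes dominate a prediction gives the local explanation, and the prototypes themselves give a global view of what the model has learned.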