Authors
- Michela Proietti*
- Alessio Ragno*
- Biagio La Rosa*
- Rino Ragno*
- Roberto Capobianco
* External authors
Venue
- Machine Learning
Date
- 2023
Explainable AI in drug discovery: self-interpretable graph neural network for molecular property prediction using concept whitening
Abstract
Molecular property prediction is a fundamental task in drug discovery. Several works use graph neural networks (GNNs) to leverage molecular graph representations. Although GNNs have been successfully applied to a variety of tasks, their decision process is not transparent. In this work, we adapt concept whitening to graph neural networks. This explainability method builds an inherently interpretable model, making it possible to identify the concepts, and consequently the structural parts of the molecules, that are relevant to the output predictions. We test popular models on several benchmark datasets from MoleculeNet. Building on previous work, we identify the most significant molecular properties to use as concepts for classification. We show that adding concept whitening layers improves both classification performance and interpretability. Finally, we provide several structural and conceptual explanations for the predictions.
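For readers unfamiliar with the technique: concept whitening replaces a normalization layer with one that (i) whitens latent activations to zero mean and identity covariance and (ii) rotates them with an orthogonal matrix so that each output axis can be aligned with a predefined concept (here, a molecular property). The sketch below illustrates the core computation in plain PyTorch. It is a minimal sketch under stated assumptions, not the paper's implementation: the class name ConceptWhitening, the batch-norm-style running statistics, and the use of PyTorch's orthogonal parametrization are illustrative choices, and the concept-alignment objective that actually trains the rotation is omitted.

```python
# Minimal sketch of a concept-whitening layer for graph-level embeddings.
# Whitens a batch of embeddings via ZCA, then applies a learned orthogonal
# rotation whose axes are meant to align with concepts. Illustrative only.
import torch
import torch.nn as nn


class ConceptWhitening(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-5, momentum: float = 0.1):
        super().__init__()
        self.eps = eps
        self.momentum = momentum
        # Running statistics, updated like batch norm during training.
        self.register_buffer("running_mean", torch.zeros(dim))
        self.register_buffer("running_cov", torch.eye(dim))
        # Orthogonal rotation; each output axis is a candidate concept axis.
        self.rotation = nn.Linear(dim, dim, bias=False)
        nn.utils.parametrizations.orthogonal(self.rotation)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, dim) embeddings, e.g. pooled GNN graph representations.
        if self.training:
            mean = z.mean(dim=0)
            centered = z - mean
            cov = centered.T @ centered / max(z.size(0) - 1, 1)
            with torch.no_grad():
                self.running_mean.lerp_(mean, self.momentum)
                self.running_cov.lerp_(cov, self.momentum)
        else:
            mean, cov = self.running_mean, self.running_cov
            centered = z - mean
        # ZCA whitening: Sigma^{-1/2} via eigendecomposition of the covariance.
        eye = torch.eye(cov.size(0), device=z.device)
        eigvals, eigvecs = torch.linalg.eigh(cov + self.eps * eye)
        whitener = eigvecs @ torch.diag(eigvals.rsqrt()) @ eigvecs.T
        return self.rotation(centered @ whitener)
```

In a setup like the paper's, such a layer would presumably replace a normalization layer after the GNN's message-passing and pooling stages, with the rotation optimized so that each axis activates strongly on molecules exhibiting the corresponding property; inspecting those axes then yields the conceptual and structural explanations the abstract describes.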