
Explainable AI in drug discovery: self-interpretable graph neural network for molecular property prediction using concept whitening

Michela Proietti*

Alessio Ragno*

Biagio La Rosa*

Rino Ragno*

Roberto Capobianco

* External authors

Machine Learning

2023

Abstract

Molecular property prediction is a fundamental task in drug discovery. Many approaches use graph neural networks to leverage molecular graph representations. Although these models have been applied successfully across a variety of tasks, their decision process is not transparent. In this work, we adapt concept whitening to graph neural networks. This explainability method builds an inherently interpretable model, making it possible to identify the concepts, and consequently the structural parts of the molecules, that are relevant to the output predictions. We test popular models on several benchmark datasets from MoleculeNet. Building on previous work, we identify the most significant molecular properties to use as concepts for classification. We show that adding concept whitening layers improves both classification performance and interpretability. Finally, we provide several structural and conceptual explanations for the predictions.
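At the core of a concept whitening layer is a whitening transform that decorrelates the latent activations and gives them unit variance, before rotating the resulting axes to align with concept directions. The following is a minimal, illustrative numpy sketch of the ZCA whitening step only; it is not the paper's implementation, and the function and variable names are our own.

```python
import numpy as np

def zca_whiten(Z, eps=1e-5):
    """ZCA-whiten activations Z (n_samples x n_features).

    After whitening, the features are decorrelated and have
    unit variance (sample covariance close to the identity).
    `eps` is a small constant for numerical stability.
    """
    Zc = Z - Z.mean(axis=0)                 # center each feature
    cov = Zc.T @ Zc / (Z.shape[0] - 1)      # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # symmetric eigendecomposition
    # ZCA transform: cov^(-1/2), built from the eigendecomposition
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Zc @ W

rng = np.random.default_rng(0)
# Simulate correlated latent activations
Z = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))
Zw = zca_whiten(Z)
cov_w = np.cov(Zw, rowvar=False)  # close to the 8x8 identity matrix
```

In concept whitening, this transform is followed by a learned orthogonal rotation so that individual axes of the whitened space align with predefined concepts (here, molecular properties), which is what makes the latent representation directly inspectable.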

Related Publications

Towards a fuller understanding of neurons with Clustered Compositional Explanations

NeurIPS, 2023
Biagio La Rosa*, Leilani H. Gilpin*, Roberto Capobianco

Compositional Explanations is a method for identifying logical formulas of concepts that approximate the neurons' behavior. However, these explanations are linked to the small spectrum of neuron activations used to check the alignment (i.e., the highest ones), thus lacking c…

Memory Replay For Continual Learning With Spiking Neural Networks

IEEE MLSP, 2023
Michela Proietti*, Alessio Ragno*, Roberto Capobianco

Two of the most impressive features of biological neural networks are their high energy efficiency and their ability to continuously adapt to varying inputs. On the contrary, the amount of power required to train top-performing deep learning models rises as they become more …

Understanding Deep RL agent decisions: a novel interpretable approach with trainable prototypes

AIxIA, 2023
Caterina Borzillo*, Alessio Ragno*, Roberto Capobianco

Deep reinforcement learning (DRL) models have shown great promise in various applications, but their practical adoption in critical domains is limited due to their opaque decision-making processes. To address this challenge, explainable AI (XAI) techniques aim to enhance tra…

