Venue
- EXPLIMED-24
Date
- 2025
Identifying Candidates for Protein-Protein Interaction: A Focus on NKp46’s Ligands
Alessia Borghini
Federico Di Valerio
Alessio Ragno*
* External authors
Abstract
Recent advances in protein-protein interaction (PPI) research have harnessed the power of artificial
intelligence (AI) to enhance our understanding of protein behaviour. These approaches have become
indispensable tools in the field of biology and medicine, enabling scientists to uncover hidden connections
and predict novel interactions. The experimental processes to analyze and validate the interactions
between proteins are usually expensive and time-consuming and with this work, we can reduce these
costs by strategically filtering and computationally validating the possible proteins which might take
part in the interactions at hand. Aiming at helping in broadening the repertoire of known interacting
proteins, we present a method for the systematic screening of proteins that exhibit a high affinity for
the interaction with a chosen protein. Specifically, building upon already known protein interactions,
we exploit the self-explainability of the deep learning model DSCRIPT to search and find promising
protein candidates for a determined PPI. We analyze and rank the candidates using various strategies,
and then employ AlphaFold2 to validate the resulting interactions. Consequently, we compare our AIdriven methodology with traditional bioinformatics approaches commonly used to find potential protein
candidates. Throughout the overall process, explanatory data is obtained, among which is an informative
contact map that elucidates the potential interaction between a protein of the known interaction and
the predicted proteins. As a case study, we apply our method to deepen our understanding of NKp46’s
ligands repertoire, which is yet not fully uncovered.
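The screening-and-ranking step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes each candidate protein has already been assigned an interaction probability (e.g., by a PPI prediction model such as DSCRIPT), and simply ranks candidates and keeps those above a cutoff. The candidate names and scores are fabricated placeholders.

```python
def rank_candidates(scores, threshold=0.5):
    """Keep candidates whose predicted interaction probability meets the
    threshold, ordered from most to least promising.

    scores: dict mapping candidate protein name -> probability in [0, 1].
    Returns a list of (name, probability) tuples, best first.
    """
    kept = [(name, p) for name, p in scores.items() if p >= threshold]
    return sorted(kept, key=lambda item: item[1], reverse=True)


# Illustrative (fabricated) probabilities for hypothetical NKp46 candidates.
candidate_scores = {"P1": 0.91, "P2": 0.34, "P3": 0.72}
print(rank_candidates(candidate_scores))
# -> [('P1', 0.91), ('P3', 0.72)]
```

In a full pipeline, the surviving candidates would then be passed to a structural validation step (AlphaFold2 in the paper) before any experimental follow-up.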