XAI-Guided Continual Learning: Rationale, Methods, and Future Directions

Abstract

Providing neural networks with the ability to learn new tasks sequentially represents one of the main challenges in artificial intelligence. Unlike humans, neural networks are prone to losing previously acquired knowledge upon learning new information, a phenomenon known as catastrophic forgetting. Continual learning proposes diverse solutions to mitigate this problem, but only a few leverage explainable artificial intelligence. This work makes the case for applying explainability techniques in continual learning, emphasizing the need for greater transparency and trustworthiness in these systems. We ground our approach in empirical findings from neuroscience that highlight parallels between forgetting in biological and artificial neural networks. Finally, we review existing work applying explainability methods to address catastrophic forgetting and propose potential avenues for future research.

Authors

*External Authors

Venue

WIREs DMKD

Date

2025
