A Discussion about Explainable Inference on Sequential Data via Memory-Tracking
Abstract
The recent explosion of deep learning techniques has boosted the application of Artificial Intelligence in a variety of domains, thanks to their high performance. However, performance comes at the cost of interpretability: deep models contain hundreds of nested non-linear operations that make it impossible to keep track of the chain of steps that lead to a given answer. In our recently published paper [10], we propose a method to improve the interpretability of a class of deep models, namely Memory Augmented Neural Networks (MANNs), when dealing with sequential data. By exploiting the capability of MANNs to store and access data in an external memory, tracking this process, and connecting the resulting information back to the input sequence, our method extracts the most relevant sub-sequences that explain the answer. We evaluate our approach both on a modified T-maze [3, 25] and on the Story Cloze Test [15], obtaining promising results.
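As a rough intuition for the memory-tracking idea (this is an illustrative sketch, not the paper's implementation; the data structures and function names are assumptions), one can record which input timestep wrote to each memory slot, then, at answer time, look at the read attention over slots and map the most-attended slots back to the input steps that produced them:

```python
def track_writes(write_slots):
    """Map each memory slot to the input timesteps that wrote to it.
    write_slots[t] is the slot index written at input step t (assumed
    hard/argmax writes for simplicity)."""
    slot_to_steps = {}
    for t, slot in enumerate(write_slots):
        slot_to_steps.setdefault(slot, []).append(t)
    return slot_to_steps

def explain_answer(write_slots, read_weights, top_k=2):
    """Return the input timesteps behind the top_k most-attended slots,
    i.e. a candidate 'relevant sub-sequence' for the answer."""
    slot_to_steps = track_writes(write_slots)
    ranked = sorted(range(len(read_weights)),
                    key=lambda s: read_weights[s], reverse=True)
    steps = []
    for slot in ranked[:top_k]:
        steps.extend(slot_to_steps.get(slot, []))
    return sorted(steps)

# Toy example: 5 input steps each write one of 3 slots; at answer time
# the model's read attention concentrates on slot 2, then slot 1.
writes = [0, 1, 2, 2, 1]      # slot written at each input step
reads = [0.1, 0.2, 0.7]       # read attention over slots at answer time
print(explain_answer(writes, reads))  # -> [1, 2, 3, 4]
```

A real MANN uses soft, differentiable addressing, so the actual method must aggregate continuous write and read weights rather than hard slot indices, but the back-pointing from attended memory to input positions is the core of the explanation.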
Venue
AIxIA 2021
Date
2021