
FRUNI and FTREE synthetic knowledge graphs for evaluating explainability

Pablo Sanchez Martin, Tarek Besold, Priyadarshini Kumari

NeurIPS 2023

Abstract

Research on knowledge graph completion (KGC), i.e., link prediction within incomplete knowledge graphs (KGs), is witnessing significant growth in popularity. Recently, KGC using KG embedding (KGE) models, primarily based on complex architectures (e.g., transformers), has achieved remarkable performance. Still, extracting the minimal and relevant information that a KGE model employs to make a prediction, which constitutes a major part of explaining that prediction, remains a challenge. While there is a growing literature on explainers for trained KGE models, systematically exposing and quantifying their failure cases poses even greater challenges. In this work, we introduce two synthetic datasets, FRUNI and FTREE, designed to demonstrate the (in)ability of explainer methods to spot link predictions that rely on indirectly connected links. Notably, we empower practitioners to control various aspects of the datasets, such as noise levels and dataset size, enabling them to assess the performance of explainability methods across diverse scenarios. In our experiments, we assess how accurately four recent explainers explain predictions on the proposed datasets. We believe that these datasets are valuable resources for further validating explainability methods within the knowledge graph community.
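To illustrate the kind of controllable synthetic benchmark the abstract describes, the following is a minimal sketch of a toy triple generator with tunable size and noise level. It is not the actual FRUNI or FTREE construction from the paper; the generator name, relation names, and structure are all hypothetical. The key idea it mirrors is that some links ("skip") are only predictable via indirectly connected links (two "next" hops), which is exactly what an explainer should surface, while a noise parameter injects spurious edges.

```python
import random

def generate_synthetic_kg(num_entities=100, noise_level=0.1, seed=0):
    """Generate a toy knowledge graph as (head, relation, tail) triples.

    Hypothetical illustration, not the paper's generator. Each entity i is
    linked to entity i+1 by a 'next' relation, and to entity i+2 by a
    'skip' relation that is inferable from two 'next' hops. A fraction
    `noise_level` of random spurious triples is added on top, so explainer
    methods can be stress-tested at varying noise levels and graph sizes.
    """
    rng = random.Random(seed)
    triples = []
    for i in range(num_entities - 1):
        triples.append((i, "next", i + 1))   # direct link
    for i in range(num_entities - 2):
        triples.append((i, "skip", i + 2))   # relies on indirect 'next'-'next' path
    num_noise = int(noise_level * len(triples))
    for _ in range(num_noise):
        h = rng.randrange(num_entities)
        t = rng.randrange(num_entities)
        triples.append((h, "noise", t))      # spurious edge
    return triples

kg = generate_synthetic_kg(num_entities=50, noise_level=0.2)
```

An evaluation in this spirit would hold out the "skip" triples, train a KGE model on the rest, and check whether an explainer attributes each predicted "skip" link to the two "next" edges that justify it rather than to noise edges.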

