Logic Tensor Networks

Samy Badreddine

Artur d'Avila Garcez*

Luciano Serafini*

Michael Spranger

* External authors

Artificial Intelligence (journal, Elsevier)

2022

Abstract

Attempts at combining logic and neural networks into neurosymbolic approaches have been on the increase in recent years. In a neurosymbolic system, symbolic knowledge assists deep learning, which typically uses a sub-symbolic distributed representation, to learn and reason at a higher level of abstraction. We present Logic Tensor Networks (LTN), a neurosymbolic framework that supports querying, learning and reasoning with both rich data and abstract knowledge about the world. LTN introduces a fully differentiable logical language, called Real Logic, whereby the elements of a first-order logic signature are grounded onto data using neural computational graphs and first-order fuzzy logic semantics. We show that LTN provides a uniform language to represent and compute efficiently many of the most important AI tasks, such as multi-label classification, relational learning, data clustering, semi-supervised learning, regression, embedding learning and query answering. We implement and illustrate each of the above tasks with several simple explanatory examples using TensorFlow 2. The results indicate that LTN can be a general and powerful framework for neurosymbolic AI.
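To make the idea of a "fully differentiable logical language" concrete, the following is a minimal sketch, in plain NumPy rather than TensorFlow 2, of how Real Logic-style semantics can be computed: truth values live in [0, 1], connectives are fuzzy-logic operators, a predicate is grounded as a (here fixed, but in practice trainable) neural model, and a quantifier becomes a smooth aggregation. The operator choices, weights and function names below are illustrative assumptions, not the LTN library's API.

```python
import numpy as np

# Fuzzy connectives over truth values in [0, 1] (product logic, one common choice).
def fuzzy_and(a, b):
    return a * b              # product t-norm

def fuzzy_or(a, b):
    return a + b - a * b      # probabilistic sum (dual t-conorm)

def fuzzy_not(a):
    return 1.0 - a            # standard negation

def forall(truths, p=2):
    # Smooth universal quantifier: 1 minus the generalized p-mean of the errors.
    # Fully differentiable, so it can sit inside a gradient-based training loop.
    return 1.0 - np.mean((1.0 - truths) ** p) ** (1.0 / p)

# A grounded predicate P: a logistic model standing in for a trainable neural
# network (hypothetical weights, for illustration only).
def P(x, w=np.array([1.0, -1.0]), b=0.0):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Two individuals as feature vectors; the truth of "forall x. P(x)" is a scalar
# in [0, 1] that could be maximized by adjusting w and b.
x = np.array([[2.0, 0.5], [0.1, 3.0]])
satisfaction = forall(P(x))
```

Because every operation above is differentiable, maximizing `satisfaction` with respect to the predicate's parameters is ordinary gradient-based learning, which is the mechanism that lets LTN cast the tasks listed in the abstract as constraint satisfaction.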

Related Publications

From Neural Networks to Logical Theories: The Correspondence between Fibring Modal Logics and Fibring Neural Networks

ICLR, 2026
Ouns El Harzli, Bernardo Cuenca Grau, Artur d'Avila Garcez*, Ian Horrocks, Tarek R Besold

Fibring of modal logics is a well-established formalism for combining countable families of modal logics into a single fibred language with common semantics, characterized by fibred models. Inspired by this formalism, fibring of neural networks was introduced as a neurosymbo…

Literature-based Hypothesis Generation: Predicting the evolution of scientific literature to support scientists

AI4X, 2025
Tarek R Besold, Uchenna Akujuobi, Samy Badreddine, Jihun Choi, Hatem ElShazly, Frederick Gifford, Kana Maruyama, Kae Nagano, Pablo Sanchez Martin, Thiviyan Thanapalasingam, Alessandra Toniato, Christoph Wehner

Science is advancing at an increasingly quick pace, as evidenced, for instance, by the exponential growth in the number of published research articles per year [1]. On the one hand, this poses an increasingly pressing challenge: Effectively navigating this ever-growing body o…

Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget

CVPR, 2025
Vikash Sehwag, Xianghao Kong, Jingtao Li, Michael Spranger, Lingjuan Lyu

As scaling laws in generative AI push performance, they simultaneously concentrate the development of these models among actors with large computational resources. With a focus on text-to-image (T2I) generative models, we aim to unlock this bottleneck by demonstrating very l…
