Assessing SATNet's Ability to Solve the Symbol Grounding Problem

Michael Spranger

Oscar Chang*

Lampros Flokas*

Hod Lipson*

* External authors




SATNet is an award-winning MAXSAT solver that can be used to infer logical rules and can be integrated as a differentiable layer in a deep neural network. It had been shown to solve Sudoku puzzles visually from examples of puzzle digit images, and was heralded as an impressive achievement towards the longstanding AI goal of combining pattern recognition with logical reasoning. In this paper, we clarify SATNet's capabilities by showing that, in the absence of intermediate labels that identify individual Sudoku digit images with their logical representations, SATNet completely fails at visual Sudoku (0% test accuracy). More generally, the failure can be pinpointed to its inability to learn to assign symbols to perceptual phenomena, also known as the symbol grounding problem, which has long been thought to be a prerequisite for intelligent agents to perform real-world logical reasoning. We propose an MNIST-based test as an easy instance of the symbol grounding problem that can serve as a sanity check for differentiable symbolic solvers in general. Naive applications of SATNet on this test lead to performance worse than that of models without logical reasoning capabilities. We report on the causes of SATNet's failure and how to prevent them.

Related Publications

Improving Artificial Intelligence with Games

Science, 2023
Peter R. Wurman, Peter Stone, Michael Spranger

Games continue to drive progress in the development of artificial intelligence.

MocoSFL: enabling cross-client collaborative self-supervised learning

ICLR, 2023
Jingtao Li, Lingjuan Lyu, Daisuke Iso, Chaitali Chakrabarti*, Michael Spranger

Existing collaborative self-supervised learning (SSL) schemes are not suitable for cross-client applications because of their expensive computation and large local data requirements. To address these issues, we propose MocoSFL, a collaborative SSL framework based on Split Fe…

MECTA: Memory-Economic Continual Test-Time Model Adaptation

ICLR, 2023
Junyuan Hong, Lingjuan Lyu, Jiayu Zhou*, Michael Spranger

Continual Test-time Adaptation (CTA) is a promising art to secure accuracy gains in continually-changing environments. The state-of-the-art adaptations improve out-of-distribution model accuracy via computation-efficient online test-time gradient descents but meanwhile cost …

