
Exploration-Intensive Distractors: Two Environment Proposals and a Benchmarking

Jim Martin Catacora Ocaña*

Roberto Capobianco

Daniele Nardi*

* External authors

AIxIA 2021

2021

Abstract

Sparse-reward environments are famously challenging for deep reinforcement learning (DRL) algorithms. Yet, the prospect of solving intrinsically sparse tasks end-to-end, without any extra reward engineering, is highly appealing. This aspiration has recently led to the development of numerous DRL algorithms able to handle sparse-reward environments to some extent. Some methods have gone one step further and tackled sparse tasks involving different kinds of distractors (e.g., a broken TV, self-moving phantom objects, and many more). In this work, we put forward two motivating new sparse-reward environments containing the so-far largely overlooked class of exploration-intensive distractors. Furthermore, we conduct a benchmarking which reveals that state-of-the-art algorithms are not yet all-around suitable for solving the proposed environments.
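
To make the setting concrete, the sketch below shows a minimal toy environment combining a sparse reward with a noisy-TV-style distractor. This is an illustration only, not one of the two environments proposed in the paper; the class name, corridor layout, and all parameters are assumptions made up for the example.

```python
import numpy as np


class NoisyCorridorEnv:
    """Toy sparse-reward corridor with a distractor observation channel.

    Illustrative sketch only -- NOT one of the paper's proposed environments.
    The agent starts at cell 0 and must reach cell `length - 1`; the only
    reward is +1 at the goal (sparse). A third action resamples a noisy
    distractor channel that changes the observation but never the reward,
    loosely mimicking the broken-TV problem mentioned in the abstract.
    """

    def __init__(self, length=20, max_steps=200, seed=0):
        self.length = length
        self.max_steps = max_steps
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.pos = 0
        self.steps = 0
        self.noise = 0.0
        return self._obs()

    def _obs(self):
        # Observation: normalized position plus the distractor channel.
        return np.array([self.pos / (self.length - 1), self.noise],
                        dtype=np.float32)

    def step(self, action):
        # Actions: 0 = move left, 1 = move right, 2 = "watch TV" (distractor).
        self.steps += 1
        if action == 0:
            self.pos = max(self.pos - 1, 0)
        elif action == 1:
            self.pos = min(self.pos + 1, self.length - 1)
        else:
            # Resample the distractor channel; affects observations only.
            self.noise = self.rng.standard_normal()
        reached_goal = self.pos == self.length - 1
        reward = 1.0 if reached_goal else 0.0  # sparse reward at the goal only
        done = reached_goal or self.steps >= self.max_steps
        return self._obs(), reward, done, {}


if __name__ == "__main__":
    env = NoisyCorridorEnv()
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, r, done, _ = env.step(env.rng.integers(3))  # random policy
        total += r
    print("random-policy return:", total)
```

Under this kind of setup, a purely curiosity-driven agent can be drawn to the distractor action because it keeps producing novel observations, even though it never leads to reward.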

Related Publications

Identifying Candidates for Protein-Protein Interaction: A Focus on NKp46’s Ligands

EXPLIMED, 2025
Alessia Borghini, Federico Di Valerio, Alessio Ragno*, Roberto Capobianco

Recent advances in protein-protein interaction (PPI) research have harnessed the power of artificial intelligence (AI) to enhance our understanding of protein behaviour. These approaches have become indispensable tools in the field of biology and medicine, enabling scientists …

Neural Reward Machines

ECAI, 2025
Elena Umili*, Francesco Argenziano*, Roberto Capobianco

Non-Markovian Reinforcement Learning (RL) tasks are very hard to solve, because agents must consider the entire history of state-action pairs to act rationally in the environment. Most works use symbolic formalisms (such as Linear Temporal Logic or automata) to specify the temporal l…

Transparent Explainable Logic Layers

ECAI, 2025
Alessio Ragno*, Marc Plantevit, Celine Robardet, Roberto Capobianco

Explainable AI seeks to unveil the intricacies of black box models through post-hoc strategies or self-interpretable models. In this paper, we tackle the problem of building layers that are intrinsically explainable through logical rules. In particular, we address current st…

