
Exploration-Intensive Distractors: Two Environment Proposals and a Benchmarking

Jim Martin Catacora Ocaña*

Roberto Capobianco

Daniele Nardi*

* External authors

AIxIA 2021

2021

Abstract

Sparse-reward environments are famously challenging for deep reinforcement learning (DRL) algorithms. Yet, the prospect of solving intrinsically sparse tasks in an end-to-end fashion, without any extra reward engineering, is highly appealing. This aspiration has recently led to the development of numerous DRL algorithms able to handle sparse-reward environments to some extent. Some methods have gone one step further and tackled sparse tasks involving various kinds of distractors (e.g., a broken TV, self-moving phantom objects, and many more). In this work, we put forward two motivating new sparse-reward environments containing the so-far largely overlooked class of exploration-intensive distractors. Furthermore, we conduct a benchmark study which reveals that state-of-the-art algorithms are not yet all-around suitable for solving the proposed environments.
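To make the two key notions from the abstract concrete, the sketch below shows a minimal sparse-reward environment with a "noisy-TV"-style distractor: the agent must reach the end of a corridor to earn the only nonzero reward, while one observation channel changes unpredictably at every step. This is an illustrative toy, not one of the environments proposed in the paper; the class name `SparseDistractorGrid` and all parameters are invented for this example.

```python
import random


class SparseDistractorGrid:
    """Toy 1-D corridor with a sparse reward and a distractor channel.

    Illustrative sketch only -- NOT the environments proposed in the paper.
    The reward is zero everywhere except at the goal (sparse reward), and
    the second observation component is pure noise that changes regardless
    of the agent's action (a 'noisy-TV'-style distractor).
    """

    def __init__(self, length=10, seed=0):
        self.length = length
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.pos = 0
        self.noise = self.rng.randint(0, 255)  # distractor channel
        return (self.pos, self.noise)

    def step(self, action):
        # action: -1 (move left) or +1 (move right)
        self.pos = max(0, min(self.length - 1, self.pos + action))
        self.noise = self.rng.randint(0, 255)  # changes on every step
        done = self.pos == self.length - 1
        reward = 1.0 if done else 0.0          # sparse: zero until the goal
        return (self.pos, self.noise), reward, done
```

A curiosity-driven agent that rewards itself for surprising observations can get stuck attending to the noise channel instead of exploring toward the goal, which is the failure mode that distractor benchmarks are designed to expose.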


