Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks

Lemeng Wu*

Bo Liu*

Peter Stone

Qiang Liu*

* External authors




We propose firefly neural architecture descent, a general framework for progressively and dynamically growing neural networks to jointly optimize the networks' parameters and architectures. Our method operates in a steepest-descent fashion: it iteratively finds the best network within a functional neighborhood of the current network, where the neighborhood includes a diverse set of candidate network structures. Using a Taylor approximation, the optimal network structure in the neighborhood can be found with a greedy selection procedure. We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures that avoid catastrophic forgetting in continual learning. Empirically, firefly descent achieves promising results on both neural architecture search and continual learning. In particular, on a challenging continual image classification task, it learns networks that are smaller in size but have higher average accuracy than those learned by state-of-the-art methods.
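The growing step described above can be illustrated with a minimal sketch. The code below is not the paper's implementation; it is a toy numpy example, under assumed names and sizes, of the core idea: propose candidate neurons with an infinitesimal outgoing weight, score each by the first-order (Taylor) change in the loss, and greedily keep the candidates promising the largest decrease.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (sizes are illustrative assumptions)
X = rng.normal(size=(200, 4))
y = np.sin(X @ rng.normal(size=4))

def relu(z):
    return np.maximum(z, 0.0)

def loss(W1, W2):
    """Mean squared error of a one-hidden-layer ReLU network."""
    pred = relu(X @ W1) @ W2
    return np.mean((pred - y) ** 2)

def taylor_score(W1, W2, w_cand):
    """First-order (Taylor) estimate of how the loss changes when a
    candidate hidden neuron with incoming weights w_cand is added with
    an infinitesimal outgoing weight eps: d(loss)/d(eps) at eps = 0."""
    h = relu(X @ w_cand)             # candidate neuron's activations
    resid = relu(X @ W1) @ W2 - y    # current prediction error
    return 2.0 * np.mean(resid * h)  # gradient of the MSE w.r.t. eps

# Current (small) network: 3 hidden neurons
W1 = rng.normal(size=(4, 3)) * 0.5
W2 = rng.normal(size=3) * 0.5

# Firefly-style growing step: propose many candidate neurons, score
# each via the Taylor approximation, and greedily keep the best ones.
candidates = [rng.normal(size=4) for _ in range(50)]
scores = [taylor_score(W1, W2, w) for w in candidates]
# A large |gradient| promises a large first-order loss decrease
# (with the outgoing weight's sign chosen against the gradient).
best = np.argsort([-abs(s) for s in scores])[:2]

eps = 0.01  # small outgoing weight for the newly grown neurons
for i in best:
    W1 = np.column_stack([W1, candidates[i]])
    W2 = np.append(W2, -eps * np.sign(scores[i]))

print(W1.shape)  # prints (4, 5): the network grew from 3 to 5 hidden neurons
```

In the full method, the candidate set also includes depth-growing operations and the new weights are trained after selection; this sketch only shows the width-growing selection step.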

Related Publications

Symbolic State Space Optimization for Long Horizon Mobile Manipulation Planning.

International Conference on Intelligent Robots and Systems, 2023
Xiaohan Zhang*, Yifeng Zhu*, Yan Ding*, Yuqian Jiang*, Yuke Zhu*, Peter Stone, Shiqi Zhang*

In existing task and motion planning (TAMP) research, it is a common assumption that experts manually specify the state space for task-level planning. A well-developed state space enables the desirable distribution of limited computational resources between task planning an…

Event Tables for Efficient Experience Replay

CoLLAs, 2023
Varun Kompella, Thomas Walsh, Samuel Barrett, Peter R. Wurman, Peter Stone

Experience replay (ER) is a crucial component of many deep reinforcement learning (RL) systems. However, uniform sampling from an ER buffer can lead to slow convergence and unstable asymptotic behaviors. This paper introduces Stratified Sampling from Event Tables (SSET), whi…

Composing Efficient, Robust Tests for Policy Selection

UAI, 2023
Dustin Morrill, Thomas Walsh, Daniel Hernandez, Peter R. Wurman, Peter Stone

Modern reinforcement learning systems produce many high-quality policies throughout the learning process. However, to choose which policy to actually deploy in the real world, they must be tested under an intractable number of environmental conditions. We introduce RPOSST, a…


