
Reverse Engineering of Music Mixing Graphs With Differentiable Processors and Iterative Pruning

Sungho Lee*

Marco A. Martínez-Ramírez

Wei-Hsiang Liao

Stefan Uhlich*

Giorgio Fabbro*

Kyogu Lee*

Yuki Mitsufuji

* External authors

JAES

2025

Abstract

Reverse engineering of music mixes aims to uncover how dry source signals are processed and combined to produce a final mix. In this paper, prior works are extended to reflect the compositional nature of mixing and search for a graph of audio processors. First, a mixing console is constructed, applying all available processors to every track and subgroup. With differentiable processor implementations, their parameters are optimized with gradient descent. Next, the process of removing negligible processors and fine-tuning the remaining ones is repeated. This way, the quality of the full mixing console can be preserved while removing approximately two-thirds of the processors. The proposed method can be used not only to analyze individual music mixes but also to collect large-scale graph data for downstream tasks such as automatic mixing. Especially for the latter purpose, efficient implementation of the search is crucial. To this end, an efficient batch-processing method that computes multiple processors in parallel is presented. Also, the “dry/wet” parameter of each processor is exploited to accelerate the search. Extensive quantitative and qualitative analyses are conducted to evaluate the proposed method’s performance, behavior, and computational cost.
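The pruning loop described above can be illustrated with a small sketch. The processor names, the `wet` weights, and the pruning threshold below are all hypothetical placeholders, not values from the paper; the idea is only that each processor carries a learned dry/wet parameter, and processors whose wet amount stays negligible after optimization are removed before the remaining ones are fine-tuned.

```python
def dry_wet(dry, processed, wet):
    # Blend the dry input with the processed signal; wet ~ 0 means the
    # processor is effectively bypassed and is a pruning candidate.
    return (1.0 - wet) * dry + wet * processed

def prune(processors, threshold=0.05):
    # Keep only processors whose learned dry/wet weight is non-negligible
    # (hypothetical criterion standing in for the paper's pruning rule).
    return [p for p in processors if abs(p["wet"]) >= threshold]

# Toy mixing console for one track: every available processor is applied,
# each with a dry/wet weight learned by gradient descent (values made up).
console = [
    {"name": "eq",         "wet": 0.80},
    {"name": "comp",       "wet": 0.02},  # negligible -> pruned
    {"name": "reverb",     "wet": 0.45},
    {"name": "distortion", "wet": 0.01},  # negligible -> pruned
]

kept = prune(console)
print([p["name"] for p in kept])  # ['eq', 'reverb']
```

In the actual method this prune step alternates with fine-tuning of the surviving processors' parameters, which is how roughly two-thirds of the full console can be removed while preserving mix quality.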

Related Publications

Diffusion-based Signal Refiner for Speech Enhancement and Separation

IEEE, 2026
Ryosuke Sawata, Masato Hirano*, Naoki Murata, Shusuke Takahashi*, Yuki Mitsufuji

Although recent speech processing technologies have achieved significant improvements in objective metrics, there still remains a gap in human perceptual quality. This paper proposes Diffiner, a novel solution that utilizes the powerful generative capability of diffusion mod…

PAVAS: Physics-Aware Video-to-Audio Synthesis

CVPR, 2026
Oh Hyun-Bin*, Yuhta Takida, Toshimitsu Uesaka, Tae-Hyun Oh*, Yuki Mitsufuji

Recent advances in Video-to-Audio (V2A) generation have achieved impressive perceptual quality and temporal synchronization, yet most models remain appearance-driven, capturing visual-acoustic correlations without considering the physical factors that shape real-world sounds…

MeanFlow Transformers with Representation Autoencoders

CVPR, 2026
Zheyuan Hu*, Chieh-Hsin Lai, Ge Wu*, Yuki Mitsufuji, Stefano Ermon*

MeanFlow (MF) is a diffusion-motivated generative model that enables efficient few-step generation by learning long jumps directly from noise to data. In practice, it is often used as a latent MF by leveraging the pre-trained Stable Diffusion variational autoencoder (SD-VAE)…

