Reductive, Exclusionary, Normalising: The Limits of Generative AI

Fabio Morreale, Marco A. Martínez-Ramírez, Raul Masu, WeiHsiang Liao, Yuki Mitsufuji

TISMIR-25

2025

Abstract

Until recently, most approaches to music generation were based on deductive logic: generative rules were devised on the basis of musicians’ preferences, subjective appreciation, and dominant music theories. Machine learning (ML) introduced a paradigm shift: vast datasets of existing music are used to train neural networks capable of generating new compositions, supposedly without embedding predefined musical rules. We first outline how rule-based systems depend on a series of reductionist processes and assumptions about music that affect what can be generated. We then examine ML-based generative music systems and show that they are still unable to generate the full theoretical space of musical possibilities, that they remain grounded in reductionist processes, and that their soundness is still affected by unquestioned assumptions. We also identify the limitations of semantic bridges used to form musical meaning and of the epistemic framework of cascading modules. Finally, we propose that the artistic potential of ML systems might lie beyond attempts to replicate human music-making methods.

Related Publications

Diffusion-based Signal Refiner for Speech Enhancement and Separation

IEEE, 2026
Ryosuke Sawata, Masato Hirano*, Naoki Murata, Shusuke Takahashi*, Yuki Mitsufuji

Although recent speech processing technologies have achieved significant improvements in objective metrics, there still remains a gap in human perceptual quality. This paper proposes Diffiner, a novel solution that utilizes the powerful generative capability of diffusion mod…

PAVAS: Physics-Aware Video-to-Audio Synthesis

CVPR, 2026
Oh Hyun-Bin*, Yuhta Takida, Toshimitsu Uesaka, Tae-Hyun Oh*, Yuki Mitsufuji

Recent advances in Video-to-Audio (V2A) generation have achieved impressive perceptual quality and temporal synchronization, yet most models remain appearance-driven, capturing visual-acoustic correlations without considering the physical factors that shape real-world sounds…

MeanFlow Transformers with Representation Autoencoders

CVPR, 2026
Zheyuan Hu*, Chieh-Hsin Lai, Ge Wu*, Yuki Mitsufuji, Stefano Ermon*

MeanFlow (MF) is a diffusion-motivated generative model that enables efficient few-step generation by learning long jumps directly from noise to data. In practice, it is often used as a latent MF by leveraging the pre-trained Stable Diffusion variational autoencoder (SD-VAE)…

