CCStereo: Audio-Visual Contextual and Contrastive Learning for Binaural Audio Generation

Yuanhong Chen

Kazuki Shimada

Christian Simon

Yukara Ikemiya

Takashi Shibuya

Yuki Mitsufuji

ACMMM-25

2025

Abstract

Binaural audio generation (BAG) aims to convert monaural audio to stereo audio using visual prompts, requiring a deep understanding of spatial and semantic information. The success of the BAG systems depends on the effectiveness of cross-modal reasoning and spatial understanding. Current methods have explored the use of visual information as guidance for binaural audio generation. However, they rely solely on cross-attention mechanisms to guide the generation process and under-utilise the temporal and spatial information in video data during training and inference. These limitations result in the loss of fine-grained spatial details and risk overfitting to specific environments, ultimately constraining model performance. In this paper, we address the aforementioned issues by introducing a new audio-visual binaural generation model incorporating an audio-visual conditional normalisation layer that dynamically aligns the mean and variance of the target difference audio features using visual context, along with a new contrastive learning method to enhance spatial sensitivity by mining negative samples from shuffled visual features. We also introduce a cost-efficient way to utilise test-time augmentation in video data to enhance performance. Our approach achieves state-of-the-art generation accuracy on the FAIR-Play, MUSIC-Stereo, and YT-MUSIC benchmarks. Code will be released.
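The audio-visual conditional normalisation layer described above modulates normalised audio features with scale and shift parameters predicted from visual context, in the spirit of AdaIN/FiLM-style conditioning. The sketch below is a minimal illustration of that idea, not the paper's implementation: all function and parameter names (`visual_conditional_norm`, `W_gamma`, `W_beta`, etc.) are hypothetical, and the visual context is assumed to be a single pooled vector.

```python
import numpy as np

def visual_conditional_norm(audio_feat, visual_feat,
                            W_gamma, b_gamma, W_beta, b_beta, eps=1e-5):
    """AdaIN/FiLM-style conditional normalisation (illustrative sketch).

    audio_feat:  (C, T) difference-audio feature map
    visual_feat: (D,)   pooled visual context vector
    W_gamma, W_beta: (C, D) projections predicting per-channel scale/shift
    b_gamma, b_beta: (C,)   biases for the predicted scale/shift
    """
    # Normalise each audio channel to zero mean / unit variance over time
    mu = audio_feat.mean(axis=1, keepdims=True)
    sigma = audio_feat.std(axis=1, keepdims=True)
    normed = (audio_feat - mu) / (sigma + eps)

    # Predict modulation parameters from the visual context,
    # so the statistics of the output track the visual scene
    gamma = W_gamma @ visual_feat + b_gamma   # (C,) per-channel scale
    beta = W_beta @ visual_feat + b_beta      # (C,) per-channel shift
    return gamma[:, None] * normed + beta[:, None]
```

With the scale fixed at 1 and the shift at 0, the layer reduces to plain per-channel normalisation; in training, the projections would be learned jointly with the rest of the generator so that visual context steers the statistics of the predicted difference-audio features.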

Related Publications

Vid-CamEdit: Video Camera Trajectory Editing with Generative Rendering from Estimated Geometry

AAAI, 2025
Junyoung Seo, Jisang Han, Jaewoo Jung, Siyoon Jin, Joungbin Lee, Takuya Narihira, Kazumi Fukuda, Takashi Shibuya, Donghoon Ahn, Shoukang Hu, Seungryong Kim*, Yuki Mitsufuji

We introduce Vid-CamEdit, a novel framework for video camera trajectory editing, enabling the re-synthesis of monocular videos along user-defined camera paths. This task is challenging due to its ill-posed nature and the limited multi-view video data for training. Traditiona…

SteerMusic: Enhanced Musical Consistency for Zero-shot Text-Guided and Personalized Music Editing

AAAI, 2025
Xinlei Niu, Kin Wai Cheuk, Jing Zhang, Naoki Murata, Chieh-Hsin Lai, Michele Mancusi, Woosung Choi, Giorgio Fabbro*, Wei-Hsiang Liao, Charles Patrick Martin, Yuki Mitsufuji

Music editing is an important step in music production, which has broad applications, including game development and film production. Most existing zero-shot text-guided methods rely on pretrained diffusion models by involving forward-backward diffusion processes for editing…

Music Arena: Live Evaluation for Text-to-Music

NeurIPS, 2025
Yonghyun Kim, Wayne Chi, Anastasios N. Angelopoulos, Wei-Lin Chiang, Koichi Saito, Shinji Watanabe, Yuki Mitsufuji, Chris Donahue

We present Music Arena, an open platform for scalable human preference evaluation of text-to-music (TTM) models. Soliciting human preferences via listening studies is the gold standard for evaluation in TTM, but these studies are expensive to conduct and difficult to compare…

