Authors
- Yuanhong Chen
- Kazuki Shimada
- Christian Simon
- Yukara Ikemiya
- Takashi Shibuya
- Yuki Mitsufuji
Venue
- ACMMM-25
Date
- 2025
CCStereo: Audio-Visual Contextual and Contrastive Learning for Binaural Audio Generation
Abstract
Binaural audio generation (BAG) aims to convert monaural audio into stereo audio using visual prompts, a task that requires a deep understanding of spatial and semantic information. The success of BAG systems depends on effective cross-modal reasoning and spatial understanding. Current methods use visual information to guide binaural audio generation, but they rely solely on cross-attention mechanisms to steer the generation process and under-utilise the temporal and spatial information in video data during training and inference. These limitations cause the loss of fine-grained spatial details and risk overfitting to specific environments, ultimately constraining model performance. In this paper, we address these issues with a new audio-visual binaural generation model that combines an audio-visual conditional normalisation layer, which dynamically aligns the mean and variance of the target difference-audio features with the visual context, and a new contrastive learning method that enhances spatial sensitivity by mining negative samples from shuffled visual features. We also introduce a cost-efficient way to apply test-time augmentation to video data to enhance performance. Our approach achieves state-of-the-art generation accuracy on the FAIR-Play, MUSIC-Stereo, and YT-MUSIC benchmarks. Code will be released.
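The abstract names two components that lend themselves to a short sketch: a conditional normalisation layer that modulates the mean and variance of the difference-audio features with visual context, and a contrastive loss whose negatives come from shuffled visual features. The PyTorch code below is a minimal, hypothetical rendering of both ideas; all class names, shapes, and layer choices (instance normalisation, linear gamma/beta heads, a two-way InfoNCE) are illustrative assumptions, not the paper's confirmed architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AVConditionalNorm(nn.Module):
    """Sketch of an audio-visual conditional normalisation layer: the
    difference-audio feature map is instance-normalised, then re-scaled
    and re-shifted with parameters predicted from the visual context.
    Names and shapes here are assumptions for illustration."""

    def __init__(self, audio_channels: int, visual_dim: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(audio_channels, affine=False)
        self.to_gamma = nn.Linear(visual_dim, audio_channels)
        self.to_beta = nn.Linear(visual_dim, audio_channels)

    def forward(self, audio_feat: torch.Tensor, visual_ctx: torch.Tensor) -> torch.Tensor:
        # audio_feat: (B, C, F, T) spectrogram-like features of the
        # left/right difference signal; visual_ctx: (B, D) pooled frame features.
        gamma = self.to_gamma(visual_ctx)[:, :, None, None]  # (B, C, 1, 1)
        beta = self.to_beta(visual_ctx)[:, :, None, None]
        return self.norm(audio_feat) * (1.0 + gamma) + beta


def shuffled_negative_contrastive_loss(audio_emb, visual_emb, tau: float = 0.07):
    """InfoNCE-style loss where each audio embedding is pulled toward its
    own visual embedding and pushed away from a batch-shuffled one, standing
    in for the paper's negative mining from shuffled visual features."""
    audio_emb = F.normalize(audio_emb, dim=-1)    # (B, D)
    visual_emb = F.normalize(visual_emb, dim=-1)  # (B, D)
    pos = (audio_emb * visual_emb).sum(-1) / tau  # matched pairs, (B,)
    # Shuffle visual features across the batch to build negatives; a fixed
    # point in perm makes that row uninformative, and a real implementation
    # would resample or mask such cases.
    perm = torch.randperm(visual_emb.size(0), device=visual_emb.device)
    neg = (audio_emb * visual_emb[perm]).sum(-1) / tau  # shuffled pairs, (B,)
    logits = torch.stack([pos, neg], dim=1)       # (B, 2); index 0 is positive
    target = torch.zeros(audio_emb.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, target)
```

On the test-time augmentation mentioned in the abstract, one plausible reading is to mirror the video horizontally, predict again, and fuse the two outputs (mirroring swaps left and right, so the predicted difference signal would be negated before averaging); that too is an assumption rather than the paper's confirmed procedure.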