Authors
- Yuanhong Chen
- Kazuki Shimada
- Christian Simon
- Yukara Ikemiya
- Takashi Shibuya
- Yuki Mitsufuji
Venue
- ACMMM-25
Date
- 2025
CCStereo: Audio-Visual Contextual and Contrastive Learning for Binaural Audio Generation
Abstract
Binaural audio generation (BAG) aims to convert monaural audio into stereo audio using visual prompts, which requires a deep understanding of spatial and semantic information. The success of BAG systems depends on effective cross-modal reasoning and spatial understanding. Current methods have explored visual information as guidance for binaural audio generation; however, they rely solely on cross-attention mechanisms to guide the generation process and under-utilise the temporal and spatial information in video data during training and inference. These limitations cause the loss of fine-grained spatial details and risk overfitting to specific environments, ultimately constraining model performance. In this paper, we address these issues with a new audio-visual binaural generation model that incorporates an audio-visual conditional normalisation layer, which dynamically aligns the mean and variance of the target difference-audio features using visual context, together with a new contrastive learning method that enhances spatial sensitivity by mining negative samples from shuffled visual features. We also introduce a cost-efficient way to apply test-time augmentation to video data for further performance gains. Our approach achieves state-of-the-art generation accuracy on the FAIR-Play, MUSIC-Stereo, and YT-MUSIC benchmarks. Code will be released.
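The audio-visual conditional normalisation layer described above can be illustrated with a minimal sketch. Note this is an assumption-laden toy in NumPy, not the paper's implementation: it normalises each channel of an audio (difference-signal) feature map, then re-modulates it with a per-channel scale and shift predicted from a pooled visual context vector (in the spirit of AdaIN/FiLM-style conditioning). The function name, shapes, and the linear projections `W_gamma`/`W_beta` are all hypothetical placeholders.

```python
import numpy as np

def av_conditional_norm(audio_feat, visual_ctx, W_gamma, b_gamma, W_beta, b_beta, eps=1e-5):
    """Toy audio-visual conditional normalisation (AdaIN/FiLM-style sketch).

    audio_feat : (C, T) audio difference-signal feature map
    visual_ctx : (D,)   pooled visual context vector
    The visual context predicts a per-channel scale (gamma) and shift (beta)
    that re-modulate the channel-normalised audio features.
    """
    mu = audio_feat.mean(axis=1, keepdims=True)       # per-channel mean
    sigma = audio_feat.std(axis=1, keepdims=True)     # per-channel std
    normed = (audio_feat - mu) / (sigma + eps)        # zero-mean, unit-variance per channel
    gamma = W_gamma @ visual_ctx + b_gamma            # (C,) visual-predicted scale
    beta = W_beta @ visual_ctx + b_beta               # (C,) visual-predicted shift
    return gamma[:, None] * normed + beta[:, None]

# toy usage with random features and near-identity initial projections
rng = np.random.default_rng(0)
C, T, D = 8, 32, 16
audio = rng.normal(size=(C, T))
visual = rng.normal(size=(D,))
Wg, bg = rng.normal(size=(C, D)) * 0.01, np.ones(C)   # scale starts near 1
Wb, bb = rng.normal(size=(C, D)) * 0.01, np.zeros(C)  # shift starts near 0
out = av_conditional_norm(audio, visual, Wg, bg, Wb, bb)
print(out.shape)  # (8, 32)
```

The design choice being sketched is that the statistics (mean/variance) of the generated difference signal are tied to the visual scene rather than learned as fixed parameters, which is one way to inject spatial context without relying solely on cross-attention.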