
Hearing Anything Anywhere

Abstract

Multimodal representation learning, which integrates modalities such as text, vision, and audio, is important for real-world applications. The symmetric InfoNCE loss proposed in CLIP is a key concept in multimodal representation learning. In this work, we provide a theoretical understanding of the symmetric InfoNCE loss through the lens of pointwise mutual information, and we show that encoders achieving the optimal similarity during pretraining provide good representations for downstream classification tasks under mild assumptions. Building on these theoretical results, we also propose a new similarity metric for multimodal contrastive learning that uses a nonlinear kernel to enrich the model's capacity. To verify the effectiveness of the proposed method, we pretrain multimodal representation models on the Conceptual Captions dataset and evaluate zero-shot and linear classification on common benchmark datasets.
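The symmetric InfoNCE loss named above scores a batch of paired embeddings in both directions (image-to-text and text-to-image). Below is a minimal PyTorch sketch of that loss, together with an illustrative nonlinear kernel similarity; the Gaussian kernel and the function names here are assumptions for illustration, not the paper's exact metric.

```python
import torch
import torch.nn.functional as F

def symmetric_info_nce(image_emb, text_emb, temperature=0.07):
    """CLIP-style symmetric InfoNCE over a batch of paired embeddings."""
    # Normalize so the inner product is a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity logits; matching pairs sit on the diagonal.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric: cross-entropy in both directions, then average.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

def gaussian_kernel_logits(x, y, gamma=1.0):
    """Illustrative nonlinear similarity: a Gaussian (RBF) kernel.

    A stand-in for the paper's proposed kernel similarity, whose
    exact form is not given in this abstract.
    """
    sq_dists = torch.cdist(x, y) ** 2  # pairwise squared distances
    return torch.exp(-gamma * sq_dists)

if __name__ == "__main__":
    img = torch.randn(8, 512)  # dummy image embeddings
    txt = torch.randn(8, 512)  # dummy text embeddings
    print(symmetric_info_nce(img, txt))
```

Swapping the inner-product logits for a kernel such as `gaussian_kernel_logits` turns the same contrastive objective into one with a nonlinear similarity, which is the direction the abstract describes.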

Authors

  • Mason Long Wang*
  • Samuel Clarke
  • Ruohan Gao
  • Shangzhe Wu
  • Jiajun Wu

*External Authors

Venue

CVPR 2024

Date

2024
