VinaBench: Benchmark for Faithful and Consistent Visual Narratives
Authors
- Silin Gao*
- Sheryl Mathew
- Li Mi
- Sepideh Mamooler
- Mengjie Zhao*
- Hiromi Wakaki*
- Yuki Mitsufuji
- Syrielle Montariol
- Antoine Bosselut*
* External authors
Venue
- CVPR-25
Date
- 2025
Abstract
Visual narrative generation transforms textual narratives into sequences of images illustrating the content of the text. However, generating visual narratives that are faithful to the input text and self-consistent across generated images remains an open challenge, due to the lack of knowledge constraints used for planning the stories. In this work, we propose a new benchmark, VinaBench, to address this challenge. Our benchmark annotates the underlying commonsense and discourse constraints in visual narrative samples, offering systematic scaffolds for learning the implicit strategies of visual storytelling. Based on the incorporated narrative constraints, we further propose novel metrics to closely evaluate the consistency of generated narrative images and the alignment of generations with the input textual narrative. Our results across three generative vision models demonstrate that learning with VinaBench’s knowledge constraints effectively improves the faithfulness and cohesion of generated visual narratives.
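The paper itself defines VinaBench's constraint-based metrics; as a rough, non-authoritative sketch of the two evaluation axes the abstract names (faithfulness of each image to the input text, and consistency across the generated sequence), the snippet below scores a visual narrative with off-the-shelf CLIP similarity. The checkpoint name and both helper functions are illustrative assumptions, not VinaBench's actual metric implementation.

```python
# Illustrative sketch only: scores a visual narrative along the two axes the
# abstract describes, using off-the-shelf CLIP similarity. This is NOT
# VinaBench's metric implementation; the checkpoint and helpers are assumed.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"  # assumed checkpoint for illustration
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)
model.eval()

@torch.no_grad()
def faithfulness(captions: list[str], images: list[Image.Image]) -> float:
    """Mean cosine similarity between each narrative sentence and its image."""
    inputs = processor(text=captions, images=images,
                       return_tensors="pt", padding=True, truncation=True)
    out = model(**inputs)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    return (txt * img).sum(dim=-1).mean().item()

@torch.no_grad()
def consistency(images: list[Image.Image]) -> float:
    """Mean pairwise cosine similarity across generated images (needs >= 2)."""
    inputs = processor(images=images, return_tensors="pt")
    emb = model.get_image_features(**inputs)
    emb = emb / emb.norm(dim=-1, keepdim=True)
    sim = emb @ emb.T
    n = sim.shape[0]
    # Exclude the diagonal, which is 1.0 after normalization.
    return ((sim.sum() - n) / (n * (n - 1))).item()
```

A metric in VinaBench's spirit would go further, checking generations against the annotated commonsense and discourse constraints rather than relying on raw embedding similarity alone.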