VinaBench: Benchmark for Faithful and Consistent Visual Narratives
Authors
- Silin Gao*
- Sheryl Mathew
- Li Mi
- Sepideh Mamooler
- Mengjie Zhao*
- Hiromi Wakaki*
- Yuki Mitsufuji
- Syrielle Montariol
- Antoine Bosselut*
* External authors
Venue
- CVPR-25
Date
- 2025
Abstract
Visual narrative generation transforms textual narratives into sequences of images that illustrate the content of the text. However, generating visual narratives that are faithful to the input text and self-consistent across generated images remains an open challenge, due to the lack of knowledge constraints for planning the story. In this work, we propose a new benchmark, VinaBench, to address this challenge. Our benchmark annotates the underlying commonsense and discourse constraints in visual narrative samples, offering systematic scaffolds for learning the implicit strategies of visual storytelling. Based on these narrative constraints, we further propose novel metrics to closely evaluate the consistency of generated narrative images and their alignment with the input textual narrative. Our results across three generative vision models demonstrate that learning with VinaBench's knowledge constraints effectively improves the faithfulness and cohesion of generated visual narratives.
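The abstract does not spell out how the proposed metrics are computed. As a rough, hypothetical sketch only (not the paper's actual metrics), text-image alignment and cross-image consistency could be approximated with CLIP embeddings; the checkpoint name and both scoring functions below are illustrative assumptions:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint for illustration; the paper may use different models/metrics.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def alignment_score(images, captions):
    """Mean CLIP similarity between each generated image and its narrative text.

    images: list of PIL.Image, captions: list of str (one per image).
    """
    inputs = processor(text=captions, images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # HF CLIP returns L2-normalized projected embeddings; re-normalizing is a no-op.
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img * txt).sum(dim=-1).mean().item()

def consistency_score(images):
    """Mean cosine similarity between adjacent frames of the visual narrative."""
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        emb = model.get_image_features(**inputs)
    emb = emb / emb.norm(dim=-1, keepdim=True)
    return (emb[:-1] * emb[1:]).sum(dim=-1).mean().item()
```

A higher alignment score would indicate generations that better match the input text, while a higher consistency score would indicate smoother visual continuity across frames; VinaBench's constraint annotations enable finer-grained checks than this embedding-level proxy.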