Authors
- Junyoung Seo*
- Jisang Han*
- Jaewoo Jung*
- Siyoon Jin
- JoungBin Lee*
- Takuya Narihira
- Kazumi Fukuda
- Takashi Shibuya
- Donghoon Ahn
- Shoukang Hu
- Seungryong Kim*
- Yuki Mitsufuji
* External authors
Venue
- AAAI-26
Date
- 2025
Vid-CamEdit: Video Camera Trajectory Editing with Generative Rendering from Estimated Geometry
Abstract
We introduce Vid-CamEdit, a novel framework for video camera trajectory editing, enabling the re-synthesis of monocular videos along user-defined camera paths. This task is challenging due to its ill-posed nature and the scarcity of multi-view video data for training. Traditional reconstruction methods struggle with extreme trajectory changes, and existing generative models for dynamic novel view synthesis cannot handle in-the-wild videos. Our approach consists of two steps: estimating temporally consistent geometry, and generative rendering guided by this geometry. By integrating geometric priors, the generative model can focus on synthesizing realistic details where the estimated geometry is uncertain. We eliminate the need for extensive 4D training data through a factorized fine-tuning framework that separately trains the spatial and temporal components on multi-view image data and video data, respectively. Our method outperforms baselines in producing plausible videos along novel camera trajectories, especially in extreme extrapolation scenarios on real-world footage.
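The two-stage pipeline described in the abstract can be sketched at a very high level as follows. This is an illustrative data-flow sketch only, not the paper's implementation: every function name (`estimate_geometry`, `warp_to_trajectory`, `generative_render`, `vid_cam_edit`) and the dummy depth/uncertainty values are hypothetical placeholders. The sketch shows the intended roles: a monocular video is lifted to temporally consistent geometry, warped into the user-defined cameras, and a generative model then fills in the regions the geometry cannot explain.

```python
import numpy as np

def estimate_geometry(video):
    """Stage 1 (placeholder): estimate temporally consistent geometry,
    represented here as a per-frame depth map. A real system would use
    a video depth / dynamic geometry estimator."""
    t, h, w, _ = video.shape
    return np.ones((t, h, w))  # dummy constant depth

def warp_to_trajectory(video, depth, trajectory):
    """Project the input frames into the user-defined camera path.
    Pixels the estimated geometry cannot explain are flagged as
    uncertain (dummy threshold here)."""
    warped = video.copy()
    uncertainty = (depth < 0.5).astype(float)
    return warped, uncertainty

def generative_render(warped, uncertainty):
    """Stage 2 (placeholder): a generative video model would synthesize
    realistic detail where uncertainty is high; here we just blend a
    flat fill color in uncertain regions to show the data flow."""
    fill = np.full_like(warped, 0.5)
    u = uncertainty[..., None]  # broadcast over RGB channels
    return (1.0 - u) * warped + u * fill

def vid_cam_edit(video, trajectory):
    """Full pipeline: geometry estimation, geometry-guided warping,
    then generative rendering."""
    depth = estimate_geometry(video)
    warped, uncertainty = warp_to_trajectory(video, depth, trajectory)
    return generative_render(warped, uncertainty)

# Toy usage: a 4-frame RGB clip and an identity camera trajectory.
video = np.random.rand(4, 8, 8, 3)
edited = vid_cam_edit(video, trajectory=[np.eye(4)] * 4)
print(edited.shape)  # (4, 8, 8, 3)
```

The key design point the sketch mirrors is the division of labor: geometry constrains the synthesis wherever it is reliable, so the generative stage only has to invent content in the uncertain regions, which is what makes extreme trajectory extrapolation tractable.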