Kazumi Fukuda
Publications
3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation
ICLR, 2026 | JoungBin Lee*, Jaewoo Jung*, Jisang Han*, Takuya Narihira, Kazumi Fukuda, Junyoung Seo*, Sunghwan Hong*, Yuki Mitsufuji, Seungryong Kim*
We present 3DScenePrompt, a framework that generates the next video chunk from arbitrary-length input while enabling precise camera control and preserving scene consistency. Unlike methods conditioned on a single image or a short clip, we employ dual spatio-temporal conditio...
Vid-CamEdit: Video Camera Trajectory Editing with Generative Rendering from Estimated Geometry
AAAI, 2025 | Junyoung Seo*, Jisang Han*, Jaewoo Jung*, Siyoon Jin, JoungBin Lee*, Takuya Narihira, Kazumi Fukuda, Takashi Shibuya, Donghoon Ahn, Shoukang Hu, Seungryong Kim*, Yuki Mitsufuji
We introduce Vid-CamEdit, a novel framework for video camera trajectory editing, enabling the re-synthesis of monocular videos along user-defined camera paths. This task is challenging due to its ill-posed nature and the limited multi-view video data for training. Traditiona...
GenWarp: Single Image to Novel Views with Semantic-Preserving Generative Warping
NeurIPS, 2024 | Junyoung Seo*, Kazumi Fukuda, Takashi Shibuya, Takuya Narihira, Naoki Murata, Shoukang Hu, Chieh-Hsin Lai, Seungryong Kim*, Yuki Mitsufuji
Generating novel views from a single image remains a challenging task due to the complexity of 3D scenes and the limited diversity of existing multi-view datasets for training. Recent research combining large-scale text-to-image (T2I) models with monocular depth e...