Our key innovation is a novel context conditioning mechanism that injects 3D-aware features in an implicit way.
Compared with other implicit methods (e.g., loss-guided approaches), our approach works as follows:
We first extract 3D-aware scene features with VGGT and construct an implicit scene representation by fusing each image's content features with its corresponding camera-viewpoint features.
We then inject the implicit scene representation, together with the scene context images, into the diffusion model via context conditioning, alongside the camera trajectory and text prompt.
We introduce a simple but effective random-shuffling strategy for the scene images during training to further strengthen the alignment between the scene images and their implicit 3D encoding (see the sketch below).
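Below is a minimal PyTorch sketch of this conditioning pipeline. It is an illustrative assumption rather than the released implementation: the encoder is a stand-in for VGGT, and all module names, dimensions, and the fusion and concatenation details are hypothetical.

# Minimal PyTorch sketch of the implicit 3D context conditioning above.
# The encoder below is a stand-in for VGGT; all names, dimensions, and the
# fusion / concatenation details are illustrative assumptions.
import torch
import torch.nn as nn


class SceneEncoderStub(nn.Module):
    """Stand-in for a frozen VGGT encoder: returns per-view content tokens and
    camera-viewpoint tokens (the real model would produce 3D-aware features)."""

    def __init__(self, feat_dim: int = 1024, patch: int = 16):
        super().__init__()
        self.content_embed = nn.Conv2d(3, feat_dim, kernel_size=patch, stride=patch)
        self.camera_embed = nn.Conv2d(3, feat_dim, kernel_size=patch, stride=patch)

    def forward(self, imgs: torch.Tensor):
        # imgs: [N, 3, H, W] -> two token sets of shape [N, L, feat_dim]
        content = self.content_embed(imgs).flatten(2).transpose(1, 2)
        camera = self.camera_embed(imgs).flatten(2).transpose(1, 2)
        return content, camera


class ImplicitSceneConditioner(nn.Module):
    def __init__(self, feat_dim: int = 1024, token_dim: int = 1536):
        super().__init__()
        self.encoder = SceneEncoderStub(feat_dim)
        # Fuse each content token with its corresponding camera-viewpoint token.
        self.fuse = nn.Linear(2 * feat_dim, token_dim)

    def forward(self, scene_images: torch.Tensor, training: bool = True) -> torch.Tensor:
        if training:
            # Random-shuffling strategy: permute the scene-image order during
            # training so the model does not rely on a fixed view ordering.
            scene_images = scene_images[torch.randperm(scene_images.shape[0])]
        content, camera = self.encoder(scene_images)              # [N, L, C] each
        fused = self.fuse(torch.cat([content, camera], dim=-1))   # [N, L, token_dim]
        return fused.flatten(0, 1).unsqueeze(0)                   # [1, N*L, token_dim]


if __name__ == "__main__":
    # Context conditioning by concatenation: the implicit scene tokens are
    # appended to the video latent tokens, and the pretrained video DiT would
    # attend over the joint sequence together with text and camera inputs.
    conditioner = ImplicitSceneConditioner()
    scene_images = torch.randn(4, 3, 256, 256)    # four context images of the static scene
    video_tokens = torch.randn(1, 2048, 1536)     # placeholder video latent tokens
    scene_tokens = conditioner(scene_images)
    joint_tokens = torch.cat([video_tokens, scene_tokens], dim=1)
    print(joint_tokens.shape)                     # torch.Size([1, 3072, 1536])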
We render a Scene-Decoupled Video Dataset in UE5, generating 46K video-scene image pairs from different scenes across 35 high-quality 3D environments with diverse camera trajectories.
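To make the pairing concrete, here is a hypothetical Python layout for one sample of such a scene-decoupled dataset; the field names and file paths are assumptions for illustration, not the dataset's actual schema.

# Hypothetical layout of one sample in a scene-decoupled dataset of this kind.
# Field names and paths are illustrative assumptions, not the released schema.
from dataclasses import dataclass
from typing import List


@dataclass
class SceneDecoupledSample:
    subject_video: str          # UE5-rendered clip containing the dynamic subject
    scene_only_video: str       # same camera trajectory rendered without the subject
    panorama_images: List[str]  # panoramic images of the underlying static scene
    camera_trajectory: str      # per-frame camera poses for the clip
    caption: str                # text prompt describing the video


sample = SceneDecoupledSample(
    subject_video="env_012/traj_03/subject.mp4",
    scene_only_video="env_012/traj_03/scene_only.mp4",
    panorama_images=["env_012/pano_000.png", "env_012/pano_001.png"],
    camera_trajectory="env_012/traj_03/camera.json",
    caption="A person is walking through a modern, well-lit room.",
)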
A person is standing in front of a brick wall.
A person is walking through a modern, well-lit room with a yellow wall and a column.
A train moving along a set of tracks, passing by a large, rusty metal gate.
A gradual transition from a view of a fire extinguisher mounted on a wooden door to a sign.
A vibrant, animated cityscape with a focus on a series of buildings connected by a network of cables.
The video showcases a grand, ornate church interior with a long, polished wooden table at the front.
The video showcases a cozy and well-decorated living room with a warm and inviting atmosphere.
Note: Since this out-of-domain dataset (360-DiT) is designed for panoramic content and lacks scene ground truth for translational motion, we specifically demonstrate results for panning camera trajectories in this section.
Cinematic video production requires control over scene-subject composition and camera movement, but live-action shooting remains costly due to the need for constructing physical sets. To address this, we introduce the task of cinematic video generation with decoupled scene context: given multiple images of a static environment, the goal is to synthesize high-quality videos featuring dynamic subjects while preserving the underlying scene consistency and following a user-specified camera trajectory. We present CineScene, a framework that leverages implicit 3D-aware scene representation for cinematic video generation. Our key innovation is a novel context conditioning mechanism that injects 3D-aware features in an implicit way: by encoding scene images into visual representations through VGGT, CineScene injects spatial priors into a pretrained text-to-video generation model via additional context concatenation, enabling camera-controlled video synthesis with consistent scenes and dynamic subjects. To further enhance the model's robustness, we introduce a simple yet effective random-shuffling strategy for the input scene images during training. To address the lack of training data, we construct a scene-decoupled dataset with Unreal Engine 5, containing paired videos of scenes with and without dynamic subjects, panoramic images representing the underlying static scene, and their camera trajectories. Experiments show that CineScene achieves state-of-the-art performance in scene-consistent cinematic video generation, handling large camera movements and demonstrating generalization across diverse environments.
@misc{huang2026cinesceneimplicit3deffective,
title={CineScene: Implicit 3D as Effective Scene Representation for Cinematic Video Generation},
author={Kaiyi Huang and Yukun Huang and Yu Li and Jianhong Bai and Xintao Wang and Zinan Lin and Xuefei Ning and Jiwen Yu and Pengfei Wan and Yu Wang and Xihui Liu},
year={2026},
eprint={2602.06959},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2602.06959},
}