GenXD: Generating Any 3D and 4D Scenes

1National University of Singapore,   2Microsoft

Paper Code Model Data

Abstract

Recent developments in 2D visual generation have been remarkably successful. However, 3D and 4D generation remain challenging in real-world applications due to the scarcity of large-scale 4D data and effective model designs. In this paper, we propose to jointly investigate general 3D and 4D generation by leveraging the camera and object movements commonly observed in daily life. Since real-world 4D data are scarce in the community, we first propose a data curation pipeline to obtain camera poses and object motion strength from videos. Based on this pipeline, we introduce CamVid-30K, a large-scale real-world 4D scene dataset. By leveraging all the 3D and 4D data, we develop our framework, GenXD, which can produce any 3D or 4D scene. We propose multiview-temporal modules, which disentangle camera and object movements, to seamlessly learn from both 3D and 4D data. Additionally, GenXD employs masked latent conditions to support a variety of conditioning views. GenXD can generate videos that follow a camera trajectory as well as consistent 3D views that can be lifted into 3D representations. We perform extensive evaluations across various real-world and synthetic datasets, demonstrating GenXD's effectiveness and versatility compared with previous methods in 3D and 4D generation.


GenXD: A Versatile Model for Multiple Settings



Scenes trained on a single conditioning view.


The overall framework of GenXD


GenXD leverages a mask-latent-conditioned diffusion model to generate 3D and 4D samples under both camera (colorful map) and image (binary map) conditions. In addition, multiview-temporal modules with alpha-fusing are proposed to effectively disentangle and fuse multiview and temporal information.
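The alpha-fusing idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `alpha_fuse` and the plain additive blend are assumptions; in the actual model the multiview and temporal branches are learned network modules and alpha is a learned weight.

```python
import numpy as np

def alpha_fuse(multiview_feat, temporal_feat, alpha):
    """Blend multiview and temporal features with a gating weight alpha in [0, 1].

    For static 3D data (no object motion), alpha can fall to 0, so the
    temporal pathway is effectively bypassed and only multiview features
    remain; for 4D data, alpha mixes in the temporal information.
    """
    return multiview_feat + alpha * temporal_feat
```

Gating the temporal branch this way is what lets a single model train on both 3D data (camera movement only) and 4D data (camera plus object movement) without the temporal pathway corrupting static scenes.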



CamVid-30K: A Large-scale 4D Scene Dataset



CamVid-30K contains 30K videos with 4D annotations, including camera poses and object motions. The dataset can be used for various dynamic 3D tasks.

CamVid-30K: data curation pipeline


The pipeline for CamVid-30K data curation, including (a) camera pose estimation and (b) object motion estimation. We first leverage mask-based SfM (masks are overlaid on the images in (a) for visualization) to estimate camera poses and reconstruct 3D point clouds of the static parts. Then the relative depth is aligned with the sparse SfM depth, and the tracked keypoints are projected to consecutive frames for object motion estimation.
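The two numerical steps in (b) can be sketched as follows. This is a simplified illustration under common assumptions, not the paper's exact procedure: depth alignment is written as a least-squares scale-and-shift fit against the sparse SfM depth, and keypoint projection uses a pinhole camera with intrinsics `K` and a 4x4 relative pose `T_rel`; the function names are hypothetical.

```python
import numpy as np

def align_depth(rel_depth, sparse_depth, mask):
    """Fit scale s and shift t so that s * rel_depth + t matches the
    sparse SfM depth at the valid pixels given by mask (least squares)."""
    d = rel_depth[mask]
    g = sparse_depth[mask]
    A = np.stack([d, np.ones_like(d)], axis=1)
    s, t = np.linalg.lstsq(A, g, rcond=None)[0]
    return s * rel_depth + t

def project_keypoint(uv, depth, K, T_rel):
    """Back-project pixel (u, v) with its aligned depth, apply the relative
    camera pose T_rel (4x4), and re-project into the next frame."""
    x = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0]) * depth
    x = T_rel[:3, :3] @ x + T_rel[:3, 3]
    p = K @ x
    return p[:2] / p[2]
```

Comparing the projected keypoint against its tracked position in the next frame gives a per-point motion residual; aggregating these residuals yields the object motion strength used to filter and annotate the videos.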



Citation

Acknowledgements

We would like to thank Dejia Xu and Yuyang Yin for their valuable discussions on the 4D data.

The website template was borrowed from Michaël Gharbi, Ref-NeRF, ReconFusion and CAT3D.