Large-scale 3D semantic scene generation has predominantly relied on voxel-based representations, which are memory-intensive, bound by fixed resolutions, and challenging to edit. In contrast, primitives represent semantic entities using compact, coarse 3D structures that are easy to manipulate and compose, making them an ideal representation for this task. In this paper, we introduce PrITTI, a latent diffusion-based framework that leverages primitives as the main foundational elements for generating compositional, controllable, and editable 3D semantic scene layouts. Our method adopts a hybrid representation, modeling ground surfaces in a rasterized format while encoding objects as vectorized 3D primitives. This decomposition is also reflected in a structured latent representation that enables flexible scene manipulation of ground and object components. To overcome the orientation ambiguities in conventional encoding methods, we introduce a stable Cholesky-based parameterization that jointly encodes object size and orientation. Experiments on the KITTI-360 dataset show that PrITTI outperforms a voxel-based baseline in generation quality, while reducing memory requirements by up to 3×. In addition, PrITTI enables direct instance-level manipulation of objects in the scene and supports a range of downstream applications, including scene inpainting, outpainting, and photo-realistic street-view synthesis.
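The Cholesky-based parameterization mentioned above can be illustrated with a small numerical sketch. The version below is one plausible instantiation under our own assumptions, not necessarily the paper's exact formulation: the symmetric positive-definite matrix M = R diag(s)^2 R^T has a unique lower-triangular Cholesky factor, so its six entries jointly encode object size and orientation without the sign and periodicity ambiguities of encoding a rotation and a scale separately. All function names are illustrative.

import numpy as np

def encode_size_orientation(size, R):
    # size: (3,) positive box extents; R: (3, 3) rotation matrix -> (6,) parameters
    M = R @ np.diag(size ** 2) @ R.T          # symmetric positive-definite, invariant to axis sign flips
    L = np.linalg.cholesky(M)                 # unique lower-triangular factor with positive diagonal
    return L[np.tril_indices(3)]              # six unambiguous parameters

def decode_size_orientation(params):
    # (6,) parameters -> (size, R), recovered up to the box's inherent symmetries
    L = np.zeros((3, 3))
    L[np.tril_indices(3)] = params
    M = L @ L.T
    eigval, eigvec = np.linalg.eigh(M)        # M = V diag(lambda) V^T
    size = np.sqrt(np.clip(eigval, 1e-12, None))
    R = eigvec
    if np.linalg.det(R) < 0:                  # enforce a proper rotation (det = +1)
        R[:, 0] *= -1
    return size, R

# Round trip on a random oriented box
rng = np.random.default_rng(0)
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R) < 0:
    R[:, 0] *= -1
size_hat, R_hat = decode_size_orientation(encode_size_orientation(np.array([4.5, 1.8, 1.5]), R))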
Method Overview. An input 3D semantic layout comprises object primitives, encoded as feature vectors \( \mathbf{F} \), and extruded ground polygons, rasterized into height maps \( \mathbf{H} \) and binary occupancy masks \( \mathbf{B} \). A layout VAE with separate encoder-decoder pairs for objects (\( \mathcal{E}_\mathcal{O}\)/\(\mathcal{D}_\mathcal{O}\)) and ground (\(\mathcal{E}_\mathcal{G}\)/\(\mathcal{D}_\mathcal{G}\)) is first trained to compress the input into a latent representation \( \mathbf{z}_\mathcal{L} \), structured to facilitate disentanglement between the two modalities. In the second stage, we train a latent diffusion model for controllable generation of novel 3D semantic scene layouts.
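A minimal PyTorch sketch of the two-branch layout autoencoder interface described above. The architectures, tensor shapes, and latent dimensions are illustrative assumptions, and the variational terms are omitted for brevity; only the split into object (E_O/D_O) and ground (E_G/D_G) branches feeding one structured latent z_L follows the description.

import torch
import torch.nn as nn

class LayoutAE(nn.Module):
    # Two-branch layout autoencoder: object features F and ground rasters (H, B)
    # are compressed by separate encoders into one structured latent z_L.
    def __init__(self, n_obj=32, obj_dim=16, grid=64, d_obj=128, d_ground=128):
        super().__init__()
        self.enc_obj = nn.Linear(n_obj * obj_dim, d_obj)          # E_O
        self.dec_obj = nn.Linear(d_obj, n_obj * obj_dim)          # D_O
        self.enc_ground = nn.Linear(2 * grid * grid, d_ground)    # E_G
        self.dec_ground = nn.Linear(d_ground, 2 * grid * grid)    # D_G
        self.d_obj, self.shapes = d_obj, (n_obj, obj_dim, grid)

    def encode(self, F, H, B):
        z_obj = self.enc_obj(F.flatten(1))                                # vectorized primitives
        z_ground = self.enc_ground(torch.cat([H, B], dim=1).flatten(1))   # height map + occupancy mask
        return torch.cat([z_obj, z_ground], dim=1)                        # structured latent z_L

    def decode(self, z_L):
        n_obj, obj_dim, grid = self.shapes
        z_obj, z_ground = z_L[:, :self.d_obj], z_L[:, self.d_obj:]
        F_hat = self.dec_obj(z_obj).view(-1, n_obj, obj_dim)
        ground = self.dec_ground(z_ground).view(-1, 2, grid, grid)
        return F_hat, ground[:, :1], ground[:, 1:]                        # reconstructed F, H, B

# Toy round trip for one scene
ae = LayoutAE()
F = torch.randn(1, 32, 16); H = torch.randn(1, 1, 64, 64); B = torch.zeros(1, 1, 64, 64)
F_hat, H_hat, B_hat = ae.decode(ae.encode(F, H, B))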
We generate 3D semantic urban scenes under controllable vegetation density conditions. The synthesized layouts exhibit realistic and diverse spatial compositions with clearly structured and well-shaped primitive geometries.
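Conditioning of this kind is commonly implemented with classifier-free guidance during sampling; whether PrITTI uses this exact scheme is an assumption, and the model signature below is a hypothetical stand-in for the trained latent diffusion model.

import torch

@torch.no_grad()
def guided_noise_prediction(model, z_t, t, density, guidance_scale=3.0):
    # model(z_t, t, cond) predicts the diffusion noise; cond=None is the unconditional branch.
    eps_cond = model(z_t, t, cond=density)          # conditioned on a vegetation-density value
    eps_uncond = model(z_t, t, cond=None)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)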
Our method enables scene inpainting through a latent manipulation mechanism, where a binary mask controls which spatial regions are modified. By appropriately configuring this mask, we can edit scenes in diverse ways (e.g., targeting only upper, lower, or lateral regions) while keeping the unmasked content unchanged. The generated regions blend seamlessly with the existing geometry and semantics.
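The sketch below shows one common way such mask-based latent inpainting is realized during reverse diffusion (blending the model's proposal with a re-noised copy of the known latent, in the spirit of RePaint); the paper's exact latent manipulation mechanism may differ, and denoise_step and add_noise are hypothetical stand-ins for the trained model and its noise schedule.

import torch

@torch.no_grad()
def inpaint_latent(z_known, mask, denoise_step, add_noise, num_steps=50):
    # z_known: clean latent of the existing scene, shape (B, C, H, W)
    # mask:    1 where new content is generated, 0 where z_known is kept
    # denoise_step(z_t, t) -> z_{t-1}: one reverse-diffusion step of the trained model
    # add_noise(z_0, t)    -> z_t:     forward-noise a clean latent to level t
    z_t = torch.randn_like(z_known)                   # start from pure noise
    for t in reversed(range(num_steps)):
        z_t = denoise_step(z_t, t)                    # model proposes the full latent
        z_keep = add_noise(z_known, t) if t > 0 else z_known
        z_t = mask * z_t + (1 - mask) * z_keep        # re-impose the unmasked regions
    return z_t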
We perform scene outpainting by leveraging the same latent manipulation mechanism used for inpainting, enabling controlled expansion of scenes beyond their original spatial extent. The generated scenes maintain semantic coherence, with realistic and diverse road structures as well as plausible object placements.
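For outpainting, the same masked sampler can be reused with a shifted latent window: part of the existing scene is carried over as known content and the remainder is marked for generation. The window layout and overlap ratio below are illustrative assumptions, and the snippet relies on the inpaint_latent sketch above.

import torch

def make_outpaint_inputs(z_scene, overlap=0.5):
    # The right part of the existing latent becomes the known left part of the
    # next window; the remaining region is generated by the masked sampler above.
    B, C, H, W = z_scene.shape
    keep = int(W * overlap)
    z_next = torch.zeros_like(z_scene)
    z_next[..., :keep] = z_scene[..., W - keep:]      # carry over the overlapping region
    mask = torch.ones(B, 1, H, W)
    mask[..., :keep] = 0                              # 0 = keep, 1 = generate
    return z_next, mask

# z_out = inpaint_latent(z_next, mask, denoise_step, add_noise)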
Our instance-level, primitive-based representation enables intuitive editing of individual objects through direct manipulation of their parameters. Unlike in voxel-based methods, editing operations such as rotation, translation, and scaling can be performed directly, without requiring additional post-processing.
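The sketch below illustrates this kind of instance-level editing on a primitive parameterized by center, size, and yaw; the field names and parameterization are illustrative assumptions rather than the paper's exact schema.

import numpy as np
from dataclasses import dataclass

@dataclass
class Primitive:
    center: np.ndarray        # (3,) position in world coordinates
    size: np.ndarray          # (3,) box extents
    yaw: float                # heading around the up axis, in radians
    label: str = "car"

def translate(p, offset):
    p.center = p.center + np.asarray(offset)
    return p

def rotate(p, delta_yaw):
    p.yaw = (p.yaw + delta_yaw) % (2 * np.pi)
    return p

def scale(p, factor):
    p.size = p.size * factor
    return p

# Move a car 2 m forward, turn it by 90 degrees, and enlarge it by 10%
car = Primitive(center=np.array([5.0, 0.0, 0.9]), size=np.array([4.5, 1.8, 1.5]), yaw=0.0)
car = scale(rotate(translate(car, [2.0, 0.0, 0.0]), np.pi / 2), 1.1)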
Generated scenes are rendered into semantic maps that condition photo-realistic street-view synthesis, yielding images that are both realistic and semantically consistent with the generated layout.
@article{Tze2025PrITTI,
  author  = {Tze, Christina Ourania and Dauner, Daniel and Liao, Yiyi and Tsishkou, Dzmitry and Geiger, Andreas},
  title   = {PrITTI: Primitive-based Generation of Controllable and Editable 3D Semantic Scenes},
  journal = {arXiv preprint arXiv:2506.19117},
  year    = {2025},
}