PrITTI: Primitive-based Generation of
Controllable and Editable 3D Semantic Scenes

1University of Tübingen, Tübingen AI Center, 2Zhejiang University, 3Noah’s Ark Lab, Huawei

PrITTI generates high-quality, controllable 3D semantic urban scenes in a compact primitive-based representation using a latent diffusion model. Our approach enables applications such as scene editing, inpainting, outpainting, and photo-realistic street-view synthesis.

Controllable Scene Synthesis

[Interactive viewer: class-conditioned synthesis (e.g., Low Vegetation) shown for Scene #1, Scene #2, and Scene #3.]

Abstract

Large-scale 3D semantic scene generation has predominantly relied on voxel-based representations, which are memory-intensive, bound by fixed resolutions, and challenging to edit. In contrast, primitives represent semantic entities using compact, coarse 3D structures that are easy to manipulate and compose, making them an ideal representation for this task. In this paper, we introduce PrITTI, a latent diffusion-based framework that leverages primitives as the main foundational elements for generating compositional, controllable, and editable 3D semantic scene layouts. Our method adopts a hybrid representation, modeling ground surfaces in a rasterized format while encoding objects as vectorized 3D primitives. This decomposition is also reflected in a structured latent representation that enables flexible scene manipulation of ground and object components. To overcome the orientation ambiguities in conventional encoding methods, we introduce a stable Cholesky-based parameterization that jointly encodes object size and orientation. Experiments on the KITTI-360 dataset show that PrITTI outperforms a voxel-based baseline in generation quality, while reducing memory requirements by up to 3×. In addition, PrITTI enables direct instance-level manipulation of objects in the scene and supports a range of downstream applications, including scene inpainting, outpainting, and photo-realistic street-view synthesis.
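The Cholesky-based parameterization is not spelled out on this page; the sketch below shows one plausible instantiation in NumPy, assuming size and orientation are fused into a symmetric positive-definite matrix \( \mathbf{M} = \mathbf{R}\,\mathrm{diag}(\mathbf{s})^2\,\mathbf{R}^\top \) whose lower-triangular Cholesky factor yields a 6-dimensional, continuous encoding free of rotation-sign ambiguity. Function names (`encode_box`, `decode_box`) are illustrative and not the paper's API.

```python
import numpy as np

def encode_box(R, size):
    """Encode a rotation R (3x3) and strictly positive extents size (3,) as the
    6 free entries of the Cholesky factor of M = R diag(size**2) R^T."""
    M = R @ np.diag(np.asarray(size, dtype=float) ** 2) @ R.T  # symmetric positive-definite
    L = np.linalg.cholesky(M)                                  # lower-triangular, positive diagonal
    return L[np.tril_indices(3)]                               # unique, continuous 6-vector

def decode_box(params):
    """Recover an equivalent (R, size) pair from the 6 Cholesky parameters."""
    L = np.zeros((3, 3))
    L[np.tril_indices(3)] = params
    M = L @ L.T
    eigvals, eigvecs = np.linalg.eigh(M)            # M = R diag(size**2) R^T
    size = np.sqrt(np.clip(eigvals, 0.0, None))
    R = eigvecs
    if np.linalg.det(R) < 0:                        # enforce a proper rotation
        R[:, 0] *= -1
    return R, size
```

The decoded axes may come back permuted or sign-flipped relative to the input, but they describe the same cuboid geometry, which is exactly why this encoding sidesteps the orientation ambiguities of angle- or rotation-matrix regression.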

Overview

Method Overview. An input 3D semantic layout comprises object primitives, encoded as feature vectors \( \mathbf{F} \), and extruded ground polygons, rasterized into height maps \( \mathbf{H} \) and binary occupancy masks \( \mathbf{B} \). A layout VAE with separate encoder-decoder pairs for objects (\( \mathcal{E}_\mathcal{O}\)/\(\mathcal{D}_\mathcal{O}\)) and ground (\(\mathcal{E}_\mathcal{G}\)/\(\mathcal{D}_\mathcal{G}\)) is first trained to compress the input into a latent representation \( \mathbf{z}_\mathcal{L} \), structured to facilitate disentanglement between the two modalities. In the second stage, we train a latent diffusion model for controllable generation of novel 3D semantic scene layouts.
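As a rough illustration of the hybrid representation described above (not PrITTI's released code), the sketch below organizes a layout into vectorized object features and rasterized ground maps, and compresses them with two separate, hypothetical encoder modules into a structured latent with disentangled object and ground parts. Tensor widths and module interfaces are assumptions.

```python
from dataclasses import dataclass
import torch

@dataclass
class SceneLayout:
    object_feats: torch.Tensor    # (N, D_obj) vectorized primitives: class, position, Cholesky params, ...
    height_map: torch.Tensor      # (1, H, W) rasterized ground heights
    occupancy_mask: torch.Tensor  # (1, H, W) binary ground occupancy

def encode_layout(layout: SceneLayout, enc_obj, enc_ground):
    """Compress a layout into a structured latent z_L = (z_O, z_G).
    enc_obj / enc_ground stand in for the paper's encoders E_O and E_G."""
    z_obj = enc_obj(layout.object_feats)                          # object latent z_O
    ground = torch.cat([layout.height_map, layout.occupancy_mask], dim=0)
    z_ground = enc_ground(ground.unsqueeze(0)).squeeze(0)         # ground latent z_G
    return {"objects": z_obj, "ground": z_ground}                 # disentangled latent parts
```

Keeping the two latents separate is what allows ground and object components to be manipulated independently before the latent diffusion model generates or edits a scene.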

BibTeX

@article{Tze2025PrITTI,
  author    = {Tze, Christina Ourania and Dauner, Daniel and Liao, Yiyi and Tsishkou, Dzmitry and Geiger, Andreas},
  title     = {PrITTI: Primitive-based Generation of Controllable and Editable 3D Semantic Scenes},
  journal   = {arXiv preprint arXiv:2506.19117},
  year      = {2025},
}