VarCity 3D Challenge online!
The VarCity 3D Semantic Segmentation Challenge, based on our ECCV 2014 paper, is now online with data and evaluation code!

Dataset request form
The dataset request form for the 3D Challenge!

Leaderboard
The leaderboard of submissions for the 3D Challenge!

VarCity Videos!
The VarCity Video Collection based on our previous papers is online!


October 8th, 2015: 3D Challenge v2 online!
The VarCity 3D Semantic Segmentation Challenge is now updated to version 2!

September 21st, 2015: Showcase!
VarCity was showcased by the ERC Funding Agency.

June 11th, 2015: Workshop Meeting!
VarCity was presented at the Workshop for Semantics for Visual Reconstruction and SLAM at CVPR 2015 in Boston.

October 1st, 2014: 3D Challenge online!
The VarCity 3D Semantic Segmentation Challenge, based on our ECCV 2014 paper, is now online with data and evaluation code!

3D Semantic Segmentation and Procedural Modelling Challenge

There is increasing interest in semantically annotated 3D models, e.g. of cities.
Typical approaches start by semantically labelling all the images used to build the 3D model.

In this challenge, we provide a new dataset and tasks for pushing the limits of 3D modelling.

To date, this is the largest and most detailed such dataset available, including a dense surface and semantic labels for urban classes.
The dataset covers 700 metres along a street, annotated with pixel-level labels for facade details such as windows, doors, balconies, roofs, etc.

We provide the images, labels and indexes into the 3D surface, together with evaluation source code for comparing results on the different tasks.
The tasks are 3D mesh/point-cloud labelling, 2D image labelling, and view selection, and more will be added in the future.


View Reduction!
This dataset allows the evaluation of semantic classification methods on the following tasks: 3D mesh/point-cloud labelling, 2D image labelling, and view selection.


ETHZ CVL RueMonge 2014 dataset
This 3D reconstruction with annotations contains the semantic segmentations for the dataset. It is ready for download!

If you are interested, fill in the dataset request form and we will contact you.

This dataset comes with the following data:

  1. 2D images for training and testing, labelled with 8 classes
  2. 3D mesh (faces, vertices) as a 3D representation
  3. Index files for faces to pixels in each image
  4. Training / testing splits as txt files
  5. Sample files for classification results
  6. Sample source code for loading and evaluation (see below)
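The face-to-pixel index files make it possible to transfer 2D image labels onto the 3D mesh. Below is a minimal Python sketch of this idea, assuming a simplified index structure (a plain dict from face id to pixel coordinates); the actual file format is documented with the dataset, and the function name here is hypothetical.

```python
import numpy as np

def labels_to_faces(label_image, face_index, n_faces, n_classes):
    """Accumulate per-face class votes from one labelled image.

    label_image : (H, W) integer array of class labels
    face_index  : dict mapping face id -> list of (row, col) pixels
                  observed by that face in this image (hypothetical
                  simplification of the dataset's index files)
    """
    votes = np.zeros((n_faces, n_classes), dtype=int)
    for face_id, pixels in face_index.items():
        for (r, c) in pixels:
            # each pixel seen by this face votes for its class label
            votes[face_id, label_image[r, c]] += 1
    return votes

# Toy example: a 2x2 label image and two faces
label_image = np.array([[0, 1],
                        [1, 1]])
face_index = {0: [(0, 0), (0, 1)], 1: [(1, 0)]}
votes = labels_to_faces(label_image, face_index, n_faces=2, n_classes=2)
```

Votes from several images can then be summed per face before taking the argmax, which is the basic idea behind the multi-view fusion in the sample code.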
The sample source code provides the following functionality:
  1. Evaluation of 2D/3D labelling results by classwise or PASCAL IOU accuracy
  2. Examples for loading 2D image data into the 3D mesh (colours, labels, probabilities)
  3. Fusion of multi-view data by the SUMALL principle (see paper)
  4. Mesh labelling optimization via a graph-cut approach
  5. Various helper tools
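As a rough illustration of points 1 and 3, the sketch below shows sum-based multi-view fusion and a per-class PASCAL IOU computation in Python. It is a simplified stand-in for the provided evaluation code, not the code itself, and the function names are hypothetical.

```python
import numpy as np

def fuse_views_sumall(view_probs):
    """Fuse per-view class probabilities for each mesh face by summing
    them across views (the SUMALL principle from the paper), then take
    the most likely class per face."""
    # view_probs: list of (n_faces, n_classes) arrays, one per view
    summed = np.sum(view_probs, axis=0)
    return np.argmax(summed, axis=1)

def pascal_iou(pred, gt, n_classes):
    """Per-class PASCAL intersection-over-union accuracy."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        # a class absent from both prediction and ground truth is undefined
        ious.append(inter / union if union > 0 else np.nan)
    return np.array(ious)

# Toy example: two views of a single face, then a 3-element labelling
fused = fuse_views_sumall([np.array([[0.6, 0.4]]),
                           np.array([[0.3, 0.7]])])
ious = pascal_iou(np.array([0, 1, 1]), np.array([0, 1, 0]), n_classes=2)
```

Classwise accuracy differs from IOU only in the denominator (ground-truth pixels per class instead of the union), so it drops the false-positive term from `union`.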
The training / testing protocol follows the splits provided in the txt files.

Results on ETHZ CVL RueMonge 2014 for the different tasks are:

Method          | 2D IOU % | 3D IOU % | 3D IOU % | Speedup | Speedup
[1] MAP         | 38.72    | 35.77    | -        | -       | -
[1] GCO         | 40.92    | 37.33    | -        | 11.9x   | 7.1x
[1] GCO+recode  | 41.34    | 41.92    | 42.32    | -       | -

[1] Learning Where To Classify In Multi-View Semantic Segmentation, H. Riemenschneider, A. Bodis-Szomoru, J. Weissenberg, L. Van Gool, ECCV 2014

Submit your results here!

This page has been edited by Hayko Riemenschneider
