ALSO: Automotive Lidar Self-supervision by Occupancy estimation

A. Boulch, C. Sautier, B. Michele, G. Puy, R. Marlet

Published in Computer Vision and Pattern Recognition (CVPR), 2023

Paper | arXiv | Code

Abstract

We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds. The core idea is to train the model on a pretext task, namely the reconstruction of the surface on which the 3D points are sampled, and to use the underlying latent vectors as input to the perception head. The intuition is that if the network is able to reconstruct the scene surface given only sparse input points, then it probably also captures some fragments of semantic information, which can be used to boost an actual perception task. This principle has a very simple formulation, which makes it both easy to implement and widely applicable to a large range of 3D sensors and deep networks performing semantic segmentation or object detection. Moreover, it supports a single-stream pipeline, as opposed to most contrastive learning approaches, allowing training on limited resources. We conducted extensive experiments on various autonomous driving datasets, involving very different kinds of lidars, for both semantic segmentation and object detection. The results show the effectiveness of our method in learning useful representations without any annotation, compared to existing approaches.
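To make the pretext task concrete, below is a minimal PyTorch sketch of an occupancy-estimation pretext loss in the spirit described above. It is not the official ALSO implementation: the `OccupancyPretextHead` class, the nearest-neighbor feature lookup, and the assumption that query points come with occupied/empty labels (e.g., derived by sampling along lidar rays) are illustrative choices, not details taken from the paper.

```python
# Hedged sketch of an occupancy-estimation pretext loss (not the official ALSO code).
# Assumptions: a backbone has already produced per-point latent vectors, and query
# positions carry binary occupancy labels obtained elsewhere (e.g., ray sampling).
import torch
import torch.nn as nn
import torch.nn.functional as F


class OccupancyPretextHead(nn.Module):
    """Predicts occupancy at query positions from latent vectors of nearby points."""

    def __init__(self, latent_dim: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, latents, points, queries):
        # latents: (N, D) per-point features, points: (N, 3), queries: (Q, 3)
        # Nearest-neighbor feature lookup (simplified; a real pipeline may interpolate
        # over several neighbors instead of taking a single closest point).
        dists = torch.cdist(queries, points)          # (Q, N) pairwise distances
        idx = dists.argmin(dim=1)                     # closest input point per query
        feats = latents[idx]                          # (Q, D) gathered latent vectors
        offsets = queries - points[idx]               # (Q, 3) query position relative to that point
        return self.mlp(torch.cat([feats, offsets], dim=1)).squeeze(-1)  # (Q,) logits


def pretext_loss(head, latents, points, queries, occ_labels):
    """Binary cross-entropy between predicted and precomputed occupancy labels."""
    logits = head(latents, points, queries)
    return F.binary_cross_entropy_with_logits(logits, occ_labels.float())


if __name__ == "__main__":
    # Toy usage with random data, for illustration only.
    N, Q, D = 1024, 256, 64
    latents = torch.randn(N, D)        # stand-in for backbone features
    points = torch.randn(N, 3)
    queries = torch.randn(Q, 3)
    occ = torch.randint(0, 2, (Q,))    # stand-in for ray-derived occupancy labels
    head = OccupancyPretextHead(D)
    print(pretext_loss(head, latents, points, queries, occ).item())
```

After pre-training with such a loss, the occupancy head would be discarded and the backbone's latent vectors reused as input to the downstream segmentation or detection head, in line with the single-stream pipeline described in the abstract.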

Citation

@InProceedings{ALSO,
author = {Alexandre Boulch and Corentin Sautier and Björn Michele and Gilles Puy and Renaud Marlet},
title = {ALSO: Automotive Lidar Self-supervision by Occupancy estimation},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = 2023,
}