Recognizing scenes and objects in 3D from a single image is a longstanding goal of computer vision with applications in robotics and AR/VR. For 2D recognition, large datasets and scalable solutions have led to unprecedented advances. In 3D, existing benchmarks are small in size and approaches specialize in few object categories and specific domains, e.g. urban driving scenes. Motivated by the success of 2D recognition, we revisit the task of 3D object detection by introducing a large benchmark, called Omni3D. Omni3D re-purposes and combines existing datasets resulting in 234k images annotated with more than 3 million instances and 98 categories. 3D detection at such scale is challenging due to variations in camera intrinsics and the rich diversity of scene and object types. We propose a model, called Cube R-CNN, designed to generalize across camera and scene types with a unified approach. We show that Cube R-CNN outperforms prior works on the larger Omni3D and existing benchmarks. Finally, we prove that Omni3D is a powerful dataset for 3D object recognition and show that it improves single-dataset performance and can accelerate learning on new smaller datasets via pre-training.
We curate a large and diverse 3D object detection dataset, Omni3D, by re-purposing and unifying existing 3D-annotated datasets under a single format: 234k images, more than 3 million annotated instances, and 98 object categories spanning diverse indoor and outdoor scenes.
To support a new 3D AP metric, we have implemented a fast and accurate 3D IoU algorithm (source).
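For reference, here is a minimal sketch of computing exact IoU between oriented 3D boxes with PyTorch3D's `box3d_overlap`, which implements this kind of algorithm; the `cuboid_corners` helper and its corner ordering are illustrative assumptions, not the repository's own utilities.

```python
# Minimal sketch: exact IoU between two oriented 3D boxes via PyTorch3D.
# The corner ordering below follows PyTorch3D's unit-box convention and is an
# assumption for illustration.
import torch
from pytorch3d.ops import box3d_overlap

def cuboid_corners(center, dims, R):
    """(8, 3) corners of an oriented cuboid from center, dimensions, rotation."""
    signs = torch.tensor([
        [-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1],
        [-1, -1,  1], [1, -1,  1], [1, 1,  1], [-1, 1,  1],
    ], dtype=torch.float32)
    local = signs * (dims / 2.0)      # axis-aligned corners at half-dimensions
    return local @ R.T + center       # rotate, then translate to the box center

box_a = cuboid_corners(torch.zeros(3), torch.tensor([1.0, 1.0, 1.0]), torch.eye(3))
box_b = cuboid_corners(torch.tensor([0.2, 0.0, 0.0]), torch.tensor([1.0, 1.0, 1.0]), torch.eye(3))

# box3d_overlap takes (N, 8, 3) and (M, 8, 3) corner tensors and returns the
# pairwise intersection volumes and IoUs as (N, M) tensors.
vols, ious = box3d_overlap(box_a[None], box_b[None])
print(ious)  # ~0.67 for these two unit cubes offset by 0.2 along x
```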
We design a simple yet effective model for general 3D object detection that leverages key advances from recent monocular 3D object detection work. At its core, our method builds on Faster R-CNN (via detectron2) and adds a 3D head that estimates a virtual 3D cuboid for each detected object; during training the predicted cuboid is compared against the ground-truth 3D box vertices.
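As a rough illustration of the idea (not the repository's cube head), the sketch below maps pooled ROI features to a set of 3D box parameters; the parameter split and the 6D rotation representation are assumptions chosen for clarity.

```python
# Illustrative sketch of a "cube head" on top of pooled ROI features.
# The exact parameterization (projected-center offset, depth, log-dimensions,
# 6D rotation) is assumed here and may differ from the released model.
import torch
import torch.nn as nn

class CubeHead(nn.Module):
    def __init__(self, in_dim: int = 1024):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_dim, 1024), nn.ReLU(inplace=True))
        self.center_xy = nn.Linear(1024, 2)   # 2D offset of the projected 3D center
        self.depth = nn.Linear(1024, 1)       # object depth along the camera ray
        self.dims = nn.Linear(1024, 3)        # log width/height/length
        self.rot = nn.Linear(1024, 6)         # continuous 6D rotation representation

    def forward(self, roi_feats: torch.Tensor):
        h = self.fc(roi_feats)
        return {
            "xy": self.center_xy(h),
            "z": self.depth(h),
            "whl": self.dims(h).exp(),
            "pose6d": self.rot(h),
        }

# Example: 8 ROIs with 1024-d pooled features.
head = CubeHead()
out = head(torch.randn(8, 1024))
print({k: v.shape for k, v in out.items()})
```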
Here we show the generalization power of Cube R-CNN trained on Omni3D by predicting 3D objects in unseen COCO images. Note that COCO images come without known camera intrinsics, so we set the focal length to a constant multiple of the image height and assume the principal point is at the image center; the same heuristic is used for all examples below.
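The heuristic amounts to fabricating an intrinsics matrix from the image size alone; a sketch follows, where the `focal_scale` value is an illustrative assumption rather than the constant used by the demo.

```python
# Sketch of the intrinsics heuristic described above: build K from image size
# alone. focal_scale = 1.2 is an assumption for illustration only.
import numpy as np

def heuristic_intrinsics(height: int, width: int, focal_scale: float = 1.2) -> np.ndarray:
    f = focal_scale * height              # focal length as a constant scale of image height
    cx, cy = width / 2.0, height / 2.0    # principal point at the image center
    return np.array([
        [f,   0.0, cx],
        [0.0, f,   cy],
        [0.0, 0.0, 1.0],
    ])

print(heuristic_intrinsics(480, 640))
```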
We also demonstrate the zero-shot performance of Cube R-CNN on unseen Project Aria data. The demo uses known camera poses and a simple implementation of object tracking and smoothing.
Tracking is built on Hungarian matching, which associates the instantaneous per-frame 3D cuboid predictions across time and smooths them into stable tracks.
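As a sketch of how such matching can work (the demo's actual cost terms and thresholds are not reproduced here), the snippet below associates existing tracks with new detections using `scipy.optimize.linear_sum_assignment` and a 1 - 3D-IoU cost.

```python
# Sketch of Hungarian matching between existing tracks and per-frame cuboid
# detections. iou_fn is a user-supplied 3D IoU; the cost and threshold are
# assumptions, not the demo's exact recipe.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks(track_boxes, det_boxes, iou_fn, iou_thresh=0.2):
    """Return (track_idx, det_idx) pairs for assignments above the IoU threshold."""
    cost = np.ones((len(track_boxes), len(det_boxes)))
    for i, t in enumerate(track_boxes):
        for j, d in enumerate(det_boxes):
            cost[i, j] = 1.0 - iou_fn(t, d)   # low cost = high 3D overlap
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
    return [(i, j) for i, j in zip(rows, cols) if 1.0 - cost[i, j] >= iou_thresh]
```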
@inproceedings{brazil2023omni3d,
author = {Garrick Brazil and Abhinav Kumar and Julian Straub and Nikhila Ravi and Justin Johnson and Georgia Gkioxari},
title = {{Omni3D}: A Large Benchmark and Model for {3D} Object Detection in the Wild},
booktitle = {CVPR},
address = {Vancouver, Canada},
month = {June},
year = {2023},
organization = {IEEE},
}