[ECCV'24] NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for NeRFs

ECCV'24 paper

Project page: https://nerf-mae.github.io/
arXiv: https://arxiv.org/abs/2404.01300
Paper : https://arxiv.org/pdf/2404.01300.pdf
PyTorch Code: https://github.com/zubair-irshad/NeRF...

Abstract:
Neural fields have shown remarkable success in computer vision and robotics due to their ability to understand the 3D visual world, such as inferring semantics, geometry, and dynamics. Given the capabilities of neural fields in densely representing a 3D scene from 2D images, we ask the question: can we scale their self-supervised pretraining, specifically using masked autoencoders, to generate effective 3D representations from posed RGB images? Given the astounding success of extending transformers to novel data modalities, we employ standard 3D Vision Transformers to suit the unique formulation of NeRFs. We leverage NeRF's volumetric grid as a dense input to the transformer, contrasting it with other 3D representations such as point clouds, where the information density can be uneven and the representation irregular. Due to the difficulty of applying masked autoencoders to an implicit representation such as NeRF, we opt for extracting an explicit representation that canonicalizes scenes across domains by employing the camera trajectory for sampling. Our goal is made possible by first masking random patches from NeRF's radiance and density grid and employing a standard 3D Swin Transformer to reconstruct the masked patches. In doing so, the model can learn the semantic and spatial structure of complete scenes. We are able to pretrain this representation at scale on our proposed carefully curated posed-RGB data, totaling over 1.6 million images. Once pretrained, the encoder backbone is used for effective 3D transfer learning. Our novel self-supervised pretraining approach for NeRFs, NeRF-MAE, scales remarkably well and improves performance on various challenging 3D tasks including 3D object detection, voxel super-resolution, and semantic labeling. Utilizing unlabeled posed 2D data for pretraining, NeRF-MAE significantly outperforms self-supervised 3D pretraining and NeRF scene understanding baselines on the Front3D and ScanNet datasets, with an absolute performance improvement of over 20% AP50 and 8% AP25 for 3D object detection.
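
To make the masking step described in the abstract concrete, below is a minimal PyTorch-style sketch of how random cubic patches of an explicit radiance-and-density grid could be masked before reconstruction. This is not the authors' released implementation; the function name mask_grid_patches, the 4-channel (RGB + density) layout, the patch size, and the 75% mask ratio are illustrative assumptions.

import torch

def mask_grid_patches(grid, patch_size=4, mask_ratio=0.75, generator=None):
    """Randomly mask cubic patches of a radiance-and-density grid.

    grid: (C, D, H, W) tensor, e.g. C=4 for (RGB, sigma) values sampled
          from a trained NeRF along the camera trajectory.
    Returns the masked grid and a boolean mask over patches
    (True = masked, i.e. to be reconstructed by the decoder).
    """
    C, D, H, W = grid.shape
    p = patch_size
    assert D % p == 0 and H % p == 0 and W % p == 0

    # Split the grid into non-overlapping cubic patches.
    patches = grid.reshape(C, D // p, p, H // p, p, W // p, p)
    patches = patches.permute(1, 3, 5, 0, 2, 4, 6)  # (nd, nh, nw, C, p, p, p)
    nd, nh, nw = D // p, H // p, W // p
    n_patches = nd * nh * nw

    # Sample which patches to hide.
    n_mask = int(mask_ratio * n_patches)
    perm = torch.randperm(n_patches, generator=generator)
    mask = torch.zeros(n_patches, dtype=torch.bool)
    mask[perm[:n_mask]] = True

    # Zero out masked patches (a learned mask token could be used instead).
    flat = patches.reshape(n_patches, C, p, p, p).clone()
    flat[mask] = 0.0

    # Reassemble the masked grid in its original (C, D, H, W) layout.
    masked = flat.reshape(nd, nh, nw, C, p, p, p).permute(3, 0, 4, 1, 5, 2, 6)
    masked = masked.reshape(C, D, H, W)
    return masked, mask

# Example usage with a small illustrative grid (real grids are much larger).
grid = torch.rand(4, 32, 32, 32)
masked_grid, mask = mask_grid_patches(grid, patch_size=4, mask_ratio=0.75)
print(masked_grid.shape, mask.float().mean().item())

In the full pipeline, the masked grid would be fed to a 3D Swin Transformer encoder-decoder that reconstructs the hidden patches; the sketch above only illustrates the masking itself.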
