The ModelNet40 dataset contains synthetic object point clouds. As the most widely used benchmark for point cloud analysis, ModelNet40 is popular for its diverse categories, clean shapes, and careful construction. The original ModelNet40 consists of 12,311 CAD-generated meshes in 40 categories (such as airplane, car, plant, and lamp), of which 9,843 are used for training and the remaining 2,468 are reserved for testing. The corresponding point clouds are uniformly sampled from the mesh surfaces and then preprocessed by moving them to the origin and scaling them into the unit sphere.
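A minimal NumPy sketch of that preprocessing pipeline, assuming the mesh is already available as vertex and face arrays; the sample count of 1,024 points and the centroid-based centring are illustrative choices, not fixed by the dataset:

```python
import numpy as np

def sample_and_normalize(vertices, faces, n_points=1024):
    """Uniformly sample points from a triangle mesh, then centre and scale to the unit sphere."""
    tris = vertices[faces]                                     # (F, 3, 3) triangle corners
    # area-weighted choice of triangles so the sampling is uniform over the surface
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    choice = np.random.choice(len(faces), size=n_points, p=areas / areas.sum())
    # random barycentric coordinates inside each chosen triangle
    u, v = np.random.rand(n_points, 1), np.random.rand(n_points, 1)
    flip = (u + v) > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tris[choice]
    points = t[:, 0] + u * (t[:, 1] - t[:, 0]) + v * (t[:, 2] - t[:, 0])
    # move to the origin and scale into the unit sphere
    points -= points.mean(axis=0)
    points /= np.linalg.norm(points, axis=1).max()
    return points.astype(np.float32)
```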
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We create the ModelNet40-C dataset, which contains 185,100 point clouds from 40 classes, 15 corruption types, and 5 severity levels. We provide a detailed taxonomy of the constructed corruption types. ModelNet40-C is, to the best of our knowledge, the first comprehensive dataset for benchmarking corruption robustness of 3D point cloud classification. This dataset is from our arXiv paper: Benchmarking Robustness of 3D Point Cloud Recognition Against Common Corruptions.
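As a hedged illustration of how one such corruption can be applied at increasing severity, the sketch below adds Gaussian jitter whose standard deviation grows with the severity level; the noise values are illustrative only, and the official corruption parameters should be taken from the ModelNet40-C paper and code.

```python
import numpy as np

def jitter(points, severity=1):
    """Add i.i.d. Gaussian noise; the standard deviation grows with severity (1-5)."""
    sigma = [0.01, 0.02, 0.03, 0.04, 0.05][severity - 1]   # illustrative values, not the official ones
    return points + np.random.normal(scale=sigma, size=points.shape).astype(points.dtype)

# corrupted = jitter(clean_points, severity=3)  # clean_points: (N, 3) array
```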
The datasets used in the paper are ShapeNet, a large-scale 3D shape dataset, and ModelNet40, a dataset for 3D object classification.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Results of different combinations of key components on ModelNet40.
The ModelNet40 zero-shot 3D classification performance of models pretrained on ShapeNet only.
Dataset Card for "modelnet40"
More Information needed
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ModelNet40: Comparison of registration errors at different noise levels.
The dataset used in the paper is ModelNet40-C, which is a 3D point cloud dataset with various corruptions.
This dataset was created by QuyNguyen03
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ModelNet40: Registration results for unseen point cloud categories.
The dataset is used for point cloud classification and segmentation tasks.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ModelNet40: Registration results for unseen point clouds with Gaussian noise.
Datasets
We conduct experiments on three new 3D domain generalization (3DDG) benchmarks proposed by us, as introduced in the next section.
- base-to-new class generalization (base2new)
- cross-dataset generalization (xset)
- few-shot generalization (fewshot)
The structure of these benchmarks should be organized as follows.
/path/to/Point-PRC
|----data # placed in the same level of `trainers`, `weights`, etc.
|----base2new
|----modelnet40
…
See the full description on the dataset page: https://huggingface.co/datasets/auniquesun/Point-PRC.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
With the advancement of sensor technologies such as LiDAR and depth cameras, the significance of three-dimensional point cloud data in autonomous driving and environment sensing continues to increase. Point cloud registration stands as a fundamental task in constructing high-precision environmental models, with particular significance in overlapping regions, where the accuracy of feature extraction and matching directly impacts registration quality. Despite advancements in deep learning approaches, existing methods continue to demonstrate limitations in extracting comprehensive features within these overlapping areas. This study introduces a point cloud registration framework that combines the K-nearest neighbor (KNN) algorithm with a channel attention mechanism (CAM) to enhance feature extraction and matching in overlapping regions. Additionally, by designing an effectiveness scoring network, the proposed method improves registration accuracy and enhances robustness in complex scenarios. Comprehensive evaluations on the ModelNet40 dataset show that our approach achieves markedly lower root mean square error (RMSE) and mean absolute error (MAE) than established methods, including iterative closest point (ICP), robust and efficient point cloud registration using PointNet (PointNetLK), Go-ICP, fast global registration (FGR), deep closest point (DCP), self-supervised learning for partial-to-partial registration (PRNet), and iterative distance-aware similarity matrix convolution (IDAM). This performance advantage is consistently maintained across various challenging conditions, including unseen shapes, novel categories, and noisy environments. Furthermore, additional experiments on the Stanford dataset validate the applicability and robustness of the proposed method for high-precision 3D shape registration tasks.
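The sketch below is not the paper's architecture; it only illustrates the two named ingredients in isolation, KNN neighbourhood grouping (via a SciPy KD-tree) and a squeeze-and-excitation style channel attention gate over per-point features, with toy hand-crafted descriptors standing in for learned ones.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_group(points, k=16):
    """Gather the k nearest neighbours of every point with a KD-tree."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)       # idx: (N, k); neighbour 0 is the point itself
    return points[idx]                     # (N, k, 3) local neighbourhoods

def channel_attention(features):
    """Squeeze-and-excitation style gate: re-weight feature channels by a global statistic."""
    squeeze = features.mean(axis=0)              # (C,) per-channel statistic over all points
    gate = 1.0 / (1.0 + np.exp(-squeeze))        # sigmoid gate in [0, 1]
    return features * gate                       # (N, C) re-weighted features

if __name__ == "__main__":
    pts = np.random.rand(1024, 3).astype(np.float32)
    neigh = knn_group(pts, k=16)
    # a toy per-point descriptor: mean and std of each local neighbourhood
    feat = np.concatenate([neigh.mean(axis=1), neigh.std(axis=1)], axis=1)   # (1024, 6)
    print(channel_attention(feat).shape)
```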
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Registration performance with noise on the Stanford dataset.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This repository contains ShapeSplats, a large dataset of Gaussian splats spanning 65K objects in 87 unique categories (gathered from ShapeNetCore, ShapeNet-Part, and ModelNet). ModelNet_Splats consists of the 12,311 objects across the 40 categories of ModelNet40. The data is distributed as ply files in which the information about each Gaussian is encoded in custom vertex attributes. Please see DATA.md for details about the data. If you use the ModelNet_Splats data, you agree to abide by the ModelNet terms of… See the full description on the dataset page: https://huggingface.co/datasets/ShapeSplats/ModelNet_Splats.
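A minimal sketch for inspecting one of the distributed ply files with the plyfile package; the attribute names (x, y, z, and the printed extras) follow the common Gaussian-splat vertex layout and the file path is hypothetical, so the authoritative schema is the one documented in DATA.md.

```python
import numpy as np
from plyfile import PlyData

def load_splat_centers(path):
    """Read a Gaussian-splat ply and return the per-Gaussian centres."""
    vertex = PlyData.read(path)["vertex"]
    print("vertex attributes:", vertex.data.dtype.names)     # scales, rotations, opacity, ...
    return np.stack([vertex["x"], vertex["y"], vertex["z"]], axis=1)

# centers = load_splat_centers("ModelNet_Splats/airplane/airplane_0001.ply")  # hypothetical path
```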
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Running time of different registration algorithms.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comparison of classification accuracy of the proposed defenses with other defense strategies under various attacks, using DGCNN on the ModelNet40 dataset.
ULIP-2 is a multimodal pre-training framework that leverages state-of-the-art multimodal large language models (LLMs), pre-trained on extensive knowledge, to automatically generate holistic language counterparts for 3D objects. We conduct experiments on two large-scale datasets, Objaverse and ShapeNet55, and release our generated three-modality triplet datasets (3D Point Cloud - Image - Language), named "ULIP-Objaverse Triplets" and "ULIP-ShapeNet Triplets". ULIP-2 requires only the 3D data itself and eliminates the need for any manual annotation effort, demonstrating its scalability, and it achieves remarkable improvements on downstream zero-shot classification on ModelNet40 (74% top-1 accuracy). Moreover, ULIP-2 sets a new record on the real-world ScanObjectNN benchmark (91.5% overall accuracy) while using only 1.4 million parameters (~10x fewer than the current SOTA), signifying a breakthrough in scalable multimodal 3D representation learning without human annotations.
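Zero-shot classification of this kind typically reduces to a nearest-neighbour search in a shared embedding space: each class name is encoded as text, and the point cloud is assigned to the class whose text embedding is most similar. The sketch below shows only that scoring step with plain NumPy and assumed, pre-computed embeddings; it is not ULIP-2's released code.

```python
import numpy as np

def zero_shot_classify(pc_embedding, text_embeddings, class_names):
    """Return the class whose text embedding is most cosine-similar to the point cloud embedding."""
    pc = pc_embedding / np.linalg.norm(pc_embedding)
    txt = text_embeddings / np.linalg.norm(text_embeddings, axis=1, keepdims=True)
    return class_names[int(np.argmax(txt @ pc))]

# Toy usage with random vectors standing in for encoder outputs.
names = ["airplane", "car", "plant", "lamp"]
print(zero_shot_classify(np.random.rand(512), np.random.rand(4, 512), names))
```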