The ModelNet-40 test split contains 2,468 CAD models covering 40 classes.
Dataset Card for "modelnet40-2048"
ModelNet40-C is a comprehensive dataset for benchmarking the corruption robustness of 3D point cloud recognition. It is built from the ModelNet40 validation set using 15 corruption types at 5 severity levels each, covering density, noise, and transformation corruption patterns. The dataset contains 185,000 distinct point clouds, providing a comprehensive picture of model robustness.
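As an illustration of the corruption/severity scheme described above, the sketch below applies a "noise"-type corruption (Gaussian jitter) at a chosen severity level. The severity-to-sigma mapping here is hypothetical and for illustration only; the official ModelNet40-C corruptions use their own calibrated parameters.

```python
import random

# Hypothetical severity-to-noise mapping, for illustration only; the official
# ModelNet40-C corruptions use their own calibrated parameters.
SIGMA_PER_SEVERITY = {1: 0.01, 2: 0.02, 3: 0.03, 4: 0.04, 5: 0.05}

def gaussian_jitter(points, severity, seed=None):
    """Apply a 'noise'-type corruption: add zero-mean Gaussian jitter to
    every coordinate of a point cloud (a list of (x, y, z) tuples)."""
    rng = random.Random(seed)
    sigma = SIGMA_PER_SEVERITY[severity]
    return [tuple(c + rng.gauss(0.0, sigma) for c in p) for p in points]
```

Density and transformation corruptions would follow the same pattern, dispatching on corruption type and severity to produce the 15 x 5 grid of variants per clean cloud.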
The goal of the Princeton ModelNet project is to provide researchers in computer vision, computer graphics, robotics, and cognitive science with a comprehensive, clean collection of 3D CAD models of objects.
The ModelNet40 dataset contains 12,311 pre-aligned shapes from 40 categories, split into 9,843 (80%) for training and 2,468 (20%) for testing. The CAD models are in Object File Format (OFF). MATLAB functions to read and visualize OFF files are provided in the Princeton Vision Toolkit (PVT).
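Since the models ship as OFF files, a minimal ASCII OFF reader can be sketched as below. One caveat: some ModelNet files are known to fuse the element counts onto the header line (e.g. "OFF490 518 0"); this sketch assumes a well-formed file with "OFF" on its own line.

```python
def parse_off(text):
    """Parse a well-formed ASCII OFF mesh into (vertices, faces).

    vertices: list of (x, y, z) float tuples; faces: list of vertex-index lists.
    """
    lines = [ln.split('#', 1)[0].strip() for ln in text.splitlines()]
    lines = [ln for ln in lines if ln]  # drop comments and blank lines
    if lines[0] != 'OFF':
        raise ValueError('not a well-formed OFF file (some ModelNet files '
                         'fuse the counts onto the header line)')
    n_verts, n_faces, _ = (int(x) for x in lines[1].split())
    verts = [tuple(float(x) for x in lines[2 + i].split())
             for i in range(n_verts)]
    faces = []
    for i in range(n_faces):
        nums = [int(x) for x in lines[2 + n_verts + i].split()]
        faces.append(nums[1:1 + nums[0]])  # first number is the vertex count
    return verts, faces
```

For example, a tetrahedron stored as `OFF\n4 4 6\n...` parses into 4 vertices and 4 triangular faces.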
To build the core of the dataset, a list of the most common object categories in the world was compiled using statistics obtained from the SUN database. Once a vocabulary of objects was established, 3D CAD models belonging to each object category were collected from online search engines by querying for each category term. Human workers on Amazon Mechanical Turk were then hired to manually decide whether each CAD model belonged to the specified categories, using an in-house tool with quality control. To obtain a very clean dataset, 10 popular object categories were chosen, and models that did not belong to these categories were manually deleted. Furthermore, the orientations of the CAD models in the 10-class subset were manually aligned.
This dataset was obtained from Princeton ModelNet's official dataset homepage. For more details, refer to the related publication, 3D ShapeNets: A Deep Representation for Volumetric Shapes. Work based on the dataset should cite:
@inproceedings{wu20153d,
  title={{3D ShapeNets}: A deep representation for volumetric shapes},
  author={Wu, Zhirong and Song, Shuran and Khosla, Aditya and Yu, Fisher and Zhang, Linguang and Tang, Xiaoou and Xiao, Jianxiong},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={1912--1920},
  year={2015}
}
All CAD models were downloaded from the Internet, and their original authors hold the copyright. The labels were obtained by the authors via the Amazon Mechanical Turk service and are provided freely. This dataset is provided for the convenience of academic research only.
Banner image credits: AnTao97's PointCloudDatasets repo (rendered with Mitsuba2).
The dataset used in the paper is ShapeNet, a large-scale 3D shape dataset, and ModelNet40, a dataset for 3D object classification.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ModelNet40: Comparison of registration errors at different noise levels.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
This dataset was created by Mind.Chen
Released under Apache 2.0
The datasets used in the paper are ModelNet40 and ModelNet40-C, both 3D point cloud datasets.
The ModelNet40 zero-shot 3D classification performance of models pretrained on ShapeNet only.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
ModelNet40: Registration results on unseen point cloud categories.
The dataset used in the paper is ModelNet40-C, which is a 3D point cloud dataset with various corruptions.
This dataset was created by QuyNguyen03
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
ModelNet40: Registration results on an unseen point cloud with Gaussian noise.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
This repository contains ShapeSplats, a large dataset of Gaussian splats spanning 65K objects in 87 unique categories (gathered from ShapeNetCore, ShapeNet-Part, and ModelNet). ModelNet_Splats consists of the 12,311 objects across the 40 categories of ModelNet40. The data is distributed as ply files in which the information about each Gaussian is encoded in custom vertex attributes. Please see DATA.md for details about the data. If you use the ModelNet_Splats data, you agree to abide by the ModelNet terms of… See the full description on the dataset page: https://huggingface.co/datasets/ShapeSplats/ModelNet_Splats.
Datasets
We conduct experiments on three new 3D domain generalization (3DDG) benchmarks proposed by us, as introduced in the next section.
base-to-new class generalization (base2new)
cross-dataset generalization (xset)
few-shot generalization (fewshot)
The structure of these benchmarks should be organized as follows.
/path/to/Point-PRC
|----data  # placed at the same level as `trainers`, `weights`, etc.
    |----base2new
        |----modelnet40… See the full description on the dataset page: https://huggingface.co/datasets/auniquesun/Point-PRC.
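The expected layout can be scaffolded and sanity-checked programmatically. The sketch below assumes only the three benchmark folder names given above (`base2new`, `xset`, `fewshot`) and the `modelnet40` subfolder visible before the listing is truncated; any further subdirectories on the dataset page are not reproduced here.

```python
import os

# Benchmark folders named in the listing above; `modelnet40` is the only
# dataset subdirectory visible before the listing is truncated.
BENCHMARKS = ("base2new", "xset", "fewshot")

def scaffold_data_dir(root):
    """Create data/<benchmark>/modelnet40 under `root`, mirroring the tree."""
    for bench in BENCHMARKS:
        os.makedirs(os.path.join(root, "data", bench, "modelnet40"),
                    exist_ok=True)

def missing_benchmarks(root):
    """Return the benchmark folders absent from data/ (a quick sanity check)."""
    data = os.path.join(root, "data")
    return [b for b in BENCHMARKS if not os.path.isdir(os.path.join(data, b))]
```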
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Comparison of the classification accuracy of the proposed defenses with other defense strategies, under various attacks on the DGCNN model and the ModelNet40 dataset.
The dataset used for point cloud classification and segmentation tasks.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
With the advancement of sensor technologies such as LiDAR and depth cameras, the significance of three-dimensional point cloud data in autonomous driving and environment sensing continues to increase. Point cloud registration stands as a fundamental task in constructing high-precision environmental models, with particular significance in overlapping regions where the accuracy of feature extraction and matching directly impacts registration quality. Despite advancements in deep learning approaches, existing methods continue to demonstrate limitations in extracting comprehensive features within these overlapping areas. This study introduces an innovative point cloud registration framework that synergistically combines the K-nearest neighbor (KNN) algorithm with a channel attention mechanism (CAM) to significantly enhance feature extraction and matching capabilities in overlapping regions. Additionally, by designing an effectiveness scoring network, the proposed method improves registration accuracy and enhances system robustness in complex scenarios. Comprehensive evaluations on the ModelNet40 dataset reveal that our approach achieves markedly superior performance metrics, demonstrating significantly lower root mean square error (RMSE) and mean absolute error (MAE) compared to established methods including iterative closest point (ICP), Robust & Efficient Point Cloud Registration using PointNet (PointNetLK), Go-ICP, fast global registration (FGR), deep closest point (DCP), self-supervised learning for a partial-to-partial registration (PRNet), and Iterative Distance-Aware Similarity Matrix Convolution (IDAM). This performance advantage is consistently maintained across various challenging conditions, including unseen shapes, novel categories, and noisy environments. Furthermore, additional experiments on the Stanford dataset validate the applicability and robustness of the proposed method for high-precision 3D shape registration tasks.
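Two of the abstract's building blocks can be sketched minimally: the KNN query used to form local neighborhoods before feature extraction, and the RMSE/MAE metrics used for evaluation. This is an illustrative brute-force sketch, not the paper's implementation; real pipelines would use a KD-tree or GPU-batched distances.

```python
import math

def knn_indices(query, points, k):
    """Brute-force K-nearest neighbors: indices of the k points in `points`
    closest (Euclidean) to `query`. Local neighborhoods like this feed the
    feature-extraction stage; a KD-tree would replace the O(n log n) sort."""
    order = sorted(range(len(points)),
                   key=lambda i: math.dist(query, points[i]))
    return order[:k]

def rmse(pred, target):
    """Root mean square error between two equal-length sequences."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target))
                     / len(pred))

def mae(pred, target):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)
```

In registration evaluation, `pred` and `target` would typically be the estimated and ground-truth rotation/translation parameters.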
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Registration performance with noise on the Stanford dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Running time of different registration algorithms.