https://choosealicense.com/licenses/other/
This repository contains ShapeNetCore (v2), a subset of ShapeNet. ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in WordNet 3.0.
Please see DATA.md for details about the data.
If you use ShapeNet data, you agree to abide by the ShapeNet terms of use. You are only allowed to redistribute the data to your research associates and colleagues provided that… See the full description on the dataset page: https://huggingface.co/datasets/ShapeNet/ShapeNetCore.
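Access to the repository is gated behind the terms of use above. Once access has been granted, one common way to fetch the archives is via the huggingface_hub client; a minimal sketch (the repo_id comes from the dataset page above, everything else is an assumption about your environment):

```python
# Minimal sketch: downloading the gated ShapeNetCore dataset from Hugging Face.
# Assumes you have requested and been granted access under the ShapeNet terms
# of use, and are already logged in (e.g., via `huggingface-cli login`).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ShapeNet/ShapeNetCore",
    repo_type="dataset",
)
print("Dataset downloaded to:", local_dir)
```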
SeaLab/ShapeNet dataset hosted on Hugging Face and contributed by the HF Datasets community
https://choosealicense.com/licenses/other/
This repository contains ShapeNetCore (v2) in GLB format, a subset of ShapeNet. ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in WordNet 3.0.
If you use ShapeNet data, you agree to abide by the ShapeNet terms of use. You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by… See the full description on the dataset page: https://huggingface.co/datasets/ShapeNet/shapenetcore-glb.
A dataset for 3D shape generation, using the five shape categories selected from ShapeNet Core V1
The ModelNet40 zero-shot 3D classification performance of models pretrained on ShapeNet only.
This dataset was created by Jeremy26
The datasets used in the paper are ShapeNet, a large-scale 3D shape dataset, and ModelNet40, a dataset for 3D object classification.
https://choosealicense.com/licenses/other/
This repository contains archives (zip files) for ShapeNetSem, a subset of ShapeNet richly annotated with physical attributes. Please see DATA.md for details about the data. If you use ShapeNet data, you agree to abide by the ShapeNet terms of use. You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions. If you use this data, please cite the main ShapeNet technical report and the… See the full description on the dataset page: https://huggingface.co/datasets/ShapeNet/ShapeNetSem-archive.
This dataset was generated for the paper: "Adversarial examples within the training distribution: A widespread challenge" using our custom computer graphics pipeline. The paper can be accessed here: https://arxiv.org/abs/2106.16198 and the code used to generate this dataset can be found here: https://github.com/Spandan-Madan/in_distribution_adversarial_examples
A large-scale 3D model repository containing over 16,000 3D models.
This dataset was created by guxue17
ShapeNetCore is a subset of the full ShapeNet dataset with single clean 3D models and manually verified category and alignment annotations. It covers 55 common object categories with about 51,300 unique 3D models. The 12 object categories of PASCAL 3D+, a popular computer vision 3D benchmark dataset, are all covered by ShapeNetCore.
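Because each ShapeNetCore model is linked to a WordNet 3.0 synset, category folders are conventionally named by the numeric synset offset (e.g., 02691156 for airplane). A hedged sketch of resolving such an offset to a human-readable label with NLTK (the example offset is illustrative):

```python
# Sketch: mapping a ShapeNetCore synset-offset folder name to a WordNet label.
# Assumes NLTK's WordNet corpus is installed (nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

offset = "02691156"  # example folder name; the noun synset for 'airplane'
synset = wn.synset_from_pos_and_offset("n", int(offset))
print(synset.name())          # e.g., 'airplane.n.01'
print(synset.lemma_names())   # e.g., ['airplane', 'aeroplane', 'plane']
```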
We present a dataset of 3D CAD models (.stl) from the field of mechanical engineering. There are 7 core classes (cover, flange, housing, mounting, rodprobe, sensor, tube) and 5 additional classes (cableconnector, dismiss, diverse, fork, funnelantenna). The dataset has been hand-labelled with categories. These models are for demonstration purposes only and do not reflect actual products.
https://shapenet.org/terms
ShapeNet is a large-scale repository of 3D CAD models developed by researchers from Stanford University, Princeton University, and the Toyota Technological Institute at Chicago, USA. The repository contains over 3 million models, 220,000 of which are classified into 3,135 classes arranged using WordNet hypernym-hyponym relationships. The ShapeNet Parts subset contains 31,693 meshes categorised into 16 common object classes (e.g., table, chair, plane). Each shape's ground truth contains 2-5 parts, with a total of 50 part classes.
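In the commonly distributed part-annotation benchmark, each shape is sampled to a point cloud stored as a whitespace-separated .pts file with a parallel .seg file of per-point part labels. A hedged loader sketch, assuming that layout (the file paths are illustrative placeholders):

```python
# Sketch: reading one ShapeNet Parts sample, assuming the common benchmark
# layout of .pts point files paired with .seg per-point part-label files.
import numpy as np

def load_part_sample(pts_path: str, seg_path: str):
    points = np.loadtxt(pts_path, dtype=np.float32)   # (N, 3) xyz coordinates
    labels = np.loadtxt(seg_path, dtype=np.int64)     # (N,) part label per point
    assert points.shape[0] == labels.shape[0], "points/labels must align"
    return points, labels

# Illustrative paths; the actual directory layout depends on the release used.
pts, seg = load_part_sample("points/example.pts", "points_label/example.seg")
print(pts.shape, seg.shape, np.unique(seg))
```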
The synthetic ShapeNet intrinsic image decomposition dataset used for training the deep CNN models IntrinsicNet and RetiNet of CVPR2018. See Section 4.1 of the paper for details.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comparison of mean 3D IoU scores with baseline reconstruction methods on two categories of the ShapeNet dataset.
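For context, 3D IoU over voxelized shapes is the ratio of the occupied-voxel intersection to the union between a reconstruction and its ground truth. A minimal numpy sketch of the metric (grid size and data are illustrative):

```python
# Sketch: 3D IoU between predicted and ground-truth occupancy grids.
import numpy as np

def voxel_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU of two boolean occupancy grids of identical shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both grids empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union

# Toy example on random 32^3 grids.
rng = np.random.default_rng(0)
pred = rng.random((32, 32, 32)) > 0.5
gt = rng.random((32, 32, 32)) > 0.5
print(f"IoU: {voxel_iou(pred, gt):.3f}")
```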
This dataset was created by Sirish99
It contains the following files:
The synthetic ShapeNet intrinsic image decomposition dataset of 90,000 images. 50,000 of them were used for training the deep CNN models of CVIU'2021; see Section 4 of the paper. This is an extension of the first release of 20,000 images used for training the deep CNN models IntrinsicNet and RetiNet of CVPR'2018 (see Section 4.1 of the CVPR paper for details of the data rendering). As in the initial dataset, both albedo and shading ground-truth images were rendered in HDR and later normalized to [0,1] using min-max normalization. The composite RGB image was then created by element-wise multiplying the corresponding albedo and shading ground truths.
- albedo -> albedo (reflectance) ground-truth images [8 bits]
- shading -> gray-scale shading (illumination) ground-truth images [8 bits]
- mask -> object masks [binary]
- composite -> composite RGB image (albedo x shading) [16 bits]
- shading_prior_initial -> initial sparse shading estimations (see Section 3.3 of the paper) [8 bits]
- shading_prior_filled -> dense shading map reconstruction (see Section 3.4 of the paper) [16 bits]
The shading_prior_filled folder is split into two parts (shading_prior_filled.z01 and shading_prior_filled.zip); to extract them, the parts must be joined and then unzipped. If you are not sure how, check this link: https://superuser.com/a/336224 Note that the prefixes of the file names (ambient, test_not_used and with_normals) do not indicate anything extra.
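The composite construction described above (min-max normalization of the ground truths, then an element-wise albedo x shading product) can be sketched as follows; the file names are placeholders, not paths from the release:

```python
# Sketch: recreating a composite RGB image from albedo and shading ground
# truths via the element-wise product described above.
import numpy as np
import imageio.v3 as iio

def minmax_normalize(img: np.ndarray) -> np.ndarray:
    img = img.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

albedo = minmax_normalize(iio.imread("albedo/example.png"))    # reflectance
shading = minmax_normalize(iio.imread("shading/example.png"))  # illumination
if shading.ndim == 2:                    # gray-scale shading: broadcast to RGB
    shading = shading[..., None]
composite = albedo * shading             # element-wise product in [0, 1]
iio.imwrite("composite_example.png", (composite * 65535).astype(np.uint16))
```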
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Convolutional neural networks (CNNs) are a powerful tool for image classification that has been widely adopted in applications of automated scene segmentation and identification. However, the mechanisms underlying CNN image classification remain to be elucidated. In this study, we developed a new approach to address this issue by investigating transfer of learning in representative CNNs (AlexNet, VGG, ResNet-101, and Inception-ResNet-v2) on classifying geometric shapes based on local/global features or invariants. While the local features are based on simple components, such as the orientation of a line segment or whether two lines are parallel, the global features are based on the whole object, such as whether an object has a hole or whether an object is inside of another object. Six experiments were conducted to test two hypotheses on CNN shape classification. The first hypothesis is that transfer of learning based on local features is higher than transfer of learning based on global features. The second hypothesis is that the CNNs with more layers and advanced architectures have higher transfer of learning based on global features. The first two experiments examined how the CNNs transferred learning of discriminating local features (square, rectangle, trapezoid, and parallelogram). The other four experiments examined how the CNNs transferred learning of discriminating global features (presence of a hole, connectivity, and inside/outside relationship). While the CNNs exhibited robust learning on classifying shapes, transfer of learning varied from task to task, and model to model. The results rejected both hypotheses. First, some CNNs exhibited lower transfer of learning based on local features than that based on global features. Second, the advanced CNNs exhibited lower transfer of learning on global features than that of the earlier models. Among the tested geometric features, we found that learning of discriminating the inside/outside relationship was the most difficult to transfer, indicating an effective benchmark for developing future CNNs. In contrast to the "ImageNet" approach that employs natural images to train and analyze the CNNs, the results show proof of concept for the "ShapeNet" approach that employs well-defined geometric shapes to elucidate the strengths and limitations of the computation in CNN image classification. This "ShapeNet" approach will also provide insights into understanding visual information processing in the primate visual systems.
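As a rough illustration of the transfer-of-learning protocol described (adapting an ImageNet-pretrained CNN to a new shape-classification task), here is a hedged PyTorch sketch that freezes the backbone and retrains only a new classification head. The four-class setup mirrors the local-feature experiment; the paper's exact training data, optimizer, and schedule are not reproduced:

```python
# Sketch: transfer learning with a pretrained CNN, retraining only a new head.
# Illustrates the general protocol, not the study's exact training setup.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4  # e.g., square, rectangle, trapezoid, parallelogram
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)

for param in model.parameters():          # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```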
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data were collected in a custom simulator that loads random graspable objects from the ShapeNet dataset and a random table. The graspable object is placed above the table in a random position, and the scene is then simulated with the PhysX engine to ensure that it is physically plausible. The simulator captures an image of the scene from a random pose and then takes a second image from a camera pose on the opposite side of the scene. The dataset contains RGB, depth, and segmentation images of the scenes and of the objects in them, along with the camera poses, which can be used to create a full 3D model of the scene and to develop methods that reconstruct objects from a single RGB-D camera view. Part A of the dataset contains four categories: a helmet, a jar, a laptop, and a mug.
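A hedged sketch of how the released depth images and camera poses could be fused toward a full 3D model: back-project each depth map through pinhole intrinsics, then transform the points into the world frame with the camera pose. The intrinsics values and pose convention below are assumptions, not the dataset's documented format:

```python
# Sketch: back-projecting a depth image to a world-frame point cloud using
# pinhole intrinsics K and a camera-to-world pose T. Conventions (depth in
# meters, T as a 4x4 camera-to-world matrix) are assumptions about the data.
import numpy as np

def depth_to_world_points(depth: np.ndarray, K: np.ndarray, T: np.ndarray):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0                               # drop missing depth readings
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]     # X = (u - cx) * Z / fx
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]     # Y = (v - cy) * Z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)[:, valid]
    return (T @ pts_cam)[:3].T                  # (N, 3) world-frame points

# Toy example with made-up intrinsics and an identity camera pose.
K = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1]])
cloud = depth_to_world_points(np.full((480, 640), 1.0), K, np.eye(4))
print(cloud.shape)
```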