The goal of the Princeton ModelNet project is to provide researchers in computer vision, computer graphics, robotics, and cognitive science with a comprehensive, clean collection of 3D CAD models of objects.
The ModelNet40 dataset contains 12,311 pre-aligned shapes from 40 categories, split into 9,843 (80%) shapes for training and 2,468 (20%) for testing. The CAD models are in Object File Format (OFF). Matlab functions to read and visualize OFF files are provided in the Princeton Vision Toolkit (PVT).
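OFF is a plain-text mesh format: a header line, a line of vertex/face/edge counts, then the vertex coordinates and the face index lists. For readers working outside Matlab, below is a minimal Python sketch of an OFF reader (not part of PVT); the branch that handles a header fused with the counts reflects a quirk reported in some ModelNet files and is an assumption to verify against your copy.

import numpy as np

def read_off(path):
    """Parse an OFF mesh into (vertices, faces) NumPy arrays."""
    with open(path) as f:
        header = f.readline().strip()
        if header == "OFF":
            counts = f.readline().split()
        elif header.startswith("OFF"):
            # Assumption: some ModelNet files fuse the counts onto the
            # header line, e.g. "OFF6816 6912 0".
            counts = header[3:].split()
        else:
            raise ValueError(f"{path} does not look like an OFF file")
        n_verts, n_faces = int(counts[0]), int(counts[1])
        verts = np.array([[float(x) for x in f.readline().split()]
                          for _ in range(n_verts)])
        # Each face line is "<count> v0 v1 ..."; ModelNet faces are triangles.
        faces = np.array([[int(x) for x in f.readline().split()][1:4]
                          for _ in range(n_faces)])
    return verts, faces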
To build the core of the dataset, a list of the most common object categories in the world was compiled using statistics from the SUN database. Once a vocabulary for objects was established, 3D CAD models belonging to each category were collected from online search engines by querying for each category term. Human workers on Amazon Mechanical Turk were then hired to decide whether each CAD model belonged to the specified category, using an in-house tool with quality control. To obtain a very clean dataset, 10 popular object categories were chosen, and models that did not belong to these categories were manually deleted. Furthermore, the orientations of the CAD models in this 10-class subset were manually aligned.
This dataset was obtained from Princeton ModelNet's official dataset homepage. For more details on the dataset, refer to the related publication, 3D ShapeNets: A Deep Representation for Volumetric Shapes. Work based on the dataset should cite:
@inproceedings{wu20153d,
  title={3D {ShapeNets}: A deep representation for volumetric shapes},
  author={Wu, Zhirong and Song, Shuran and Khosla, Aditya and Yu, Fisher and Zhang, Linguang and Tang, Xiaoou and Xiao, Jianxiong},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={1912--1920},
  year={2015}
}
All CAD models were downloaded from the Internet, and the original authors hold the copyright of the CAD models. The labels were obtained by the authors via the Amazon Mechanical Turk service and are provided freely. This dataset is provided for the convenience of academic research only.
Banner Image Credits - From AnTao97's PointCloudDatasets repo [rendered with Mitsuba2]
The goal of the Princeton ModelNet project is to provide researchers in computer vision, computer graphics, robotics, and cognitive science with a comprehensive, clean collection of 3D CAD models of objects.
The ModelNet10 dataset is a subset of ModelNet40, containing 4,899 pre-aligned shapes from 10 categories: 3,991 (80%) shapes for training and 908 (20%) for testing. The CAD models are in Object File Format (OFF). Matlab functions to read and visualize OFF files are provided in the Princeton Vision Toolkit (PVT).
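As a hedged convenience sketch, the splits can be enumerated in Python, assuming the downloaded archive unpacks to per-category train/ and test/ folders of .off files (the layout commonly seen for the ModelNet zips; verify against your copy):

from pathlib import Path

def list_split(root, split="train"):
    """Return (path, category) pairs for one split of a ModelNet-style tree."""
    samples = []
    for category_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        for off_file in sorted((category_dir / split).glob("*.off")):
            samples.append((off_file, category_dir.name))
    return samples

train = list_split("ModelNet10", "train")
print(len(train))  # expect 3,991 training shapes for ModelNet10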
To build the core of the dataset, a list of the most common object categories in the world was compiled using statistics from the SUN database. Once a vocabulary for objects was established, 3D CAD models belonging to each category were collected from online search engines by querying for each category term. Human workers on Amazon Mechanical Turk were then hired to decide whether each CAD model belonged to the specified category, using an in-house tool with quality control. To obtain a very clean dataset, 10 popular object categories were chosen, and models that did not belong to these categories were manually deleted. Furthermore, the orientations of the CAD models in this 10-class subset were manually aligned.
This dataset was obtained from Princeton ModelNet's official dataset homepage. For more details on the dataset, refer to the related publication, 3D ShapeNets: A Deep Representation for Volumetric Shapes. Work based on the dataset should cite:
@inproceedings{wu20153d,
  title={3D {ShapeNets}: A deep representation for volumetric shapes},
  author={Wu, Zhirong and Song, Shuran and Khosla, Aditya and Yu, Fisher and Zhang, Linguang and Tang, Xiaoou and Xiao, Jianxiong},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={1912--1920},
  year={2015}
}
All CAD models were downloaded from the Internet, and the original authors hold the copyright of the CAD models. The labels were obtained by the authors via the Amazon Mechanical Turk service and are provided freely. This dataset is provided for the convenience of academic research only.
Banner Image Credits - From AnTao97's PointCloudDatasets repo [rendered with Mitsuba2]
The NASA Global Land Data Assimilation System Version 2 (GLDAS-2) has three components: GLDAS-2.0, GLDAS-2.1, and GLDAS-2.2. GLDAS-2.0 is forced entirely with the Princeton meteorological forcing input data and provides a temporally consistent series from 1948 through 2014. GLDAS-2.1 is forced with a combination of model and observation data from 2000 to the present. The GLDAS-2.2 product suites use data assimilation (DA), whereas the GLDAS-2.0 and GLDAS-2.1 products are "open-loop" (i.e., no data assimilation). The choice of forcing data, as well as the DA observation source, variable, and scheme, varies across the GLDAS-2.2 products.
This data set, GLDAS-2.0 0.25 degree daily, contains a series of land surface parameters simulated by the Catchment Land Surface Model 3.6 and currently covers January 1948 through December 2014.
The GLDAS-2.0 model simulations were initialized on January 1, 1948, using soil moisture and other state fields from the LSM climatology for that day of the year. The simulations were forced by the global meteorological forcing data set from Princeton University (Sheffield et al., 2006). Each simulation uses the common GLDAS data sets for the land water mask (MOD44W: Carroll et al., 2009) and elevation (GTOPO30), along with the model's default land cover and soils data sets. The Catchment model uses the Mosaic land cover classification; its soils, topographic, and other model-specific parameters were derived in a manner consistent with the NASA/GMAO GEOS-5 climate modeling system. MODIS-based land surface parameters are used in the current GLDAS-2.0 and GLDAS-2.1 products.
The GLDAS-2.0 data are archived and distributed in netCDF format.
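As a hedged illustration, one daily granule can be inspected in Python with the xarray library. The file name below follows the GLDAS naming convention but is illustrative, and "Tair_f_tavg" (near-surface air temperature) is an assumed variable name to check against the actual granule:

import xarray as xr

# Hypothetical granule name; substitute a file you actually downloaded.
ds = xr.open_dataset("GLDAS_CLSM025_D.A19480101.020.nc4")
print(ds.data_vars)  # list the land surface parameters in the file

# Assumed variable name; confirm it via the printout above.
tair = ds["Tair_f_tavg"]
print(tair.sel(lat=40.35, lon=-74.65, method="nearest").values)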
The Global Land Data Assimilation System Version 2 (hereafter, GLDAS-2) has two components: one forced entirely with the Princeton meteorological forcing data (hereafter, GLDAS-2.0), and the other forced with a combination of model- and observation-based forcing data sets (hereafter, GLDAS-2.1).
This data set, GLDAS-2.0 0.25 degree daily, contains a series of land surface parameters simulated by the Catchment Land Surface Model 3.3 and currently covers 1948 through 2014.
The GLDAS-2.0 model simulations were initialized on January 1, 1948, using soil moisture and other state fields from the LSM climatology for that day of the year. The simulations were forced by the global meteorological forcing data set from Princeton University (Sheffield et al., 2006). Each simulation uses the common GLDAS data sets for the land water mask (MOD44W: Carroll et al., 2009) and elevation (GTOPO30), along with the model's default land cover and soils data sets. The Catchment model uses the Mosaic land cover classification; its soils, topographic, and other model-specific parameters were derived in a manner consistent with the NASA/GMAO GEOS-5 climate modeling system. MODIS-based land surface parameters are used in the current GLDAS-2.0 and GLDAS-2.1 products. For more information, please see the README Document.
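To sketch how a long time series might be assembled from the daily files (a hedged example: the glob pattern and the variable name "SoilMoist_S_tavg" are assumptions to check against the README, and xarray's multi-file open requires dask to be installed):

import xarray as xr

# Lazily open all daily granules matching the assumed naming pattern.
ds = xr.open_mfdataset("GLDAS_CLSM025_D.A*.nc4", combine="by_coords")
sm = ds["SoilMoist_S_tavg"]  # assumed surface soil moisture variable
point = sm.sel(lat=40.0, lon=-75.0, method="nearest")  # one grid cell
annual = point.groupby("time.year").mean()  # annual means over 1948-2014
print(annual.values)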