7 datasets found
  1. GSO-SAD

    • huggingface.co
    Updated Sep 22, 2024
    Cite
    yulin wang (2024). GSO-SAD [Dataset]. https://huggingface.co/datasets/SEU-WYL/GSO-SAD
    Dataset updated
    Sep 22, 2024
    Authors
    yulin wang
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    For more details, see the project's GitHub repository: https://github.com/WangYuLin-SEU/KASAL

      Google Scanned Objects (GSO) Symmetry Axis Dataset

      1. Dataset Description

    This dataset is an extension of the Google Scanned Objects (GSO) dataset, enriched with symmetry axis annotations for each object. It is designed to assist in pose estimation tasks by providing explicit symmetry information for objects with both geometric and texture symmetries.

      Key Features:

    Objects: 3D scanned… See the full description on the dataset page: https://huggingface.co/datasets/SEU-WYL/GSO-SAD.
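    As a hedged illustration of how symmetry-axis annotations are typically consumed in pose estimation (the function names and the axis/order representation here are assumptions, not this dataset's actual schema): given an object's symmetry axis and symmetry order, one pose estimate expands into the full set of visually equivalent rotations via the Rodrigues formula.

```python
import math

def rodrigues(axis, theta):
    """3x3 rotation matrix for angle theta about a unit axis (Rodrigues formula)."""
    x, y, z = axis
    c, s = math.cos(theta), math.sin(theta)
    C = 1.0 - c
    return [
        [c + x * x * C,     x * y * C - z * s, x * z * C + y * s],
        [y * x * C + z * s, c + y * y * C,     y * z * C - x * s],
        [z * x * C - y * s, z * y * C + x * s, c + z * z * C],
    ]

def matmul3(a, b):
    """Multiply two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def equivalent_poses(rotation, axis, order):
    """For an object with `order`-fold symmetry about `axis`, return every
    rotation that yields the same appearance as `rotation`."""
    return [matmul3(rotation, rodrigues(axis, 2 * math.pi * k / order))
            for k in range(order)]

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
poses = equivalent_poses(identity, (0.0, 0.0, 1.0), order=4)  # 4-fold about z
```

    During evaluation, a predicted pose is usually scored against the closest member of this equivalence set, so symmetric objects are not penalized for indistinguishable orientations.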

  2. Parcel3D - A Synthetic Dataset of Damaged and Intact Parcel Images with 2D...

    • data.niaid.nih.gov
    • resodate.org
    • +2more
    Updated Jul 13, 2023
    Cite
    Naumann, Alexander; Hertlein, Felix; Dörr, Laura; Furmans, Kai (2023). Parcel3D - A Synthetic Dataset of Damaged and Intact Parcel Images with 2D and 3D Annotations [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8032203
    Dataset updated
    Jul 13, 2023
    Dataset provided by
    Karlsruhe Institute of Technology
    FZI Research Center for Information Technology
    Authors
    Naumann, Alexander; Hertlein, Felix; Dörr, Laura; Furmans, Kai
    Description

    Synthetic dataset of over 13,000 images of damaged and intact parcels with full 2D and 3D annotations in the COCO format. For details see our paper and for visual samples our project page.

    Relevant computer vision tasks:

    bounding box detection

    classification

    instance segmentation

    keypoint estimation

    3D bounding box estimation

    3D voxel reconstruction

    3D reconstruction
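    Because the annotations follow the COCO format, the per-image structure can be recovered with the standard library alone. A minimal sketch (the JSON layout is the standard COCO one; the annotation file name is an assumption):

```python
import json
from collections import defaultdict

def load_coco_annotations(path):
    """Group COCO-format annotations by image and resolve category names."""
    with open(path) as f:
        coco = json.load(f)
    categories = {c["id"]: c["name"] for c in coco["categories"]}
    by_image = defaultdict(list)
    for ann in coco["annotations"]:
        by_image[ann["image_id"]].append(
            {"category": categories[ann["category_id"]], "bbox": ann["bbox"]}
        )
    # Map file names to their annotation lists (empty list if unannotated).
    return {img["file_name"]: by_image[img["id"]] for img in coco["images"]}

# Example (hypothetical file name):
# annotations = load_coco_annotations("parcel3d_annotations.json")
```

    The same file can of course be fed to pycocotools or any COCO-aware detection framework; this sketch only shows the underlying structure.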

    The dataset is for academic research use only, since it uses resources with restrictive licenses. For a detailed description of how the resources are used, we refer to our paper and project page.

    Licenses of the resources in detail:

    Google Scanned Objects: CC BY 4.0 (for details on which files are used, see the respective meta folder)

    Cardboard Dataset: CC BY 4.0

    Shipping Label Dataset: CC BY-NC 4.0

    Other Labels: See file misc/source_urls.json

    LDR Dataset: License for Non-Commercial Use

    Large Logo Dataset (LLD): Please notice that this dataset is made available for academic research purposes only. All the images are collected from the Internet, and the copyright belongs to the original owners. If any of the images belongs to you and you would like it removed, please kindly inform us, we will remove it from our dataset immediately.

    You can use our textureless models (i.e. the obj files) of damaged parcels under CC BY 4.0 (note that this does not apply to the textures).

    If you use this resource for scientific research, please consider citing

    @inproceedings{naumannParcel3DShapeReconstruction2023,
      author    = {Naumann, Alexander and Hertlein, Felix and D{\"o}rr, Laura and Furmans, Kai},
      title     = {Parcel3D: Shape Reconstruction From Single RGB Images for Applications in Transportation Logistics},
      booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
      month     = {June},
      year      = {2023},
      pages     = {4402-4412}
    }

  3. Edit3D-Bench

    • huggingface.co
    Updated Aug 26, 2025
    Cite
    Link·Mercer (2025). Edit3D-Bench [Dataset]. https://huggingface.co/datasets/linknoise/Edit3D-Bench
    Dataset updated
    Aug 26, 2025
    Authors
    Link·Mercer
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    Edit3D-Bench

    Paper | Project Page | Code

    Edit3D-Bench is a benchmark for 3D editing evaluation, introduced in the paper VoxHammer: Training-Free Precise and Coherent 3D Editing in Native 3D Space. This dataset comprises 100 high-quality 3D models: 50 selected from Google Scanned Objects (GSO) and 50 from PartObjaverse-Tiny. For each model, we provide 3 distinct editing prompts. Each prompt is accompanied by a complete set of annotated 3D assets, including

    original 3D asset… See the full description on the dataset page: https://huggingface.co/datasets/linknoise/Edit3D-Bench.

  4. ULB SauceDino

    • zenodo.org
    • data-staging.niaid.nih.gov
    • +1more
    zip
    Updated Dec 6, 2023
    Cite
    Armand Losfeld; Armand Losfeld; Laurie Van Bogaert; Laurie Van Bogaert; Gauthier Lafruit; Gauthier Lafruit; Mehrdad Teratani; Mehrdad Teratani (2023). ULB SauceDino [Dataset]. http://doi.org/10.5281/zenodo.7950729
    Dataset updated
    Dec 6, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Armand Losfeld; Armand Losfeld; Laurie Van Bogaert; Laurie Van Bogaert; Gauthier Lafruit; Gauthier Lafruit; Mehrdad Teratani; Mehrdad Teratani
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ULB SauceDino synthetic light field dataset by LISA ULB

    The synthetic light field dataset "ULB SauceDino" [1] is provided by Armand Losfeld, Laurie Van Bogaert, Gauthier Lafruit, Mehrdad Teratani, members of the LISA department, EPB (Ecole Polytechnique de Bruxelles), ULB (Université Libre de Bruxelles), Belgium.

    Content

    This dataset is used for the performance evaluation of tensor displays, but any other application using a light field (or multi-view images) as input can use it. It is composed of 3 scanned objects from the collection Scanned Objects by Google Research [2], two user-modeled objects, and a brick-texture background. Two pre-rendered light fields are provided. If other light field configurations are needed, please install Blender (≥ 3.8.0) and the light field Blender add-on (link: https://github.com/dbonattoj/blender-addon).

    Please find a detailed description of the content of each file in the following sections.

    9x9_wide_FOV.zip

    This compressed file contains 9 horizontal by 9 vertical viewpoints for a total of 81 viewpoints. Each viewpoint contains 512x512 pixels and is stored in the PNG format with 8 bits for each channel. The camera parameters, cf. parameters.cfg, are set to have a disparity of -3 in the background, a disparity of 0 in the dinosaur body, and a disparity of 3 in the letter cube. This light field is generally used to challenge the evaluated method due to its high-parallax and high-texture objects.

    Note that the viewpoint called Cam000 is always the top-left viewpoint of the light field. The next viewpoint, Cam001, is the adjacent viewpoint on the right, so the (N-1)-th viewpoint (here 80) is the bottom-right viewpoint.

    15x15.zip

    This compressed file contains 15 horizontal by 15 vertical viewpoints, for a total of 225 viewpoints. Each viewpoint contains 512x300 pixels and is stored in the PNG format with 8 bits for each channel. The camera parameters, cf. parameters.cfg, are set to have a disparity of -1 in the background, a disparity of 0 in the dinosaur body, and a disparity of 1 in the letter cube. Due to its low parallax, this light field is easier to reproduce than the previous one.

    Note that the viewpoint called Cam000 is always the top-left viewpoint of the light field. The next viewpoint, Cam001, is the adjacent viewpoint on the right, so the (N-1)-th viewpoint (here 224) is the bottom-right viewpoint.
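    The row-major naming convention described above (Cam000 at the top-left, indices increasing to the right) is easy to capture in a small helper. A sketch, assuming the zero-padded `CamNNN` names from the description; the function names are illustrative:

```python
def cam_name(index):
    """Zero-padded viewpoint name, e.g. 0 -> 'Cam000'."""
    return f"Cam{index:03d}"

def index_to_grid(index, cols):
    """Row-major (row, col) position of a viewpoint; Cam000 is top-left."""
    return divmod(index, cols)

def grid_to_index(row, col, cols):
    """Inverse mapping: grid position back to the viewpoint index."""
    return row * cols + col

# 15x15 light field: the bottom-right viewpoint is Cam224.
last = grid_to_index(14, 14, cols=15)
```

    The same helpers cover the 9x9 light field with `cols=9`, where the bottom-right viewpoint is Cam080.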

    BLENDER_SCENE.zip

    This compressed file contains all the files needed to load the scene in Blender. With it, it is possible to render new light fields of the same scene and even change the complexity of the scene. Note that the cameras used for the previous light field renderings are already present. We recommend modifying only the camera parameters rather than their positions, since only the adjustable camera parameters can be exported to the parameters.cfg file with the Blender add-on.

    License

    CC BY-NC-SA

    Terms of Use

    Any kind of publication or report using this dataset should refer to the references below.

    References

    [1] Armand Losfeld, Laurie Van Bogaert, Gauthier Lafruit, Mehrdad Teratani, "ULB SauceDino", 2023.

    @misc{losfeld_saucedino_2023,
      title  = {{ULB} {SauceDino}},
      author = {Losfeld, Armand and Van Bogaert, Laurie and Lafruit, Gauthier and Teratani, Mehrdad},
      month  = may,
      year   = {2023},
      doi    = {10.5281/zenodo.7950729}
    }

    [2] L. Downs, A. Francis, N. Koenig, B. Kinman, R. Hickman, K. Reymann, T. B. McHugh, and V. Vanhoucke, "Google Scanned Objects: A High-Quality Dataset of 3D Scanned Household Items," 2022. Available: https://arxiv.org/abs/2204.11918

    @misc{downs2022google,
      title         = {Google Scanned Objects: A High-Quality Dataset of 3D Scanned Household Items},
      author        = {Laura Downs and Anthony Francis and Nate Koenig and Brandon Kinman and Ryan Hickman and Krista Reymann and Thomas B. McHugh and Vincent Vanhoucke},
      year          = {2022},
      eprint        = {2204.11918},
      archivePrefix = {arXiv},
      primaryClass  = {cs.RO}
    }

  5. Edit3D-Bench

    • huggingface.co
    Updated Aug 26, 2025
    Cite
    zehuan-huang (2025). Edit3D-Bench [Dataset]. https://huggingface.co/datasets/huanngzh/Edit3D-Bench
    Dataset updated
    Aug 26, 2025
    Authors
    zehuan-huang
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    Edit3D-Bench

    Paper | Project Page | Code

    Edit3D-Bench is a benchmark for 3D editing evaluation, introduced in the paper VoxHammer: Training-Free Precise and Coherent 3D Editing in Native 3D Space. This dataset comprises 100 high-quality 3D models: 50 selected from Google Scanned Objects (GSO) and 50 from PartObjaverse-Tiny. For each model, we provide 3 distinct editing prompts. Each prompt is accompanied by a complete set of annotated 3D assets, including

    original 3D asset… See the full description on the dataset page: https://huggingface.co/datasets/huanngzh/Edit3D-Bench.

  6. Petra Treasury - Made with random google images.

    • zenodo.org
    bin, jpeg
    Updated Jul 8, 2024
    Cite
    shacharweis; shacharweis (2024). Petra Treasury - Made with random google images. [Dataset]. http://doi.org/10.5281/zenodo.10329654
    Dataset updated
    Jul 8, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    shacharweis; shacharweis
    License

    Attribution 1.0 (CC BY 1.0): https://creativecommons.org/licenses/by/1.0/
    License information was derived automatically

    Description

    This scan was reconstructed using random images found online. More info here:

    https://packet39.com/blog/2019/01/09/how-i-3d-scanned-the-treasury-at-petra-without-leaving-home/

    Source: Objaverse 1.0 / Sketchfab

  7. Data from: X-ray CT data with semantic annotations for the paper "A workflow...

    • catalog.data.gov
    • datasetcatalog.nlm.nih.gov
    • +1more
    Updated Dec 2, 2025
    Cite
    Agricultural Research Service (2025). X-ray CT data with semantic annotations for the paper "A workflow for segmenting soil and plant X-ray CT images with deep learning in Google’s Colaboratory" [Dataset]. https://catalog.data.gov/dataset/x-ray-ct-data-with-semantic-annotations-for-the-paper-a-workflow-for-segmenting-soil-and-p-d195a
    Dataset updated
    Dec 2, 2025
    Dataset provided by
    Agricultural Research Service
    Description

    Leaves from genetically unique Juglans regia plants were scanned using X-ray micro-computed tomography (microCT) on the X-ray μCT beamline (8.3.2) at the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA, USA. Soil samples were collected in Fall 2017 from the riparian oak forest located at the Russell Ranch Sustainable Agricultural Institute at the University of California, Davis. The soil was sieved through a 2 mm mesh and air dried before imaging. A single soil aggregate was scanned at 23 keV using the 10x objective lens with a pixel resolution of 650 nanometers on beamline 8.3.2 at the ALS. Additionally, a drought-stressed almond flower bud (Prunus dulcis) from a plant housed at the University of California, Davis, was scanned using a 4x lens with a pixel resolution of 1.72 µm on beamline 8.3.2 at the ALS.

    Raw tomographic image data was reconstructed using TomoPy. Reconstructions were converted to 8-bit tif or png format using ImageJ or the PIL package in Python before further processing. Images were annotated using Intel's Computer Vision Annotation Tool (CVAT) and ImageJ; both are free to use and open source. Leaf images were annotated following Théroux-Rancourt et al. (2020): hand labeling was done directly in ImageJ by drawing around each tissue, with 5 images annotated per leaf. Care was taken to cover a range of anatomical variation to help improve the generalizability of the models to other leaves. All slices were labeled by Dr. Mina Momayyezi and Fiona Duong.

    To annotate the flower bud and soil aggregate, images were imported into CVAT. The exterior border of the bud (i.e. bud scales) and flower were annotated in CVAT and exported as masks. Similarly, the exterior of the soil aggregate and particulate organic matter identified by eye were annotated in CVAT and exported as masks. To annotate air spaces in both the bud and soil aggregate, images were imported into ImageJ. A Gaussian blur was applied to each image to decrease noise, and the air space was then segmented using thresholding. After applying the threshold, the selected air-space region was converted to a binary image, with white representing the air space and black representing everything else. This binary image was overlaid upon the original image, and the air space within the flower bud and aggregate was selected using the "free hand" tool. Air space outside of the region of interest for both image sets was eliminated. The quality of the air-space annotation was then visually inspected for accuracy against the underlying original image; incomplete annotations were corrected using the brush or pencil tool to paint missing air space white and incorrectly identified air space black. Once the annotation was satisfactorily corrected, the binary image of the air space was saved. Finally, the annotations of the bud and flower or aggregate and organic matter were opened in ImageJ, and the associated air-space mask was overlaid on top of them, forming a three-layer mask suitable for training the fully convolutional network. All labeling of the soil aggregate and soil aggregate images was done by Dr. Devin Rippner.

    These images and annotations are for training deep learning models to identify different constituents in leaves, almond buds, and soil aggregates.

    Limitations: For the walnut leaves, some tissues (stomata, etc.) are not labeled, and the images represent only a small portion of a full leaf. Similarly, both the almond bud and the aggregate represent just one sample of each. The bud tissues are divided only into bud scales, flower, and air space; many other tissues remain unlabeled. For the soil aggregate, annotated labels were done by eye with no actual chemical information, so particulate organic matter identification may be incorrect.

    Resources in this dataset:

    Resource Title: Annotated X-ray CT images and masks of a Forest Soil Aggregate.
    File Name: forest_soil_images_masks_for_testing_training.zip
    Resource Description: This aggregate was collected from the riparian oak forest at the Russell Ranch Sustainable Agricultural Facility. The aggregate was scanned using X-ray micro-computed tomography (microCT) on the X-ray μCT beamline (8.3.2) at the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA, USA, using the 10x objective lens with a pixel resolution of 650 nanometers. For masks, the background has a value of 0,0,0; pore spaces have a value of 250,250,250; mineral solids have a value of 128,0,0; and particulate organic matter has a value of 0,128,0. These files were used for training a model to segment the forest soil aggregate and for testing the accuracy, precision, recall, and F1 score of the model.

    Resource Title: Annotated X-ray CT images and masks of an Almond Bud (P. dulcis).
    File Name: Almond_bud_tube_D_P6_training_testing_images_and_masks.zip
    Resource Description: A drought-stressed almond flower bud (Prunus dulcis) from a plant housed at the University of California, Davis, was scanned by X-ray micro-computed tomography (microCT) on the X-ray μCT beamline (8.3.2) at the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA, USA, using the 4x lens with a pixel resolution of 1.72 µm. For masks, the background has a value of 0,0,0; air spaces have a value of 255,255,255; bud scales have a value of 128,0,0; and flower tissues have a value of 0,128,0. These files were used for training a model to segment the almond bud and for testing the accuracy, precision, recall, and F1 score of the model.
    Resource Software Recommended: Fiji (ImageJ), url: https://imagej.net/software/fiji/downloads

    Resource Title: Annotated X-ray CT images and masks of Walnut Leaves (J. regia).
    File Name: 6_leaf_training_testing_images_and_masks_for_paper.zip
    Resource Description: Stems were collected from genetically unique J. regia accessions at the 117 USDA-ARS-NCGR in Wolfskill Experimental Orchard, Winters, California, USA to use as scion, and were grafted by Sierra Gold Nursery onto a commonly used commercial rootstock, RX1 (J. microcarpa × J. regia). We used a common rootstock to eliminate any own-root effects and to simulate conditions for a commercial walnut orchard setting, where rootstocks are commonly used. The grafted saplings were repotted and transferred to the Armstrong lathe house facility at the University of California, Davis in June 2019, and kept under natural light and temperature. Leaves from each accession and treatment were scanned using X-ray micro-computed tomography (microCT) on the X-ray μCT beamline (8.3.2) at the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA, USA, using the 10x objective lens with a pixel resolution of 650 nanometers. For masks, the background has a value of 170,170,170; epidermis has a value of 85,85,85; mesophyll has a value of 0,0,0; bundle sheath extension has a value of 152,152,152; vein has a value of 220,220,220; and air has a value of 255,255,255.
    Resource Software Recommended: Fiji (ImageJ), url: https://imagej.net/software/fiji/downloads
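    Before training a segmentation model, the RGB mask values quoted above are typically collapsed into integer class labels. A minimal pure-Python sketch using the leaf-mask values from the description (the class ordering and function name are assumptions):

```python
# Leaf-mask pixel values as quoted in the dataset description,
# mapped to arbitrary class indices (the ordering is an assumption).
LEAF_CLASSES = {
    (170, 170, 170): 0,  # background
    (85, 85, 85):    1,  # epidermis
    (0, 0, 0):       2,  # mesophyll
    (152, 152, 152): 3,  # bundle sheath extension
    (220, 220, 220): 4,  # vein
    (255, 255, 255): 5,  # air
}

def mask_to_labels(pixels, table=LEAF_CLASSES):
    """Convert a 2D grid of RGB triples into a 2D grid of class indices."""
    return [[table[tuple(px)] for px in row] for row in pixels]
```

    The bud and soil-aggregate masks use their own value tables (listed in the description) and would be handled by passing a different `table`.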

