40 datasets found
  1. HappyWhales LabelMe Segmentation Dataset

    • kaggle.com
    zip
    Updated Feb 14, 2022
    Cite
    Shubham (2022). HappyWhales LabelMe Segmentation Dataset [Dataset]. https://www.kaggle.com/datasets/shubhambaid/happywhales-labelme-segmentation-dataset
    Available download formats: zip (158215233 bytes)
    Dataset updated
    Feb 14, 2022
    Authors
    Shubham
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Dataset

    This dataset was created by Shubham

    Released under CC0: Public Domain


  2. Data from: Labelme Dataset

    • universe.roboflow.com
    zip
    Updated Aug 31, 2023
    + more versions
    Cite
    nciacrabs (2023). Labelme Dataset [Dataset]. https://universe.roboflow.com/nciacrabs/labelme-xt0xd/dataset/1
    Available download formats: zip
    Dataset updated
    Aug 31, 2023
    Dataset authored and provided by
    nciacrabs
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Styles Bounding Boxes
    Description

    Labelme

    ## Overview
    
    Labelme is a dataset for object detection tasks - it contains Styles annotations for 675 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  3. Nut & Screw Label (labelme) Dataset

    • universe.roboflow.com
    zip
    Updated Apr 13, 2023
    Cite
    Edge Computing Workspace (2023). Nut & Screw Label (labelme) Dataset [Dataset]. https://universe.roboflow.com/edge-computing-workspace/nut-screw-label-labelme-x1sza
    Available download formats: zip
    Dataset updated
    Apr 13, 2023
    Dataset authored and provided by
    Edge Computing Workspace
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Nut Screw Polygons
    Description

    Nut & Screw Label (Labelme)

    ## Overview
    
    Nut & Screw Label (Labelme) is a dataset for instance segmentation tasks - it contains Nut Screw annotations for 415 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  4. LabelMe - Let's Eat! Labeled images of meals

    • kaggle.com
    zip
    Updated Nov 29, 2017
    Cite
    Jack Cosgrove (2017). LabelMe - Let's Eat! Labeled images of meals [Dataset]. https://www.kaggle.com/jackcosgrove/labelme-lets-eat
    Available download formats: zip (1231461 bytes)
    Dataset updated
    Nov 29, 2017
    Authors
    Jack Cosgrove
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Context

    The LabelMe project has been run out of MIT for many years, and allows users to upload and annotate images. Since the labels are crowdsourced, they can be of poor quality. I have been proofreading these labels for several months, correcting spelling mistakes and coalescing similar labels into a single label when possible. I have also rejected many labels that did not seem to make sense.

    Content

    The images in the LabelMe project as well as the raw metadata were downloaded from MIT servers. All data is in the public domain. Images within LabelMe may have been taken as far back as the early 2000s, and run up to the present day.

    I have worked through 5% of the LabelMe dataset thus far. I decided to create a dataset pertaining to meals (labels such as plate, glass, napkins, fork, etc.) since there were a fair number of those in the 5% I have curated thus far. Most of the images in this dataset are of table settings.

    This dataset contains: 596 unique images; 2,734 labeled shapes outlining objects in these images; and 1,782 labeled image grids, each with a single number representing which portion of a grid cell is filled with a labeled object.

    Acknowledgements

    Many thanks to the people of the LabelMe project!

    Inspiration

    I want to see how valuable my curation efforts have been for the LabelMe dataset. I would like to see others build object recognition models using this dataset.

  5. Sketch2aia New Labelme Dataset

    • universe.roboflow.com
    zip
    Updated Mar 24, 2022
    + more versions
    Cite
    Daniel Baulé (2022). Sketch2aia New Labelme Dataset [Dataset]. https://universe.roboflow.com/daniel-baule/sketch2aia---new-dataset---labelme/dataset/1
    Available download formats: zip
    Dataset updated
    Mar 24, 2022
    Dataset authored and provided by
    Daniel Baulé
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    GUI Components In Sketches Bounding Boxes
    Description

    Sketch2aia New Dataset Labelme

    ## Overview
    
    Sketch2aia New Dataset Labelme is a dataset for object detection tasks - it contains GUI Components In Sketches annotations for 402 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  6. labelme-data

    • kaggle.com
    zip
    Updated Mar 26, 2024
    Cite
    work bidit (2024). labelme-data [Dataset]. https://www.kaggle.com/datasets/workbidit/labelme-data
    Available download formats: zip (119853923 bytes)
    Dataset updated
    Mar 26, 2024
    Authors
    work bidit
    Description

    Dataset

    This dataset was created by work bidit


  7. LABELME github.com/wkentaro/LABELME Price Prediction Data

    • coinbase.com
    Updated Nov 5, 2025
    Cite
    (2025). LABELME github.com/wkentaro/LABELME Price Prediction Data [Dataset]. https://www.coinbase.com/en-sg/price-prediction/base-labelme-githubcomwkentarolabelme-d27d
    Dataset updated
    Nov 5, 2025
    Variables measured
    Growth Rate, Predicted Price
    Measurement technique
    User-defined projections based on compound growth. This is not a formal financial forecast.
    Description

    This dataset contains the predicted prices of the asset LABELME github.com/wkentaro/LABELME over the next 16 years. The predictions are initially calculated using a default 5 percent annual compound growth rate; after page load, a sliding-scale component lets the user adjust the growth rate to their own positive or negative projections, from a minimum of -100 percent to a maximum of 100 percent.
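
The projection described above is plain compound growth. A minimal sketch of that arithmetic (the function name and rounding are illustrative, not taken from the page):

```python
def project_prices(current_price: float, growth_rate: float, years: int = 16) -> list:
    """Yearly projected prices under compound growth.

    growth_rate is a fraction, e.g. 0.05 for the page's default 5 percent;
    the page constrains the rate to between -100 and +100 percent.
    """
    if not -1.0 <= growth_rate <= 1.0:
        raise ValueError("growth rate must be between -100% and 100%")
    # Price after y years: current_price * (1 + rate)^y
    return [round(current_price * (1 + growth_rate) ** y, 6) for y in range(1, years + 1)]
```

With the default 5 percent rate, a starting price of 100 becomes 105.0 after one year and 110.25 after two.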

  8. Strawberry dataset for Semantic Segmentation

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 18, 2022
    Cite
    Machado, Pedro (2022). Strawberry dataset for Semantic Segmentation [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6656331
    Dataset updated
    Jun 18, 2022
    Dataset provided by
    Nottingham Trent University
    Authors
    Machado, Pedro
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset was annotated using the labelme tool, and segmentation models were trained on it using the PixelLib library.

  9. Own Drone Ss Dataset

    • universe.roboflow.com
    zip
    Updated Jan 7, 2024
    Cite
    labelme dataset (2024). Own Drone Ss Dataset [Dataset]. https://universe.roboflow.com/labelme-dataset-l94mv/own-drone-ss/dataset/1
    Available download formats: zip
    Dataset updated
    Jan 7, 2024
    Dataset authored and provided by
    labelme dataset
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Human.. Masks
    Description

    Own Drone Ss

    ## Overview
    
    Own Drone Ss is a dataset for semantic segmentation tasks - it contains Human.. annotations for 1,100 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  10. A Semantically Annotated 15-Class Ground Truth Dataset for Substation Equipment

    • data.niaid.nih.gov
    • zenodo.org
    Updated May 5, 2023
    Cite
    Gomes, Andreas (2023). A Semantically Annotated 15-Class Ground Truth Dataset for Substation Equipment [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7884269
    Dataset updated
    May 5, 2023
    Dataset provided by
    Graduate Program in Energy Systems, Universidade Tecnológica Federal do Paraná (UTFPR)
    Authors
    Gomes, Andreas
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains 1660 images of electric substations with 50705 annotated objects. The images were obtained from several sources: cameras mounted on Autonomous Guided Vehicles (AGVs), fixed-location cameras, and handheld cameras operated by humans. A total of 15 classes of objects were identified in this dataset, and the number of instances for each class is provided in the following table:

    Object classes and how many times they appear in the dataset:

        Class                              Instances
        Open blade disconnect                    310
        Closed blade disconnect switch          5243
        Open tandem disconnect switch           1599
        Closed tandem disconnect switch          966
        Breaker                                  980
        Fuse disconnect switch                   355
        Glass disc insulator                    3185
        Porcelain pin insulator                26499
        Muffle                                  1354
        Lightning arrester                      1976
        Recloser                                2331
        Power transformer                        768
        Current transformer                     2136
        Potential transformer                    654
        Tripolar disconnect switch              2349

    All images in this dataset were collected from a single electrical distribution substation in Brazil over a period of two years. The images were captured at various times of the day and under different weather and seasonal conditions, ensuring a diverse range of lighting conditions for the depicted objects. A team of experts in Electrical Engineering curated all the images to ensure that the angles and distances depicted in the images are suitable for automating inspections in an electrical substation.

    The file structure of this dataset contains the following directories and files:

    images: This directory contains 1660 electrical substation images in JPEG format.

    labels_json: This directory contains JSON files annotated in the VOC-style polygonal format. Each file shares the same filename as its respective image in the images directory.

    15_masks: This directory contains PNG segmentation masks for all 15 classes, including the porcelain pin insulator class. Each file shares the same name as its corresponding image in the images directory.

    14_masks: This directory contains PNG segmentation masks for all classes except the porcelain pin insulator. Each file shares the same name as its corresponding image in the images directory.

    porcelain_masks: This directory contains PNG segmentation masks for the porcelain pin insulator class. Each file shares the same name as its corresponding image in the images directory.

    classes.txt: This text file lists the 15 classes plus the background class used in LabelMe.

    json2png.py: This Python script can be used to generate segmentation masks using the VOC-style polygonal JSON annotations.
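
As a rough illustration of what a script like json2png.py does, here is a minimal, dependency-free sketch that rasterizes labelme-style polygon annotations ("shapes" entries with "label" and "points" are standard labelme output) into an integer label mask. The even-odd scanline fill stands in for a proper library rasterizer such as PIL's ImageDraw.polygon, and the class-id mapping is supplied by the caller:

```python
def polygons_to_mask(ann: dict, class_ids: dict, width: int, height: int) -> list:
    """Rasterize labelme polygon annotations into a 2D integer label mask.

    ann is a parsed labelme JSON dict; class_ids maps label name -> integer
    id, with 0 reserved for background. Later shapes overwrite earlier ones.
    """
    mask = [[0] * width for _ in range(height)]
    for shape in ann.get("shapes", []):
        cid = class_ids.get(shape["label"])
        if cid is None or shape.get("shape_type", "polygon") != "polygon":
            continue
        pts = shape["points"]
        for y in range(height):
            # x-coordinates where polygon edges cross this scanline
            xs = []
            for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
                if (y1 <= y < y2) or (y2 <= y < y1):
                    xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
            xs.sort()
            # fill between alternating crossing pairs (even-odd rule)
            for xa, xb in zip(xs[::2], xs[1::2]):
                for x in range(max(0, int(xa)), min(width, int(xb) + 1)):
                    mask[y][x] = cid
    return mask
```

Writing the resulting mask out as an indexed PNG (one pixel value per class) would reproduce the layout of the 15_masks and 14_masks directories.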

    The dataset aims to support the development of computer vision techniques and deep learning algorithms for automating the inspection process of electrical substations. The dataset is expected to be useful for researchers, practitioners, and engineers interested in developing and testing object detection and segmentation models for automating inspection and maintenance activities in electrical substations.

    The authors would like to thank UTFPR for the support and infrastructure made available for the development of this research and COPEL-DIS for the support through project PD-2866-0528/2020—Development of a Methodology for Automatic Analysis of Thermal Images. We also would like to express our deepest appreciation to the team of annotators who worked diligently to produce the semantic labels for our dataset. Their hard work, dedication and attention to detail were critical to the success of this project.

  11. LabelMe 12 50k

    • kaggle.com
    zip
    Updated Jul 15, 2022
    Cite
    Darien Schettler (2022). LabelMe 12 50k [Dataset]. https://www.kaggle.com/datasets/dschettler8845/labelme-12-50k/discussion
    Available download formats: zip (512862706 bytes)
    Dataset updated
    Jul 15, 2022
    Authors
    Darien Schettler
    Description

    Initial Author Description

    The LabelMe-12-50k dataset consists of 50,000 JPEG images (40,000 for training and 10,000 for testing), which were extracted from LabelMe [1]. Each image is 256x256 pixels in size. 50% of the images in the training and testing sets show a centered object, each belonging to one of the 12 object classes shown in Table 1. The remaining 50% show a randomly selected region of a randomly selected image ("clutter").

    The dataset is quite a difficult challenge for object recognition systems because the instances of each object class vary greatly in appearance, lighting conditions, and angles of view. Furthermore, centered objects may be partly occluded, or other objects (or parts of them) may be present in the image. See [1] for a more detailed description of the dataset.

    Table 1: Object Classes and number of instances in the LabelMe-12-50k dataset

     #   Object class   Instances in training set   Instances in testing set
     1   person         4,885                       1,180
     2   car            3,829                       974
     3   building       2,085                       531
     4   window         4,097                       1,028
     5   tree           1,846                       494
     6   sign           954                         249
     7   door           830                         178
     8   bookshelf      391                         100
     9   chair          385                         88
    10   table          192                         54
    11   keyboard       324                         75
    12   head           212                         49
         clutter        20,000                      5,000
         total          40,000                      10,000


    Annotation Format:

    The dataset archive contains annotation files in two formats:

    • Human-readable text files (annotation-train.txt and annotation-test.txt), which contain in each line an image file name (without the .jpg extension) and 12 class labels corresponding to the 12 object classes.

    • Binary files (annotation-train.bin and annotation-test.bin), which contain 12 successive 32-bit float values for each image, each value representing the label of the corresponding class. These files contain no meta information (e.g., there is no header). The annotation label values of the two file formats differ slightly because the values in the text files are rounded to the second decimal place. If you want to report recognition rates, you should use the binary annotation files for training and testing because of their more precise label values.

    All label values are between -1.0 and 1.0. For the 50% of non-clutter images, the label of the depicted object is set to 1.0. As instances of other object classes may also be present in the image (in object images as well as in clutter images), the other labels either have a value of -1.0 or a value between 0.0 and 1.0. A value of -1.0 is set either if no instance of the object class is present in the image or if the level of overlapping (calculated by the size and position of the object's bounding box) is below a certain threshold. Values above 0.0 are assigned if this threshold is exceeded. A value of 1.0 means that the corresponding object is exactly centered in the image and 160 pixels in size (in its larger dimension), just like the extracted objects.
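
A minimal sketch of reading the binary annotation files described above (12 consecutive 32-bit floats per image, no header). The byte order is not stated in the dataset description, so little-endian is an assumption here:

```python
import struct

def parse_labelme12_labels(raw: bytes, num_classes: int = 12) -> list:
    """Split a headerless blob of 32-bit floats into per-image label rows.

    Each image contributes num_classes consecutive floats in [-1.0, 1.0].
    Little-endian byte order ("<") is assumed, not stated by the dataset.
    """
    record = 4 * num_classes            # bytes per image (4 bytes per float)
    n = len(raw) // record
    floats = struct.unpack("<%df" % (n * num_classes), raw[: n * record])
    return [floats[i * num_classes:(i + 1) * num_classes] for i in range(n)]
```

Reading annotation-train.bin with this function would then yield one 12-tuple of labels per training image.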


    Recognition Rates:

    Currently, the only results shown in Table 2 are from our paper [1]. If you would like to report recognition rates, please send them to uetz at ais.uni-bonn.de, including a link to your publication or a description of the method you used.

    Table 2: Training and testing error rates on the LabelMe-12-50k dataset

    Method used                        Training error rate   Testing error rate   Reported by
    Locally-connected Neural Pyramid   3.77%                 16.27%               Uetz and Behnke 2009 [1]


    **Initial Author Citations:**

    If you refer to the dataset, please cite:

    [1] Rafael Uetz and Sven Behnke, "Large-scale Object Recognition with CUDA-accelerated Hierarchical Neural Networks," Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems 2009 (ICIS 2009)

    References:

    [2] B.C. Russell, A. Torralba, K.P. Murphy, W.T. Freeman, "LabelMe: A database and web-based tool for image annotation," International Journal of Computer Vision, vol. 77, no. 1-3, pp. 157-173, 2008

  12. Multiclass Weeds Dataset for Image Segmentation

    • figshare.com
    zip
    Updated Nov 15, 2023
    Cite
    Shivam Yadav; Sanjay Soni; Sanjay Gupta (2023). Multiclass Weeds Dataset for Image Segmentation [Dataset]. http://doi.org/10.6084/m9.figshare.22643434.v1
    Available download formats: zip
    Dataset updated
    Nov 15, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Shivam Yadav; Sanjay Soni; Sanjay Gupta
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Multiclass Weeds Dataset for Image Segmentation comprises two species of weeds: Soliva Sessilis (Field Burrweed) and Thlaspi Arvense L. (Field Pennycress). Weed images were acquired during the early growth stage under field conditions in a brinjal farm located in Gorakhpur, Uttar Pradesh, India. The dataset contains 7872 augmented images and corresponding masks. Images were captured using various smartphone cameras and stored in RGB color format in JPEG format. The captured images were labeled using the labelme tool to generate segmented masks. Subsequently, the dataset was augmented to generate the final dataset.

  13. Heat Cost Allocator Dataset for the Reconcycle Project

    • zenodo.org
    • data.europa.eu
    zip
    Updated Apr 11, 2023
    Cite
    Sebastian Ruiz; Sebastian Ruiz (2023). Heat Cost Allocator Dataset for the Reconcycle Project [Dataset]. http://doi.org/10.5281/zenodo.7323671
    Available download formats: zip
    Dataset updated
    Apr 11, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Sebastian Ruiz; Sebastian Ruiz
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset consists of images of the Kalo 1.5 heat cost allocator (HCA) and the Qundis HCA. The dataset has been created for the Reconcycle project. Find information at reconcycle.eu. The objects are positioned in different areas of the Reconcycle workcell, designed by JSI.

    The dataset has the following properties:

    • 1577 images with resolution: 1450x1450 pixels (Basler camera)

    • 57 images with resolution: 848 x 480 pixels (Realsense D435 camera)

    • The images have segmentation annotations labelled using the labelme software.

    • The original labelme annotations are present and exported to COCO dataset format.

    • The annotations are in the form of polygon segmentations.

    • The included COCO train/test split is a 90/10 split.

    The images have been annotated with the following labels:

    • hca_front

    • hca_back

    • hca_side1

    • hca_side2

    • battery

    • pcb

    • internals

    • pcb_covered

    • plastic_clip

  14. Divyanshdixit0902 Dataset

    • universe.roboflow.com
    zip
    Updated Jun 29, 2025
    Cite
    divyansh labelme (2025). Divyanshdixit0902 Dataset [Dataset]. https://universe.roboflow.com/divyansh-labelme/divyanshdixit0902/dataset/1
    Available download formats: zip
    Dataset updated
    Jun 29, 2025
    Dataset authored and provided by
    divyansh labelme
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Seat Bounding Boxes
    Description

    Divyanshdixit0902

    ## Overview
    
    Divyanshdixit0902 is a dataset for object detection tasks - it contains Seat annotations for 1,264 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  15. Lt Dataset

    • universe.roboflow.com
    zip
    Updated Dec 1, 2022
    + more versions
    Cite
    labelme (2022). Lt Dataset [Dataset]. https://universe.roboflow.com/labelme-kvnpv/lt-8nqua/dataset/1
    Available download formats: zip
    Dataset updated
    Dec 1, 2022
    Dataset authored and provided by
    labelme
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Lt Bounding Boxes
    Description

    Lt

    ## Overview
    
    Lt is a dataset for object detection tasks - it contains Lt annotations for 229 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  16. Wire Rope Dataset

    • data.mendeley.com
    • ieee-dataport.org
    Updated Jan 10, 2023
    + more versions
    Cite
    Kuosheng Jiang (2023). Wire Rope Dataset [Dataset]. http://doi.org/10.17632/yzd5w5w833.1
    Dataset updated
    Jan 10, 2023
    Authors
    Kuosheng Jiang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We use industrial cameras to take images of steel wire ropes under different conditions. The images are organized into five folders, named: Camera position step up_1; Camera position step up_2; From dark to light; Rotate(360 degrees); Rotation(free). Images in different folders come from different sources, explained below:

    • Camera position step up_1: the camera is moved from bottom to top to obtain images of different positions of the wire rope.

    • Camera position step up_2: the camera is rotated at a certain angle with the wire rope as the axis, then moved from bottom to top to obtain images of different positions of the wire rope.

    • From dark to light: the brightness of the light source is adjusted to obtain images of the wire rope under different brightness levels.

    • Rotate(360 degrees): the wire rope is rotated 360 degrees and images are taken randomly at different angles.

    • Rotation(free): a certain torque is applied to both ends of the wire rope and then suddenly released, and images are taken randomly while the wire rope rotates.

    In addition, the dataset provides JSON annotation files generated manually using labelme. Note: if training a network model with the JSON files fails, consider converting the Chinese text in the JSON files to English. Finally, usage instructions are provided in the wire rope dataset folder.

  17. Truck Image Dataset

    • data-staging.niaid.nih.gov
    • data.niaid.nih.gov
    Updated Mar 4, 2023
    Cite
    Leandro Arab Marcomini; Andre Luiz Cunha (2023). Truck Image Dataset [Dataset]. https://data-staging.niaid.nih.gov/resources?id=zenodo_5744736
    Dataset updated
    Mar 4, 2023
    Dataset provided by
    University of Sao Paulo
    Authors
    Leandro Arab Marcomini; Andre Luiz Cunha
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Collection of annotated truck images, from a side point of view, used to extract information about truck axles, collected on a highway in the State of São Paulo, Brazil. This is still a work-in-progress dataset and will be updated regularly as new images are acquired. More info can be found on the ResearchGate lab page, ORCID profiles, or the ITS Lab page on GitHub.

    The dataset includes 727 cropped images of trucks, taken with three different cameras, on five different locations.

    727 images

    Format: JPG

    Resolution: 1920xVarious, 96dpi, 24bits

    Naming pattern: _--.jpg

    All annotated objects were created with LabelMe, and saved in JSON files for each image. For more information about the annotation format, please refer to the LabelMe documentation.

    Annotated objects are all related to truck axles, in 4 categories: Truck, Axle, Tandem, Tridem. A tandem is a double-axle composition, and a tridem is a triple-axle composition. The number of objects in each category is as follows:

    Truck: 736

    Axle: 2711

    Tandem: 809

    Tridem: 130

    If this dataset helps your research in any way, please feel free to contact the authors. We really enjoy knowing about other researchers' projects and how everybody is making use of the images in this dataset. We are also open to collaborations and happy to answer any questions. We also have a paper that uses this dataset, so if you want to officially cite us in your research, please do so! We appreciate it!

    Marcomini, Leandro Arab, and André Luiz Cunha. "Truck Axle Detection with Convolutional Neural Networks." arXiv preprint arXiv:2204.01868 (2022).

  18. Open Pit Mine Object Detection Dataset

    • scidb.cn
    • figshare.com
    Updated Oct 29, 2024
    Cite
    Lin Gang (2024). Open Pit Mine Object Detection Dataset [Dataset]. http://doi.org/10.57760/sciencedb.15682
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Oct 29, 2024
    Dataset provided by
    Science Data Bank
    Authors
    Lin Gang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Open-Pit-Mine-Object-Detection-Dataset is a specialized collection of remote sensing images of open-pit mines and their corresponding object detection bounding boxes. The bounding boxes are hand-annotated using labelme, and the dataset provides the annotations in JSON format. The remote sensing images offer a detailed, comprehensive view of the mine landscapes, and together with the hand-annotated bounding boxes they give researchers and developers a resource for training and evaluating object detection algorithms designed specifically for open-pit mines. Accurate object detection can in turn improve safety and efficiency in mining operations through better monitoring, management, and decision-making in the complex environment of open-pit mines.

  19. CleavageEmbryo Dataset

    • zenodo.org
    Updated Sep 21, 2024
    Cite
    Chensheng Zhang; Chensheng Zhang (2024). CleavageEmbryo Dataset [Dataset]. http://doi.org/10.5281/zenodo.13790163
    Dataset updated
    Sep 21, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Chensheng Zhang; Chensheng Zhang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset of cleavage-stage embryos with pixel-level annotations of blastomeres and fragments.

    Information:

    a) Source: First School of Clinical Medicine, Wuhan University.

    b) Annotation: the images were annotated by three experienced doctors from Renmin Hospital of Wuhan University using LabelMe.

    c) Categories:

    Blastomeres: Detailed segmentation of individual blastomeres.

    Fragments: Identification and segmentation of fragments, which are critical for assessing embryo quality.

    Background: Non-embryonic regions to assist in accurate segmentation.

  20. Annotated dataset of microscope images of pollen grains in honey from 17 beekeeping taxa

    • zenodo.org
    zip
    Updated Oct 19, 2023
    Cite
    CHRYSOULA TANANAKI; CHRYSOULA TANANAKI; Dimitrios Kanelis; Dimitrios Kanelis; Vasilios Liolios; Vasilios Liolios; Nikos Grammalidis; Nikos Grammalidis (2023). Annotated dataset of microscope images of pollen grains in honey from 17 beekeeping taxa [Dataset]. http://doi.org/10.5281/zenodo.10017809
    Explore at:
    zipAvailable download formats
    Dataset updated
    Oct 19, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    CHRYSOULA TANANAKI; CHRYSOULA TANANAKI; Dimitrios Kanelis; Dimitrios Kanelis; Vasilios Liolios; Vasilios Liolios; Nikos Grammalidis; Nikos Grammalidis
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Annotated dataset of microscope images of pollen grains in honey from 17 beekeeping taxa

    Melissopalynology is a method based on the separation of pollen grains present in honey and the identification of the plant species to which they belong. It is used to determine the botanical, but also the geographical origin of the honey, as well as its commercial value. For this reason, a database including microscope images and characteristics of pollen grains of 17 beekeeping taxa, usually present in honey samples, was created.

    For the honey preparations the methodology of Louveaux et al. (1978) and Von Der Ohe et al. (2004) was followed. Specifically, 5.0 g of honey were weighed and dissolved in 10 ml of distilled water. The solution was centrifuged for 10 min at 2300 r/min. The supernatant was discarded and the precipitate was transferred with a disposable plastic Pasteur pipette onto a slide, where it was spread with the addition of fuchsin over a 22 x 22 mm surface. Staining with fuchsin reveals the morphological characteristics of the pollen grains in greater detail. The preparation was dried by gentle heating to 40°C on a heating plate and covered with a coverslip on which a small amount of Entellan adhesive (Merck) had been placed. The pollen grains were photographed on an optical microscope (Olympus SZX12) with a 40× lens (Olympus DF PLAPO 1X DF) and a digital analysis camera (Olympus SC30), while morphometry software (Image Pro Plus Software, V1.1.19) was used for their determination. For the microscopic identification of the pollen types, the collection of reference slides from the Laboratory of Apiculture of the Aristotle University of Thessaloniki, which is accredited to ISO 17025:2017, was used.

    The dataset contains 1404 captured microscope images of pollen grains from 17 major beekeeping taxa (class list below) for training and 85 captured images for testing. Polygon annotations were created with the LabelMe software and saved in COCO annotation format (train.json and val.json files).
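Because the annotations ship as COCO-format `train.json` and `val.json`, a per-class instance count can be computed with nothing but the standard library (the helper name is illustrative; only the standard COCO keys `categories` and `annotations` are assumed):

```python
import json
from collections import Counter

def coco_class_counts(path):
    """Count annotated instances per category name in a COCO-format JSON file."""
    with open(path) as f:
        coco = json.load(f)
    # COCO stores categories as [{"id": ..., "name": ...}, ...]
    names = {c["id"]: c["name"] for c in coco["categories"]}
    counts = Counter(names[a["category_id"]] for a in coco["annotations"])
    return dict(counts)
```

Such a tally is a quick way to check class balance across the 17 taxa before training.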

    Further information about the related project (SmartBeeKeep) can be found in the following article and presentation (please cite if you use these data):

    • Vasilios Liolios, Dimitrios Kanelis, Maria-Anna Rodopoulou, Chrysoula Tananaki (2023). A Comparative Study of Methods Recording Beekeeping Flora. Forests, 14(8), 1677; https://doi.org/10.3390/f14081677
    • Nikos Grammalidis, Andreas Stergioulas, Aggelos Avramidis, Konstantinos Karystinakis, Athanasios Partozis, Athanasios Topaloudis, Georgia Kalantzi, Chrisoula Tananaki, Dimitrios Kanelis, Vasilis Liolios, and Madesis Panagiotis "A smart beekeeping platform based on remote sensing and artificial intelligence", Proc. SPIE 12786, Ninth International Conference on Remote Sensing and Geoinformation of the Environment (RSCy2023), 127860C (21 September 2023); https://doi.org/10.1117/12.2681866 Event: Ninth International Conference on Remote Sensing and Geoinformation of the Environment (RSCy2023), 2023, Ayia Napa, Cyprus Author preprint available

    Annotation - Latin name

    Myrtus - Myrtus communis

    Brassicaceae - Brassicaceae

    Cercis - Cercis siliquastrum

    Helianthus annuus - Helianthus annuus

    Lavandula - Lavandula angustifolia

    Robinia pseudacacia - Robinia pseudoacacia

    Olea - Olea europaea

    Citrus - Citrus sp.

    Paliurus - Paliurus spina-christi

    Eucalyptus - Eucalyptus sp.

    Polygonum - Polygonum aviculare

    Carduus - Silybum marianum

    Cistus - Cistus sp.

    thymus - Thymus sp.

    Castanea - Castanea sativa

    erica - Erica manipuliflora

    Gossypium - Gossypium hirsutum
