This dataset was created by Shubham.
Released under CC0 1.0: Public Domain (https://creativecommons.org/publicdomain/zero/1.0/)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Labelme is a dataset for object detection tasks - it contains Styles annotations for 675 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Nut & Screw Label (Labelme) is a dataset for instance segmentation tasks - it contains Nut Screw annotations for 415 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
The LabelMe project has been run out of MIT for many years, and allows users to upload and annotate images. Since the labels are crowdsourced, they can be of poor quality. I have been proofreading these labels for several months, correcting spelling mistakes and coalescing similar labels into a single label when possible. I have also rejected many labels that did not seem to make sense.
The images in the LabelMe project as well as the raw metadata were downloaded from MIT servers. All data is in the public domain. Images within LabelMe may have been taken as far back as the early 2000s, and run up to the present day.
I have worked through 5% of the LabelMe dataset thus far. I decided to create a dataset pertaining to meals (labels such as plate, glass, napkin, fork, etc.), since there were a fair number of those in the portion I have curated. Most of the images in this dataset are of table settings.
This dataset contains:
- 596 unique images
- 2734 labeled shapes outlining objects in these images
- 1782 labeled image grids, with a single number per grid cell representing what portion of the cell is filled with a labeled object
Many thanks to the people of the LabelMe project!
I want to see how valuable my curation efforts have been for the LabelMe dataset. I would like to see others build object recognition models using this dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Sketch2aia New Dataset Labelme is a dataset for object detection tasks - it contains GUI Components In Sketches annotations for 402 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
This dataset was created by work bidit.
This dataset contains the predicted prices of the asset LABELME (github.com/wkentaro/LABELME) over the next 16 years. The projection is initially calculated using a default 5 percent annual growth rate; after page load, a sliding-scale component lets the user adjust the growth rate to their own positive or negative projections. The maximum adjustable growth rate is +100 percent and the minimum is -100 percent.
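The projection described is plain compound growth; a minimal sketch of the calculation (function and variable names are illustrative, not part of the dataset):

```python
def project_prices(initial_price: float, annual_growth_pct: float, years: int = 16) -> list[float]:
    """Compound-growth projection: price_n = initial * (1 + g) ** n."""
    # Clamp to the adjustable range described above: -100% to +100%.
    g = max(-100.0, min(100.0, annual_growth_pct)) / 100.0
    return [initial_price * (1.0 + g) ** n for n in range(1, years + 1)]

# Example: the default 5% growth rate over 16 years.
print(project_prices(100.0, 5.0)[-1])  # ~218.29
```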
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset was annotated using the labelme tool, and models were trained on it using the PixelLib library.
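As a rough sketch of that pipeline, assuming PixelLib's custom instance segmentation trainer with placeholder paths and hyperparameters (PixelLib's custom training consumes labelme-style annotations):

```python
from pixellib.custom_train import instance_custom_training

# Configure a Mask R-CNN trainer; num_classes and batch_size are placeholders.
trainer = instance_custom_training()
trainer.modelConfig(network_backbone="resnet101", num_classes=1, batch_size=4)

# Fine-tune from COCO weights on a labelme-annotated dataset directory.
trainer.load_pretrained_model("mask_rcnn_coco.h5")
trainer.load_dataset("dataset")  # expects train/ and test/ subfolders with images + labelme JSON
trainer.train_model(num_epochs=300, augmentation=True,
                    path_trained_models="mask_rcnn_models")
```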
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Own Drone Ss is a dataset for semantic segmentation tasks - it contains Human annotations for 1,100 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains 1660 images of electric substations with 50705 annotated objects. The images were obtained using different cameras, including cameras mounted on Autonomous Guided Vehicles (AGVs), fixed-location cameras, and handheld cameras operated by humans. A total of 15 classes of objects were identified in this dataset; the number of instances for each class is provided in the following table:
Object classes and how many times they appear in the dataset:

| Class | Instances |
|---|---|
| Open blade disconnect | 310 |
| Closed blade disconnect switch | 5243 |
| Open tandem disconnect switch | 1599 |
| Closed tandem disconnect switch | 966 |
| Breaker | 980 |
| Fuse disconnect switch | 355 |
| Glass disc insulator | 3185 |
| Porcelain pin insulator | 26499 |
| Muffle | 1354 |
| Lightning arrester | 1976 |
| Recloser | 2331 |
| Power transformer | 768 |
| Current transformer | 2136 |
| Potential transformer | 654 |
| Tripolar disconnect switch | 2349 |
All images in this dataset were collected from a single electrical distribution substation in Brazil over a period of two years. The images were captured at various times of the day and under different weather and seasonal conditions, ensuring a diverse range of lighting conditions for the depicted objects. A team of experts in Electrical Engineering curated all the images to ensure that the angles and distances depicted in the images are suitable for automating inspections in an electrical substation.
The file structure of this dataset contains the following directories and files:
images: This directory contains 1660 electrical substation images in JPEG format.
labels_json: This directory contains JSON files annotated in the VOC-style polygonal format. Each file shares the same filename as its respective image in the images directory.
15_masks: This directory contains PNG segmentation masks for all 15 classes, including the porcelain pin insulator class. Each file shares the same name as its corresponding image in the images directory.
14_masks: This directory contains PNG segmentation masks for all classes except the porcelain pin insulator. Each file shares the same name as its corresponding image in the images directory.
porcelain_masks: This directory contains PNG segmentation masks for the porcelain pin insulator class. Each file shares the same name as its corresponding image in the images directory.
classes.txt: This text file lists the 15 classes plus the background class used in LabelMe.
json2png.py: This Python script can be used to generate segmentation masks using the VOC-style polygonal JSON annotations.
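The authoritative conversion is the bundled json2png.py; a minimal sketch of the equivalent logic, assuming standard LabelMe JSON fields and a class ordering matching classes.txt:

```python
import json
from PIL import Image, ImageDraw

def json_to_mask(json_path: str, class_names: list[str], out_path: str) -> None:
    """Rasterize LabelMe polygon annotations into a single-channel PNG mask."""
    with open(json_path) as f:
        ann = json.load(f)
    # Pixel value 0 is background; classes get 1-based ids from classes.txt order.
    mask = Image.new("L", (ann["imageWidth"], ann["imageHeight"]), 0)
    draw = ImageDraw.Draw(mask)
    for shape in ann["shapes"]:
        class_id = class_names.index(shape["label"]) + 1
        draw.polygon([tuple(pt) for pt in shape["points"]], fill=class_id)
    mask.save(out_path)
```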
The dataset aims to support the development of computer vision techniques and deep learning algorithms for automating the inspection process of electrical substations. The dataset is expected to be useful for researchers, practitioners, and engineers interested in developing and testing object detection and segmentation models for automating inspection and maintenance activities in electrical substations.
The authors would like to thank UTFPR for the support and infrastructure made available for the development of this research and COPEL-DIS for the support through project PD-2866-0528/2020—Development of a Methodology for Automatic Analysis of Thermal Images. We also would like to express our deepest appreciation to the team of annotators who worked diligently to produce the semantic labels for our dataset. Their hard work, dedication and attention to detail were critical to the success of this project.
**Initial Author Description**
The LabelMe-12-50k dataset consists of 50,000 JPEG images (40,000 for training and 10,000 for testing) extracted from LabelMe [1]. Each image is 256x256 pixels in size. 50% of the images in the training and testing sets show a centered object, each belonging to one of the 12 object classes shown in Table 1. The remaining 50% show a randomly selected region of a randomly selected image ("clutter").
The dataset is a quite difficult challenge for object recognition systems because the instances of each object class vary greatly in appearance, lighting conditions, and angle of view. Furthermore, centered objects may be partly occluded, or other objects (or parts of them) may be present in the image. See [1] for a more detailed description of the dataset.
Table 1: Object Classes and number of instances in the LabelMe-12-50k dataset
| # | Object class | Instances in training set | Instances in testing set |
|---|---|---|---|
| 1 | person | 4,885 | 1,180 |
| 2 | car | 3,829 | 974 |
| 3 | building | 2,085 | 531 |
| 4 | window | 4,097 | 1,028 |
| 5 | tree | 1,846 | 494 |
| 6 | sign | 954 | 249 |
| 7 | door | 830 | 178 |
| 8 | bookshelf | 391 | 100 |
| 9 | chair | 385 | 88 |
| 10 | table | 192 | 54 |
| 11 | keyboard | 324 | 75 |
| 12 | head | 212 | 49 |
| | clutter | 20,000 | 5,000 |
| | total number of images | 40,000 | 10,000 |
Annotation Format:
The dataset archive contains annotation files in two formats:
- Human-readable text files (annotation-train.txt and annotation-test.txt), each line containing an image file name (without the .jpg extension) and 12 class labels corresponding to the 12 object classes.
- Binary files (annotation-train.bin and annotation-test.bin), containing 12 successive 32-bit float values for each image, each value representing the class label of the corresponding class. The files contain no meta information (e.g., there is no header).

The annotation label values of the two file formats differ slightly because the values in the text files are rounded to the second decimal place. If you want to report recognition rates, you should use the binary annotation files for training and testing because of their more precise label values.
All label values are between -1.0 and 1.0. For the 50% of non-clutter images, the label of the depicted object is set to 1.0. As instances of other object classes may also be present in the image (in object images as well as in clutter images), the other labels either have a value of -1.0 or a value between 0.0 and 1.0. A value of -1.0 is set either if no instance of the object class is present in the image or if the level of overlapping (calculated by the size and position of the object's bounding box) is below a certain threshold. Values above 0.0 are assigned if this threshold is exceeded. A value of 1.0 means that the corresponding object is exactly centered in the image and 160 pixels in size (in its larger dimension), just like the extracted objects.
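A minimal sketch for loading the binary annotations under the layout described above (12 consecutive float32 values per image, no header; native little-endian byte order is assumed):

```python
import numpy as np

# Each image contributes 12 consecutive float32 labels; there are no header bytes.
labels = np.fromfile("annotation-train.bin", dtype=np.float32).reshape(-1, 12)
assert labels.shape == (40000, 12)  # 40,000 training images, 12 classes
# labels[i, c] lies in [-1.0, 1.0]; 1.0 marks a centered instance of class c.
```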
Recognition Rates:
Currently, the only results shown in Table 2 are from our paper [1]. If you would like to report recognition rates, please send them to uetz at ais.uni-bonn.de, including a link to your publication or a description of the method you used.
Table 2: Training and testing error rates on the LabelMe-12-50k dataset
| Method used | Training error rate | Testing error rate | Reported by... |
|---|---|---|---|
| Locally-connected Neural Pyramid | 3.77% | 16.27% | Uetz and Behnke 2009 [1] |
**Initial Author Citations:**
If you refer to the dataset, please cite:
[1] Rafael Uetz and Sven Behnke, "Large-scale Object Recognition with CUDA-accelerated Hierarchical Neural Networks," Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS 2009), 2009.
References:
[2] B.C. Russell, A. Torralba, K.P. Murphy, and W.T. Freeman, "LabelMe: A database and web-based tool for image annotation," International Journal of Computer Vision, vol. 77, no. 1-3, pp. 157-173, 2008.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Multiclass Weeds Dataset for Image Segmentation comprises two species of weeds: Soliva sessilis (field burrweed) and Thlaspi arvense L. (field pennycress). Weed images were acquired during the early growth stage under field conditions on a brinjal farm located in Gorakhpur, Uttar Pradesh, India. The dataset contains 7872 augmented images and corresponding masks. Images were captured using various smartphone cameras and stored as RGB images in JPEG format. The captured images were labeled using the labelme tool to generate segmentation masks, and the dataset was then augmented to produce the final dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset consists of images of the Kalo 1.5 heat cost allocator (HCA) and the Qundis HCA. The dataset has been created for the Reconcycle project. Find information at reconcycle.eu. The objects are positioned in different areas of the Reconcycle workcell, designed by JSI.
The dataset has the following properties:
1577 images with a resolution of 1450x1450 pixels (Basler camera)
57 images with a resolution of 848x480 pixels (Realsense D435 camera)
The images have segmentation annotations labelled using the labelme software.
The original labelme annotations are included and have also been exported to COCO dataset format (a loading sketch follows the label list below).
The annotations are in the form of polygon segmentations.
The included COCO train/test split is a 90/10 split.
The images have been annotated with the following labels:
hca_front
hca_back
hca_side1
hca_side2
battery
pcb
internals
pcb_covered
plastic_clip
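Because a COCO-format export is included, standard tooling should apply; a minimal loading sketch with pycocotools, where the annotation file path is an assumption:

```python
from pycocotools.coco import COCO

coco = COCO("annotations/train.json")           # hypothetical path to the COCO export
cat_ids = coco.getCatIds(catNms=["hca_front"])  # category names as listed above
img_ids = coco.getImgIds(catIds=cat_ids)
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[0]))
print(len(img_ids), "images contain hca_front;",
      len(anns), "polygon annotations in the first one")
```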
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Divyanshdixit0902 is a dataset for object detection tasks - it contains Seat annotations for 1,264 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Lt is a dataset for object detection tasks - it contains Lt annotations for 229 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We used industrial cameras to capture images of steel wire ropes under different conditions. The images are organized into five folders, each corresponding to a different acquisition procedure:
- Camera position step up_1: the camera is moved from bottom to top to obtain images of different positions of the wire rope.
- Camera position step up_2: the camera is rotated by a certain angle around the wire rope's axis, then moved from bottom to top to obtain images of different positions of the wire rope.
- From dark to light: the brightness of the light source is adjusted to obtain images of the wire rope under different illumination.
- Rotate (360 degrees): the wire rope is rotated 360 degrees and images are taken at random angles.
- Rotation (free): a torque is applied to both ends of the wire rope and then suddenly released; images are taken at random moments while the rope spins.

In addition, the dataset provides the JSON annotation files generated manually with labelme. Note: if training a network model fails with these JSON files, consider converting the Chinese text in them to English. Finally, usage instructions are provided in the wire rope dataset folder.
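A minimal sketch of that suggested conversion, assuming standard labelme JSON fields; the label mapping is a placeholder, not taken from the dataset:

```python
import glob
import json

LABEL_MAP = {"钢丝绳": "wire_rope"}  # placeholder mapping: Chinese label -> English

for path in glob.glob("**/*.json", recursive=True):
    with open(path, encoding="utf-8") as f:
        ann = json.load(f)
    # Replace any Chinese label that has a known English equivalent.
    for shape in ann.get("shapes", []):
        shape["label"] = LABEL_MAP.get(shape["label"], shape["label"])
    with open(path, "w", encoding="utf-8") as f:
        json.dump(ann, f, ensure_ascii=False, indent=2)
```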
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Collection of annotated truck images, from a side point of view, used to extract information about truck axles, collected on a highway in the state of São Paulo, Brazil. This is still a work-in-progress dataset and will be updated regularly as new images are acquired. More info can be found on: Researchgate Lab Page, OrcID Profiles, or ITS Lab page on Github.
The dataset includes 727 cropped images of trucks, taken with three different cameras, on five different locations.
727 images
Format: JPG
Resolution: 1920x(various heights), 96 dpi, 24-bit
Naming pattern: _--.jpg
All annotated objects were created with LabelMe, and saved in JSON files for each image. For more information about the annotation format, please refer to the LabelMe documentation.
Annotated objects are all related to truck axles, in 4 categories: Truck, Axle, Tandem, and Tridem. A tandem is a double-axle composition, and a tridem is a triple-axle composition. The number of objects in each category is as follows (a tallying sketch follows the list):
Truck: 736
Axle: 2711
Tandem: 809
Tridem: 130
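The per-category counts above should be reproducible directly from the LabelMe JSON files; a minimal tallying sketch (the annotation directory layout is an assumption):

```python
import glob
import json
from collections import Counter

counts = Counter()
for path in glob.glob("annotations/*.json"):  # assumed layout: one JSON per image
    with open(path) as f:
        for shape in json.load(f)["shapes"]:
            counts[shape["label"]] += 1
print(counts)  # expected: Truck 736, Axle 2711, Tandem 809, Tridem 130
```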
If this dataset helps your research in any way, please feel free to contact the authors. We really enjoy knowing about other researchers' projects and how everybody is making use of the images in this dataset. We are also open to collaborations and happy to answer any questions. We also have a paper that uses this dataset, so if you want to officially cite us in your research, please do so! We appreciate it!
Marcomini, Leandro Arab, and André Luiz Cunha. "Truck Axle Detection with Convolutional Neural Networks." arXiv preprint arXiv:2204.01868 (2022).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Open-Pit-Mine-Object-Detection-Dataset consists solely of remote sensing images of open-pit mines and their corresponding object detection bounding boxes. The bounding boxes were hand-annotated using labelme, and the dataset provides the annotations in JSON format. The remote sensing images offer a detailed view of the mine landscapes, and the hand-annotated boxes give researchers and developers a resource for training and evaluating object detection algorithms tailored to open-pit mines. Accurate object detection can in turn improve monitoring, management, and decision-making in the complex environment of open-pit mines, with potential gains in the safety and efficiency of mining operations.
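A minimal sketch for reading the hand-annotated boxes from one LabelMe JSON file (the file name is a placeholder; LabelMe stores a rectangle as its two corner points):

```python
import json

with open("mine_scene_0001.json") as f:  # placeholder file name
    ann = json.load(f)

boxes = []
for shape in ann["shapes"]:
    if shape["shape_type"] == "rectangle":
        (x1, y1), (x2, y2) = shape["points"]  # two opposite corners
        boxes.append((shape["label"], x1, y1, x2, y2))
print(boxes)
```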
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset of cleavage-stage embryos with pixel-level annotations of blastomeres and fragments.
Information:
a) Source: First School of Clinical Medicine, Wuhan University.
b) Annotation: The images were annotated by three experienced doctors from Renmin Hospital of Wuhan University using LabelMe.
c) Categories:
Blastomeres: Detailed segmentation of individual blastomeres.
Fragments: Identification and segmentation of fragments, which are critical for assessing embryo quality.
Background: Non-embryonic regions to assist in accurate segmentation.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Annotated dataset of microscope images of pollen grains in honey from 17 beekeeping taxa
Melissopalynology is a method based on the separation of pollen grains present in honey and the identification of the plant species to which they belong. It is used to determine the botanical as well as the geographical origin of the honey, and its commercial value. For this reason, a database including microscope images and characteristics of pollen grains of 17 beekeeping taxa usually present in honey samples was created.
For the honey preparations, the methodology of Louveaux et al. (1978) and Von Der Ohe et al. (2004) was followed. Specifically, 5.0 g of honey were weighed and dissolved in 10 ml of distilled water. The solution was centrifuged for 10 min at 2300 r/min. The supernatant solution was discarded and the precipitate was transferred with a disposable plastic Pasteur pipette onto a slide, where it was spread, with the addition of fuchsin, over a 22 x 22 mm surface. Staining with fuchsin helps to show the morphological characteristics of the pollen grains in greater detail. The preparation was dried by gentle heating to 40°C on a heating plate and covered with a coverslip on which a small amount of Entellan adhesive (Merck) had been placed. The pollen grains were photographed on an optical microscope (Olympus SZX12) with a 40× lens (Olympus DF PLAPO 1X DF) and a digital analysis camera (Olympus SC30), while morphometry software (Image Pro Plus Software, V1.1.19) was used for their determination. For the microscopic identification of the pollen types, the collection of reference slides from the Laboratory of Apiculture of the Aristotle University of Thessaloniki, which is accredited to ISO 17025:2017, was used.
The dataset contains 1404 captured microscope images of pollen grains from 17 major beekeeping taxa (the class list can be found below) for training and 85 captured images for testing. Polygon annotations were created using LabelMe software and saved in COCO annotation format (train.json and val.json files).
Further information about the related project (SmartBeeKeep) can be found in the following article and presentation (please cite if you use these data):
Annotation - Latin name
Myrtus - Myrtus communis
Brassicaceae - Brassicaceae
Cercis - Cercis siliquastrum
Helianthus annuus - Helianthus annuus
Lavandula - Lavandula angustifolia
Robinia pseudacacia - Robinia pseudoacacia
Olea - Olea europaea
Citrus - Citrus sp.
Paliurus - Paliurus spina-christi
Eucalyptus - Eucalyptus sp.
Polygonum - Polygonum aviculare
Carduus - Silybum marianum
Cistus - Cistus sp.
thymus - Thymus sp.
Castanea - Castanea sativa
erica - Erica manipuliflora
Gossypium - Gossypium hirsutum