Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Explore and download labeled image datasets for AI, ML, and computer vision. Find datasets for object detection, image classification, and image segmentation.
http://www.gnu.org/licenses/gpl-3.0.en.html
A complete description of this dataset is available at https://robotology.github.io/iCubWorld.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains 4,599 high-quality, annotated images of 25 commonly used chemistry lab apparatuses. The images capture apparatuses in real-world settings from different angles, backgrounds, and distances, and under varied lighting, to aid the robustness of object detection models. Every image has been labeled with bounding box annotations in both YOLO and COCO formats, including class IDs and normalized bounding box coordinates. The annotations and bounding boxes were built using the Roboflow platform.
To support training, the dataset has been split into three subsets: training (70% of the dataset), validation (20%), and testing (10%). In addition, all images are scaled to a standard 640x640 pixels and auto-oriented to rectify rotation discrepancies introduced by EXIF metadata. The dataset is structured in three main folders - train, valid, and test - each containing images/ and labels/ subfolders. Every image has a corresponding label file with the class and bounding box data for each annotated object.
The whole dataset features 6,960 labeled instances across 25 apparatus categories, including beakers, conical flasks, measuring cylinders, and test tubes, among others. The dataset can be used to develop automation systems, real-time monitoring and tracking systems, safety-monitoring tools, and AI educational tools.
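The YOLO label format mentioned above stores one object per line as a class ID followed by normalized center coordinates and box size. A minimal sketch of decoding such a line back to pixel coordinates on a 640x640 image (the example line is made up, not taken from the dataset):

```python
# Sketch: parse one line of a YOLO-format label file and convert the
# normalized "class_id cx cy w h" values to pixel coordinates.
# The example line below is an assumption for illustration.

def parse_yolo_label(line: str, img_w: int = 640, img_h: int = 640):
    """Return (class_id, x_min, y_min, x_max, y_max) in pixels."""
    parts = line.split()
    class_id = int(parts[0])
    cx, cy, w, h = (float(v) for v in parts[1:5])
    x_min = (cx - w / 2) * img_w
    y_min = (cy - h / 2) * img_h
    x_max = (cx + w / 2) * img_w
    y_max = (cy + h / 2) * img_h
    return class_id, x_min, y_min, x_max, y_max

# A centered box, 25% of the width and 40% of the height of the image:
print(parse_yolo_label("3 0.5 0.5 0.25 0.4"))
# → (3, 240.0, 192.0, 400.0, 448.0)
```

COCO annotations store the same information differently (a JSON file with absolute `[x, y, width, height]` boxes), so converting between the two formats is a common preprocessing step.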
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In contemporary digital environments
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Explore our Plant Disease Image Dataset, featuring a diverse collection of labeled images for developing and testing machine learning models in agriculture.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Explore the TIME Image Dataset, featuring 144 classes of synthetically generated clock images designed for time-based image recognition tasks.
http://www.gnu.org/licenses/gpl-3.0.en.html
A complete description of this dataset is available at https://robotology.github.io/iCubWorld.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The dataset contains two classes: shells and pebbles. It can be used for binary classification tasks to determine whether a given image is a shell or a pebble. Cover image by wirestock on Freepik.
I thought it would be cool to create an app with a CV algorithm that could classify whether a given picture is a shell or a pebble. The next time I visit a beach, I could just use the app to help me collect either shells or pebbles. 😄
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Explore the Melanoma Cancer Image Dataset with 13,900 meticulously curated images. Ideal for machine learning, dermatology, and medical education.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The initial 8 classes were collected by Oliva and Torralba [1], and then 5 categories were added by Fei-Fei and Perona [2]; finally, 2 additional categories were introduced by Lazebnik et al. [3]. [1] A. Oliva and A. Torralba, “Modeling the shape of the scene: A holistic representation of the spatial envelope,” IJCV, 2001. [2] L. Fei-Fei and P. Perona, “A bayesian hierarchical model for learning natural scene categories,” CVPR, 2005. [3] S. Lazebnik, C. Schmid, and J. Ponce, “Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories,” CVPR, 2006.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Explore our Human Dataset featuring 1000 high-resolution (1024x1024) images, equally divided by gender and covering five age groups.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Images of landmarks within the context of their environment
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Computer Vision Approach In Identifying Ectoparasites Image Dataset V2 is a dataset for object detection tasks - it contains Ticks annotations for 2,034 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset for this project consists of photos of individual human emotion expressions. These photos were taken with both a digital camera and a mobile phone camera, from different angles, postures, backgrounds, light exposures, and distances. The task might look and sound very easy, but some challenges were encountered along the way, which are reviewed below:
1) People constraint. One of the major challenges faced during this project was getting people to participate in the image capture: school was on vacation, and other individuals around the environment were unwilling to have their images captured, for personal and security reasons, even after the purpose of the project (academic research) was explained. Due to this challenge, we resorted to capturing images of the researcher and just a few other willing individuals.
2) Time constraint. As with all deep learning projects, the more data available, the more accurate and less error-prone the results. At the initial stage of the project, it was agreed to collect 10 emotional expression photos each from at least 50 people, with the option of adding more photos for better accuracy, but due to the time constraints of this project it was later agreed to capture only the researcher and a few other willing and available people. For the same reason, photos were taken for just two types of human emotion expression: "happy" and "sad" faces. To expand this work further (as future work and recommendations), photos of other facial expressions such as anger, contempt, disgust, fright, and surprise can be included if time permits.
3) The approved facial emotion captures. It was agreed to capture as many angles and postures as possible of just two facial emotions, with at least 10 images of emotional expression per individual. Due to the time and people constraints, only a few people were captured, with as many postures as possible, as stated below:
- Happy faces: 65 images
- Sad faces: 62 images
There are many other types of facial emotions; again, to expand the project in the future, we can include the other types if time permits and people are readily available.
4) Expand further. This project can be improved in many ways; again, due to the time limit of this project, these improvements can be implemented later as future work. In simple terms, this project detects and predicts real-time human emotion: it involves creating a model that reports a percentage confidence that a facial image is happy or sad. The higher the percentage confidence, the more accurate the prediction for the facial image fed into the model.
5) Other questions. Can the model be reproduced? The answer should be YES, provided the model is fed with the proper data (images), such as images of other types of emotional expression.
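As a rough illustration of the percentage-confidence idea (a sketch, not the project's actual model), a two-class softmax converts raw classifier scores into "happy" vs. "sad" confidences that sum to 100%:

```python
import math

# Sketch (not the project's actual model): turn two raw classifier
# scores (logits) for "happy" and "sad" into percentage confidences
# with a softmax. The scores below are made-up example values.

def softmax_confidence(happy_score: float, sad_score: float):
    """Return (happy_pct, sad_pct), which sum to 100."""
    e_happy = math.exp(happy_score)
    e_sad = math.exp(sad_score)
    total = e_happy + e_sad
    return 100 * e_happy / total, 100 * e_sad / total

happy_pct, sad_pct = softmax_confidence(2.0, 0.5)
print(f"happy: {happy_pct:.1f}%, sad: {sad_pct:.1f}%")
```

A higher gap between the two scores yields a confidence closer to 100% for the winning class.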
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
A curated collection of ultra-high-resolution Ferrari car images, scraped from WSupercars.com and neatly organized by model. This dataset is ideal for machine learning, computer vision, and creative applications such as wallpaper generators, AR design tools, and synthetic data modeling. All images are native 3840×2160 resolution, perfect for both research and visual content creation.
📌 Educational and research use only — All images are copyright of their respective owners.
Folder: ferrari_images/
- Subfolders by car model (e.g., f80, 812, sf90)
- Each folder contains multiple ultra-HD wallpapers (3840×2160)
- Car Model Classification – Train AI to recognize different Ferrari models
- Vision Tasks – Use for super-resolution, enhancement, detection, and segmentation
- Generative Models – Ideal input for GANs, diffusion models, or neural style transfer
- Wallpaper & Web Apps – Populate high-quality visual content for websites or mobile platforms
- Fine-Tuning Vision Models – Compatible with CNNs, ViTs, and transformer architectures
- Self-Supervised Learning – Leverage unlabeled images for contrastive training methods
- Game/Simulation Prototyping – Use as visual references or placeholders in 3D environments
- AR & Design Tools – Integrate into automotive mockups, design UIs, or creative workflows
- This release includes only Ferrari vehicle images
- All images are native UHD (3840×2160), with no duplicates or downscaled versions
- Novitec-tuned models are included both in the novitec/ folder and within their respective model folders (e.g., 296/, sf90/) for convenience
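Given the folder layout above, (image path, model label) pairs for tasks like model classification can be gathered with a short walk over the subfolders. This is a sketch: the root path and the .jpg extension are assumptions, not stated by the dataset.

```python
from pathlib import Path

# Sketch: collect (image_path, model_label) pairs from the described
# layout, where each subfolder of ferrari_images/ is a car-model class.
# The root path and the ".jpg" extension are assumptions.

def collect_labeled_images(root: str):
    pairs = []
    for model_dir in sorted(Path(root).iterdir()):
        if not model_dir.is_dir():
            continue
        label = model_dir.name  # e.g. "f80", "812", "sf90"
        for img in sorted(model_dir.glob("*.jpg")):
            pairs.append((str(img), label))
    return pairs
```

Note that if Novitec-tuned images appear both under novitec/ and under their base-model folders, this walk will pick them up twice, once under each label; deduplicate by filename if that matters for your task.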
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The CAPA Apple Quality Grading Multi-Spectral Image Database consists of multispectral (450nm, 500nm, 750nm, and 800nm) images of healthy and defective bi-colored apples, manual segmentations of defective regions, and expert gradings of the apples into 4 quality categories. The defect types include bruise, rot, flesh damage, frost damage, and russet, among others. The database can be used for academic or research purposes in computer-vision-based apple quality inspection.
The CAPA Apple Quality Grading Multi-Spectral Image Database is the property of ULG (Gembloux Agro-Bio Tech), Belgium, and cannot be used without the consent of ULG (Gembloux Agro-Bio Tech), Belgium.
For consent, contact
Devrim Unay, İzmir University of Economics, Turkey: unaydevrim@gmail.com
OR
Marie-France Destain, Gembloux Agro-Bio Tech, Belgium: mfdestain@ulg.ac.be
In disseminating results using this database, the author should:
1. indicate in the manuscript that the database was acquired by ULG (Gembloux Agro-Bio Tech), Belgium, and
2. cite the following article: Kleynen, O., Leemans, V., & Destain, M.-F. (2005). Development of a multi-spectral vision system for the detection of defects on apples. Journal of Food Engineering, 69(1), 41-49.
Relevant publications:
Kleynen et al., 2003 O. Kleynen, V. Leemans and M.F. Destain, Selection of the most efficient wavelength bands for ‘Jonagold’ apple sorting. Postharv. Biol. Technol., 30 (2003), pp. 221–232.
Leemans and Destain, 2004 V. Leemans and M.F. Destain, A real-time grading method of apples based on features extracted from defects. J. Food Eng., 61 (2004), pp. 83–89.
Leemans et al., 2002 V. Leemans, H. Magein and M.F. Destain, On-line fruit grading according to their external quality using machine vision. Biosyst. Eng., 83 (2002), pp. 397–404.
Unay and Gosselin, 2006 D. Unay and B. Gosselin, Automatic defect detection of ‘Jonagold’ apples on multi-spectral images: A comparative study. Postharv. Biol. Technol., 42 (2006), pp. 271–279.
Unay and Gosselin, 2007 D. Unay and B. Gosselin, Stem and calyx recognition on ‘Jonagold’ apples by pattern recognition. J. Food Eng., 78 (2007), pp. 597–605.
Unay et al., 2011 Unay, D., Gosselin, B., Kleynen, O, Leemans, V., Destain, M.-F., Debeir, O, “Automatic Grading of Bi-Colored Apples by Multispectral Machine Vision”, Computers and Electronics in Agriculture, 75(1), 204-212, 2011.
On August 7, 2020, Unsplash released the Unsplash Dataset, which provides useful metadata for over 25k images that can be used to train machine learning models.
The metadata is available for download on GitHub, but downloading all the images for training machine learning models is quite a hassle. I therefore downloaded every downloadable image in the dataset into a zip file for everyone to use 😊
LAR.i Laboratory, Université du Québec à Chicoutimi (UQAC), 2021-08-24
Name: Image dataset of various soil types in an urban city
Published journal paper: Gensytskyy, O., Nandi, P., Otis, M. J.-D., et al. Soil friction coefficient estimation using CNN included in an assistive system for walking in urban areas. J Ambient Intell Human Comput 14, 14291-14307 (2023). https://doi.org/10.1007/s12652-023-04667-w
This dataset contains images of various types of soils and was used for the project "An assistive system for walking in urban areas". The images were taken using a smartphone camera in a vertical orientation and are high-quality. The files are named with two characters, the first and last letters of the class name, followed by their number.
Capture location: City of Saguenay, Quebec, Canada
Class count: 8
Total number of images: 493
Classes and number of images per class: Asphalt (89), Concrete (80), Epoxy_coated_interior (34), Grass (90), Gravel (58), Scrattered_snow (40), Snow (68), Wood (34)
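The per-class counts above can be sanity-checked against the stated totals; the dictionary below simply restates the listing:

```python
# Sanity-check sketch: the per-class image counts listed above should
# sum to the stated total of 493 across the stated 8 classes.
class_counts = {
    "Asphalt": 89,
    "Concrete": 80,
    "Epoxy_coated_interior": 34,
    "Grass": 90,
    "Gravel": 58,
    "Scrattered_snow": 40,
    "Snow": 68,
    "Wood": 34,
}
print(len(class_counts), sum(class_counts.values()))  # → 8 493
```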
COIL-100 was collected by the Center for Research on Intelligent Systems at the Department of Computer Science, Columbia University. The database contains color images of 100 objects. The objects were placed on a motorized turntable against a black background, and images were taken at pose intervals of 5 degrees. This dataset was used in a real-time 100-object recognition system in which a system sensor could identify an object and display its angular pose.
There are 7,200 images of 100 objects. Each object was turned on a turntable through 360 degrees to vary object pose with respect to a fixed color camera. Images of the objects were taken at pose intervals of 5 degrees, which corresponds to 72 poses per object. These images were then size-normalized. The objects have a wide variety of complex geometric and reflectance characteristics.
Original data source and banner image: http://www1.cs.columbia.edu/CAVE/software/softlib/coil-100.php
This dataset is intended for non-commercial research purposes only. When using this dataset, please cite:
"Columbia Object Image Library (COIL-100)," S. A. Nene, S. K. Nayar and H. Murase, Technical Report CUCS-006-96, February 1996.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
GasHisSDB is a new Gastric Histopathology Sub-size Image Database with a total of 245,196 images, divided into 160x160-pixel, 120x120-pixel, and 80x80-pixel sub-databases. GasHisSDB is designed for evaluating image classification methods. To show that image classification methods from different periods perform differently on GasHisSDB, a variety of classifiers were evaluated: seven classical machine learning classifiers, three CNN classifiers, and a novel transformer-based classifier were tested on image classification tasks.