Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository introduces a dataset of obverse and reverse images of 319 unique Schlage SC1 keys, labeled with each key's bitting code. The data is provided in HDF5 format as aligned arrays, where the Nth index of each array corresponds to the Nth key, with keys sorted ascending by bitting code:
- /bittings: each key's five-position bitting code (values 1-9), recorded from the shoulder to the tip of the key; uint8 of shape (319, 5).
- /obverse: obverse image of each key; uint8 of shape (319, 512, 512, 3).
- /reverse: reverse image of each key; uint8 of shape (319, 512, 512, 3).
Full dataset details available on GitHub https://github.com/alexxke/keynet
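The aligned-array layout above can be read with h5py. The sketch below first builds a small stand-in file (3 keys instead of 319, 8x8 images instead of 512x512) so it is self-contained; the real dataset file uses the shapes given above, and the filename keys_demo.h5 is only an illustration.

```python
import numpy as np
import h5py

# Build a tiny stand-in file with the documented layout (shapes shrunk
# for illustration; the real file is (319, 5) and (319, 512, 512, 3)).
with h5py.File("keys_demo.h5", "w") as f:
    f.create_dataset("bittings", data=np.random.randint(1, 10, (3, 5), dtype=np.uint8))
    f.create_dataset("obverse", data=np.zeros((3, 8, 8, 3), dtype=np.uint8))
    f.create_dataset("reverse", data=np.zeros((3, 8, 8, 3), dtype=np.uint8))

# Read the arrays back the way one would read the real dataset file.
with h5py.File("keys_demo.h5", "r") as f:
    bittings = f["/bittings"][:]
    obverse = f["/obverse"][:]
    reverse = f["/reverse"][:]

# Arrays are index-aligned: entry n of each array describes key n.
for code, front, back in zip(bittings, obverse, reverse):
    pass  # e.g. feed (front, back) to a model with label `code`
```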
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
An open-source Optical Coherence Tomography (OCT) image database containing retinal OCT images across different pathological conditions. Please use the following citation if you use the database: Peyman Gholami, Priyanka Roy, Mohana Kuppuswamy Parthasarathy, Vasudevan Lakshminarayanan, "OCTID: Optical Coherence Tomography Image Database", arXiv preprint arXiv:1812.07056, (2018). For more information and details about the database, see: https://arxiv.org/abs/1812.07056
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This link provides 10 anonymized non-small cell lung cancer (NSCLC) fields of view (FoVs) for testing Mistic.
Mistic
Understanding the complex ecology of a tumor tissue and the spatio-temporal relationships between its cellular and microenvironment components is becoming a key component of translational research, especially in immuno-oncology. The generation and analysis of multiplexed images from patient samples is of paramount importance to facilitate this understanding. In this work, we present Mistic, an open-source multiplexed image t-SNE viewer that enables the simultaneous viewing of multiple 2D images rendered using multiple layout options to provide an overall visual preview of the entire dataset. In particular, the positions of the images can be taken from t-SNE or UMAP coordinates. This grouped view of all the images further aids an exploratory understanding of the specific expression pattern of a given biomarker or collection of biomarkers across all images, helps to identify images expressing a particular phenotype, and aids in selecting images for subsequent downstream analysis. Currently, there is no freely available tool to generate such image t-SNEs.
Links
Mistic code
Mistic documentation
Paper
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Deep Plastic
Information:
Object Detection Model
Google Colab Links
Note: Click File, then "Save a copy in Drive." If you try to edit my file directly, it will ask you for permission and send me an email. Please make your own copy.
DeepTrash DataSet
Public Domain Dedication (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/
So I have a knack for photography and travelling. I wanted to create a model for myself that can classify my own pictures. But to be honest, a data scientist should always know how to collect data, so I scraped data from Google Images using a Python script and drew on other open-source data sources from MIT, Kaggle itself, etc. I request everyone to give it a try. I'll update the number of images in the validation set as time goes on.
The link to the scripting file is here: https://github.com/debadridtt/Scraping-Google-Images-using-Python
The images typically belong to 4 classes:
This Image Gallery is provided as a complimentary source of high-quality digital photographs available from the Agricultural Research Service information staff. Photos (over 2,000 JPEGs) in the Image Gallery are copyright-free, public domain images unless otherwise indicated. Resources in this dataset: Resource Title: USDA ARS Image Gallery (Web page). URL: https://www.ars.usda.gov/oc/images/image-gallery/ (over 2,000 copyright-free images from ARS staff).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Crater-analog dataset acquired with a drone setup at the RIC-DFKI center. The dataset can be used to bridge the domain gap for image processing applications for lunar and small-body missions.
Public Domain Dedication (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains information about various open-source pre-trained models that are available on Kaggle. These models can be used for various machine learning and deep learning tasks such as image classification, natural language processing, object detection, etc. The dataset has the following features:
The dataset can be useful for anyone who wants to explore different pre-trained models and compare their performance and features. It can also help in finding suitable models for specific problems or domains.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The Real to Ghibli Image Dataset is a high-quality collection of 5,000 images designed for AI-driven style transfer and artistic transformations. This dataset is ideal for training GANs, CycleGAN, diffusion models, and other deep learning applications in image-to-image translation.
It consists of two separate subsets:
- trainA (2,500 Real-World Images) → A diverse collection of human faces, landscapes, rivers, mountains, forests, buildings, vehicles, and more.
- trainB_ghibli (2,500 Ghibli-Style Images) → Stylized images inspired by Studio Ghibli movies, including animated characters, landscapes, and artistic compositions.
Unlike paired datasets, this collection contains independent images in each subset, making it suitable for unsupervised learning approaches.
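Because the two subsets are unpaired, training samples from each domain are drawn independently. A minimal sketch of such unpaired sampling, using placeholder filename lists that follow the trainA/ and trainB_ghibli/ layout (the filenames themselves are invented for illustration):

```python
import random

# Placeholder filename lists standing in for the two directories.
train_a = [f"trainA/real_{i:04d}.jpg" for i in range(2500)]
train_b = [f"trainB_ghibli/ghibli_{i:04d}.jpg" for i in range(2500)]

def unpaired_batch(a, b, batch_size=4, rng=random.Random(0)):
    # Images from the two domains are sampled independently: no real
    # photo is matched to any particular Ghibli-style image.
    return rng.sample(a, batch_size), rng.sample(b, batch_size)

batch_a, batch_b = unpaired_batch(train_a, train_b)
```

CycleGAN-style training consumes exactly such independent batches, learning the mapping between domains without one-to-one correspondences.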
Metadata fields:
- image_id → Unique identifier
- image_type → Real-world or Ghibli-style
- category → Scene type (face, landscape, vehicle, etc.)

This dataset is valuable for:
✅ Training AI models for style transfer (GANs, CycleGAN, Diffusion models, etc.)
✅ Enhancing image-to-image translation research
✅ Studying artistic style emulation & deep learning techniques
✅ Creating AI-based Ghibli-style artwork generators
✅ Experimenting with AI-driven animation and artistic rendering
The dataset is manually curated from diverse open-source, royalty-free, and AI-generated sources to ensure high quality.
📌 License: Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0)
- ✅ Allowed: Research, academic projects, and personal AI model training.
- ❌ Not Allowed: Commercial use (e.g., selling models trained on this dataset).
- Attribution Required: Proper credit must be given when using this dataset in research/publications.
⚠ Copyright Disclaimer for Ghibli-Style Images
- Some trainB_ghibli images may originate from Studio Ghibli-inspired artworks. These images are provided strictly for research and educational purposes.
- Commercial use of Ghibli-style images is strictly prohibited unless you have explicit permission.
- Users must ensure legal compliance when using these images in their projects.
🚀 Planned updates:
🔹 Expanding the dataset with more diverse artistic styles (e.g., watercolor, cyberpunk, oil painting)
🔹 Creating an interactive AI tool for real-time style transfer
🔹 Integrating semantic segmentation for better style adaptation
If you find this dataset useful, consider supporting my work! Your contributions help in expanding and improving the dataset.
☕ Buy me a coffee → https://buymeacoffee.com/skshivam77n
📲 GPay (UPI ID) → skshivam771-3@oksbi
Your support allows me to curate more datasets & enhance AI research! 🚀
Open Images is a dataset of ~9M images that have been annotated with image-level labels and object bounding boxes.
The training set of V4 contains 14.6M bounding boxes for 600 object classes on 1.74M images, making it the largest existing dataset with object location annotations. The boxes have been largely manually drawn by professional annotators to ensure accuracy and consistency. The images are very diverse and often contain complex scenes with several objects (8.4 per image on average). Moreover, the dataset is annotated with image-level labels spanning thousands of classes.
To use this dataset:
import tensorflow_datasets as tfds
ds = tfds.load('open_images_v4', split='train')
for ex in ds.take(4):
  print(ex)
See the guide for more information on tensorflow_datasets.
Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/open_images_v4-original-2.0.0.png
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
The dataset contains multi-modal data from over 70,000 open access and de-identified case reports, including metadata, clinical cases, image captions and more than 130,000 images. Images and clinical cases belong to different medical specialties, such as oncology, cardiology, surgery and pathology. The structure of the dataset makes it easy to map each image to its corresponding article metadata, clinical case, captions and image labels. Details of the data structure can be found in the file data_dictionary.csv.
More than 90,000 patients and 280,000 medical doctors and researchers were involved in the creation of the articles included in this dataset. The citation data of each article can be found in the metadata.parquet file.
Refer to the examples showcased in this GitHub repository to understand how to optimize the use of this dataset.

The license of the dataset as a whole is CC BY-NC-SA. However, its individual contents may have less restrictive license types (CC BY, CC BY-NC, CC0). For instance, regarding image files, 66K of them are CC BY, 32K are CC BY-NC-SA, 32K are CC BY-NC, and 20 of them are CC0.
A dataset of high quality is one of the key factors in training a neural network. Unfortunately, there are few open-source abdominal ultrasound image datasets.
Public Domain Dedication (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/
This dataset consists of 5,000 images (20% of the original dataset's images) from the Unsplash Lite Open-Source Dataset. All credits go to Unsplash for this astounding dataset. To access the original dataset, please refer to the following links:
- Dataset page: Unsplash Dataset Page
- GitHub page: Unsplash Dataset GitHub Page
This dataset is intended for developing an image colorization model. Please acknowledge Unsplash as the rightful creator in all public publications when using this or their original dataset.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset contains resized and cropped free-use stock photos and corresponding image attributes.
All the images have minimum dimensions of 768p and maximum dimensions that are multiples of 32. Each one has a set of image attributes associated with it. Many entries are missing some values, but all should at least have a title.
Depth maps are available as a separate dataset: Pexels 110k 768p JPEG Depth Maps.
Example: https://github.com/cj-mills/pexels-dataset/raw/main/images/3185509-img-depth-pair-768p.png
| attribute | value |
| --- | --- |
| img_id | 3186010 |
| title | Pink and White Ice Cream Neon Signage |
| aspect_ratio | 0.749809 |
| main_color | [128, 38, 77] |
| colors | [#000000, #a52a2a, #bc8f8f, #c71585, #d02090, #d8bfd8] |
| tags | [bright, chocolate, close-up, cold, cream, creamy, cup, dairy product, delicious, design, dessert, electricity, epicure, flavors, fluorescent, food, food photography, goody, hand, ice cream, icecream, illuminated, indulgence, light pink background, neon, neon lights, neon sign, pastry, pink background, pink wallpaper, scoop, sweet, sweets, tasty] |
| adult | very_unlikely |
| aperture | 1.8 |
| camera | iPhone X |
| focal_length | 4.0 |
| google_place_id | ChIJkUjxJ7it1y0R4qOVTbWHlR4 ... |
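The colors attribute in the sample record above stores hex color strings. A small self-contained helper for converting them to RGB tuples for downstream use (the helper name is our own, not part of the dataset):

```python
def hex_to_rgb(hex_color: str) -> tuple:
    """Convert a '#rrggbb' hex string to an (r, g, b) integer tuple."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))

# Values taken from the `colors` attribute in the sample record.
colors = ["#000000", "#a52a2a", "#bc8f8f", "#c71585", "#d02090", "#d8bfd8"]
rgb = [hex_to_rgb(c) for c in colors]
```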
cardiology
This is a whole slide image (WSI) dataset for glomeruli segmentation on kidney tissue, 88 images in total. The train set (58 images) and test set (32 images) were used in the Orbit publication (1) to train and test the glomeruli segmentation model (2).
The images are pyramidal tiff images (tiled, jpeg-compression) and can be displayed with Orbit Image Analysis (3).
The file orbit.db is a SQLite database which contains the manually drawn glomeruli annotations for all images, in total 21037 annotations. It can be placed in the user-home folder, then Orbit Image Analysis (3) will detect the database and show the glomeruli annotations in the annotation tab when opening an image. (Orbit uses the MD5 hashes of the images for identification.)
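Since orbit.db is a SQLite database, it can also be inspected directly with Python's built-in sqlite3 module. The annotation schema is not documented here, so a sensible first step is listing the tables; this sketch assumes the file sits in the working directory:

```python
import sqlite3

# Open the annotation database (sqlite3 will create an empty file if
# orbit.db is not present, so check the path first in real use).
conn = sqlite3.connect("orbit.db")

# List the tables to discover the (undocumented) annotation schema
# before writing any queries against it.
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
conn.close()
```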
For more information on how to train a CNN model or to use the existing model (2) please visit the Orbit deep learning page (4).
(1) Manuel Stritt, Anna K. Stalder, Enrico Vezzali; Orbit Image Analysis: An open-source whole slide image analysis...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Activities of Daily Living Object Dataset

Overview
The ADL (Activities of Daily Living) Object Dataset is a curated collection of images and annotations specifically focusing on objects commonly interacted with during daily living activities. This dataset is designed to facilitate research and development in assistive robotics in home environments.

Data Sources and Licensing
The dataset comprises images and annotations sourced from four publicly available datasets:

COCO Dataset
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
License Link: https://creativecommons.org/licenses/by/4.0/
Citation: Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., & Zitnick, C. L. (2014). Microsoft COCO: Common Objects in Context. European Conference on Computer Vision (ECCV), 740–755.

Open Images Dataset
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
License Link: https://creativecommons.org/licenses/by/4.0/
Citation: Kuznetsova, A., Rom, H., Alldrin, N., Uijlings, J., Krasin, I., Pont-Tuset, J., Kamali, S., Popov, S., Malloci, M., Duerig, T., & Ferrari, V. (2020). The Open Images Dataset V6: Unified Image Classification, Object Detection, and Visual Relationship Detection at Scale. International Journal of Computer Vision, 128(7), 1956–1981.

LVIS Dataset
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
License Link: https://creativecommons.org/licenses/by/4.0/
Citation: Gupta, A., Dollar, P., & Girshick, R. (2019). LVIS: A Dataset for Large Vocabulary Instance Segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5356–5364.

Roboflow Universe
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
License Link: https://creativecommons.org/licenses/by/4.0/
Citation: The following repositories from Roboflow Universe were used in compiling this dataset:
- Work, U. AI Based Automatic Stationery Billing System Data Dataset. 2022. Accessible at: https://universe.roboflow.com/university-work/ai-based-automatic-stationery-billing-system-data (accessed on 11 October 2024).
- Destruction, P.M. Pencilcase Dataset. 2023. Accessible at: https://universe.roboflow.com/project-mental-destruction/pencilcase-se7nb (accessed on 11 October 2024).
- Destruction, P.M. Final Project Dataset. 2023. Accessible at: https://universe.roboflow.com/project-mental-destruction/final-project-wsuvj (accessed on 11 October 2024).
- Personal. CSST106 Dataset. 2024. Accessible at: https://universe.roboflow.com/personal-pgkq6/csst106 (accessed on 11 October 2024).
- New-Workspace-kubz3. Pencilcase Dataset. 2022. Accessible at: https://universe.roboflow.com/new-workspace-kubz3/pencilcase-s9ag9 (accessed on 11 October 2024).
- Finespiralnotebook. Spiral Notebook Dataset. 2024. Accessible at: https://universe.roboflow.com/finespiralnotebook/spiral_notebook (accessed on 11 October 2024).
- Dairymilk. Classmate Dataset. 2024. Accessible at: https://universe.roboflow.com/dairymilk/classmate (accessed on 11 October 2024).
- Dziubatyi, M. Domace Zadanie Notebook Dataset. 2023. Accessible at: https://universe.roboflow.com/maksym-dziubatyi/domace-zadanie-notebook (accessed on 11 October 2024).
- One. Stationery Dataset. 2024. Accessible at: https://universe.roboflow.com/one-vrmjr/stationery-mxtt2 (accessed on 11 October 2024).
- jk001226. Liplip Dataset. 2024. Accessible at: https://universe.roboflow.com/jk001226/liplip (accessed on 11 October 2024).
- jk001226. Lip Dataset. 2024. Accessible at: https://universe.roboflow.com/jk001226/lip-uteep (accessed on 11 October 2024).
- Upwork5. Socks3 Dataset. 2022. Accessible at: https://universe.roboflow.com/upwork5/socks3 (accessed on 11 October 2024).
- Book. DeskTableLamps Material Dataset. 2024. Accessible at: https://universe.roboflow.com/book-mxasl/desktablelamps-material-rjbgd (accessed on 11 October 2024).
- Gary. Medicine Jar Dataset. 2024. Accessible at: https://universe.roboflow.com/gary-ofgwc/medicine-jar (accessed on 11 October 2024).
- TEST. Kolmarbnh Dataset. 2023. Accessible at: https://universe.roboflow.com/test-wj4qi/kolmarbnh (accessed on 11 October 2024).
- Tube. Tube Dataset. 2024. Accessible at: https://universe.roboflow.com/tube-nv2vt/tube-9ah9t (accessed on 11 October 2024).
- Staj. Canned Goods Dataset. 2024. Accessible at: https://universe.roboflow.com/staj-2ipmz/canned-goods-isxbi (accessed on 11 October 2024).
- Hussam, M. Wallet Dataset. 2024. Accessible at: https://universe.roboflow.com/mohamed-hussam-cq81o/wallet-sn9n2 (accessed on 14 October 2024).
- Training, K. Perfume Dataset. 2022. Accessible at: https://universe.roboflow.com/kdigital-training/perfume (accessed on 14 October 2024).
- Keyboards. Shoe-Walking Dataset. 2024. Accessible at: https://universe.roboflow.com/keyboards-tjtri/shoe-walking (accessed on 14 October 2024).
- MOMO. Toilet Paper Dataset. 2024. Accessible at: https://universe.roboflow.com/momo-nutwk/toilet-paper-wehrw (accessed on 14 October 2024).
- Project-zlrja. Toilet Paper Detection Dataset. 2024. Accessible at: https://universe.roboflow.com/project-zlrja/toilet-paper-detection (accessed on 14 October 2024).
- Govorkov, Y. Highlighter Detection Dataset. 2023. Accessible at: https://universe.roboflow.com/yuriy-govorkov-j9qrv/highlighter_detection (accessed on 14 October 2024).
- Stock. Plum Dataset. 2024. Accessible at: https://universe.roboflow.com/stock-qxdzf/plum-kdznw (accessed on 14 October 2024).
- Ibnu. Avocado Dataset. 2024. Accessible at: https://universe.roboflow.com/ibnu-h3cda/avocado-g9fsl (accessed on 14 October 2024).
- Molina, N. Detection Avocado Dataset. 2024. Accessible at: https://universe.roboflow.com/norberto-molina-zakki/detection-avocado (accessed on 14 October 2024).
- in Lab, V.F. Peach Dataset. 2023. Accessible at: https://universe.roboflow.com/vietnam-fruit-in-lab/peach-ejdry (accessed on 14 October 2024).
- Group, K. Tomato Detection 4 Dataset. 2023. Accessible at: https://universe.roboflow.com/kkabs-group-dkcni/tomato-detection-4 (accessed on 14 October 2024).
- Detection, M. Tomato Checker Dataset. 2024. Accessible at: https://universe.roboflow.com/money-detection-xez0r/tomato-checker (accessed on 14 October 2024).
- University, A.S. Smart Cam V1 Dataset. 2023. Accessible at: https://universe.roboflow.com/ain-shams-university-byja6/smart_cam_v1 (accessed on 14 October 2024).
- EMAD, S. Keysdetection Dataset. 2023. Accessible at: https://universe.roboflow.com/shehab-emad-n2q9i/keysdetection (accessed on 14 October 2024).
- Roads. Chips Dataset. 2024. Accessible at: https://universe.roboflow.com/roads-rvmaq/chips-a0us5 (accessed on 14 October 2024).
- workspace bgkzo, N. Object Dataset. 2021. Accessible at: https://universe.roboflow.com/new-workspace-bgkzo/object-eidim (accessed on 14 October 2024).
- Watch, W. Wrist Watch Dataset. 2024. Accessible at: https://universe.roboflow.com/wrist-watch/wrist-watch-0l25c (accessed on 14 October 2024).
- WYZUP. Milk Dataset. 2024. Accessible at: https://universe.roboflow.com/wyzup/milk-onbxt (accessed on 14 October 2024).
- AussieStuff. Food Dataset. 2024. Accessible at: https://universe.roboflow.com/aussiestuff/food-al9wr (accessed on 14 October 2024).
- Almukhametov, A. Pencils Color Dataset. 2023. Accessible at: https://universe.roboflow.com/almas-almukhametov-hs5jk/pencils-color (accessed on 14 October 2024).

All images and annotations obtained from these datasets are released under the Creative Commons Attribution 4.0 International License (CC BY 4.0). This license permits sharing and adaptation of the material in any medium or format, for any purpose, even commercially, provided that appropriate credit is given, a link to the license is provided, and any changes made are indicated.

Redistribution Permission
As all images and annotations are under the CC BY 4.0 license, we are legally permitted to redistribute this data within our dataset. We have complied with the license terms by:
- Providing appropriate attribution to the original creators.
- Including links to the CC BY 4.0 license.
- Indicating any changes made to the original material.

Dataset Structure
The dataset includes:
- Images: High-quality images featuring ADL objects suitable for robotic manipulation.
- Annotations: Bounding boxes and class labels formatted in the YOLO (You Only Look Once) Darknet format.

Classes
The dataset focuses on objects commonly involved in daily living activities. A full list of object classes is provided in the classes.txt file.

Format
- Images: JPEG format.
- Annotations: Text files corresponding to each image, containing bounding box coordinates and class labels in YOLO Darknet format.

How to Use the Dataset
- Download the dataset.
- Unpack the dataset: unzip ADL_Object_Dataset.zip

How to Cite This Dataset
If you use this dataset in your research, please cite our paper:
@article{shahria2024activities,
  title     = {Activities of Daily Living Object Dataset: Advancing Assistive Robotic Manipulation with a Tailored Dataset},
  author    = {Shahria, Md Tanzil and Rahman, Mohammad H.},
  journal   = {Sensors},
  volume    = {24},
  number    = {23},
  pages     = {7566},
  year      = {2024},
  publisher = {MDPI}
}

License
This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
License Link: https://creativecommons.org/licenses/by/4.0/
By using this dataset, you agree to provide appropriate credit, indicate if changes were made, and not impose additional restrictions beyond those of the original licenses.

Acknowledgments
We gratefully acknowledge the use of data from the following open-source datasets, which were instrumental in the creation of our specialized ADL object dataset:
- COCO Dataset: We thank the creators and contributors of the COCO dataset for making their images and annotations publicly available under the CC BY 4.0 license.
- Open Images Dataset: We express our gratitude to the Open Images team for providing a comprehensive dataset of annotated images under the CC BY 4.0 license.
- LVIS Dataset: We appreciate the efforts of the LVIS dataset creators for releasing their extensive dataset under the CC BY 4.0 license.
- Roboflow Universe:
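The annotations described above use the YOLO Darknet label format: one line per object, "class_id x_center y_center width height", with coordinates normalized to [0, 1] relative to image size. A minimal sketch of converting one label line to pixel-space corner coordinates (the sample line and image size are illustrative, not taken from the dataset):

```python
def yolo_to_box(line, img_w, img_h):
    """Parse one YOLO Darknet label line into (class_id, (x1, y1, x2, y2))."""
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    # Normalized center/size -> absolute top-left and bottom-right corners.
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    x2 = (xc + w / 2) * img_w
    y2 = (yc + h / 2) * img_h
    return int(cls), (x1, y1, x2, y2)

cls_id, box = yolo_to_box("3 0.5 0.5 0.25 0.5", 640, 480)
```

The class_id indexes into the classes.txt list that ships with the dataset.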
According to our latest research, the global Image Dataset market size reached USD 2.91 billion in 2024, with a robust year-on-year growth trajectory. The market is anticipated to expand at a CAGR of 21.5% from 2025 to 2033, culminating in a projected market value of USD 20.2 billion by 2033. The primary growth drivers include the proliferation of artificial intelligence (AI) and machine learning (ML) applications across various industries, the increasing need for high-quality annotated data for model training, and the accelerated adoption of computer vision technologies. As per the latest research, the surge in demand for image datasets is fundamentally transforming industries such as healthcare, automotive, and retail, where visual data is pivotal to innovation and automation.
A key growth factor for the Image Dataset market is the exponential rise in AI-driven solutions that rely heavily on large, diverse, and accurately labeled datasets. The sophistication of deep learning algorithms, particularly convolutional neural networks (CNNs), has heightened the necessity for high-quality image datasets to ensure reliable and accurate model performance. Industries like healthcare utilize medical imaging datasets for diagnostics and treatment planning, while autonomous vehicles depend on vast and varied image datasets to enhance object detection and navigation capabilities. Furthermore, the growing trend of synthetic data generation is addressing data scarcity and privacy concerns, providing scalable and customizable datasets for training robust AI models.
Another critical driver is the rapid adoption of computer vision across multiple sectors, including security and surveillance, agriculture, and manufacturing. Organizations are increasingly leveraging image datasets to automate visual inspection, monitor production lines, and implement advanced safety systems. The retail and e-commerce segment has witnessed a significant uptick in demand for image datasets to power recommendation engines, virtual try-on solutions, and inventory management systems. The expansion of facial recognition technology in both public and private sectors, for applications ranging from access control to personalized marketing, further underscores the indispensable role of comprehensive image datasets in enabling innovative services and solutions.
The market is also witnessing a surge in partnerships and collaborations between dataset providers, research institutions, and technology companies. This collaborative ecosystem fosters the development of diverse and high-quality datasets tailored to specific industry requirements. The increasing availability of open-source and publicly accessible image datasets is democratizing AI research and innovation, enabling startups and academic institutions to contribute to advancements in computer vision. However, the market continues to grapple with challenges related to data privacy, annotation accuracy, and the ethical use of visual data, which are prompting the development of secure, compliant, and ethically sourced datasets.
Regionally, North America remains at the forefront of the Image Dataset market, driven by a mature AI ecosystem, significant investments in research and development, and the presence of major technology companies. Asia Pacific is rapidly emerging as a high-growth region, buoyed by expanding digital infrastructure, government initiatives promoting AI adoption, and a burgeoning startup landscape. Europe is also witnessing robust growth, particularly in sectors such as automotive, healthcare, and manufacturing, where regulatory frameworks emphasize data privacy and quality. The Middle East & Africa and Latin America are gradually catching up, with increasing investments in smart city projects and digital transformation initiatives fueling demand for image datasets.
The Image Dataset market by type is segmented into Labeled, Unlabeled, and Synthetic datasets. Labeled datasets, which include images annotated with relevant metadata or tags, are fundamental to sup
Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
This database provides a collection of myocardial perfusion scintigraphy images in DICOM format with all metadata and segmentations (masks) in NIfTI format. The images were obtained from patients undergoing scintigraphy examinations to investigate cardiac conditions such as ischemia and myocardial infarction. The dataset encompasses a diversity of clinical cases, including various perfusion patterns and underlying cardiac conditions. All images have been properly anonymized, and the age range of the patients is from 20 to 90 years. This database represents a valuable source of information for researchers and healthcare professionals interested in the analysis and diagnosis of cardiac diseases. Moreover, it serves as a foundation for the development and validation of image processing algorithms and artificial intelligence techniques applied to cardiovascular medicine. Available for free on the PhysioNet platform, its aim is to promote collaboration and advance research in nuclear cardiology and cardiovascular medicine, while ensuring the replicability of studies.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains scanning electron microscope (SEM) images and labels from our paper "Towards Unsupervised SEM Image Segmentation for IC Layout Extraction", which are licensed under a Creative Commons Attribution 4.0 International License (CC-BY 4.0). The SEM images cover the logic area of the metal-1 (M1) and metal-2 (M2) layers of a commercial IC produced on a 128 nm technology node. We used an electron energy of 15 keV with a backscattered electron detector and a dwell time of 3 μs for SEM capture. The images are 4096×3536 pixels in size, with a resolution of 14.65 nm per pixel and 10% overlap. We discarded images on the logic area boundaries and publish the remaining ones in random order. We additionally provide labels for tracks and vias on the M2 layer, which are included as .svg files. For labeling, we employed automatic techniques, such as thresholding, edge detection, and size, position, and complexity filtering, before manually validating and correcting the generated labels. The labels may contain duplicates for detected vias. Tracks spanning multiple images may not be present in the label file of each image. The implementation of our approach, as well as accompanying evaluation and utility routines can be found in the following GitHub repository: https://github.com/emsec/unsupervised-ic-sem-segmentation Please make sure to always cite our study when using any part of our data set or code for your own research publications! 
@inproceedings{2023rothaug,
  author    = {Rothaug, Nils and Klix, Simon and Auth, Nicole and B{\"o}cker, Sinan and Puschner, Endres and Becker, Steffen and Paar, Christof},
  title     = {Towards Unsupervised SEM Image Segmentation for IC Layout Extraction},
  booktitle = {Proceedings of the 2023 Workshop on Attacks and Solutions in Hardware Security},
  series    = {ASHES'23},
  year      = {2023},
  month     = {November},
  keywords  = {ic-layout-extraction;sem-image-segmentation;unsupervised-deep-learning;open-source-dataset},
  url       = {https://doi.org/10.1145/3605769.3624000},
  doi       = {10.1145/3605769.3624000},
  isbn      = {9798400702624},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA}
}