Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A comprehensive dataset of X-ray images was created for bone fracture detection, specifically designed for computer vision projects. The primary goal of this dataset is to aid in developing and evaluating algorithms for automated bone fracture detection.
The dataset contains images categorized into different classes, each representing a specific type of bone fracture. These classes include Elbow Positive, Fingers Positive, Forearm Fracture, Humerus Fracture, Shoulder Fracture, and Wrist Positive.
Each image in the dataset is annotated with either bounding boxes or pixel-level segmentation masks to indicate the location and extent of the detected fracture. This facilitates the training and evaluation of bone fracture detection algorithms.
The bone fracture detection dataset is a useful resource for researchers and developers who want to train machine learning models, specifically focusing on object detection algorithms, to automatically detect and classify bone fractures in X-ray images. The dataset's diversity of fracture classes enables the development of robust models capable of accurately identifying fractures in different regions of the upper extremities.
The aim of creating this dataset is to accelerate the development of computer vision solutions for automated fracture detection, supporting advancements in medical diagnostics and improving patient care.
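Because each image carries bounding-box (or mask) annotations, detection quality on a dataset like this is typically scored with intersection-over-union (IoU) between predicted and annotated boxes. A minimal sketch, assuming a hypothetical `(x_min, y_min, x_max, y_max)` pixel-box format (the dataset's actual annotation files are not specified here):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x_min, y_min, x_max, y_max),
    a standard metric for evaluating fracture bounding-box predictions."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

# Two partially overlapping boxes share 25 of 175 units of area.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 0.142857...
```

A prediction is usually counted as a correct detection when its IoU with an annotated fracture exceeds a threshold such as 0.5.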
https://creativecommons.org/publicdomain/zero/1.0/
Welcome to the "Human Image Dataset for Computer Vision Task"! This dataset is a curated collection of high-quality human images, carefully selected to facilitate a wide range of applications in computer vision. Whether you're working on image processing algorithms, developing image classification models, or exploring image denoising techniques, this dataset provides a rich and diverse set of images for your research and experimentation.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
The "26 Class Object Detection Dataset" comprises a comprehensive collection of images annotated with objects belonging to 26 distinct classes. Each class represents a common urban or outdoor element encountered in various scenarios. The dataset includes the following classes:
Bench, Bicycle, Branch, Bus, Bushes, Car, Crosswalk, Door, Elevator, Fire Hydrant, Green Light, Gun, Motorcycle, Person, Pothole, Rat, Red Light, Scooter, Stairs, Stop Sign, Traffic Cone, Train, Tree, Truck, Umbrella, Yellow Light

These classes encompass a wide range of objects commonly encountered in urban and outdoor environments, including transportation vehicles, traffic signs, pedestrian-related elements, and natural features. The dataset serves as a valuable resource for training and evaluating object detection models, particularly those focused on urban scene understanding and safety applications.
This comprehensive dataset contains a wide range of theoretical questions related to computer science, covering various domains such as operating systems, machine learning, software engineering, computer architecture and design, data structures, and algorithms. The questions are carefully curated to encompass a diverse set of topics, including hardware and software concepts, and are designed to challenge and enhance the knowledge of individuals interested in the computer science field.
The dataset is specifically tailored for training a chatbot or a question-answering system, with a focus on providing accurate and informative answers to technical questions. The questions cover a broad spectrum of complexity, ranging from basic to advanced, and are aimed at assisting users in gaining a deeper understanding of computer science concepts. Whether it's preparing for technical interviews or exams, or simply seeking guidance in the computer science field, this dataset can be a valuable resource for users looking to improve their knowledge and expertise.
https://creativecommons.org/publicdomain/zero/1.0/
The dataset comprises 16.7k images and 2 annotation files, each in a distinct format. The first file, labeled "Label," contains annotations with the original scale, while the second file, named "yolo_format_labels," contains annotations in YOLO format. The dataset was obtained by employing the OIDv4 toolkit, specifically designed for scraping data from Google Open Images. Notably, this dataset exclusively focuses on face detection.
This dataset offers a highly suitable resource for training deep learning models specifically designed for face detection tasks. The images within the dataset exhibit exceptional quality and have been meticulously annotated with bounding boxes encompassing the facial regions. The annotations are provided in two formats: the original scale, denoting the pixel coordinates of the bounding boxes, and the YOLO format, representing the bounding box coordinates in normalized form.
The dataset was meticulously curated by scraping relevant images from Google Open Images through the use of the OIDv4 toolkit. Only images that are pertinent to face detection tasks have been included in this dataset. Consequently, it serves as an ideal choice for training deep learning models that specifically target face detection tasks.
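The two annotation formats described above are interchangeable. As a sketch, converting a pixel-space box to normalized YOLO form (center coordinates plus width and height, each divided by the image dimensions) looks like this; the box tuple layout `(x_min, y_min, x_max, y_max)` is an assumption, since the exact layout of the "Label" files is not specified here:

```python
def to_yolo(box, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) to YOLO
    format: (x_center, y_center, width, height), normalized to [0, 1]."""
    x_min, y_min, x_max, y_max = box
    return (
        (x_min + x_max) / 2 / img_w,   # normalized x center
        (y_min + y_max) / 2 / img_h,   # normalized y center
        (x_max - x_min) / img_w,       # normalized width
        (y_max - y_min) / img_h,       # normalized height
    )

# Example: a face box in a 640x480 image.
print(to_yolo((160, 120, 480, 360), 640, 480))  # (0.5, 0.5, 0.5, 0.5)
```

Normalized coordinates make the labels independent of image resolution, which is why YOLO-family trainers expect them.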
Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
License information was derived automatically
The images are in the ImageNet structure, with each class having its own folder containing the respective images. The images have a resolution of 256x256 pixels.
If you find this dataset useful or interesting, please don't forget to show your support by upvoting!
To create this dataset:
- I searched for each PC part on Google Images and extracted the image links.
- I then downloaded the full-size images from the original source and converted them to JPG format with a resolution of 256 pixels.
- During the process, most images were downscaled; only a very few were upscaled.
- Finally, I manually went over all the images and deleted any that didn't fit well for image classification.
All files are named in ImageNet style.

```shell
Kingdom
├── class_1
│   ├── 1.jpg
│   └── 2.jpg
├── class_2
│   ├── 1.jpg
│   └── 2.jpg
└── class_3
    ├── 1.jpg
    └── 2.jpg
```
**I have not divided the dataset into train, val, and test splits, so you can decide on the split ratios yourself.**
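Since the folders follow the ImageNet layout (one subdirectory per class), a loader can recover the labels directly from the directory names. A minimal sketch; the root path is whatever you extracted the dataset to:

```python
from pathlib import Path

def index_imagenet_folder(root):
    """Walk an ImageNet-style directory (one subfolder per class) and
    return (image_path, class_name) pairs, sorted for reproducibility."""
    root = Path(root)
    samples = []
    for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for img in sorted(class_dir.glob("*.jpg")):
            samples.append((img, class_dir.name))
    return samples
```

The same layout is what `torchvision.datasets.ImageFolder` expects, so that class can be used directly instead of a hand-rolled indexer.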
---
Photo by <a href="https://unsplash.com/@zelebb?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Andrey Matveev</a> on <a href="https://unsplash.com/photos/a-close-up-of-two-computer-fans-on-a-yellow-background-8hkotoCEI5o?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The dataset contains two classes: Shells and Pebbles. It can be used for binary classification tasks to determine whether a given image shows a shell or a pebble. Cover Image by wirestock on Freepik
I thought it would be cool to create an app with a CV algorithm that could classify whether a picture shows a shell or a pebble. The next time I visit a beach, I could just use the app to help me collect either shells or pebbles.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
The dataset contains a comprehensive collection of human activity videos spanning 7 distinct classes: clapping, meeting and splitting, sitting, standing still, walking, walking while reading a book, and walking while using a phone.
Each video clip in the dataset showcases a specific human activity and has been labeled with the corresponding class to facilitate supervised learning.
The primary inspiration behind creating this dataset is to enable machines to recognize and classify human activities accurately. With the advent of computer vision and deep learning techniques, it has become increasingly important to train machine learning models on large and diverse datasets to improve their accuracy and robustness.
https://creativecommons.org/publicdomain/zero/1.0/
About Dataset:
Auto-Orient: Applied
Static Crop: 30-85% Horizontal Region, 15-85% Vertical Region
Modify Classes: 0 remapped, 3 dropped
Filter Null: Require all images to contain annotations.
Use cases of this dataset:
1- Ocean cleanup efforts: Utilize the "Microplastic Dataset" computer vision model to identify and locate microplastic pollution in ocean water samples, allowing for targeted cleanup efforts and better understanding of microplastic distribution in marine environments.
2- Recycling facility improvements: Integrate the model into recycling facilities to identify and sort microplastic residues in materials, ensuring proper disposal or treatment to prevent their release into the environment.
3- Microplastic research: Aid researchers in studying the impact of microplastics on ecosystems and human health by automating the detection and analysis of microplastics in various samples, such as water, soil, or air.
4- Supply chain monitoring: Help industries monitor and evaluate their supply chain processes to identify and reduce microplastic contamination in their products or packaging materials, promoting greener manufacturing practices.
5- Consumer education and awareness: Develop a mobile app that uses the "Microplastic Dataset" model to enable users to identify potential microplastic contamination in consumer products such as cosmetics or food packaging, encouraging more informed purchasing decisions and raising public awareness on the issue of microplastic pollution.
Variables measured:
MPDS Bounding Boxes
Dataset authored and provided by:
Panats MP Project
This dataset was created by Ryan Holbrook
Released under Data files © Original Authors
It contains the following files:
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
The dataset is structured for person object detection tasks, containing separate directories for training, validation, and testing. Each split has an images folder with corresponding images and a labels folder with annotation files.
Train Set: Contains images and annotations for model training.
Validation Set: Includes images and labels for model evaluation during training.
Test Set: Provides unseen images and labels for final model performance assessment.
Each annotation file (TXT format) corresponds to an image and likely contains bounding box coordinates and class labels. This structure follows standard object detection dataset formats, ensuring easy integration with detection models such as YOLO and RT-DETR.
```shell
dataset/
├── train/
│   ├── images/
│   │   ├── image1.jpg   (training image)
│   │   └── image2.jpg   (training image)
│   └── labels/
│       ├── image1.txt   (annotation for image1.jpg)
│       └── image2.txt   (annotation for image2.jpg)
├── val/
│   ├── images/
│   │   ├── image3.jpg   (validation image)
│   │   └── image4.jpg   (validation image)
│   └── labels/
│       ├── image3.txt   (annotation for image3.jpg)
│       └── image4.txt   (annotation for image4.jpg)
└── test/
    ├── images/
    │   ├── image5.jpg   (test image)
    │   └── image6.jpg   (test image)
    └── labels/
        ├── image5.txt   (annotation for image5.jpg)
        └── image6.txt   (annotation for image6.jpg)
```
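Assuming the TXT files use the standard YOLO layout (one object per line: class id followed by four normalized box values — the description above only says this is "likely"), a label line can be parsed like this:

```python
def parse_yolo_line(line):
    """Parse one line of a YOLO-format label file:
    '<class_id> <x_center> <y_center> <width> <height>' (all normalized)."""
    parts = line.split()
    return int(parts[0]), tuple(float(v) for v in parts[1:5])

# Hypothetical line from image1.txt: class 0 with a normalized box.
cls, box = parse_yolo_line("0 0.5 0.4 0.2 0.3")
print(cls, box)  # 0 (0.5, 0.4, 0.2, 0.3)
```

Inspecting a few label files this way is a quick sanity check before feeding the dataset to a trainer.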
With the advent of deep learning algorithms in the medical domain, there is a need for quality and large datasets. In this work, we introduced the largest microscopic blood cell segmentation dataset and benchmarked different state-of-the-art algorithms on it. Our findings and contributions are particularly helpful for researchers working in deep learning with applications in the medical domain.
Authors: Deponker Sarker Depto, Shazidur Rahman, Md. Mekayel Hosen, Mst Shapna Akter, Tamanna Rahman Reme, Aimon Rahman, Hasib Zunai, M. Sohel Rahman, and M.R.C. Mahdy
Figure: We have presented the original blood smear image (left) and the corresponding annotated segmentation mask (right)
A total of 2656 images are available: 1328 original blood cell images with 1328 corresponding ground truth masks. Jeet B Lahiri separated these into training and testing sets of 1169 and 159 images respectively.
Distributed under the MIT License.
1- Download almost 1000 images with flags from the Open Images site.
2- Determine the flag areas in the images with the labelImg tool to train the YOLO model.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset was created by Félix MORTAS
Released under MIT
A great dataset for practicing object detection algorithms. The data is structured in YOLOv8 format with train, validation, and test splits along with labels, and the YAML file is included. A separate test video is also provided, so you can evaluate your model on the test images as well as on the test video. The data comes from the Roboflow website.
The human-labelled product image dataset "Products-10K" is so far the largest product recognition dataset, containing 10,000 products frequently bought by online customers at JD.com and covering a full spectrum of categories including fashion, 3C, food, healthcare, household commodities, etc. Moreover, the large-scale product labels are organized as a graph to indicate the complex hierarchy and interdependency among products.
Citing: Yalong Bai, Yuxiang Chen, Wei Yu, Linfang Wang, Wei Zhang. "Products-10K: A Large-scale Product Recognition Dataset". [arXiv]
https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains 100,000 prompts generated by the custom prompt generator code here (https://www.kaggle.com/datasets/rturley/custom-prompt) and Stable Diffusion 2.0 images generated from these prompts.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
A hardware visual dataset typically refers to a collection of images or videos related to hardware components or devices. These datasets are often used in computer vision tasks such as object detection, classification, segmentation, or recognition. Here's a general description of what such a dataset might contain:
Images or Videos: The dataset would consist of either images or videos showcasing various hardware components, devices, or setups. These could include CPUs, GPUs, motherboards, RAM modules, hard drives, cooling systems, etc.
Annotations: Annotations are labels or markings provided with each image or video to indicate the presence of specific hardware components or regions of interest within the image. Annotations may include bounding boxes, pixel-level segmentation masks, or other forms of labeling.
Categories or Classes: The dataset would likely be organized into different categories or classes representing different types of hardware components or setups. For example, classes might include "CPU", "GPU", "Motherboard", "RAM", "Hard Drive", etc.
Variety: The dataset would ideally cover a wide variety of hardware components, brands, models, and configurations to ensure robustness and generalization of machine learning models trained on it.
Quality and Resolution: High-quality images or videos with sufficient resolution and clarity are essential for effective training and evaluation of computer vision models.
Data Balance: The dataset should aim for a balanced distribution of samples across different classes to prevent bias in machine learning models.
Usage Scenarios: The dataset may include images or videos captured under various lighting conditions, angles, and backgrounds to simulate real-world scenarios and challenges encountered in hardware recognition tasks.
License and Usage: Clear licensing and usage terms should be provided for the dataset, specifying how it can be used, shared, and redistributed by researchers and practitioners.
Preprocessing and Augmentation: Some datasets may include preprocessed images or provide guidelines for augmentation techniques to enhance model robustness and generalization.
Benchmarking: It's beneficial if the dataset includes benchmarking metrics or tasks to evaluate the performance of computer vision models trained on it, such as object detection accuracy, segmentation accuracy, etc.
Overall, a hardware visual dataset serves as a valuable resource for researchers, developers, and enthusiasts interested in developing and evaluating computer vision algorithms for hardware-related applications.
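The data-balance point above is easy to check empirically once labels are available. A minimal sketch (the class names are illustrative, taken from the example classes listed earlier):

```python
from collections import Counter

def class_balance(labels):
    """Return each class's share of the dataset, to spot imbalance
    before training a classifier or detector on it."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

# Hypothetical label list for a small hardware dataset.
print(class_balance(["CPU", "GPU", "CPU", "RAM"]))
# {'CPU': 0.5, 'GPU': 0.25, 'RAM': 0.25}
```

A strongly skewed distribution suggests oversampling the rare classes or weighting the loss during training.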
1000 images with 10 categories; every 100 images belong to one category.
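Because the images are grouped sequentially (100 per category), the class label can be recovered from the image index alone, assuming the files are ordered as described:

```python
def label_for_index(i, images_per_class=100):
    """Images are stored sequentially: indices 0-99 are class 0,
    100-199 are class 1, and so on."""
    return i // images_per_class

print(label_for_index(0), label_for_index(250), label_for_index(999))  # 0 2 9
```

This removes the need for a separate label file when building a training set from this dataset.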
The front image is from a photo by Khachik Simonian on Unsplash.
All images in the dataset are from http://wang.ist.psu.edu/docs/related/
Have fun doing Image Analysis. Have super fun doing Deep Learning with Kaggle Kernels
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Dataset Description: Car Object Detection in Road Traffic
Overview:
This dataset is designed for car object detection in road traffic scenes (Images with shape 1080x1920x3). The dataset is derived from publicly available video content on YouTube, specifically from the video with the Creative Commons Attribution license, available here.
https://youtu.be/MNn9qKG2UFI?si=uJz_WicTCl8zfrVl
Source:
Annotation Details:
Use Cases:
Acknowledgments: We acknowledge and thank the creator of the original video for making it available under a Creative Commons Attribution license. Their contribution enables the development of datasets and research in the field of computer vision and object detection.
Disclaimer: This dataset is provided for educational and research purposes and should be used in compliance with YouTube's terms of service and the Creative Commons Attribution license.