100+ datasets found
  1. Image Annotation Services | Image Labeling for AI & ML | Computer Vision Data | Annotated Imagery Data

    • datarade.ai
    Updated Dec 29, 2023
    Cite
    Nexdata (2023). Image Annotation Services | Image Labeling for AI & ML |Computer Vision Data| Annotated Imagery Data [Dataset]. https://datarade.ai/data-products/nexdata-image-annotation-services-ai-assisted-labeling-nexdata
    Explore at:
    Available download formats: .bin, .json, .xml, .csv, .xls, .sql, .txt
    Dataset updated
    Dec 29, 2023
    Dataset authored and provided by
    Nexdata
    Area covered
    Morocco, Philippines, Qatar, Taiwan, Uzbekistan, Korea (Republic of), United States of America, Jamaica, Ireland, Montenegro
    Description
    1. Overview
    We provide various types of Annotated Imagery Data annotation services, including:
    • Bounding box
    • Polygon
    • Segmentation
    • Polyline
    • Key points
    • Image classification
    • Image description ...
    2. Our Capacity
    • Platform: Our platform supports human-machine interaction and semi-automatic labeling, increasing labeling efficiency by more than 30% per annotator. It has successfully been applied to nearly 5,000 projects.
    • Annotation Tools: Nexdata's platform integrates 30 sets of annotation templates, covering audio, image, video, point cloud and text.

    - Secure Implementation: An NDA is signed to guarantee secure implementation, and Annotated Imagery Data is destroyed upon delivery.

    - Quality: Multiple rounds of quality inspection ensure high-quality data output, certified with ISO 9001.

    3. About Nexdata
    Nexdata operates global data processing centers with more than 20,000 professional annotators, supporting on-demand data annotation services for speech, image, video, point cloud, and Natural Language Processing (NLP) data. Please visit us at https://www.nexdata.ai/computerVisionTraining?source=Datarade
  2. Traffic Road Object Detection Dataset using YOLO.

    • kaggle.com
    Updated Nov 8, 2023
    Cite
    ilyesBoukraa (2023). Traffic Road Object Detection Dataset using YOLO. [Dataset]. https://www.kaggle.com/datasets/boukraailyesali/traffic-road-object-detection-dataset-using-yolo
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Nov 8, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    ilyesBoukraa
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Dataset Description: Car Object Detection in Road Traffic

    Overview:

    This dataset is designed for car object detection in road traffic scenes (images of shape 1080x1920x3). It is derived from publicly available YouTube video content, specifically a video released under the Creative Commons Attribution license, available at https://youtu.be/MNn9qKG2UFI?si=uJz_WicTCl8zfrVl

    Source:

    • Video Source: YouTube Video.
    • License: Creative Commons Attribution (reuse allowed) more details here.
    • Dataset Contents: The dataset consists of a collection of image frames extracted from the video. Each image frame captures various scenes from road traffic. Car objects within these frames are annotated with bounding boxes.

    Annotation Details:

    • Bounding Boxes: Each image frame contains annotated bounding boxes around car objects, marking their locations in the scene.
    • Classes: The dataset is focused on car object detection; car objects are labeled as the single target class.
    • Data Format: Images are provided in JPEG format.
    • Annotation files are provided in YOLO text format.
    • We used the labelImg GUI to label this dataset in YOLO format; more details are in this GitHub repo.
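As a sketch of how label files in the YOLO text format described above can be read (a hypothetical helper, not part of the dataset; the 1920x1080 defaults match the frame size stated in the overview):

```python
# Hypothetical example: convert one YOLO-format label line
# ("class x_center y_center width height", all coordinates normalized
# to [0, 1]) into pixel coordinates for a 1080x1920 frame.
def yolo_to_pixels(line, img_w=1920, img_h=1080):
    """Return (class_id, x_min, y_min, x_max, y_max) in pixels."""
    cls, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return (int(cls), int(xc - w / 2), int(yc - h / 2),
            int(xc + w / 2), int(yc + h / 2))

# A box centered in the frame, covering half of each dimension:
print(yolo_to_pixels("0 0.5 0.5 0.5 0.5"))  # (0, 480, 270, 1440, 810)
```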

    Use Cases:

    • Object Detection: This dataset can be used to train and evaluate object detection models, with an emphasis on detecting cars in road traffic scenarios.

    Acknowledgments: We acknowledge and thank the creator of the original video for making it available under a Creative Commons Attribution license. Their contribution enables the development of datasets and research in the field of computer vision and object detection.

    Disclaimer: This dataset is provided for educational and research purposes and should be used in compliance with YouTube's terms of service and the Creative Commons Attribution license.

  3. Object Detection Annotations Dataset

    • universe.roboflow.com
    zip
    Updated Jun 20, 2025
    + more versions
    Cite
    farah-mohsen-samy (2025). Object Detection Annotations Dataset [Dataset]. https://universe.roboflow.com/project2-nn3gy/object-detection-annotations/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 20, 2025
    Dataset authored and provided by
    farah-mohsen-samy
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Objects Bounding Boxes
    Description

    Object Detection Annotations

    ## Overview
    
    Object Detection Annotations is a dataset for object detection tasks - it contains Objects annotations for 302 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  4. Image Annotation Tool Market Report | Global Forecast From 2025 To 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Jan 7, 2025
    Cite
    Dataintelo (2025). Image Annotation Tool Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/image-annotation-tool-market
    Explore at:
    Available download formats: pdf, pptx, csv
    Dataset updated
    Jan 7, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Image Annotation Tool Market Outlook



    The global image annotation tool market size is projected to grow from approximately $700 million in 2023 to an estimated $2.5 billion by 2032, exhibiting a remarkable compound annual growth rate (CAGR) of 15.2% over the forecast period. The surging demand for machine learning and artificial intelligence applications is driving this robust market expansion. Image annotation tools are crucial for training AI models to recognize and interpret images, a necessity across diverse industries.



    One of the key growth factors fueling the image annotation tool market is the rapid adoption of AI and machine learning technologies across various sectors. Organizations in healthcare, automotive, retail, and many other industries are increasingly leveraging AI to enhance operational efficiency, improve customer experiences, and drive innovation. Accurate image annotation is essential for developing sophisticated AI models, thereby boosting the demand for these tools. Additionally, the proliferation of big data analytics and the growing necessity to manage large volumes of unstructured data have amplified the need for efficient image annotation solutions.



    Another significant driver is the increasing use of autonomous systems and applications. In the automotive industry, for instance, the development of autonomous vehicles relies heavily on annotated images to train algorithms for object detection, lane discipline, and navigation. Similarly, in the healthcare sector, annotated medical images are indispensable for developing diagnostic tools and treatment planning systems powered by AI. This widespread application of image annotation tools in the development of autonomous systems is a critical factor propelling market growth.



    The rise of e-commerce and the digital retail landscape has also spurred demand for image annotation tools. Retailers are using these tools to optimize visual search features, personalize shopping experiences, and enhance inventory management through automated recognition of products and categories. Furthermore, advancements in computer vision technology have expanded the capabilities of image annotation tools, making them more accurate and efficient, which in turn encourages their adoption across various industries.



    Data Annotation Software plays a pivotal role in the image annotation tool market by providing the necessary infrastructure for labeling and categorizing images efficiently. These software solutions are designed to handle various annotation tasks, from simple bounding boxes to complex semantic segmentation, enabling organizations to generate high-quality training datasets for AI models. The continuous advancements in data annotation software, including the integration of machine learning algorithms for automated labeling, have significantly enhanced the accuracy and speed of the annotation process. As the demand for AI-driven applications grows, the reliance on robust data annotation software becomes increasingly critical, supporting the development of sophisticated models across industries.



    Regionally, North America holds the largest share of the image annotation tool market, driven by significant investments in AI and machine learning technologies and the presence of leading technology companies. Europe follows, with strong growth supported by government initiatives promoting AI research and development. The Asia Pacific region presents substantial growth opportunities due to the rapid digital transformation in emerging economies and increasing investments in technology infrastructure. Latin America and the Middle East & Africa are also expected to witness steady growth, albeit at a slower pace, due to the gradual adoption of advanced technologies.



    Component Analysis



    The image annotation tool market by component is segmented into software and services. The software segment dominates the market, encompassing a variety of tools designed for different annotation tasks, from simple image labeling to complex polygonal, semantic, or instance segmentation. The continuous evolution of software platforms, integrating advanced features such as automated annotation and machine learning algorithms, has significantly enhanced the accuracy and efficiency of image annotations. Furthermore, the availability of open-source annotation tools has lowered the entry barrier, allowing more organizations to adopt these technologies.



    Services associated with image annotation ...

  5. Image Annotation Services | Image Labeling for AI & ML | Computer Vision Data | Annotated Imagery Data

    • data.nexdata.ai
    Updated Aug 3, 2024
    Cite
    Nexdata (2024). Image Annotation Services | Image Labeling for AI & ML |Computer Vision Data| Annotated Imagery Data [Dataset]. https://data.nexdata.ai/products/nexdata-image-annotation-services-ai-assisted-labeling-nexdata
    Explore at:
    Dataset updated
    Aug 3, 2024
    Dataset authored and provided by
    Nexdata
    Area covered
    Puerto Rico, Nicaragua, Greece, Singapore, Belgium, Colombia, Thailand, Kyrgyzstan, Japan, Croatia
    Description

    Nexdata provides high-quality Annotated Imagery Data annotation for bounding box, polygon, segmentation, polyline, key points, image classification, and image description. We have handled large volumes of data for autonomous driving, internet entertainment, retail, surveillance and security, etc.

  6. Data from: ODDS: Real-Time Object Detection using Depth Sensors on Embedded GPUs

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 24, 2020
    Cite
    Shelton, Charles (2020). ODDS: Real-Time Object Detection using Depth Sensors on Embedded GPUs [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_1163769
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Shelton, Charles
    Munir, Sirajum
    Mithun, Niluthpol Chowdhury
    Guo, Karen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ODDS Smart Building Depth Dataset

    Introduction:

    The goal of this dataset is to facilitate research focusing on recognizing objects in smart buildings using the depth sensor mounted at the ceiling. This dataset contains annotations of depth images for eight frequently seen object classes. The classes are: person, backpack, laptop, gun, phone, umbrella, cup, and box.

    Data Collection:

    We collected data in two settings. We had a Kinect mounted on a 9.3-foot ceiling near a 6-foot-wide door, and we also used a tripod with a horizontal extender holding the Kinect at a similar height, looking downwards. We asked about 20 volunteers to enter and exit several times each in different directions (3 times walking straight, 3 times toward the left, 3 times toward the right), holding objects in many different ways and poses underneath the Kinect. Each subject used his/her own backpack, purse, laptop, etc. As a result, we captured variety within the same object class: for laptops, we included MacBooks and HP and Lenovo laptops of different years and models; for backpacks, we included backpacks, side bags, and women's purses. We asked the subjects to walk while holding each object in many ways: laptops were fully open, partially closed, and fully closed while carried, and people held laptops in front of and beside their bodies and underneath their elbows. Subjects carried their backpacks on their backs and at their sides at different levels from foot to shoulder. We wanted to collect data with real guns; however, bringing real guns to the office is prohibited, so we obtained a few Nerf guns, and the subjects carried these guns pointing front, side, up, and down while walking.

    Annotated Data Description:

    The annotated dataset is created following the structure of the Pascal VOC devkit, so data preparation is simple and it can be used readily with object detection libraries that accept Pascal VOC-style annotations (e.g., Faster R-CNN, YOLO, SSD). The annotated data consists of a set of images; each image has an annotation file giving a bounding box and object class label for each object from the eight classes present in the image. Multiple objects from multiple classes may be present in the same image. The dataset has 3 main directories:

    1) DepthImages: Contains all the images of the training set and validation set.

    2) Annotations: Contains one XML file per image file (e.g., 1.xml for image file 1.png). The XML file includes the bounding box annotations for all objects in the corresponding image.

    3) ImagesSets: Contains two text files, training_samples.txt and testing_samples.txt, listing the names of the images used for training and testing, respectively (we randomly chose an 80%/20% split).
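A minimal sketch of reading boxes from one such Pascal VOC-style annotation file (a hypothetical helper; the tag names follow the standard VOC layout, and actual files in the dataset may include additional fields):

```python
# Hypothetical example: parse Pascal VOC-style XML annotations into
# (class_name, x_min, y_min, x_max, y_max) tuples.
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_text):
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):          # one <object> per instance
        name = obj.findtext("name")          # class label, e.g. "person"
        bb = obj.find("bndbox")              # pixel-coordinate box
        boxes.append((name,
                      int(bb.findtext("xmin")), int(bb.findtext("ymin")),
                      int(bb.findtext("xmax")), int(bb.findtext("ymax"))))
    return boxes

sample = """<annotation><object><name>person</name>
<bndbox><xmin>10</xmin><ymin>20</ymin><xmax>110</xmax><ymax>220</ymax></bndbox>
</object></annotation>"""
print(read_voc_boxes(sample))  # [('person', 10, 20, 110, 220)]
```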

    UnAnnotated Data Description:

    The un-annotated data consists of several sets of depth images. No ground-truth annotation is available for these images yet. These un-annotated sets contain several challenging scenarios, and no data was collected from this office during annotated dataset construction; hence, they provide a way to test the generalization performance of an algorithm.

    Citation:

    If you use the ODDS Smart Building dataset in your work, please cite the following reference in any publications:
    @inproceedings{mithun2018odds,
      title={ODDS: Real-Time Object Detection using Depth Sensors on Embedded GPUs},
      author={Niluthpol Chowdhury Mithun and Sirajum Munir and Karen Guo and Charles Shelton},
      booktitle={ACM/IEEE Conference on Information Processing in Sensor Networks (IPSN)},
      year={2018}
    }

  7. 26 Class Object detection dataset

    • kaggle.com
    • gts.ai
    Updated Feb 6, 2024
    Cite
    Mohamed Gobara (2024). 26 Class Object detection dataset [Dataset]. https://www.kaggle.com/datasets/mohamedgobara/26-class-object-detection-dataset
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Feb 6, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Mohamed Gobara
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    The "26 Class Object Detection Dataset" comprises a comprehensive collection of images annotated with objects belonging to 26 distinct classes. Each class represents a common urban or outdoor element encountered in various scenarios. The dataset includes the following classes:

    Bench, Bicycle, Branch, Bus, Bushes, Car, Crosswalk, Door, Elevator, Fire Hydrant, Green Light, Gun, Motorcycle, Person, Pothole, Rat, Red Light, Scooter, Stairs, Stop Sign, Traffic Cone, Train, Tree, Truck, Umbrella, Yellow Light.

    These classes encompass a wide range of objects commonly encountered in urban and outdoor environments, including transportation vehicles, traffic signs, pedestrian-related elements, and natural features. The dataset serves as a valuable resource for training and evaluating object detection models, particularly those focused on urban scene understanding and safety applications.
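The 26 classes above can be kept as an index-to-name mapping for use with a detector's output. This is a hypothetical sketch in the alphabetical order given; the dataset's actual class-id order may differ:

```python
# Hypothetical class-index mapping for the 26 classes listed above,
# in the order given; the dataset's own label files define the real ids.
CLASSES = ["Bench", "Bicycle", "Branch", "Bus", "Bushes", "Car",
           "Crosswalk", "Door", "Elevator", "Fire Hydrant", "Green Light",
           "Gun", "Motorcycle", "Person", "Pothole", "Rat", "Red Light",
           "Scooter", "Stairs", "Stop Sign", "Traffic Cone", "Train",
           "Tree", "Truck", "Umbrella", "Yellow Light"]
ID_TO_NAME = dict(enumerate(CLASSES))

print(len(CLASSES), ID_TO_NAME[13])  # 26 Person
```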

  8. object-detection-person-data

    • kaggle.com
    Updated Mar 29, 2025
    Cite
    ritesh1420 (2025). object-detection-person-data [Dataset]. https://www.kaggle.com/datasets/ritesh1420/yolov8-person-data
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Mar 29, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    ritesh1420
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    The dataset is structured for person object detection tasks, containing separate directories for training, validation, and testing. Each split has an images folder with corresponding images and a labels folder with annotation files.

    Train Set: Contains images and annotations for model training.

    Validation Set: Includes images and labels for model evaluation during training.

    Test Set: Provides unseen images and labels for final model performance assessment.

    Each annotation file (TXT format) corresponds to an image and likely contains bounding box coordinates and class labels. This structure follows standard object detection dataset formats, ensuring easy integration with detection models like YOLO and RT-DETR.

    Dataset Structure

    📂 dataset/
    ├── 📁 train/
    │   ├── 📂 images/
    │   │   ├── 🖼 image1.jpg (Training image)
    │   │   ├── 🖼 image2.jpg (Training image)
    │   ├── 📂 labels/
    │   │   ├── 📄 image1.txt (Annotation for image1.jpg)
    │   │   ├── 📄 image2.txt (Annotation for image2.jpg)
    ├── 📁 val/
    │   ├── 📂 images/
    │   │   ├── 🖼 image3.jpg (Validation image)
    │   │   ├── 🖼 image4.jpg (Validation image)
    │   ├── 📂 labels/
    │   │   ├── 📄 image3.txt (Annotation for image3.jpg)
    │   │   ├── 📄 image4.txt (Annotation for image4.jpg)
    ├── 📁 test/
    │   ├── 📂 images/
    │   │   ├── 🖼 image5.jpg (Test image)
    │   │   ├── 🖼 image6.jpg (Test image)
    │   ├── 📂 labels/
    │   │   ├── 📄 image5.txt (Annotation for image5.jpg)
    │   │   ├── 📄 image6.txt (Annotation for image6.jpg)
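With this layout, a quick sanity check is to confirm that every image in a split has a matching label file. A minimal sketch (a hypothetical helper, assuming the images/ and labels/ naming convention shown above):

```python
# Hypothetical example: list images in a split directory (e.g. dataset/train)
# that are missing their corresponding YOLO .txt label file.
from pathlib import Path

def missing_labels(split_dir):
    images = Path(split_dir, "images")
    labels = Path(split_dir, "labels")
    return [img.name
            for img in sorted(images.glob("*.jpg"))
            if not (labels / img.with_suffix(".txt").name).exists()]
```

Running it over train/, val/, and test/ before training catches broken image/label pairs early.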

  9. Phone Data Annotate Dataset

    • universe.roboflow.com
    zip
    Updated Aug 6, 2024
    Cite
    phone dataset annotate (2024). Phone Data Annotate Dataset [Dataset]. https://universe.roboflow.com/phone-dataset-annotate/phone-data-annotate
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 6, 2024
    Dataset authored and provided by
    phone dataset annotate
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Phone Hand Bounding Boxes
    Description

    Phone Data Annotate

    ## Overview
    
    Phone Data Annotate is a dataset for object detection tasks - it contains Phone Hand annotations for 522 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  10. 600K+ Household Object Images | AI Training Data | Object Detection Data | Annotated imagery data | Global Coverage

    • datarade.ai
    Cite
    Data Seeds, 600K+ Household Object Images | AI Training Data | Object Detection Data | Annotated imagery data | Global Coverage [Dataset]. https://datarade.ai/data-products/500k-household-object-images-ai-training-data-object-det-data-seeds
    Explore at:
    Available download formats: .bin, .json, .xml, .csv, .xls, .sql, .txt
    Dataset authored and provided by
    Data Seeds
    Area covered
    Ecuador, Congo, Brunei Darussalam, Saint Kitts and Nevis, New Caledonia, Austria, Ukraine, Kiribati, United Republic of, Serbia
    Description

    This dataset features over 600,000 high-quality images of household objects sourced from photographers worldwide. Designed to support AI and machine learning applications, it offers an extensively annotated and highly diverse collection of everyday indoor items across cultural and functional contexts.

    Key Features:

    1. Comprehensive Metadata: the dataset includes full EXIF data such as aperture, ISO, shutter speed, and focal length. Each image is annotated with object labels, room context, material types, and functional categories, ideal for training models in object detection, classification, and scene understanding. Popularity metrics based on platform engagement are also included.

    2. Unique Sourcing Capabilities: images are gathered through a proprietary gamified platform featuring competitions focused on home environments and still life. This ensures a rich flow of authentic, high-quality submissions. Custom datasets can be created on-demand within 72 hours, targeting specific object categories, use-cases (e.g., kitchenware, electronics, decor), or room types.

    3. Global Diversity: contributions from over 100 countries showcase household items from a wide range of cultures, economic settings, and design aesthetics. The dataset includes everything from modern appliances and utensils to traditional tools and furnishings, captured in kitchens, bedrooms, bathrooms, living rooms, and utility spaces.

    4. High-Quality Imagery: includes images from standard to ultra-high-definition, covering both staged product-like photos and natural usage contexts. This variety supports robust training for real-world applications in cluttered or dynamic environments.

    5. Popularity Scores: each image has a popularity score based on its performance in GuruShots competitions. These scores provide valuable input for training models focused on product appeal, consumer trend detection, or aesthetic evaluation.

    6. AI-Ready Design: optimized for use in smart home applications, inventory systems, assistive technologies, and robotics. Fully compatible with major machine learning frameworks and annotation workflows.

    7. Licensing & Compliance: all data is compliant with global privacy and content use regulations, with transparent licensing for both commercial and academic applications.

    Use Cases:
    1. Training AI for home inventory and recognition in smart devices and AR tools.
    2. Powering assistive technologies for accessibility and elder care.
    3. Enhancing e-commerce recommendation and visual search systems.
    4. Supporting robotics for home navigation, object grasping, and task automation.

    This dataset provides a comprehensive, high-quality resource for training AI across smart living, retail, and assistive domains. Custom requests are welcome. Contact us to learn more!

  11. Dangerous Items Dataset for 5-Class Object Detection (YOLO annotation)

    • zenodo.org
    Updated Jul 30, 2025
    Cite
    Zbigniew Omiotek; Zbigniew Omiotek (2025). Dangerous Items Dataset for 5-Class Object Detection (YOLO annotation) [Dataset]. http://doi.org/10.5281/zenodo.16422779
    Explore at:
    Dataset updated
    Jul 30, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Zbigniew Omiotek; Zbigniew Omiotek
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description
  12. Image Annotation Service Market Report | Global Forecast From 2025 To 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Oct 5, 2024
    Cite
    Dataintelo (2024). Image Annotation Service Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/image-annotation-service-market
    Explore at:
    Available download formats: pdf, pptx, csv
    Dataset updated
    Oct 5, 2024
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Image Annotation Service Market Outlook



    The global Image Annotation Service market size was valued at approximately USD 1.2 billion in 2023 and is expected to reach around USD 4.5 billion by 2032, reflecting a compound annual growth rate (CAGR) of 15.6% during the forecast period. The driving factors behind this growth include the increasing adoption of artificial intelligence (AI) and machine learning (ML) technologies across various industries, which necessitate large volumes of annotated data for accurate model training.



    One of the primary growth factors for the Image Annotation Service market is the accelerating development and deployment of AI and ML applications. These technologies depend heavily on high-quality annotated data to improve the accuracy of their predictive models. As businesses across sectors such as autonomous vehicles, healthcare, and retail increasingly integrate AI-driven solutions, the demand for precise image annotation services is anticipated to surge. For instance, autonomous vehicles rely extensively on annotated images to identify objects, pedestrians, and road conditions, thereby ensuring safety and operational efficiency.



    Another significant growth factor is the escalating use of image annotation services in healthcare. Medical imaging, which includes X-rays, MRIs, and CT scans, requires precise annotation to assist in the diagnosis and treatment of various conditions. The integration of AI in medical imaging allows for faster and more accurate analysis, leading to improved patient outcomes. This has led to a burgeoning demand for image annotation services within the healthcare sector, propelling market growth further.



    The rise of e-commerce and retail sectors is yet another critical growth driver. With the growing trend of online shopping, retailers are increasingly leveraging AI to enhance customer experience through personalized recommendations and visual search capabilities. Annotated images play a pivotal role in training AI models to recognize products, thereby optimizing inventory management and improving customer satisfaction. Consequently, the retail sector's investment in image annotation services is expected to rise significantly.



    Geographically, North America is anticipated to dominate the Image Annotation Service market owing to its well-established technology infrastructure and the presence of leading AI and ML companies. Additionally, the region's strong focus on research and development, coupled with substantial investments in AI technologies by both government and private sectors, is expected to bolster market growth. Europe and Asia Pacific are also expected to experience significant growth, driven by increasing AI adoption and the expansion of tech startups focused on AI solutions.



    Annotation Type Analysis



    The image annotation service market is segmented into several annotation types, including Bounding Box, Polygon, Semantic Segmentation, Keypoint, and Others. Each annotation type serves distinct purposes and is applied based on the specific requirements of the AI and ML models being developed. Bounding Box annotation, for example, is widely used in object detection applications. By drawing rectangles around objects of interest in an image, this method allows AI models to learn how to identify and locate various items within a scene. Bounding Box annotation is integral in applications like autonomous vehicles and retail, where object identification and localization are crucial.



    Polygon annotation provides a more granular approach compared to Bounding Box. It involves outlining objects with polygons, which offers precise annotation, especially for irregularly shaped objects. This type is particularly useful in applications where accurate boundary detection is essential, such as in medical imaging and agricultural monitoring. For instance, in agriculture, polygon annotation aids in identifying and quantifying crop health by precisely mapping the shape of plants and leaves.



    Semantic Segmentation is another critical annotation type. Unlike the Bounding Box and Polygon methods, Semantic Segmentation involves labeling each pixel in an image with a class, providing a detailed understanding of the entire scene. This type of annotation is highly valuable in applications requiring comprehensive scene analysis, such as autonomous driving and medical diagnostics. Through semantic segmentation, AI models can distinguish between different objects and understand their spatial relationships, which is vital for safe navigation in autonomous vehicles and accurate disease detection.

  13. Image Tagging and Annotation Services Report

    • marketresearchforecast.com
    doc, pdf, ppt
    Updated Mar 14, 2025
    Cite
    Market Research Forecast (2025). Image Tagging and Annotation Services Report [Dataset]. https://www.marketresearchforecast.com/reports/image-tagging-and-annotation-services-33888
    Explore at:
    ppt, pdf, doc (available download formats)
    Dataset updated
    Mar 14, 2025
    Dataset authored and provided by
    Market Research Forecast
    License

    https://www.marketresearchforecast.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global image tagging and annotation services market is experiencing robust growth, driven by the increasing adoption of artificial intelligence (AI) and machine learning (ML) across diverse sectors. The market, estimated at $2.5 billion in 2025, is projected to expand at a compound annual growth rate (CAGR) of 18% from 2025 to 2033, reaching an estimated $10 billion by 2033.

    This expansion is fueled by several key factors. The automotive industry leverages image tagging and annotation for autonomous vehicle development, requiring vast amounts of labeled data for training AI algorithms. Similarly, the retail and e-commerce sectors utilize these services for image search, product recognition, and improved customer experiences. The healthcare industry benefits from advancements in medical image analysis, while the government and security sectors employ image annotation for surveillance and security applications. The rising availability of high-quality data, coupled with the decreasing cost of annotation services, further accelerates market growth.

    However, challenges remain. Data privacy concerns and the need for high-accuracy annotation can pose significant hurdles, and the demand for specialized skills in data annotation contributes to a potential bottleneck in the market's growth trajectory. Overcoming these challenges requires a collaborative approach, involving technological advancements in automation and the development of robust data governance frameworks.

    The market segmentation, encompassing various annotation types (image classification, object recognition/detection, boundary recognition, segmentation) and application areas (automotive, retail, BFSI, government, healthcare, IT, transportation, etc.), presents diverse opportunities for market players. The competitive landscape includes a mix of established players and emerging firms, each offering specialized services and targeting specific market segments. North America currently holds the largest market share due to early adoption of AI and ML technologies, while Asia-Pacific is anticipated to witness rapid growth in the coming years.
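The market-size arithmetic above is internally consistent: $2.5 billion compounding at 18% over the eight years from 2025 to 2033 comes to roughly $9.4 billion, in line with the quoted ~$10 billion projection. A quick check:

```python
def project(value, cagr, years):
    # Compound annual growth: value * (1 + cagr) ** years
    return value * (1 + cagr) ** years

# $2.5B at an 18% CAGR for the 8 years from 2025 to 2033
print(round(project(2.5, 0.18, 8), 1))  # → 9.4 (billion USD)
```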

  14. Traffic Road Object Detection Polish 12k

    • kaggle.com
    Updated Aug 9, 2024
    Mikołaj Kołek (2024). Traffic Road Object Detection Polish 12k [Dataset]. https://www.kaggle.com/datasets/mikoajkoek/traffic-road-object-detection-polish-12k
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Aug 9, 2024
    Dataset provided by
    Kaggle
    Authors
    Mikołaj Kołek
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    This dataset contains annotated images of Polish roads, specifically curated for object detection tasks. The data was collected using a car camera on roads in Poland, primarily in Kraków. The images capture a diverse range of scenarios, including different road types and various lighting conditions (day and night).

    Classes:

    • Car (Vehicles without a trailer)
    • Different-Traffic-Sign (Other traffic signs than warning and prohibition signs, mostly information and order signs)
    • Green-Traffic-Light (Green traffic lights for cars only; green lights for pedestrians are not annotated)
    • Motorcycle
    • Pedestrian (People and cyclists)
    • Pedestrian-Crossing (Pedestrian crossings)
    • Prohibition-Sign (All prohibition signs)
    • Red-Traffic-Light (Red traffic lights for cars only; lights for pedestrians are not annotated)
    • Speed-Limit-Sign (Speed limit signs)
    • Truck (Vehicles with a trailer)
    • Warning-Sign (Warning signs)

    Annotation Process:

    Annotations were carried out using Roboflow. A total of 2,000 images were manually labeled, while an additional 9,000 images were generated through data augmentation. The augmentation techniques applied were crop, saturation, brightness, and exposure adjustments.

    Image Statistics Before Data Augmentation:

    Approximately

    • 400 cars per 100 photos
    • 30 different-traffic-signs per 100 photos
    • 80 red-traffic-lights per 100 photos
    • 70 pedestrians per 100 photos
    • 50 warning signs per 100 photos
    • 50 pedestrian-crossings per 100 photos
    • 40 green-traffic-lights per 100 photos
    • 40 prohibition signs per 100 photos
    • 40 trucks per 100 photos
    • 20 speed-limit-signs per 100 photos
    • 2 motorcycles per 100 photos

    The photos were taken on both normal roads and highways, under various conditions, including day and night. All photos were initially 1920x1080 pixels; after cropping, some images may be slightly smaller. No other preprocessing steps were applied to the photos.

    Annotations are provided in YOLO format.
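In YOLO format, each line of a label file stores a class index followed by the box center and size, all normalized to [0, 1] relative to the image. A minimal sketch converting one such line back to pixel corners; the 1920x1080 default matches the photo resolution above, and the sample line is illustrative:

```python
def yolo_to_pixels(line, img_w=1920, img_h=1080):
    # YOLO row: "class_id cx cy w h" with coordinates normalized to [0, 1]
    cls, cx, cy, w, h = line.split()
    cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
    return int(cls), (
        (cx - w / 2) * img_w,  # x_min
        (cy - h / 2) * img_h,  # y_min
        (cx + w / 2) * img_w,  # x_max
        (cy + h / 2) * img_h,  # y_max
    )

cls, box = yolo_to_pixels("0 0.5 0.5 0.1 0.2")
# a box roughly 192 px wide and 216 px tall, centered in the frame
```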

    Image Statistics Before Data Augmentation:

    SetPhotosCarDifferent-Traffic-SignRed-Traffic-LightPedestrianWarning-SignPedestrian-CrossingGreen-Traffic-LightProhibition-SignTruckSpeed-Limit-SignMotorcycle
    Test Set1666875471631377982524866224
    Train Set11784766337080581254447640239640923038
    Validation Set3271343945232228163112871121375910

    Image Statistics After Data Augmentation:

    SetPhotosCarDifferent-Traffic-SignRed-Traffic-LightPedestrianWarning-SignPedestrian-CrossingGreen-Traffic-LightProhibition-SignTruckSpeed-Limit-SignMotorcycle
    Test Set9964122328297882247449231228839613224
    Train Set7068285962022048304872326428562412237624541380228
    Validation Set1962805856701392136897867252267282235460
  15. Cattle Body Parts For Object Detection Dataset

    • universe.roboflow.com
    zip
    Updated Apr 29, 2025
    Ali KHalili (2025). Cattle Body Parts For Object Detection Dataset [Dataset]. https://universe.roboflow.com/ali-khalili/cattle-body-parts-dataset-for-object-detection
    Explore at:
    zip (available download formats)
    Dataset updated
    Apr 29, 2025
    Dataset authored and provided by
    Ali KHalili
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Temp3 Bounding Boxes
    Description

    Cattle Body Parts Image Dataset for Object Detection

    This dataset is a curated collection of images featuring various cattle body parts aimed at facilitating object detection tasks. The dataset contains a total of 428 high-quality photos, meticulously annotated with three distinct classes: "Back," "Head," and "Leg."

    The dataset can be downloaded using this link. The dataset is also available at Roboflow Universe.

    A YOLOv7X model has been trained using the dataset and achieved a mAP of 99.6%. You can access the trained weights through this link.

    Motivation

    Accurate and reliable identification of different cattle body parts is crucial for various agricultural and veterinary applications. This dataset aims to provide a valuable resource for researchers, developers, and enthusiasts working on object detection tasks involving cattle, ultimately contributing to advancements in livestock management, health monitoring, and related fields.

    Data

    Overview

    • Total Images: 428
    • Classes: Back, Head, Leg
    • Annotations: Bounding boxes for each class

    Contents

    📦 Cattle_Body_Parts_OD.zip
     ┣ 📂 images
     ┃ ┣ 📜 image1.jpg
     ┃ ┣ 📜 image2.jpg
     ┃ ┗ ...
     ┗ 📂 annotations
      ┣ 📜 image1.json
      ┣ 📜 image2.json
      ┗ ...
    

    Annotation Format

    Each annotation file corresponds to an image in the dataset and is formatted as per the LabelMe JSON standard. These annotations define the bounding box coordinates for each labeled body part, enabling straightforward integration into object detection pipelines.
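A LabelMe rectangle is stored under "shapes" as a label plus two corner points. A minimal sketch of reading such a record; the field names follow the LabelMe JSON schema, while the label and coordinates are illustrative:

```python
import json

# A minimal LabelMe-style record (illustrative values)
record = json.loads("""
{
  "shapes": [
    {"label": "Head", "shape_type": "rectangle",
     "points": [[120.0, 40.0], [260.0, 180.0]]}
  ],
  "imageWidth": 640,
  "imageHeight": 480
}
""")

for shape in record["shapes"]:
    (x1, y1), (x2, y2) = shape["points"]
    # Normalize corner order so (x_min, y_min, x_max, y_max) always holds
    box = (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
    print(shape["label"], box)
```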

    License

    This work is licensed under a Creative Commons Attribution 4.0 International License.

    Disclaimer

    This dataset has been collected from publicly available sources. I do not claim ownership of the data and have no intention of infringing on any copyright. The material contained in this dataset is copyrighted to their respective owners. I have made every effort to ensure the data is accurate and complete, but I cannot guarantee its accuracy or completeness. If you believe any data in this dataset infringes on your copyright, please get in touch with me immediately so I can take appropriate action.

  16. SyntheticIndoorObjectDetectionDataset

    • data.mendeley.com
    Updated Mar 25, 2025
    Nafiz Fahad (2025). SyntheticIndoorObjectDetectionDataset [Dataset]. http://doi.org/10.17632/nnph98d3kc.2
    Explore at:
    Dataset updated
    Mar 25, 2025
    Authors
    Nafiz Fahad
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset was collected from the MyNursingHome dataset, available at https://data.mendeley.com/datasets/fpctx3svzd/1, and curated to develop a synthetic indoor object detection dataset for autonomous mobile robots, supporting researchers in detecting and classifying objects for computer vision and pattern recognition.

    From the original dataset containing 25 object categories, we selected six key categories: basket bin (499 images), sofa (499 images), human (499 images), table (500 images), chair (496 images), and door (500 images). Initially, we collected a total of 2,993 images from these categories; however, during the annotation process using Roboflow, we rejected 1 sofa, 10 table, 9 chair, and 12 door images due to quality concerns, such as poor image resolution or difficulty in identifying the object, resulting in a final dataset of 2,961 images.

    To ensure an effective training pipeline, we divided the dataset into 70% training (2,073 images), 20% validation (591 images), and 10% test (297 images). Preprocessing steps included auto-orientation and resizing all images to 640×640 pixels to maintain uniformity. To improve generalization for real-world applications, we applied data augmentation techniques, including horizontal and vertical flipping, 90-degree rotations (clockwise, counter-clockwise, and upside down), random rotations within -15° to +15°, shearing within ±10° horizontally and vertically, and brightness adjustments between -15% and +15%. This augmentation process expanded the dataset to 7,107 images, with 6,219 images for training (88%), 597 for validation (8%), and 297 for testing (4%). This well-annotated, preprocessed, and augmented dataset is intended to improve object detection performance in indoor settings.
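The 70/20/10 split described above can be approximated as follows. Exact per-split counts depend on how the annotation tooling rounds and assigns the remainder, so this sketch (which gives the remainder to the training split) is illustrative rather than a reconstruction of Roboflow's behavior:

```python
def split_sizes(n_images, val_frac=0.20, test_frac=0.10):
    # Round validation/test counts; any remaining images go to training
    val = round(n_images * val_frac)
    test = round(n_images * test_frac)
    return n_images - val - test, val, test

train, val, test = split_sizes(2961)
# train + val + test always totals n_images
```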

  17. 5.5M+ Animal Images | Object Detection Data | AI Training Data | Annotated...

    • datarade.ai
    Data Seeds, 5.5M+ Animal Images | Object Detection Data | AI Training Data | Annotated imagery data | Global Coverage [Dataset]. https://datarade.ai/data-products/3-5m-animal-images-object-detection-data-ai-training-dat-data-seeds
    Explore at:
    .bin, .json, .xml, .csv, .xls, .sql, .txt (available download formats)
    Dataset authored and provided by
    Data Seeds
    Area covered
    Cook Islands, Bahrain, Burundi, Switzerland, Dominica, Gabon, Myanmar, Anguilla, Russian Federation, Lao People's Democratic Republic
    Description

    This dataset features over 5,500,000 high-quality images of animals sourced from photographers around the globe. Created to support AI and machine learning applications, it offers a richly diverse and precisely annotated collection of wildlife, domestic, and exotic animal imagery.

    Key Features: 1. Comprehensive Metadata: the dataset includes full EXIF data such as aperture, ISO, shutter speed, and focal length. Each image is pre-annotated with species information, behavior tags, and scene metadata, making it ideal for image classification, detection, and animal behavior modeling. Popularity metrics based on platform engagement are also included.

    2. Unique Sourcing Capabilities: the images are gathered through a proprietary gamified platform that hosts competitions on animal photography. This approach ensures a stream of fresh, high-quality content. On-demand custom datasets can be delivered within 72 hours for specific species, habitats, or behavioral contexts.

    3. Global Diversity: photographers from over 100 countries contribute to the dataset, capturing animals in a variety of ecosystems: forests, savannas, oceans, mountains, farms, and homes. It includes pets, wildlife, livestock, birds, marine life, and insects across a wide spectrum of climates and regions.

    4. High-Quality Imagery: the dataset spans from standard to ultra-high-resolution images, suitable for close-up analysis of physical features or environmental interactions. A balance of candid, professional, and artistic photography styles ensures training value for real-world and creative AI tasks.

    5. Popularity Scores: each image carries a popularity score from its performance in GuruShots competitions. This can be used to train AI models on visual appeal, species preference, or public interest trends.

    6. AI-Ready Design: optimized for use in training models in species classification, object detection, wildlife monitoring, animal facial recognition, and habitat analysis. It integrates seamlessly with major ML frameworks and annotation tools.

    7. Licensing & Compliance: all data complies with global data and wildlife imagery licensing regulations. Licenses are clear and flexible for commercial, nonprofit, and academic use.

    Use Cases: 1. Training AI for wildlife identification and biodiversity monitoring. 2. Powering pet recognition, breed classification, and animal health AI tools. 3. Supporting AR/VR education tools and natural history simulations. 4. Enhancing environmental conservation and ecological research models.

    This dataset offers a rich, high-quality resource for training AI and ML systems in zoology, conservation, agriculture, and consumer tech. Custom dataset requests are welcomed. Contact us to learn more!

  18. Activities of Daily Living Object Dataset

    • figshare.com
    bin
    Updated Nov 28, 2024
    Md Tanzil Shahria; Mohammad H Rahman (2024). Activities of Daily Living Object Dataset [Dataset]. http://doi.org/10.6084/m9.figshare.27263424.v3
    Explore at:
    bin (available download formats)
    Dataset updated
    Nov 28, 2024
    Dataset provided by
    figshare
    Authors
    Md Tanzil Shahria; Mohammad H Rahman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Activities of Daily Living Object Dataset

    Overview

    The ADL (Activities of Daily Living) Object Dataset is a curated collection of images and annotations specifically focusing on objects commonly interacted with during daily living activities. This dataset is designed to facilitate research and development in assistive robotics in home environments.

    Data Sources and Licensing

    The dataset comprises images and annotations sourced from four publicly available datasets:

    COCO Dataset
    License: Creative Commons Attribution 4.0 International (CC BY 4.0)
    License Link: https://creativecommons.org/licenses/by/4.0/
    Citation: Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., & Zitnick, C. L. (2014). Microsoft COCO: Common Objects in Context. European Conference on Computer Vision (ECCV), 740–755.

    Open Images Dataset
    License: Creative Commons Attribution 4.0 International (CC BY 4.0)
    License Link: https://creativecommons.org/licenses/by/4.0/
    Citation: Kuznetsova, A., Rom, H., Alldrin, N., Uijlings, J., Krasin, I., Pont-Tuset, J., Kamali, S., Popov, S., Malloci, M., Duerig, T., & Ferrari, V. (2020). The Open Images Dataset V6: Unified Image Classification, Object Detection, and Visual Relationship Detection at Scale. International Journal of Computer Vision, 128(7), 1956–1981.

    LVIS Dataset
    License: Creative Commons Attribution 4.0 International (CC BY 4.0)
    License Link: https://creativecommons.org/licenses/by/4.0/
    Citation: Gupta, A., Dollar, P., & Girshick, R. (2019). LVIS: A Dataset for Large Vocabulary Instance Segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5356–5364.

    Roboflow Universe
    License: Creative Commons Attribution 4.0 International (CC BY 4.0)
    License Link: https://creativecommons.org/licenses/by/4.0/
    Citation: The following repositories from Roboflow Universe were used in compiling this dataset: Work, U. AI Based Automatic Stationery Billing System Data Dataset. 2022.
Accessible at: https://universe.roboflow.com/university-work/ai-based-automatic-stationery-billing-system-data (accessed on 11 October 2024).Destruction, P.M. Pencilcase Dataset. 2023. Accessible at: https://universe.roboflow.com/project-mental-destruction/pencilcase-se7nb (accessed on 11 October 2024).Destruction, P.M. Final Project Dataset. 2023. Accessible at: https://universe.roboflow.com/project-mental-destruction/final-project-wsuvj (accessed on 11 October 2024).Personal. CSST106 Dataset. 2024. Accessible at: https://universe.roboflow.com/personal-pgkq6/csst106 (accessed on 11 October 2024).New-Workspace-kubz3. Pencilcase Dataset. 2022. Accessible at: https://universe.roboflow.com/new-workspace-kubz3/pencilcase-s9ag9 (accessed on 11 October 2024).Finespiralnotebook. Spiral Notebook Dataset. 2024. Accessible at: https://universe.roboflow.com/finespiralnotebook/spiral_notebook (accessed on 11 October 2024).Dairymilk. Classmate Dataset. 2024. Accessible at: https://universe.roboflow.com/dairymilk/classmate (accessed on 11 October 2024).Dziubatyi, M. Domace Zadanie Notebook Dataset. 2023. Accessible at: https://universe.roboflow.com/maksym-dziubatyi/domace-zadanie-notebook (accessed on 11 October 2024).One. Stationery Dataset. 2024. Accessible at: https://universe.roboflow.com/one-vrmjr/stationery-mxtt2 (accessed on 11 October 2024).jk001226. Liplip Dataset. 2024. Accessible at: https://universe.roboflow.com/jk001226/liplip (accessed on 11 October 2024).jk001226. Lip Dataset. 2024. Accessible at: https://universe.roboflow.com/jk001226/lip-uteep (accessed on 11 October 2024).Upwork5. Socks3 Dataset. 2022. Accessible at: https://universe.roboflow.com/upwork5/socks3 (accessed on 11 October 2024).Book. DeskTableLamps Material Dataset. 2024. Accessible at: https://universe.roboflow.com/book-mxasl/desktablelamps-material-rjbgd (accessed on 11 October 2024).Gary. Medicine Jar Dataset. 2024. 
Accessible at: https://universe.roboflow.com/gary-ofgwc/medicine-jar (accessed on 11 October 2024).TEST. Kolmarbnh Dataset. 2023. Accessible at: https://universe.roboflow.com/test-wj4qi/kolmarbnh (accessed on 11 October 2024).Tube. Tube Dataset. 2024. Accessible at: https://universe.roboflow.com/tube-nv2vt/tube-9ah9t (accessed on 11 October 2024). Staj. Canned Goods Dataset. 2024. Accessible at: https://universe.roboflow.com/staj-2ipmz/canned-goods-isxbi (accessed on 11 October 2024).Hussam, M. Wallet Dataset. 2024. Accessible at: https://universe.roboflow.com/mohamed-hussam-cq81o/wallet-sn9n2 (accessed on 14 October 2024).Training, K. Perfume Dataset. 2022. Accessible at: https://universe.roboflow.com/kdigital-training/perfume (accessed on 14 October 2024).Keyboards. Shoe-Walking Dataset. 2024. Accessible at: https://universe.roboflow.com/keyboards-tjtri/shoe-walking (accessed on 14 October 2024).MOMO. Toilet Paper Dataset. 2024. Accessible at: https://universe.roboflow.com/momo-nutwk/toilet-paper-wehrw (accessed on 14 October 2024).Project-zlrja. Toilet Paper Detection Dataset. 2024. Accessible at: https://universe.roboflow.com/project-zlrja/toilet-paper-detection (accessed on 14 October 2024).Govorkov, Y. Highlighter Detection Dataset. 2023. Accessible at: https://universe.roboflow.com/yuriy-govorkov-j9qrv/highlighter_detection (accessed on 14 October 2024).Stock. Plum Dataset. 2024. Accessible at: https://universe.roboflow.com/stock-qxdzf/plum-kdznw (accessed on 14 October 2024).Ibnu. Avocado Dataset. 2024. Accessible at: https://universe.roboflow.com/ibnu-h3cda/avocado-g9fsl (accessed on 14 October 2024).Molina, N. Detection Avocado Dataset. 2024. Accessible at: https://universe.roboflow.com/norberto-molina-zakki/detection-avocado (accessed on 14 October 2024).in Lab, V.F. Peach Dataset. 2023. Accessible at: https://universe.roboflow.com/vietnam-fruit-in-lab/peach-ejdry (accessed on 14 October 2024).Group, K. Tomato Detection 4 Dataset. 2023. 
Accessible at: https://universe.roboflow.com/kkabs-group-dkcni/tomato-detection-4 (accessed on 14 October 2024).Detection, M. Tomato Checker Dataset. 2024. Accessible at: https://universe.roboflow.com/money-detection-xez0r/tomato-checker (accessed on 14 October 2024).University, A.S. Smart Cam V1 Dataset. 2023. Accessible at: https://universe.roboflow.com/ain-shams-university-byja6/smart_cam_v1 (accessed on 14 October 2024).EMAD, S. Keysdetection Dataset. 2023. Accessible at: https://universe.roboflow.com/shehab-emad-n2q9i/keysdetection (accessed on 14 October 2024).Roads. Chips Dataset. 2024. Accessible at: https://universe.roboflow.com/roads-rvmaq/chips-a0us5 (accessed on 14 October 2024).workspace bgkzo, N. Object Dataset. 2021. Accessible at: https://universe.roboflow.com/new-workspace-bgkzo/object-eidim (accessed on 14 October 2024).Watch, W. Wrist Watch Dataset. 2024. Accessible at: https://universe.roboflow.com/wrist-watch/wrist-watch-0l25c (accessed on 14 October 2024).WYZUP. Milk Dataset. 2024. Accessible at: https://universe.roboflow.com/wyzup/milk-onbxt (accessed on 14 October 2024).AussieStuff. Food Dataset. 2024. Accessible at: https://universe.roboflow.com/aussiestuff/food-al9wr (accessed on 14 October 2024).Almukhametov, A. Pencils Color Dataset. 2023. Accessible at: https://universe.roboflow.com/almas-almukhametov-hs5jk/pencils-color (accessed on 14 October 2024).All images and annotations obtained from these datasets are released under the Creative Commons Attribution 4.0 International License (CC BY 4.0). This license permits sharing and adaptation of the material in any medium or format, for any purpose, even commercially, provided that appropriate credit is given, a link to the license is provided, and any changes made are indicated.Redistribution Permission:As all images and annotations are under the CC BY 4.0 license, we are legally permitted to redistribute this data within our dataset. 
    We have complied with the license terms by: providing appropriate attribution to the original creators; including links to the CC BY 4.0 license; and indicating any changes made to the original material.

    Dataset Structure

    The dataset includes:

    • Images: High-quality images featuring ADL objects suitable for robotic manipulation.
    • Annotations: Bounding boxes and class labels formatted in the YOLO (You Only Look Once) Darknet format.

    Classes

    The dataset focuses on objects commonly involved in daily living activities. A full list of object classes is provided in the classes.txt file.

    Format

    • Images: JPEG format.
    • Annotations: Text files corresponding to each image, containing bounding box coordinates and class labels in YOLO Darknet format.

    How to Use the Dataset

    Download the dataset, then unpack it: unzip ADL_Object_Dataset.zip

    How to Cite This Dataset

    If you use this dataset in your research, please cite our paper:

    @article{shahria2024activities, title={Activities of Daily Living Object Dataset: Advancing Assistive Robotic Manipulation with a Tailored Dataset}, author={Shahria, Md Tanzil and Rahman, Mohammad H.}, journal={Sensors}, volume={24}, number={23}, pages={7566}, year={2024}, publisher={MDPI}}

    License

    This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0). License Link: https://creativecommons.org/licenses/by/4.0/ By using this dataset, you agree to provide appropriate credit, indicate if changes were made, and not impose additional restrictions beyond those of the original licenses.

    Acknowledgments

    We gratefully acknowledge the use of data from the following open-source datasets, which were instrumental in the creation of our specialized ADL object dataset:

    • COCO Dataset: We thank the creators and contributors of the COCO dataset for making their images and annotations publicly available under the CC BY 4.0 license.
    • Open Images Dataset: We express our gratitude to the Open Images team for providing a comprehensive dataset of annotated images under the CC BY 4.0 license.
    • LVIS Dataset: We appreciate the efforts of the LVIS dataset creators for releasing their extensive dataset under the CC BY 4.0 license.
    • Roboflow Universe:

  19. Fruit annotations in artworks

    • data.niaid.nih.gov
    Updated Mar 28, 2023
    Geurts, Pierre (2023). Fruit annotations in artworks [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7777982
    Explore at:
    Dataset updated
    Mar 28, 2023
    Dataset provided by
    Kestemont, Mike
    Daelemans, Walter
    Geurts, Pierre
    Lasaracina, Karine
    van Hulle, Dirk
    Verbruggen, Christophe
    Kerommes
    Angenon, Els
    Paelinck
    Van de Cappelle, Lies
    Van Keer, Ellen
    Chambers, Sally
    Description

    This dataset contains annotations of 33 types of fruits in a collection of 4685 images of artworks. It was annotated in the context of the INSIGHT ("Intelligent Neural Systems as Integrated Heritage Tools") project funded by the Belgian Federal Research Agency BELSPO under the BRAIN-be program.

    Project website: https://hosting.uantwerpen.be/insight/

    The annotations have been done using the Cytomine web platform: https://cytomine.be

    The annotations are available for reuse without restrictions (CC-BY). Images are available upon request.

  20. Object Detection Data| Annotated Imagery Data| Damaged Car Images | AI...

    • datarade.ai
    Updated Sep 14, 2024
    Pixta AI (2024). Object Detection Data| Annotated Imagery Data| Damaged Car Images | AI Training Data | 2,000 Licensed & 8,000 HD Images [Dataset]. https://datarade.ai/data-products/2-annotated-imagery-data-global-damaged-car-images-2-000-pixta-ai
    Explore at:
    .json, .xml, .csv, .txt (available download formats)
    Dataset updated
    Sep 14, 2024
    Dataset authored and provided by
    Pixta AI
    Area covered
    Thailand, Norway, Australia, New Zealand, Netherlands, Austria, Germany, Philippines, Malaysia, Canada
    Description
    1. Overview This dataset is a collection of 2,000 licensed and 8,000 HD damaged car images that are ready to use for optimizing the accuracy of computer vision models. All of the content is sourced from PIXTA's stock library of 100M+ Asian-featured images and videos. PIXTA is the largest platform of visual materials in the Asia-Pacific region, offering fully-managed services, high-quality content and data, and powerful tools for businesses and organisations to enable their creative and machine learning projects.

    2. Use cases for damaged car images (object detection data) The 2,000 licensed and 8,000 HD images of damaged cars can be used for various AI & computer vision models: damage inspection, insurance value evaluation, residual value forecast, and more. Each dataset is supported by both AI and human review processes to ensure labelling consistency and accuracy. Contact us for more custom datasets.

    3. Annotation Annotation is available for this dataset on demand, including:

    • Bounding box

    • Polygon

    • Segmentation ...

    4. About PIXTA PIXTASTOCK is the largest Asian-featured stock platform providing data, contents, tools and services since 2005. PIXTA has 15 years of experience integrating advanced AI technology in managing, curating, and processing over 100M visual materials and serving global leading brands for their creative and data demands. Visit us at https://www.pixta.ai/ or contact via our email contact@pixta.ai.
