100+ datasets found
  1. Airbus Ships Dataset with Oriented Bounding Boxes

    • kaggle.com
    zip
    Updated Apr 2, 2023
    Cite
    Jeff Faudi (2023). Airbus Ships Dataset with Oriented Bounding Boxes [Dataset]. https://www.kaggle.com/datasets/jeffaudi/airbus-ships-annotations-oriented-bounding-boxes
    Explore at:
zip (20217059 bytes)
    Dataset updated
    Apr 2, 2023
    Authors
    Jeff Faudi
    Description

    Dataset

    This dataset was created by Jeff Faudi


  2. Boots Oriented Bounding Box Dataset

    • universe.roboflow.com
    zip
    Updated Aug 9, 2024
    Cite
    roboteam (2024). Boots Oriented Bounding Box Dataset [Dataset]. https://universe.roboflow.com/roboteam/boots-oriented-bounding-box/model/5
    Explore at:
zip
    Dataset updated
    Aug 9, 2024
    Dataset authored and provided by
    roboteam
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Box Bounding Boxes
    Description

    Boots Oriented Bounding Box

    ## Overview
    
    Boots Oriented Bounding Box is a dataset for object detection tasks - it contains Box annotations for 509 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  3. Deep Learning Swimming Pool Oriented Bounding Boxes 2025 - Datasets - data.wa.gov.au

    • catalogue.data.wa.gov.au
    Updated Sep 16, 2025
    Cite
    (2025). Deep Learning Swimming Pool Oriented Bounding Boxes 2025 - Datasets - data.wa.gov.au [Dataset]. https://catalogue.data.wa.gov.au/dataset/deep-learning-swimming-pool-oriented-bounding-boxes-2025
    Explore at:
    Dataset updated
    Sep 16, 2025
    Area covered
    Western Australia
    Description

Vector dataset extracted using a deep learning oriented object detection model. The model is trained to identify and classify above-ground and below-ground swimming pools.

  4. Oriented Bounding Boxes Dataset

    • universe.roboflow.com
    zip
    Updated Apr 16, 2024
    Cite
    Robot (2024). Oriented Bounding Boxes Dataset [Dataset]. https://universe.roboflow.com/robot-crknl/oriented-bounding-boxes-dataset/dataset/2
    Explore at:
zip
    Dataset updated
    Apr 16, 2024
    Dataset authored and provided by
    Robot
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Robot O0Gq Bounding Boxes
    Description

    Oriented Bounding Boxes Dataset

    ## Overview
    
    Oriented Bounding Boxes Dataset is a dataset for object detection tasks - it contains Robot O0Gq annotations for 563 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  5. Aerial Vehicle OBB Dataset

    • kaggle.com
    zip
    Updated Aug 29, 2025
    Cite
    Mridankan Mandal (2025). Aerial Vehicle OBB Dataset [Dataset]. https://www.kaggle.com/datasets/redzapdos123/aerial-vehicle-obb-dataset
    Explore at:
zip (11517085012 bytes)
    Dataset updated
    Aug 29, 2025
    Authors
    Mridankan Mandal
    License

Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Aerial Vehicles OBB Dataset (YOLOv11-OBB Format):

A large-scale merged dataset for oriented vehicle detection in aerial imagery, preformatted for YOLOv11-OBB models.

    Overview:

This dataset combines three distinct aerial imagery collections (VSAI, DroneVehicles, and DIOR-R) into a unified resource for training and benchmarking oriented object detection models. It has been specifically preprocessed and formatted for use with Ultralytics' YOLOv11-OBB models.

    The primary goal is to provide a detailed dataset for tasks like aerial surveillance, traffic monitoring, and vehicle detection from a drone's perspective. All annotations have been converted to the YOLO OBB format, and the classes have been simplified for focused vehicle detection tasks.

    Key Features:

    • Merged & Simplified: Combines three popular aerial vehicle datasets.
    • Two Class System: Simplifies detection by categorizing all objects into small-vehicle and large-vehicle.
    • YOLOv11-OBB Ready: Preformatted with normalized OBB annotations and a data.yaml configuration file for immediate use in YOLO training pipelines.
    • Cleaned & Split: Empty annotations have been removed, and the data is organized into standard train, validation, and test sets.

    Data Description:

    Source Datasets:

    1. VSAI Dataset: Contains aerial imagery for traffic analysis by DroneVision.
    2. DroneVehicles Dataset: A collection of vehicle images from a drone's perspective, originally provided in YOLO OBB format.
3. DIOR-R Dataset: A large-scale benchmark for object detection in optical remote sensing images. Only the 'vehicle' class was extracted for this merged dataset.

    Preprocessing and Modifications:

    • Class Merging: All vehicle types from the source datasets were mapped to two parent classes: small-vehicle and large-vehicle. The vehicle class from the DIOR-R dataset was mapped to large-vehicle.
    • Data Cleaning: Image and label pairs with empty annotation files were removed to ensure dataset integrity.
    • Formatting: All annotations were converted to the YOLOv11-OBB format, with coordinates normalized between 0 and 1.

    Classes:

Class ID | Class Name    | Source Dataset(s)
---------|---------------|-----------------------------
0        | small-vehicle | VSAI, DroneVehicles
1        | large-vehicle | VSAI, DroneVehicles, DIOR-R

    Dataset Statistics:

    • Total Labeled Images: 29,125
      • Training Set: 18,274 images
      • Validation Set: 5,420 images
      • Test Set: 5,431 images

    Annotation Format:

    Each image has a corresponding .txt label file. Each line in the file represents one object in the YOLOv11-OBB format: class_id x1 y1 x2 y2 x3 y3 x4 y4

    • class_id: The class index (0 for small-vehicle, 1 for large-vehicle).
    • (x1, y1)...(x4, y4): The four corner points of the oriented bounding box, with all coordinates normalized to a range of [0, 1].
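For illustration, a minimal Python sketch that parses one of these label files and maps the normalized corners back to pixel coordinates (the file name and image size below are hypothetical):

    from pathlib import Path

    def load_obb_labels(label_path, img_w, img_h):
        # Parse a YOLO-OBB label file into (class_id, corners) tuples.
        # Each line is: class_id x1 y1 x2 y2 x3 y3 x4 y4, with all
        # coordinates normalized to [0, 1].
        objects = []
        for line in Path(label_path).read_text().splitlines():
            parts = line.split()
            if len(parts) != 9:
                continue  # skip malformed lines
            class_id = int(parts[0])
            coords = list(map(float, parts[1:]))
            corners = [(coords[i] * img_w, coords[i + 1] * img_h)
                       for i in range(0, 8, 2)]
            objects.append((class_id, corners))
        return objects

    # Hypothetical usage:
    # boxes = load_obb_labels("train/labels/example.txt", img_w=1024, img_h=1024)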

    File and Folder Structure:

    The dataset is organized into a standard YOLO directory structure for easy integration with training programs.

    RoadVehiclesYOLOOBBDataset/
    ├── train/
    │  ├── images/ #18,274 images
    │  └── labels/ #18,274 labels
    ├── val/
    │  ├── images/ #5,420 images
    │  └── labels/ #5,420 labels
    ├── test/
    │  ├── images/ #5,431 images
    │  └── labels/ #5,431 labels
    ├── data.yaml  #YOLO dataset configuration file.
    └── ReadMe.md  #Documentation
    

    Usage:

    To use this dataset with YOLOv11 or other compatible frameworks, simply point your training script to the included data.yaml file.

    Example data.yaml:

    #Dataset configuration.
    path: RoadVehiclesYOLOOBBDataset/
    train: train/images
    val: val/images
    test: test/images
    
    #Number of classes.
    nc: 2
    
    #Class names.
    names:
     0: small-vehicle
     1: large-vehicle
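
As a quick sanity check that the layout works end to end, a training run with the Ultralytics package could look like the sketch below (model choice, epoch count, and image size are illustrative, not prescribed by the dataset):

    # Requires: pip install ultralytics
    from ultralytics import YOLO

    model = YOLO("yolo11n-obb.pt")  # a small pretrained OBB model
    model.train(data="RoadVehiclesYOLOOBBDataset/data.yaml", epochs=100, imgsz=640)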
    

    License:

    This merged dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0), which is the most restrictive license among its sources.

    • You are free to:
      • Share and adapt the material for any non-commercial purpose.
    • Under the following terms:
      • Attribution: You must give appropriate credit to the original authors and the creator of this merged dataset.
      • NonCommercial: You may not use the material for commercial purposes.
      • ShareAlike: If you remix, transform, or build upon the material, you must distribute your contributions under the same license.

    Citation and Attribution:

    When using this dataset, please provide attribution to all original sources as follows:

- VSAI Dataset: by DroneVision, licensed under CC BY-NC-SA 4.0.
- DroneVehicles Dataset: by Yiming Sun, Bing Cao, Pengfei Zhu, and Qinghua Hu, modified by Mridankan Mandal, licensed under CC BY-NC-SA 4.0.
- DIOR-R dataset: by the DIOR...
    
  6. R

    License Plates Obb Dataset

    • universe.roboflow.com
    zip
    Updated Apr 13, 2024
    Cite
    Alexey Filipov (2024). License Plates Obb Dataset [Dataset]. https://universe.roboflow.com/alexey-filipov-zcpdf/license-plates-obb
    Explore at:
zip
    Dataset updated
    Apr 13, 2024
    Dataset authored and provided by
    Alexey Filipov
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Cars License Plate 3Vv6 Bounding Boxes
    Description

    License Plates OBB

    ## Overview
    
    License Plates OBB is a dataset for object detection tasks - it contains Cars License Plate 3Vv6 annotations for 434 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  7. Eagle Dataset (YOLOv11-OBB Format)

    • kaggle.com
    zip
    Updated Jul 29, 2025
    Cite
    Mridankan Mandal (2025). Eagle Dataset (YOLOv11-OBB Format) [Dataset]. https://www.kaggle.com/datasets/redzapdos123/eagle-dataset-yolov11-obb-format/code
    Explore at:
zip (3192683527 bytes)
    Dataset updated
    Jul 29, 2025
    Authors
    Mridankan Mandal
    License

Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Eagle Dataset - YOLOv11 OBB for Vehicle Detection

    High-resolution aerial imagery with 16,000+ oriented bounding boxes for vehicle detection, pre-formatted for Ultralytics YOLOv11.

    Context

    This dataset is a ready-to-use version of the original Eagle Dataset from the German Aerospace Center (DLR). The original dataset was created to benchmark object detection models on challenging aerial imagery, featuring vehicles at various orientations.

    This version has been converted to the YOLOv11-OBB (Oriented Bounding Box) format. The conversion makes the dataset directly compatible with modern deep learning frameworks like Ultralytics YOLO, allowing researchers and developers to train state-of-the-art object detectors with minimal setup.

    The dataset is ideal for tasks requiring precise localization of rotated objects, such as vehicle detection in parking lots, traffic monitoring, and urban planning from aerial viewpoints.

    Content

    The dataset is split into training, validation, and test sets, following a standard structure for computer vision tasks.

    Dataset Split & Counts:

    • Training Set: 159 images and labels
    • Validation Set: 53 images and labels
    • Test Set: 106 images and labels

    Directory Structure:

    EagleDatasetYOLO/
    ├── train/
    │  ├── images/   # 159 images
    │  └── labels/   # 159 .txt obb labels
    ├── val/
    │  ├── images/   # 53 images
    │  └── labels/   # 53 .txt obb labels
    ├── test/
    │  ├── images/   # 106 images
    │  └── labels/   # 106 .txt obb labels
    ├── data.yaml
    └── license.md
    

    Annotation Format (YOLOv11-OBB):

    Each .txt label file contains one object per line. The format for each object is: <class_id> <x_center> <y_center> <width> <height> <angle>

    • <class_id>: The class index (in this case, 0 for 'vehicle').
    • <x_center> <y_center>: The normalized center coordinates of the bounding box.
    • <width> <height>: The normalized width and height of the bounding box.
    • <angle>: The rotation angle of the box in radians, from -π/2 to π/2.
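
To recover the four corner points in pixel space from this representation, the rotation can be applied explicitly; a minimal sketch (function and variable names are illustrative):

    import math

    def obb_to_corners(xc, yc, w, h, angle, img_w, img_h):
        # Denormalize the center and size to pixels.
        xc, w = xc * img_w, w * img_w
        yc, h = yc * img_h, h * img_h
        cos_a, sin_a = math.cos(angle), math.sin(angle)
        # Corner offsets relative to the box center, before rotation.
        offsets = [(-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)]
        # Rotate each offset by the box angle and translate to the center.
        return [(xc + dx * cos_a - dy * sin_a,
                 yc + dx * sin_a + dy * cos_a) for dx, dy in offsets]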

    data.yaml Configuration:

    A data.yaml file is included for easy integration with the Ultralytics framework.

    path: ../EagleDatasetYOLO
    train: train/images
    val: val/images
    test: test/images
    
    nc: 1
    names: ['vehicle']
    

    Acknowledgements and License

    This dataset is a conversion of the original work by the German Aerospace Center (DLR). The conversion to YOLOv11-OBB format was performed by Mridankan Mandal.

    The dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0).

    If you use this dataset in your research, please cite the original creators and acknowledge the conversion work.

  8. MaxiDent-OBBox Maxillary Oriented Bounded Box

    • kaggle.com
    zip
    Updated Jun 9, 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    maxxxw (2025). MaxiDent-OBBox Maxillary Oriented Bounded Box [Dataset]. https://www.kaggle.com/datasets/trickykestral/maxident-bbox-maxillary-tooth-bounded-box-dataset
    Explore at:
zip (482283573 bytes)
    Dataset updated
    Jun 9, 2025
    Authors
    maxxxw
    License

Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

This is a ready-to-use dataset consisting of X-ray images of the human jaw, with corresponding annotations for individual teeth. Each tooth is labeled using oriented bounding box (OBB) coordinates, making the dataset well-suited for tasks that require precise object localization and orientation awareness. There are a total of 17 classes representing teeth in the upper jaw.

    The annotations are formatted specifically for compatibility with YOLO-OBB (Oriented Bounding Box) models, enabling seamless integration into training pipelines for dental detection and analysis tasks.

  9. Data from: Multi-scale spatial fusion lightweight model for optical remote sensing image-based small object detection

    • tandf.figshare.com
    png
    Updated Oct 10, 2025
    Cite
    Qiyi He; Ao Xu; Zhiwei Ye; Shirui Sheng; Wen Zhou; Xudong Lai (2025). Multi-scale spatial fusion lightweight model for optical remote sensing image-based small object detection [Dataset]. http://doi.org/10.6084/m9.figshare.30328707.v1
    Explore at:
png
    Dataset updated
    Oct 10, 2025
    Dataset provided by
    Taylor & Francis
    Authors
    Qiyi He; Ao Xu; Zhiwei Ye; Shirui Sheng; Wen Zhou; Xudong Lai
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Current remote sensing object detection frameworks often focus solely on the geometric relationship between true and predicted boxes, neglecting the intrinsic shapes of the boxes. In the field of remote sensing detection, there are numerous elongated bounding boxes. Variations in the shape and size of these boxes result in differences in their Intersection over Union (IoU) values, which is particularly noticeable when detecting small objects. Platforms with limited resources, such as satellites and unmanned drones, have strict requirements for detector storage space and computational complexity. This makes it challenging for existing methods to balance detection performance and computational demands. Therefore, this paper presents RS-YOLO, a lightweight framework that enhances You Only Look Once (YOLO) and is specifically designed for deployment on resource-limited platforms. RS-YOLO has developed a bounding box regression approach for remote sensing images, focusing on the shape and scale of the boundary boxes. Additionally, to improve the integration of multi-scale spatial features, RS-YOLO introduces a lightweight multi-scale hybrid attention module for cross-space fusion. The DOTA-v1.0 and HRSC2016 datasets were used to test our model, which was then compared to multiple state-of-the-art oriented object detection models. The results indicate that the detector introduced in this article achieves top performance while being lightweight and suitable for deployment on resource-limited platforms.

  10. Data from: DeepScoresV2

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 7, 2023
    Cite
    Tuggener, Lukas; Satyawan, Yvan Putra; Pacha, Alexander; Schmidhuber, Jürgen; Stadelmann, Thilo (2023). DeepScoresV2 [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4012192
    Explore at:
    Dataset updated
    Jun 7, 2023
    Dataset provided by
    TU Wien
    ZHAW Datalab
    ZHAW Datalab & USi
    The Swiss AI Lab IDSIA (USI & SUPSI)
    Authors
    Tuggener, Lukas; Satyawan, Yvan Putra; Pacha, Alexander; Schmidhuber, Jürgen; Stadelmann, Thilo
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

The DeepScoresV2 Dataset for Music Object Detection contains digitally rendered images of written sheet music, together with the corresponding ground truth to fit various types of machine learning models. A total of 151 million different instances of music symbols, belonging to 135 different classes, are annotated. The full dataset contains 255,385 images. For most research purposes, the dense version, containing 1,714 of the most diverse and interesting images, should suffice.

The dataset contains ground truth in the form of:

    Non-oriented bounding boxes

    Oriented bounding boxes

    Semantic segmentation

    Instance segmentation

The accompanying paper, The DeepScoresV2 Dataset and Benchmark for Music Object Detection, published at ICPR 2020, can be found here:

    https://digitalcollection.zhaw.ch/handle/11475/20647

    A toolkit for convenient loading and inspection of the data can be found here:

    https://github.com/yvan674/obb_anns

    Code to train baseline models can be found here:

    https://github.com/tuggeluk/mmdetection/tree/DSV2_Baseline_FasterRCNN

    https://github.com/tuggeluk/DeepWatershedDetection/tree/dwd_old

  11. ExposureEngine

    • huggingface.co
    Updated Aug 26, 2025
    Cite
    SimulaMet HOST Department (2025). ExposureEngine [Dataset]. https://huggingface.co/datasets/SimulaMet-HOST/ExposureEngine
    Explore at:
    Dataset updated
    Aug 26, 2025
    Dataset authored and provided by
    SimulaMet HOST Department
    License

MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    ExposureEngine: Oriented Logo Detection & Sponsor Visibility Analytics (Dataset)

    Paper | Project Page

Rotation-aware OBB annotations for sponsor logos in professional soccer broadcasts, built for sports analytics, YOLO OBB training, and sponsorship measurement.

    Tags: Oriented Bounding Boxes, Sports Broadcasts, Sponsorship Analytics, YOLOv8/YOLOv11 OBB, Brand Visibility

    ExposureEngine provides high-quality oriented bounding box (OBB) polygon… See the full description on the dataset page: https://huggingface.co/datasets/SimulaMet-HOST/ExposureEngine.
    
  12. FAIR1M Satellite Imagery for Object Detection

    • kaggle.com
    zip
    Updated Dec 10, 2023
    Cite
    Olly Powell (2023). FAIR1M Satellite Imagery for Object Detection [Dataset]. https://www.kaggle.com/datasets/ollypowell/fair1m-satellite-imagery-for-object-detection/data
    Explore at:
zip (9190024314 bytes)
    Dataset updated
    Dec 10, 2023
    Authors
    Olly Powell
    License

Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0): https://creativecommons.org/licenses/by-nc-sa/3.0/
    License information was derived automatically

    Description

    EDA notebook here.

The paper for this dataset is found here; the dataset was used in the Gaofen Challenge hosted by the Aerospace Information Research Institute, Chinese Academy of Sciences.

    I have put this together because a few months ago I had a project that needed such a dataset for vehicle detection, and found there wasn't much out there with suitable resolution and quality. I ended up using the xView1 Dataset, which was pretty good, but noted at the time the FAIR1M had a lot of potential too.

The main points of difference of FAIR1M compared to many others in this space are:
- Some geographical diversity: Asia, Europe, North America, Cape Town, Sydney. Mostly urban.
- Oriented bounding boxes.
- Most of the imagery is high resolution (0.3 m or 0.6 m), which makes it just enough for small car detection.

For comparison, xView-1 is larger and more geographically diverse, but has axis-aligned ("flat") bounding boxes. If you want oriented bounding boxes, FAIR1M is worth a try.

I could only find 240,852 spatially unique labels; the rest seem to be duplicates due to overlapping imagery, though some of course would be in the hidden test set, which has not been made public. Anyway, that's still a lot of labels, so thanks to the organisers for making these available.

  13. DOTA Dataset

    • datasetninja.com
    • kaggle.com
    Updated Feb 25, 2021
    Cite
    Jian Ding; Nan Xue; Gui-Song Xia (2021). DOTA Dataset [Dataset]. https://datasetninja.com/dota
    Explore at:
    Dataset updated
    Feb 25, 2021
    Dataset provided by
    Dataset Ninja
    Authors
    Jian Ding; Nan Xue; Gui-Song Xia
    License

https://captain-whu.github.io/DOTA/dataset.html

    Description

    In the past decade, significant progress in object detection has been made in natural images, but authors of the DOTA v2.0: Dataset of Object deTection in Aerial images note that this progress hasn't extended to aerial images. The main reason for this discrepancy is the substantial variations in object scale and orientation caused by the bird's-eye view of aerial images. One major obstacle to the development of object detection in aerial images (ODAI) is the lack of large-scale benchmark datasets. The DOTA dataset contains 1,793,658 object instances spanning 18 different categories, all annotated with oriented bounding box annotations (OBB). These annotations were collected from a total of 11,268 aerial images. Using this extensive and meticulously annotated dataset, the authors establish baselines covering ten state-of-the-art algorithms, each with over 70 different configurations. These configurations are evaluated for both speed and accuracy performance.

  14. VSAI Dataset (YOLO11-OBB format)

    • kaggle.com
    zip
    Updated Aug 29, 2025
    Cite
    Mridankan Mandal (2025). VSAI Dataset (YOLO11-OBB format) [Dataset]. https://www.kaggle.com/datasets/redzapdos123/vsai-dataset-yolo11-obb-format/code
    Explore at:
zip (8332516716 bytes)
    Dataset updated
    Aug 29, 2025
    Authors
    Mridankan Mandal
    License

Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    VSAI Aerial Vehicle Detection Dataset (YOLO OBB Format):

A cleaned and reformatted version of the VSAI Dataset, specifically adapted for Oriented Bounding Box (OBB) vehicle detection using the YOLOv11 format.

    Overview:

This dataset is designed for aerial/drone-based vehicle detection tasks. It is a modified version of the original VSAI Dataset v1 by the DroneVision Team, adapted by Mridankan Mandal for ease of training object detection models such as YOLO11-OBB.

The dataset uses two classes: small-vehicle and large-vehicle. All annotations have been converted to the YOLOv11-OBB format, and the data is organized into training, validation, and testing sets.

    Key Features and Modifications:

    This dataset improves upon the original by incorporating several key modifications to make it more accessible and useful for modern computer vision tasks:

    • Format Conversion: The annotations have been converted to the YOLOv11-OBB format, which uses four corner points to define an oriented bounding box.
    • Data Cleaning: All image and annotation pairs where the label file was empty have been removed to ensure dataset quality.
    • Structured Splits: The dataset is pre-split into train (80%), validation (10%), and test (10%) sets, with the following image counts:
      • Train: 4,297 images
      • Validation: 537 images
      • Test: 538 images
      • Total: 5,372 images
    • Coordinate Normalization: All bounding box coordinates are normalized to a range of [0.0 - 1.0], making them ready for training without preprocessing.

    Directory Structure

    The dataset is organized in a standard YOLO format for easy integration with popular training frameworks.

    YOLOOBBVSAIDataset/
    ├── train/
    │  ├── images/   #Contains 4,297 image files.
    │  └── labels/   #Contains 4,297 .txt label files.
    ├── val/
    │  ├── images/   #Contains 537 image files.
    │  └── labels/   #Contains 537 .txt label files.
    ├── test/
    │  ├── images/   #Contains 538 image files.
    │  └── labels/   #Contains 538 .txt label files.
    ├── data.yaml    #Dataset configuration file.
    ├── license.md   #Full license details.
    └── ReadMe.md    #Dataset README file.
    

    Annotation Format:

    Each .txt label file contains one or more lines, with each line representing a single object in the YOLOv11-OBB format:

    class_id x1 y1 x2 y2 x3 y3 x4 y4

    • class_id: An integer representing the object class (0 for small-vehicle, 1 for large-vehicle).
    • (x1, y1)...(x4, y4): The four corner points of the oriented bounding box, with coordinates normalized between 0 and 1.
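
For a quick visual sanity check of these annotations, the normalized corners can be scaled back to pixels and drawn with OpenCV (file paths below are hypothetical):

    import cv2
    import numpy as np

    img = cv2.imread("train/images/example.jpg")
    h, w = img.shape[:2]

    with open("train/labels/example.txt") as f:
        for line in f:
            vals = line.split()
            corners = np.array(vals[1:], dtype=np.float32).reshape(4, 2)
            corners *= (w, h)  # x scales with width, y with height
            pts = corners.astype(np.int32).reshape(-1, 1, 2)
            cv2.polylines(img, [pts], isClosed=True, color=(0, 255, 0), thickness=2)

    cv2.imwrite("example_obb_vis.jpg", img)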

    data.yaml:

    To begin training a YOLO model with this dataset, you can use the provided data.yaml file. Simply update the path to the location of the dataset on your local machine.

    #The path to the root dataset directory.
    path: /path/to/YOLOOBBVSAIDataset/
    train: train/images
    val: val/images
    test: test/images
    
    #Number of classes.
    nc: 2
    
#The class names.
    names:
     0: small-vehicle
     1: large-vehicle
    

    License and Attribution:

    This dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.

    • You are free to: Use, modify, and redistribute this dataset for non-commercial research and educational purposes.
    • You must: Provide proper attribution to both the original creators and the modifier, and release any derivative works under the same license.

    Proper Attribution:

    When using this dataset, attribute as follows:

    • Original VSAI Dataset v1 by DroneVision Team, licensed under CC BY-NC-SA 4.0.
    • Modified VSAI Dataset (YOLOv11-OBB Format) by Mridankan Mandal, licensed under CC BY-NC-SA 4.0.

    Citation:

    If you use this dataset in your research, use the following BibTeX entry to cite it:

    @dataset{vsai_yolo_obb_2025,
     title={VSAI Dataset (YOLOv11-OBB Format)},
     author={Mridankan Mandal},
     year={2025},
     note={Modified from original VSAI v1 dataset by DroneVision},
     license={CC BY-NC-SA 4.0}
    }
    
  15. Crosswalks

    • geodot.mass.gov
    • geo-massdot.opendata.arcgis.com
    Updated Sep 17, 2024
    Cite
    Massachusetts geoDOT (2024). Crosswalks [Dataset]. https://geodot.mass.gov/datasets/crosswalks
    Explore at:
    Dataset updated
    Sep 17, 2024
    Dataset authored and provided by
    Massachusetts geoDOT
    Description

The crosswalk polygons can be utilized for safety, mobility, and other analyses. This model builds upon YOLOv8 and incorporates oriented bounding boxes (OBB), enhancing detection accuracy by precisely marking crosswalks regardless of their orientations. Various strategies are adopted to enhance the baseline YOLOv8 model, including Convolutional Block Attention, a dual-branch Spatial Pyramid Pooling-Fast module, and cosine annealing. We have also developed a dataset comprising over 23,000 annotated instances of crosswalks to train and validate the proposed model and its variations. The best-performing model achieves a precision of 96.5% and a recall of 93.3% on data collected in Massachusetts, demonstrating its accuracy and efficiency.

    From the MassGIS website, we downloaded images for 2019 and 2021. The image dataset for each year comprises over 10,000 high-resolution images (tiles). Each image has 100 million pixels (10,000 x 10,000 pixels), and each pixel represents about 6 inches (15 centimeters) on the ground. This resolution provides sufficient detail for identifying small-sized features such as pedestrian crosswalks.

  16. DroneVehicle Dataset (YOLOv11-OBB)

    • kaggle.com
    zip
    Updated Aug 29, 2025
    Cite
    Mridankan Mandal (2025). DroneVehicle Dataset (YOLOv11-OBB) [Dataset]. https://www.kaggle.com/datasets/redzapdos123/dronevehicle-dataset-yolov11-obb/code
    Explore at:
zip (964857267 bytes)
    Dataset updated
    Aug 29, 2025
    Authors
    Mridankan Mandal
    License

Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    DroneVehicle Dataset (YOLOv11-OBB Format):

A class-merged and cleaned version of the DroneVehicle dataset, specifically formatted for Oriented Bounding Box (OBB) detection using the YOLOv11 framework.

    Overview:

This dataset is designed for aerial and drone-based vehicle detection tasks that require identifying vehicles with precise rotation and orientation. It is a modified and restructured version of the original DroneVehicle dataset, which was introduced by Yiming Sun, Bing Cao, Pengfei Zhu, and Qinghua Hu. This version was adapted by Mridankan Mandal to facilitate easy training with YOLOv11-OBB models.

    The original classes have been merged into two simplified categories: small-vehicle (car, van) and large-vehicle (bus, truck, freight car).

    Dataset Content and Structure:

    The dataset contains a total of 17,325 images. It is pre-split into training, validation, and test sets to ensure standardized evaluation.

    • Total Images: 17,325
    • Splits and Counts:
      • Train: 12,118 images
      • Validation: 2,599 images
      • Test: 2,608 images
    • Classes:
      • 0: small-vehicle
      • 1: large-vehicle

    The data is organized in the following directory structure:

    DroneVehicleYOLOv11OBB/
    ├── train/
    │  ├── images/   #12,118 image files.
    │  └── labels/   #12,118 YOLOv11-OBB .txt files.
    ├── val/
    │  ├── images/   #2,599 image files.
    │  └── labels/   #2,599 label files.
    ├── test/
    │  ├── images/   #2,608 image files.
    │  └── labels/   #2,608 label files.
    ├── data.yaml    #Dataset configuration file.
    ├── license.md   #Full license terms.
    └── ReadMe.md    #This file.
    

    Annotation Format:

    Each .txt label file contains one or more lines, with each line representing a single object in the YOLOv11-OBB format:

    class_id x1 y1 x2 y2 x3 y3 x4 y4

    • class_id: An integer representing the object class (0 for small-vehicle, 1 for large-vehicle).
    • (x1, y1)...(x4, y4): The four corner points of the oriented bounding box, with coordinates normalized between 0 and 1.

    The data.yaml:

    To use this dataset with a YOLO framework, you can use the provided data.yaml file. It specifies the dataset paths and class information.

path: DroneVehicleYOLOv11OBB/  #Path to the root dataset directory.
train: train/images            #Training images subdirectory.
val: val/images                #Validation images subdirectory.
test: test/images              #Test images subdirectory.

#Number of classes.
nc: 2

#The class names.
names:
 0: small-vehicle
 1: large-vehicle
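
To double-check the class balance after download, a short sketch that counts instances per class across all label files (directory name taken from the structure above):

    from collections import Counter
    from pathlib import Path

    counts = Counter()
    for split in ("train", "val", "test"):
        for label_file in Path("DroneVehicleYOLOv11OBB", split, "labels").glob("*.txt"):
            for line in label_file.read_text().splitlines():
                if line.strip():
                    counts[int(line.split()[0])] += 1

    print(counts)  # class 0 = small-vehicle, class 1 = large-vehicle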
    

    License and Attribution:

    This dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.

    Permissions:

    • You are free to use this dataset for non-commercial research, academic, and educational purposes.
    • You may modify and redistribute the dataset, provided you give appropriate credit and apply the same license to derivative works.

    Requirements:

• You must provide attribution to the primary authors (Yiming Sun, Bing Cao, Pengfei Zhu, Qinghua Hu).
    • Any derivative works must also be shared under the CC BY-NC-SA 4.0 license or a more restrictive one.

    Restrictions:

    • Commercial use is not permitted without explicit permission from the original authors.
    • You may not remove or alter the license terms or attribution notices.

    Proper Attribution:

    When using this dataset, you must include the following attributions:

• Drone-based RGB-Infrared Cross-Modality Vehicle Detection via Uncertainty-Aware Learning by Yiming Sun, Bing Cao, Pengfei Zhu, and Qinghua Hu, licensed under CC BY-NC-SA 4.0.
    • Modified DroneVehicle Dataset (YOLOv11-OBB Format) by Mridankan Mandal, licensed under CC BY-NC-SA 4.0.

    Acknowledgements:

Special thanks to Yiming Sun, Bing Cao, Pengfei Zhu, and Qinghua Hu for creating and sharing the original DroneVehicle dataset, which formed the foundation for this work.

  17. Building Facades Dataset

    • universe.roboflow.com
    zip
    Updated Oct 17, 2024
    Cite
    gsvvilavelhaplus (2024). Building Facades Dataset [Dataset]. https://universe.roboflow.com/gsvvilavelhaplus/building-facades/model/5
    Explore at:
zip
    Dataset updated
    Oct 17, 2024
    Dataset authored and provided by
    gsvvilavelhaplus
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Building Facades Bounding Boxes
    Description

This dataset is a project created to aid in land use classification of properties based on their facades as seen from the street. It is a dataset oriented toward bounding-box object detection, but the objective is to try semi-supervised techniques to use as few annotated image examples as possible.

  18. HRSC2016 - Dataset - LDM

    • service.tib.eu
    Updated Dec 2, 2024
    Cite
    (2024). HRSC2016 - Dataset - LDM [Dataset]. https://service.tib.eu/ldmservice/dataset/hrsc2016
    Explore at:
    Dataset updated
    Dec 2, 2024
    Description

The HRSC2016 dataset is a high-resolution ship recognition dataset annotated with oriented bounding boxes. It contains 1,061 images whose sizes range from 300 × 300 to 1500 × 900 pixels.

  19. ActiveHuman Part 1

    • data.niaid.nih.gov
    Updated Nov 14, 2023
    Cite
    Charalampos Georgiadis (2023). ActiveHuman Part 1 [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8359765
    Explore at:
    Dataset updated
    Nov 14, 2023
    Dataset provided by
    Aristotle University of Thessaloniki (AUTh)
    Authors
    Charalampos Georgiadis
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

This is Part 1/2 of the ActiveHuman dataset! Part 2 can be found here.

    Dataset Description

    ActiveHuman was generated using Unity's Perception package. It consists of 175,428 RGB images and their semantic segmentation counterparts taken in different environments, lighting conditions, camera distances and angles. In total, the dataset contains images for 8 environments, 33 humans, 4 lighting conditions, 7 camera distances (1 m to 4 m) and 36 camera angles (0° to 360° at 10-degree intervals). The dataset does not include images at every single combination of available camera distances and angles, since for some values the camera would collide with another object or go outside the confines of an environment. As a result, some combinations of camera distances and angles do not exist in the dataset.

    Alongside each image, 2D Bounding Box, 3D Bounding Box and Keypoint ground truth annotations are also generated via the use of Labelers and are stored as a JSON-based dataset. These Labelers are scripts that are responsible for capturing ground truth annotations for each captured image or frame. Keypoint annotations follow the COCO format defined by the COCO keypoint annotation template offered in the Perception package.

Folder configuration

    The dataset consists of 3 folders:

    JSON Data: Contains all the generated JSON files.
    RGB Images: Contains the generated RGB images.
    Semantic Segmentation Images: Contains the generated semantic segmentation images.

    Essential Terminology

Annotation: Recorded data describing a single capture.
    Capture: One completed rendering process of a Unity sensor which stored the rendered result to data files (e.g. PNG, JPG, etc.).
    Ego: Object or person on which a collection of sensors is attached (e.g., if a drone has a camera attached to it, the drone would be the ego and the camera would be the sensor).
    Ego coordinate system: Coordinates with respect to the ego.
    Global coordinate system: Coordinates with respect to the global origin in Unity.
    Sensor: Device that captures the dataset (in this instance the sensor is a camera).
    Sensor coordinate system: Coordinates with respect to the sensor.
    Sequence: Time-ordered series of captures. This is very useful for video capture where the time-order relationship of two captures is vital.
    UUID: Universally Unique Identifier. A unique hexadecimal identifier that can represent an individual instance of a capture, ego, sensor, annotation, labeled object or keypoint, or keypoint template.

Dataset Data

    The dataset includes 4 types of JSON annotation files:

    annotation_definitions.json: Contains annotation definitions for all of the active Labelers of the simulation stored in an array. Each entry consists of a collection of key-value pairs which describe a particular type of annotation and contain information about that specific annotation describing how its data should be mapped back to labels or objects in the scene. Each entry contains the following key-value pairs:

id: Integer identifier of the annotation's definition.
    name: Annotation name (e.g., keypoints, bounding box, bounding box 3D, semantic segmentation).
    description: Description of the annotation's specifications.
    format: Format of the file containing the annotation specifications (e.g., json, PNG).
    spec: Format-specific specifications for the annotation values generated by each Labeler.

    Most Labelers generate different annotation specifications in the spec key-value pair:

    BoundingBox2DLabeler/BoundingBox3DLabeler:

label_id: Integer identifier of a label.
    label_name: String identifier of a label.

    KeypointLabeler:

    template_id: Keypoint template UUID.
    template_name: Name of the keypoint template.
    key_points: Array containing all the joints defined by the keypoint template. This array includes the key-value pairs:

    label: Joint label.
    index: Joint index.
    color: RGBA values of the keypoint.
    color_code: Hex color code of the keypoint.
    skeleton: Array containing all the skeleton connections defined by the keypoint template. Each skeleton connection defines a connection between two different joints. This array includes the key-value pairs:

    label1: Label of the first joint.
    label2: Label of the second joint.
    joint1: Index of the first joint.
    joint2: Index of the second joint.
    color: RGBA values of the connection.
    color_code: Hex color code of the connection.

    SemanticSegmentationLabeler:

    label_name: String identifier of a label.
    pixel_value: RGBA values of the label.
    color_code: Hex color code of the label.

captures_xyz.json: Each of these files contains an array of ground truth annotations generated by each active Labeler for each capture separately, as well as extra metadata that describe the state of each active sensor that is present in the scene. Each array entry contains the following key-value pairs:

    id: UUID of the capture.
    sequence_id: UUID of the sequence.
    step: Index of the capture within a sequence.
    timestamp: Timestamp (in ms) since the beginning of a sequence.
    sensor: Properties of the sensor. This entry contains a collection with the following key-value pairs:

    sensor_id: Sensor UUID.
    ego_id: Ego UUID.
    modality: Modality of the sensor (e.g., camera, radar).
    translation: 3D vector that describes the sensor's position (in meters) with respect to the global coordinate system.
    rotation: Quaternion variable that describes the sensor's orientation with respect to the ego coordinate system.
    camera_intrinsic: Matrix containing (if it exists) the camera's intrinsic calibration.
    projection: Projection type used by the camera (e.g., orthographic, perspective).

    ego: Attributes of the ego. This entry contains a collection with the following key-value pairs:

    ego_id: Ego UUID.
    translation: 3D vector that describes the ego's position (in meters) with respect to the global coordinate system.
    rotation: Quaternion variable containing the ego's orientation.
    velocity: 3D vector containing the ego's velocity (in meters per second).
    acceleration: 3D vector containing the ego's acceleration (in meters per second squared).

    format: Format of the file captured by the sensor (e.g., PNG, JPG).
    annotations: Key-value pair collections, one for each active Labeler. These key-value pairs are as follows:

    id: Annotation UUID.
    annotation_definition: Integer identifier of the annotation's definition.
    filename: Name of the file generated by the Labeler. This entry is only present for Labelers that generate an image.
    values: List of key-value pairs containing annotation data for the current Labeler.

    Each Labeler generates different annotation specifications in the values key-value pair:

    BoundingBox2DLabeler:

label_id: Integer identifier of a label.
    label_name: String identifier of a label.
    instance_id: UUID of one instance of an object. Each object with the same label that is visible on the same capture has different instance_id values.
    x: Position of the 2D bounding box on the X axis.
    y: Position of the 2D bounding box on the Y axis.
    width: Width of the 2D bounding box.
    height: Height of the 2D bounding box.

    BoundingBox3DLabeler:

    label_id: Integer identifier of a label.
    label_name: String identifier of a label.
    instance_id: UUID of one instance of an object. Each object with the same label that is visible on the same capture has different instance_id values.
    translation: 3D vector containing the location of the center of the 3D bounding box with respect to the sensor coordinate system (in meters).
    size: 3D vector containing the size of the 3D bounding box (in meters).
    rotation: Quaternion variable containing the orientation of the 3D bounding box.
    velocity: 3D vector containing the velocity of the 3D bounding box (in meters per second).
    acceleration: 3D vector containing the acceleration of the 3D bounding box (in meters per second squared).

    KeypointLabeler:

    label_id: Integer identifier of a label.
    instance_id: UUID of one instance of a joint. Keypoints with the same joint label that are visible on the same capture have different instance_id values.
    template_id: UUID of the keypoint template.
    pose: Pose label for that particular capture.
    keypoints: Array containing the properties of each keypoint. Each keypoint that exists in the keypoint template file is one element of the array. Each entry's contents are as follows:

    index: Index of the keypoint in the keypoint template file.
    x: Pixel coordinates of the keypoint on the X axis.
    y: Pixel coordinates of the keypoint on the Y axis.
    state: State of the keypoint.

    The SemanticSegmentationLabeler does not contain a values list.
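
For orientation, a minimal Python sketch that pulls the 2D bounding box values out of one captures file (the file name is hypothetical, and the top-level captures array is assumed from the Perception format):

    import json

    with open("JSON Data/captures_000.json") as f:
        data = json.load(f)

    for capture in data.get("captures", []):
        for annotation in capture.get("annotations", []):
            for value in annotation.get("values", []):
                # BoundingBox2DLabeler entries carry x/y/width/height.
                if all(k in value for k in ("x", "y", "width", "height")):
                    print(value.get("label_name"), value["x"], value["y"],
                          value["width"], value["height"])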

egos.json: Contains collections of key-value pairs for each ego. These include:

    id: UUID of the ego.
    description: Description of the ego.

    sensors.json: Contains collections of key-value pairs for all sensors of the simulation. These include:

    id: UUID of the sensor.
    ego_id: UUID of the ego on which the sensor is attached.
    modality: Modality of the sensor (e.g., camera, radar, sonar).
    description: Description of the sensor (e.g., camera, radar).

Image names

    The RGB and semantic segmentation images share the same image naming convention. However, the semantic segmentation images also contain the string Semantic_ at the beginning of their filenames. Each RGB image is named "e_h_l_d_r.jpg", where:

    e denotes the id of the environment.
    h denotes the id of the person.
    l denotes the id of the lighting condition.
    d denotes the camera distance at which the image was captured.
    r denotes the camera angle at which the image was captured.

  20. Data from: SemanticSugarBeets: A Multi-Task Framework and Dataset for...

    • zenodo.org
    zip
    Updated May 13, 2025
    Cite
Gerardus Croonen; Andreas Trondl; Julia Simon; Daniel Steininger (2025). SemanticSugarBeets: A Multi-Task Framework and Dataset for Inspecting Harvest and Storage Characteristics of Sugar Beets [Dataset]. http://doi.org/10.5281/zenodo.15393471
    Explore at:
zip
    Dataset updated
    May 13, 2025
    Dataset provided by
Zenodo: http://zenodo.org/
    Authors
Gerardus Croonen; Andreas Trondl; Julia Simon; Daniel Steininger
    License

Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    SemanticSugarBeets is a comprehensive dataset and framework designed for analyzing post-harvest and post-storage sugar beets using monocular RGB images. It supports three key tasks: instance segmentation to identify and delineate individual sugar beets, semantic segmentation to classify specific regions of each beet (e.g., damage, soil adhesion, vegetation, and rot) and oriented object detection to estimate the size and mass of beets using reference objects. The dataset includes 952 annotated images with 2,920 sugar-beet instances, captured both before and after storage. Accompanying the dataset is a demo application and processing code, available on GitHub. For more details, refer to the paper presented at the Agriculture-Vision Workshop at CVPR 2025.

    Annotations and Learning Tasks

    The dataset supports three primary learning tasks, each designed to address specific aspects of sugar-beet analysis:

    1. Instance Segmentation
      Detect and delineate entire sugar-beet instances in an image. This task provides coarse-grained annotations for identifying individual beets, which is useful for counting and localization.

    2. Semantic Segmentation
      Perform fine-grained segmentation of each beet instance to classify its regions into specific categories relevant to quality assessment, such as:
      • Beet: healthy, undamaged beet surfaces
      • Cut: areas where the beet has been topped or trimmed
      • Leaf: residual vegetation attached to the beet
      • Soil: soil adhering to the beet's surface
      • Damage: visible damage on the beet
      • Rot: areas affected by rot

    3. Oriented Object Detection
      Detect and estimate the position and orientation of reference objects (folding-ruler elements and plastic signs) within the image. These objects can be used for scale estimation to calculate the absolute size and mass of sugar beets.
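
As an illustration of that last point, a rough scale estimate can be derived from a detected ruler OBB; in this sketch the real-world segment length is a hypothetical assumption, not a documented property of the dataset:

    import math

    def mm_per_pixel(ruler_corners, ruler_length_mm=100.0):
        # ruler_corners: four (x, y) pixel points of the detected OBB,
        # in order around the rectangle. ruler_length_mm is assumed.
        (x1, y1), (x2, y2), (x3, y3), _ = ruler_corners
        side_a = math.hypot(x2 - x1, y2 - y1)
        side_b = math.hypot(x3 - x2, y3 - y2)
        # The ruler's length runs along the longer side of the box.
        return ruler_length_mm / max(side_a, side_b)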

    Data Structure and Formats

    The dataset is organized into the following directories:

    • images: contains all RGB images in .jpg format with a resolution of 2120x1192 pixels, which correspond to the annotations in the instances and markers directories

    • instances: annotations and split files used in instance-segmentation experiments:
      • anno: instance contours for a single sugar-beet class in YOLO11 format
      • train/val/test.txt: lists of image IDs for training, validation and testing

    • markers: annotations and split files used in oriented-object-detection experiments:
      • anno: oriented-bounding-box annotations for two classes of markers in YOLO11 format:
        • 0: Ruler (folding-ruler element)
        • 1: Sign (numbered plastic sign)
      • train/val/test.txt: lists of image IDs for training, validation and testing

    • segmentation: annotations, image patches and split files used in semantic-segmentation experiments:
      • anno: single-channel segmentation masks for each individual beet, where pixel values correspond to the following classes:
        • 0: Background
        • 1: Beet
        • 2: Cut
        • 3: Leaf
        • 4: Soil
        • 5: Damage
        • 6: Rot
      • patches: image patches of individual sugar-beet instances cropped from the original images for convenience
      • train/val/test.txt: lists of beet IDs for training, validation, and testing
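
To inspect the class distribution in one of these masks, a small sketch (the mask path and .png extension are assumptions):

    import numpy as np
    from PIL import Image

    mask = np.array(Image.open("segmentation/anno/ssb-00001a-001.png"))

    CLASSES = ["Background", "Beet", "Cut", "Leaf", "Soil", "Damage", "Rot"]
    values, counts = np.unique(mask, return_counts=True)
    for v, c in zip(values, counts):
        print(CLASSES[v], c, "pixels")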

    File Naming Convention

    File names of images and annotations follow this format:

ssb-<group><side>[-<instance>]

    • <group>: a 5-digit number (e.g., 00001) identifying the group of recorded sugar beets
    • <side>: either a or b, indicating the same group of beets captured before (a) or after flipping (b)
    • <instance>: a 3-digit number (e.g., 001) enumerating individual sugar beets within an image (used only for semantic segmentation)

    Example

    • ssb-00001a: group ID 00001, side a
    • ssb-00001a-001: group ID 00001, side a, beet instance 001
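
A small Python sketch for splitting these IDs apart (the regex is written against the two examples above):

    import re

    # Matches e.g. "ssb-00001a" and "ssb-00001a-001".
    PATTERN = re.compile(r"^ssb-(\d{5})([ab])(?:-(\d{3}))?$")

    def parse_ssb_id(stem):
        match = PATTERN.match(stem)
        if match is None:
            raise ValueError(f"not an SSB id: {stem}")
        group, side, instance = match.groups()  # instance is None for whole images
        return group, side, instance

    print(parse_ssb_id("ssb-00001a"))      # ('00001', 'a', None)
    print(parse_ssb_id("ssb-00001a-001"))  # ('00001', 'a', '001')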

    Citing

    If you use the SemanticSugarBeets dataset or source code in your research, please cite the following paper to acknowledge the authors' contributions:

    Croonen, G., Trondl, A., Simon, J., Steininger, D., 2025. SemanticSugarBeets: A Multi-Task Framework and Dataset for Inspecting Harvest and Storage Characteristics of Sugar Beets. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.
