28 datasets found
  1. Custom Yolov7 On Kaggle On Custom Dataset

    • universe.roboflow.com
    zip
    Updated Jan 29, 2023
    Cite
    Owais Ahmad (2023). Custom Yolov7 On Kaggle On Custom Dataset [Dataset]. https://universe.roboflow.com/owais-ahmad/custom-yolov7-on-kaggle-on-custom-dataset-rakiq/dataset/2
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 29, 2023
    Dataset authored and provided by
    Owais Ahmad
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Person Car Bounding Boxes
    Description

    Custom Training with YOLOv7 🔥

    Some Important links

    Contact Information

    Objective

    To showcase custom object detection on the given dataset by training and running inference with the newly launched YOLOv7.

    Data Acquisition

    The goal of this task is to train a model that can localize and classify each instance of Person and Car as accurately as possible.

    from IPython.display import Markdown, display
    
    display(Markdown(filename="../input/Car-Person-v2-Roboflow/README.roboflow.txt"))  # render the README file rather than the path string
    

    Custom Training with YOLOv7 🔥

    In this notebook, I have processed the images with Roboflow because the COCO-formatted dataset had images of different dimensions and had not been split into train/validation/test sets. To train a custom YOLOv7 model we need to recognize the objects in the dataset. To do so I have taken the following steps:

    • Export the dataset to YOLOv7
    • Train YOLOv7 to recognize the objects in our dataset
    • Evaluate our YOLOv7 model's performance
    • Run test inference to view performance of YOLOv7 model at work

    📦 YOLOv7

    https://raw.githubusercontent.com/Owaiskhan9654/Yolo-V7-Custom-Dataset-Train-on-Kaggle/main/car-person-2.PNG

    Image Credit - jinfagang

    Step 1: Install Requirements

    !git clone https://github.com/WongKinYiu/yolov7 # Downloading YOLOv7 repository and installing requirements
    %cd yolov7
    !pip install -qr requirements.txt
    !pip install -q roboflow
    

    Downloading the YOLOv7 starting checkpoint

    !wget "https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt"
    
    import os
    import glob
    import wandb
    import torch
    from roboflow import Roboflow
    from kaggle_secrets import UserSecretsClient
    from IPython.display import Image, clear_output, display # to display images
    
    
    
    print(f"Setup complete. Using torch {torch._version_} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})")
    

    https://camo.githubusercontent.com/dd842f7b0be57140e68b2ab9cb007992acd131c48284eaf6b1aca758bfea358b/68747470733a2f2f692e696d6775722e636f6d2f52557469567a482e706e67

    I will be integrating W&B for visualizations and logging artifacts and comparisons of different models!

    YOLOv7-Car-Person-Custom

    try:
      user_secrets = UserSecretsClient()
      wandb_api_key = user_secrets.get_secret("wandb_api")
      wandb.login(key=wandb_api_key)
      anonymous = None
    except:
      wandb.login(anonymous='must')
      print('To use your W&B account, go to Add-ons -> Secrets and provide your '
            'W&B access token, using the label name WANDB. '
            'Get your W&B access token from here: https://wandb.ai/authorize')
      
      
      
    wandb.init(project="YOLOvR", name="7. YOLOv7-Car-Person-Custom-Run-7")
    

    Step 2: Assemble Our Dataset

    https://uploads-ssl.webflow.com/5f6bc60e665f54545a1e52a5/615627e5824c9c6195abfda9_computer-vision-cycle.png

    In order to train our custom model, we need to assemble a dataset of representative images with bounding box annotations around the objects that we want to detect. And we need our dataset to be in YOLOv7 format.

    In Roboflow, we can choose between two paths:

    Version v2 (Aug 12, 2022) looks like this.

    https://raw.githubusercontent.com/Owaiskhan9654/Yolo-V7-Custom-Dataset-Train-on-Kaggle/main/Roboflow.PNG

    user_secrets = UserSecretsClient()
    roboflow_api_key = user_secrets.get_secret("roboflow_api")
    
    rf = Roboflow(api_key=roboflow_api_key)
    project = rf.workspace("owais-ahmad").project("custom-yolov7-on-kaggle-on-custom-dataset-rakiq")
    dataset = project.version(2).download("yolov7")
    

    Step 3: Training a custom YOLOv7 model from pretrained weights

    Here, I am able to pass a number of arguments: - img: define input image size - batch: determine
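
    For reference, a minimal sketch of the kind of training command this step builds up to, assuming the flag names of WongKinYiu/yolov7's train.py (the exact values are illustrative, not the author's):

    !python train.py --img-size 640 --batch-size 16 --epochs 30 \
      --data {dataset.location}/data.yaml --weights yolov7.pt --device 0 \
      --name yolov7-car-person-custom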

  2. Cricket Ball Dataset for YOLO

    • kaggle.com
    Updated Apr 25, 2024
    Cite
    kushagra3204 (2024). Cricket Ball Dataset for YOLO [Dataset]. https://www.kaggle.com/datasets/kushagra3204/cricket-ball-dataset-for-yolo
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 25, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    kushagra3204
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Cricket Ball Detection for YOLOv8: Train Like a Pro! 🏏

    Sharpen your Cricket AI: Unleash the power of YOLOv8 for precise cricket ball detection in images and videos with this comprehensive dataset.

    Fuel Your Custom Training: Build a robust cricket ball detection model tailored to your specific needs. This dataset, featuring 1778 meticulously annotated images in YOLOv8 format, serves as the perfect launchpad.

    Dive into Diversity:

    In-Action Balls: Train your model to identify cricket balls in motion, capturing deliveries, fielding plays, and various gameplay scenarios.

    Lighting Variations: Adapt to diverse lighting conditions (day, night, indoor) with a range of images showcasing balls under different illumination.

    Background Complexity: Prepare your model for real-world environments. The dataset includes images featuring stadiums, practice nets, and various background clutter.

    Ball States: Train effectively with images of new and used cricket balls, encompassing varying degrees of wear and tear.

    Unlock Potential Applications:

    Real-time Cricket Analysis: Power applications for in-depth player analysis, ball trajectory tracking, and automated umpiring systems.

    Enhanced Broadcasting Experiences: Integrate seamless ball tracking, on-screen overlays, and real-time highlights into cricket broadcasts.

    Automated Summarization: Streamline cricket video processing for automated highlight reels, focusing on key ball-related moments.

    Who Should Use This Dataset:

    • Computer vision researchers and developers seeking to leverage YOLOv8 for object detection in sports applications.
    • Cricket enthusiasts and data scientists passionate about building AI-powered cricket analytics tools.
    • Anyone venturing into custom object detection models for cricket analysis or sports technology projects.
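
    As a quick start, a minimal fine-tuning sketch with the Ultralytics API (the model choice, epoch count, and the data.yaml path inside the unpacked download are assumptions, not part of the dataset card):

    from ultralytics import YOLO

    # Fine-tune a small YOLOv8 model on the cricket-ball annotations
    model = YOLO("yolov8n.pt")
    model.train(data="cricket-ball-dataset-for-yolo/data.yaml",  # assumed path
                epochs=50, imgsz=640)

    # Run inference on a new clip at a 0.25 confidence threshold
    results = model.predict("delivery.mp4", conf=0.25)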
  3. Accident Detection Model Dataset

    • universe.roboflow.com
    zip
    Updated Apr 8, 2024
    Cite
    Accident detection model (2024). Accident Detection Model Dataset [Dataset]. https://universe.roboflow.com/accident-detection-model/accident-detection-model/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 8, 2024
    Dataset authored and provided by
    Accident detection model
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Accident Bounding Boxes
    Description

    Accident-Detection-Model

    The Accident Detection Model is built with YOLOv8, Google Colab, Python, Roboflow, deep learning, OpenCV, machine learning, and artificial intelligence. It can detect an accident from a live camera feed, image, or video. The model is trained on a dataset of 3,200+ images; these images were annotated on Roboflow.

    Problem Statement

    • Road accidents are a major problem in India, with thousands of people losing their lives and many more suffering serious injuries every year.
    • According to the Ministry of Road Transport and Highways, India witnessed around 4.5 lakh road accidents in 2019, which resulted in the deaths of more than 1.5 lakh people.
    • The age range that is most severely hit by road accidents is 18 to 45 years old, which accounts for almost 67 percent of all accidental deaths.

    Accidents survey

    https://user-images.githubusercontent.com/78155393/233774342-287492bb-26c1-4acf-bc2c-9462e97a03ca.png

    Literature Survey

    • Sreyan Ghosh, Mar 2019: the goal is to develop a system using a deep-learning convolutional neural network trained to identify video frames as accident or non-accident.
    • Deeksha Gour, Sep 2019: uses computer vision technology, neural networks, deep learning, and various approaches and algorithms to detect objects.

    Research Gap

    • Lack of real-world data: we trained the model on more than 3,200 images.
    • Large interpretability time and space needed: we use Google Colab to reduce the time and space required.
    • Outdated versions in previous works: we are using the latest version, YOLOv8.

    Proposed methodology

    • We are using YOLOv8 to train on our custom dataset of 3,200+ images, collected from different platforms.
    • After training for 25 iterations, the model is ready to detect an accident with a significant probability.

    Model Set-up

    Preparing Custom dataset

    • We have collected 1,200+ images from different sources like YouTube, Google Images, Kaggle.com, etc.
    • Then we annotated all of them individually on a tool called Roboflow.
    • During annotation we marked the images with no accident as NULL, and we drew a box on the site of the accident on the images having an accident.
    • Then we divided the dataset into train, val, and test in the ratio 8:1:1.
    • At the final step we downloaded the dataset in YOLOv8 format.
      #### Using Google Colab
    • We are using Google Colaboratory to code this model because Colab provides a GPU, which is faster than local environments.
    • You can use Jupyter notebooks, which let you blend code, text, and visualisations in a single document, to write and run Python code in Google Colab.
    • Users can run individual code cells in Jupyter notebooks and quickly view the results, which is helpful for experimenting and debugging. They also enable visualisations that use well-known frameworks like Matplotlib, Seaborn, and Plotly.
    • In Google Colab, first of all we changed the runtime from TPU to GPU.
    • We cross-checked it by running the command '!nvidia-smi'.
      #### Coding
    • First of all, we installed YOLOv8 with the command '!pip install ultralytics==8.0.20'.
    • Further, we verified YOLOv8 with 'from ultralytics import YOLO' and 'from IPython.display import display, Image'.
    • Then we connected and mounted our Google Drive account with 'from google.colab import drive' and 'drive.mount('/content/drive')'.
    • Then we ran our main command to start the training process: '%cd /content/drive/MyDrive/Accident Detection model' and '!yolo task=detect mode=train model=yolov8s.pt data=data.yaml epochs=1 imgsz=640 plots=True'.
    • After the training we ran commands to validate and test our model: '!yolo task=detect mode=val model=runs/detect/train/weights/best.pt data=data.yaml' and '!yolo task=detect mode=predict model=runs/detect/train/weights/best.pt conf=0.25 source=data/test/images'.
    • Further, to get results from any video or image, we ran '!yolo task=detect mode=predict model=runs/detect/train/weights/best.pt source="/content/drive/MyDrive/Accident-Detection-model/data/testing1.jpg/mp4"'.
    • The results are stored in the runs/detect/predict folder. The full command sequence is collected in the sketch below.
      Hence our model is trained, validated and tested to be able to detect accidents on any video or image.
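
    Collected from the steps above into one runnable Colab sequence (paths exactly as given in this write-up; 'save=True' is added per the note under "Challenges I ran into"):

    !pip install ultralytics==8.0.20
    from ultralytics import YOLO
    from IPython.display import display, Image
    from google.colab import drive
    drive.mount('/content/drive')

    %cd /content/drive/MyDrive/Accident Detection model
    # Train, then validate, then predict on the test images
    !yolo task=detect mode=train model=yolov8s.pt data=data.yaml epochs=1 imgsz=640 plots=True
    !yolo task=detect mode=val model=runs/detect/train/weights/best.pt data=data.yaml
    !yolo task=detect mode=predict model=runs/detect/train/weights/best.pt conf=0.25 source=data/test/images save=True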

    Challenges I ran into

    I majorly ran into 3 problems while making this model

    • I had difficulty saving the results in a folder: as YOLOv8 is the latest version, it is still under development. I read some blogs and referred to Stack Overflow, and learned that in the new v8 we need to add an extra argument, 'save=True', which made the results save to a folder.
    • I was facing a problem on the CVAT website because I was not sure what
  4. Uno Cards Dataset

    • universe.roboflow.com
    zip
    Updated Jul 24, 2022
    + more versions
    Cite
    Joseph Nelson (2022). Uno Cards Dataset [Dataset]. https://universe.roboflow.com/joseph-nelson/uno-cards/model/3
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 24, 2022
    Dataset authored and provided by
    Joseph Nelson
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Variables measured
    Card Types Bounding Boxes
    Description

    Overview

    This dataset contains 8,992 images of Uno cards and 26,976 labeled examples on various textured backgrounds.

    This dataset was collected, processed, and released by Roboflow user Adam Crawshaw, released with a modified MIT license: https://firstdonoharm.dev/

    https://i.imgur.com/P8jIKjb.jpg

    Use Cases

    Adam used this dataset to create an auto-scoring Uno application.

    Getting Started

    Fork or download this dataset and follow our guide, How to Train a State-of-the-Art Object Detector with YOLOv4, for more.

    Annotation Guide

    See here for how to use the CVAT annotation tool.

    About Roboflow

    Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless. Developers reduce 50% of their boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.

    Roboflow Wordmark

  5. Wd Dataset

    • universe.roboflow.com
    zip
    Updated Mar 3, 2024
    Cite
    Train YOLOV8 on custom dataset (2024). Wd Dataset [Dataset]. https://universe.roboflow.com/train-yolov8-on-custom-dataset/wd-8xrcf/dataset/5
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 3, 2024
    Dataset authored and provided by
    Train YOLOV8 on custom dataset
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Weapons Bounding Boxes
    Description

    WD

    ## Overview
    
    WD is a dataset for object detection tasks - it contains Weapons annotations for 2,451 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  6. Traffic Signs Dataset in YOLO format

    • kaggle.com
    Updated Apr 3, 2020
    Cite
    Valentyn Sichkar (2020). Traffic Signs Dataset in YOLO format [Dataset]. https://www.kaggle.com/valentynsichkar/traffic-signs-dataset-in-yolo-format/discussion
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 3, 2020
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Valentyn Sichkar
    Description

    🎓 Related Course for Detection Tasks

    Training YOLO v3 for Objects Detection with Custom Data. Build your own detector by labelling, training and testing on image, video and in real time with camera. Join here: https://www.udemy.com/course/training-yolo-v3-for-objects-detection-with-custom-data/

    🎥 Trained Results

    Detections on video and image are shown below. Trained weights can be found in the course mentioned above.

    https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3400968%2Fbcdae0b57021d6ac3e86a9aa2e8c4b08%2Fts_detections.gif?generation=1581700736851192&alt=media (Detections of Traffic Signs on Video)

    https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3400968%2F20ca377934c5ed5f8c1e4272c838b01a%2Fts_detections.jpg?generation=1581701085754638&alt=media (Detections of Traffic Signs on Image)

    🚩 Concept Map of the Course

    https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3400968%2Fc51fc6aba2c0cd6d22512f486880868a%2FConcept_map_YOLO_3.png?generation=1584694252456677&alt=media (Concept Map of the Course YOLO v3)

    👉 Join the Course

    https://www.udemy.com/course/training-yolo-v3-for-objects-detection-with-custom-data/

    Related Dataset for Classification Tasks

    Explore one more dataset used for classification tasks here: https://www.kaggle.com/valentynsichkar/traffic-signs-preprocessed

    About this Dataset for Detection Tasks

    This is a ready-to-use Traffic Signs dataset in YOLO format for detection tasks. It can be used for training as well as for testing. The dataset consists of images in *.jpg format and *.txt files next to every image, with the same names as the image files. These *.txt files include bounding-box annotations of traffic signs in the YOLO format:
    [Class Number] [center in x] [center in y] [Width] [Height]

    For example, file 00001.txt includes three bounding boxes (each in a new line) that describe three Traffic Signs in 00001.jpg image:
    2 0.7378676470588236 0.5125 0.030147058823529412 0.055
    2 0.3044117647058823 0.65375 0.041176470588235294 0.0725
    3 0.736764705882353 0.453125 0.04264705882352941 0.06875
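
    For illustration, a small sketch (not part of the dataset) that converts one such line into pixel coordinates; the 1360x800 image size is an assumption based on the GTSDB source imagery:

    def yolo_to_pixels(line, img_w, img_h):
        # [class] [center x] [center y] [width] [height], all normalized to 0..1
        cls, cx, cy, w, h = line.split()
        cx, w = float(cx) * img_w, float(w) * img_w
        cy, h = float(cy) * img_h, float(h) * img_h
        return int(cls), (round(cx - w / 2), round(cy - h / 2),
                          round(cx + w / 2), round(cy + h / 2))

    # First box of 00001.txt -> (2, (983, 388, 1024, 432))
    print(yolo_to_pixels("2 0.7378676470588236 0.5125 0.030147058823529412 0.055", 1360, 800))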

    Traffic signs in this dataset are grouped into four categories:
    prohibitory
    danger
    mandatory
    other

    The prohibitory category consists of the following traffic signs: speed limit, no overtaking, no traffic both ways, no trucks.
    The danger category consists of the following traffic signs: priority at next intersection, danger, bend left, bend right, bend, uneven road, slippery road, road narrows, construction, traffic signal, pedestrian crossing, school crossing, cycles crossing, snow, animals.
    The mandatory category consists of the following traffic signs: go right, go left, go straight, go right or straight, go left or straight, keep right, keep left, roundabout.
    The other category consists of the following traffic signs: restriction ends, priority road, give way, stop, no entry.

    Dataset itself is in zip archive and organization is as following:
    +--ts/
    | 00000.txt
    | 00000.jpg
    | 00001.txt
    | 00001.jpg
    | ...

    To train in the Darknet framework, the dataset is accompanied by the following files:
    ts_data.data
    classes.names
    train.txt
    test.txt
    yolov3_ts_test.cfg
    yolov3_ts_train.cfg

    Pay attention! You need to specify your full paths before using these additional files. Find more details in the course mentioned above. You can also use the specially designed Python file getting-full-path.py to obtain full paths.
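
    For example, a sketch of regenerating ts_data.data with full paths for your machine (the layout follows the standard Darknet .data format; the root path is hypothetical):

    from pathlib import Path

    root = Path("/home/user/ts").resolve()  # hypothetical dataset location

    # Darknet .data files reference the image-list files, class names, and a
    # backup directory for weights; all of them must be full paths.
    (root / "ts_data.data").write_text(
        f"classes = 4\n"
        f"train = {root / 'train.txt'}\n"
        f"valid = {root / 'test.txt'}\n"
        f"names = {root / 'classes.names'}\n"
        f"backup = {root / 'backup'}\n"
    )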

    mAP Results

    Here are the testing results after training in the Darknet framework on this dataset. The total number of iterations was 8,000. 631 images were used for training and 112 images for validation during training.

    mAP (mean average precision), calculated on test images, is as follows:
    total number detections = 270
    prohibitory, ap (average precision) = 97.22% (TP = 78, FP = 2)
    danger, ap (average precision) = 100.00% (TP = 24, FP = 0)
    mandatory, ap (average precision) = 94.50% (TP = 23, FP = 2)
    other, ap (average precision) = 97.10% (TP = 39, FP = 1)

    mAP = 97.21 %

    Acknowledgements

    The initial data is the German Traffic Sign Detection Benchmark (GTSDB).

  7. ROAD OBSTACLES.zip Road Obstacles for Training DL Models

    • figshare.com
    zip
    Updated Nov 26, 2024
    Cite
    pison mutabarura; Nicasio Maguu Muchuka; Davies Rene Segera (2024). ROAD OBSTACLES.zip Road Obstacles for Training DL Models [Dataset]. http://doi.org/10.6084/m9.figshare.27909219.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 26, 2024
    Dataset provided by
    figshare
    Authors
    pison mutabarura; Nicasio Maguu Muchuka; Davies Rene Segera
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Augmented custom dataset with images sourced from online sources and camera captures. The dataset was used to train YOLO models for road obstacle detection, specifically on African roads.

  8. Supplementary Materials for Learning Manufacturing Computer Vision Systems...

    • zenodo.org
    bin, text/x-python +1
    Updated May 21, 2024
    Cite
    Adan Medina; Russel Bradley; Wenhao Xu; PEDRO PONCE; Brian Anthony; Arturo Molina (2024). Supplementary Materials for Learning Manufacturing Computer Vision Systems Using Tiny YOLOv4 [Dataset]. http://doi.org/10.5281/zenodo.11204799
    Explore at:
    Available download formats: text/x-python, zip, bin
    Dataset updated
    May 21, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Adan Medina; Russel Bradley; Wenhao Xu; PEDRO PONCE; Brian Anthony; Arturo Molina
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    About This Dataset

    This repository contains the supplementary materials presented in the publication "Learning Manufacturing Computer Vision Systems Using Tiny YOLO v4" by Medina, A., Bradley, R., Xu, W., Ponce, P., Anthony, B., and Molina, A., which can be found with the following DOI: 10.3389/frobt.2024.1331249

    There are three files in this repository:

    1. dataset.zip
    2. YOLOv4_object_detection.ipynb
    3. deploy.py

    dataset.zip

    This dataset is an example used for educational purposes. It is a small dataset adapted from the following Kaggle repository, authored by Ruthger Righart: https://www.kaggle.com/datasets/rrighart/jarlids/data. One of the activities proposed is to teach students how to find, download, and review a free dataset, so this is the example given.

    Another activity is to teach how to label images to create a custom dataset. The images (with extension .JPG) from the original repository are used. The labels (with extension .txt) were created by the authors of the Learning Manufacturing Computer Vision Systems Using Tiny YOLOv4 paper. The authors used the free tool labelImg, from GitHub repository (https://github.com/HumanSignal/labelImg), to label the images with object bounding boxes and corresponding labels in the YOLO format.

    The dataset contains 238 images and corresponding labels, with files named “p

    YOLOv4_object_detection.ipynb

    This notebook was created to give the user a step-by-step tutorial on how to train a YOLOv4 algorithm on a custom dataset using a free GPU on Google Colab. The prerequisites to use it are:

    • To have the dataset ready.
    • Have the training txt file with the path to all images used for training.
    • Have the test txt file with the path to all images used for testing.

    There are other requirements like cloning a GitHub repository and altering certain files on that repository; however, those steps are discussed within the notebook.

    At the end of the notebook, an example of how to test the trained model with images and/or videos is shown; however, since Google Colab doesn't have access to the user's physical computer, live-stream video is not part of the example.

    deploy.py

    Disclaimer: This code is not optimized, and its intended purpose is to teach students how to run YOLO on a Raspberry Pi using the OpenCV library.

    To use this code with different files or datasets, be sure to change the two parameters inside the net3 variable, which are the cfg file used while training the algorithm and the weights file. You should also change the class list to include your classes, keeping in mind that the class order must correspond to the order of the labeling process, and that class 0 is the first one on the list.

    Also, to change the title of the created image window, you should go to the line calling the imshow method and change the 'Tiny YOLOv4' string.

    This algorithm uses the first camera it finds and opens a display window with the detected objects surrounded by bounding boxes; on top of each box, the top predicted class is shown. To change the color of the bounding boxes or text, change the rectangle method where it says GREEN, as well as the next code line; change the number there to change the thickness of the line.

    This code has a hardcoded confidence threshold for both the YOLO objectness score and the class score; these can be found in the NMSBoxes method and the 'if confidence' line, respectively. The main value to change first is the 'if confidence' value.

    To close the display window, you need to press the 'q' key; simply closing the window will not work, as it will reopen.

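    A minimal sketch of a deploy.py-style loop as described above (the cfg/weights names, class list, and thresholds are placeholders; consult the shipped deploy.py for the authors' actual code):

    import cv2
    import numpy as np

    net3 = cv2.dnn.readNetFromDarknet("yolov4-tiny-custom.cfg",      # cfg used in training
                                      "yolov4-tiny-custom.weights")  # trained weights
    classes = ["class-0", "class-1"]  # placeholder; must match labeling order
    GREEN = (0, 255, 0)

    cap = cv2.VideoCapture(0)  # first camera found
    layer_names = net3.getUnconnectedOutLayersNames()

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
        net3.setInput(blob)

        boxes, confidences, class_ids = [], [], []
        for output in net3.forward(layer_names):
            for det in output:
                scores = det[5:]
                class_id = int(np.argmax(scores))
                confidence = float(scores[class_id])
                if confidence > 0.5:  # hardcoded class-score threshold
                    cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                    boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                    confidences.append(confidence)
                    class_ids.append(class_id)

        # Non-maximum suppression with hardcoded thresholds
        for i in np.array(cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)).flatten():
            x, y, bw, bh = boxes[i]
            cv2.rectangle(frame, (x, y), (x + bw, y + bh), GREEN, 2)
            cv2.putText(frame, classes[class_ids[i]], (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, GREEN, 2)

        cv2.imshow("Tiny YOLOv4", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # only 'q' closes the window
            break

    cap.release()
    cv2.destroyAllWindows()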

  9. Multi-Altitude Aerial Vehicles Dataset

    • data.niaid.nih.gov
    Updated Apr 5, 2023
    Cite
    Panayiotis Kolios (2023). Multi-Altitude Aerial Vehicles Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7736335
    Explore at:
    Dataset updated
    Apr 5, 2023
    Dataset provided by
    Panayiotis Kolios
    Rafael Makrigiorgis
    Christos Kyrkou
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Custom Multi-Altitude Aerial Vehicles Dataset:

    Created for publishing the results of the ICUAS 2023 paper "How High can you Detect? Improved accuracy and efficiency at varying altitudes for Aerial Vehicle Detection". The paper's abstract follows.

    Abstract—Object detection in aerial images is a challenging task mainly because of two factors, the objects of interest being really small, e.g. people or vehicles, making them indistinguishable from the background; and the features of objects being quite different at various altitudes. Especially, when utilizing Unmanned Aerial Vehicles (UAVs) to capture footage, the need for increased altitude to capture a larger field of view is quite high. In this paper, we investigate how to find the best solution for detecting vehicles in various altitudes, while utilizing a single CNN model. The conditions for choosing the best solution are the following; higher accuracy for most of the altitudes and real-time processing ( > 20 Frames per second (FPS) ) on an Nvidia Jetson Xavier NX embedded device. We collected footage of moving vehicles from altitudes of 50-500 meters with a 50-meter interval, including a roundabout and rooftop objects as noise for high altitude challenges. Then, a YoloV7 model was trained on each dataset of each altitude along with a dataset including all the images from all the altitudes. Finally, by conducting several training and evaluation experiments and image resizes we have chosen the best method of training objects on multiple altitudes to be the mixup dataset with all the altitudes, trained on a higher image size resolution, and then performing the detection using a smaller image resize to reduce the inference performance. The main results

    The creation of a custom dataset was necessary for altitude evaluation as no other datasets were available. To fulfill the requirements, the footage was captured using a small UAV hovering above a roundabout near the University of Cyprus campus, where several structures and buildings with solar panels and water tanks were visible at varying altitudes. The data were captured during a sunny day, ensuring bright and shadowless images. Images were extracted from the footage, and all data were annotated with a single class labeled as 'Car'. The dataset covered altitudes ranging from 50 to 500 meters with a 50-meter step, and all images were kept at their original high resolution of 3840x2160, presenting challenges for object detection. The data were split into 3 sets for training, validation, and testing, with the number of vehicles increasing as altitude increased, which was expected due to the larger field of view of the camera. Each folder consists of an aerial vehicle dataset captured at the corresponding altitude. For each altitude, the dataset annotations are generated in YOLO, COCO, and VOC formats. The dataset consists of the following images and detection objects:

    Data     Subset  Images  Cars
    50m      Train      130     269
    50m      Test        32      66
    50m      Valid       33      73
    100m     Train      246     937
    100m     Test        61     226
    100m     Valid       62     250
    150m     Train      244    1691
    150m     Test        61     453
    150m     Valid       61     426
    200m     Train      246    1753
    200m     Test        61     445
    200m     Valid       62     424
    250m     Train      245    3326
    250m     Test        61     821
    250m     Valid       61     823
    300m     Train      246    6250
    300m     Test        61    1553
    300m     Valid       62    1585
    350m     Train      246   10741
    350m     Test        61    2591
    350m     Valid       62    2687
    400m     Train      245   20072
    400m     Test        61    4974
    400m     Valid       61    4924
    450m     Train      246   31794
    450m     Test        61    7887
    450m     Valid       61    7880
    500m     Train      270   49782
    500m     Test        67   12426
    500m     Valid       68   12541
    mix_alt  Train     2364  126615
    mix_alt  Test       587   31442
    mix_alt  Valid      593   31613

    It is advised to further enhance the dataset so that random augmentations are probabilistically applied to each image prior to adding it to the batch for training. Specifically, there are a number of possible transformations such as geometric (rotations, translations, horizontal axis mirroring, cropping, and zooming), as well as image manipulations (illumination changes, color shifting, blurring, sharpening, and shadowing).
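
    A sketch of such a probabilistic per-image pipeline using Albumentations (the library choice, transforms, and probabilities are illustrative, not prescribed by the dataset):

    import albumentations as A
    import numpy as np

    transform = A.Compose(
        [
            A.HorizontalFlip(p=0.5),                              # mirroring
            A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.2,
                               rotate_limit=15, p=0.5),           # geometric
            A.RandomBrightnessContrast(p=0.3),                    # illumination
            A.HueSaturationValue(p=0.3),                          # color shifting
            A.Blur(blur_limit=3, p=0.2),                          # blurring
        ],
        bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
    )

    image = np.zeros((2160, 3840, 3), dtype=np.uint8)  # placeholder 4K frame
    bboxes = [(0.5, 0.5, 0.02, 0.02)]                  # one 'Car' box, YOLO format
    augmented = transform(image=image, bboxes=bboxes, class_labels=[0])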

  10. Aerial Maritime Drone Object Detection Dataset - tiled

    • public.roboflow.com
    zip
    Updated Sep 28, 2022
    + more versions
    Cite
    Jacob Solawetz (2022). Aerial Maritime Drone Object Detection Dataset - tiled [Dataset]. https://public.roboflow.com/object-detection/aerial-maritime/9
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 28, 2022
    Dataset authored and provided by
    Jacob Solawetz
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Variables measured
    Bounding Boxes of movable-objects
    Description

    Overview

    Drone Example

    This dataset contains 74 aerial maritime photographs taken via a Mavic Air 2 drone, with 1,151 bounding boxes of docks, boats, lifts, jetskis, and cars. It is a multi-class aerial maritime object detection dataset.

    The drone was flown at 400 ft. No drones were harmed in the making of this dataset.

    This dataset was collected and annotated by the Roboflow team, released with MIT license.

    https://i.imgur.com/9ZYLQSO.jpg

    Use Cases

    • Identify number of boats on the water over a lake via quadcopter.
    • Boat object detection dataset
    • Aerial Object Detection proof of concept
    • Identify if boat lifts have been taken out via a drone
    • Identify cars with a UAV drone
    • Find which lakes are inhabited and to which degree.
    • Identify if visitors are visiting the lake house via quad copter.
    • Proof of concept for UAV imagery project
    • Proof of concept for maritime project
    • Etc.

    This dataset is a great starter dataset for building an aerial object detection model with your drone.

    Getting Started

    Fork or download this dataset and follow our guide, How to Train a State-of-the-Art Object Detector with YOLOv4, for more. Stay tuned for tutorials on how to teach your UAV drone to see, and for comparable airplane imagery and airplane footage.

    Annotation Guide

    See here for how to use the CVAT annotation tool that was used to create this dataset.

    About Roboflow

    Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless. Developers reduce 50% of their boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.

    Roboflow Wordmark

  11. Pre-processed (in Detectron2 and YOLO format) planetary images and boulder...

    • data.niaid.nih.gov
    Updated Nov 30, 2024
    Cite
    Amaro, Brian (2024). Pre-processed (in Detectron2 and YOLO format) planetary images and boulder labels collected during the BOULDERING Marie Skłodowska-Curie Global fellowship [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_14250873
    Explore at:
    Dataset updated
    Nov 30, 2024
    Dataset provided by
    Prieur, Nils
    Lapotre, Mathieu
    Amaro, Brian
    Gonzalez, Emiliano
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This database contains 4976 planetary images of boulder fields located on Earth, Mars, and the Moon. The data was collected during the BOULDERING Marie Skłodowska-Curie Global fellowship between October 2021 and 2024. The data was already split into train, validation, and test datasets, but feel free to re-organize the labels at your convenience.

    For each image, all of the boulder outlines within the image were carefully mapped in QGIS. More information about the labelling procedure can be found in the following manuscript (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2023JE008013). This dataset differs from the previous dataset included along with the manuscript https://zenodo.org/records/8171052, as it contains more mapped images, especially of boulder populations around young impact structures on the Moon (cold spots). In addition, the boulder outlines were also pre-processed so that they can be ingested directly into YOLOv8.

    A description of what is what is given in the README.txt file (in addition to how to load the custom datasets in Detectron2 and YOLO). Most of the other files are self-explanatory. Please see the previous dataset or the manuscript for more information. If you want more information about specific lunar and martian planetary images, the IDs of the images are still available in the file names. Use this ID to find more information (e.g., for M121118602_00875_image.png, the ID M121118602 can be used on https://pilot.wr.usgs.gov/). I will also upload the raw data from which this pre-processed dataset was generated (see https://zenodo.org/records/14250970).

    Thanks to this database, you can easily train Detectron2 Mask R-CNN or YOLO instance segmentation models to automatically detect boulders.

    How to cite:

    Please refer to the "how to cite" section of the readme file of https://github.com/astroNils/YOLOv8-BeyondEarth.

    Structure:

    .
    └── boulder2024/
        ├── jupyter-notebooks/
        │   └── REGISTERING_BOULDER_DATASET_IN_DETECTRON2.ipynb
        ├── test/
        │   ├── images/
        │   │   ├── _image.png
        │   │   └── ...
        │   └── labels/
        │       ├── _image.txt
        │       └── ...
        ├── train/
        │   ├── images/
        │   │   ├── _image.png
        │   │   └── ...
        │   └── labels/
        │       ├── _image.txt
        │       └── ...
        ├── validation/
        │   ├── images/
        │   │   ├── _image.png
        │   │   └── ...
        │   └── labels/
        │       ├── _image.txt
        │       └── ...
        ├── detectron2_inst_seg_boulder_dataset.json
        ├── README.txt
        └── yolo_inst_seg_boulder_dataset.yaml

    detectron2_inst_seg_boulder_dataset.json

    is a json file containing the masks as expected by Detectron2 (see https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html for more information on the format). In order to use this custom dataset, you need to register the dataset before using it in training. There is an example of how to do that in the jupyter-notebooks folder. You need to have detectron2 and all of its dependencies installed.

    yolo_inst_seg_boulder_dataset.yaml

    can be used as it is; however, you need to update the paths in the .yaml file to the test, train, and validation folders. More information about the YOLO format can be found here (https://docs.ultralytics.com/datasets/segment/).
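
    Once the paths are updated, a minimal training sketch with the Ultralytics API (model choice and hyperparameters are assumptions):

    from ultralytics import YOLO

    # Train an instance-segmentation model on the provided dataset definition
    model = YOLO("yolov8n-seg.pt")
    model.train(data="yolo_inst_seg_boulder_dataset.yaml", epochs=100, imgsz=640)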

  12. YOLOv5 Drone Detection Using Multimodal Data Registered by the Vicon System

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Feb 16, 2025
    Cite
Wojciech Lindenheim-Locher; Adam Świtoński; Tomasz Krzeszowski; Grzegorz Paleta; Piotr Hasiec; Henryk Josiński; Marcin Paszkuta; Konrad Wojciechowski; Jakub Rosner (2025). YOLOv5 Drone Detection Using Multimodal Data Registered by the Vicon System [Dataset]. http://doi.org/10.3390/s23146396
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 16, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Wojciech Lindenheim-Locher; Adam Świtoński; Tomasz Krzeszowski; Grzegorz Paleta; Piotr Hasiec; Henryk Josiński; Marcin Paszkuta; Konrad Wojciechowski; Jakub Rosner
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Dataset is available under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.

    =======================
    Summary
    =======================
    This dataset contains data showing a group of one or more drones (a drone swarm) roaming around a closed 3D space.
    The purpose of this dataset is to train and validate the position tracking of drones in images. This can be done using the 3D coordinates of each drone, or using segmentation masks for machine-learning approaches.
    The first part of the dataset is split across ten sequences (AirSim 1-10) which were generated using Unreal Engine with Microsoft AirSim plugin. Each sequence is slightly modified to introduce more diversity in the dataset.
    The second part of the dataset (HML 1-3) is based on recordings performed in the Human Motion Laboratory located inside the Research and Development Center of the Polish-Japanese Academy of Information Technology.
    The simulation sequences used a drone model DJI Mavic 2 Pro by MattMaksymowicz, CC Attribution, https://sketchfab.com/3d-models/dji-mavic-2-pro-3e5b8566dbe24f4ba65473179650abd1.

    All documents and papers that use the dataset must acknowledge the use of the dataset by including a citation of the following paper:
    Lindenheim-Locher W, Świtoński A, Krzeszowski T, Paleta G, Hasiec P, Josiński H, Paszkuta M, Wojciechowski K, Rosner J. YOLOv5 Drone Detection Using Multimodal Data Registered by the Vicon System. Sensors. 2023; 23(14):6396. https://doi.org/10.3390/s23146396

    =======================
    Dataset sequences
    =======================
    AirSim 1:
    4 drones based on DJI Mavic 2 Pro.
    Length: 20 seconds
    Framerate: 25 FPS
    Number of cameras: 8

    AirSim 2:
    10 drones based on a custom model. All drones follow the same path; however, each one has a fixed offset applied in all axes.
    Length: 10 seconds
    Framerate: 25 FPS
    Number of cameras: 8

    AirSim 3:
    10 drones based on a custom model. All drones follow the same path; however, each one has a fixed offset applied in all axes.
    Length: 10 seconds
    Framerate: 25 FPS
    Number of cameras: 8

    AirSim 4:
    8 drones based on a custom model. All drones follow the same path; however, each one has a fixed offset applied in all axes.
    Length: 20 seconds
    Framerate: 25 FPS
    Number of cameras: 8

    AirSim 5:
    6 drones based on a custom model. All drones follow the same path; however, each one has a fixed offset applied in all axes.
    Length: 20 seconds
    Framerate: 25 FPS
    Number of cameras: 8

    AirSim 6:
    8 drones based on a custom model. All drones follow the same path; however, each one has a fixed offset applied in all axes.
    Length: 20 seconds
    Framerate: 25 FPS
    Number of cameras: 8

    AirSim 7:
    8 drones based on three types of custom model.
    Length: 20 seconds
    Framerate: 25 FPS
    Number of cameras: 8

    AirSim 8:
    8 drones based on three types of custom model.
    Length: 20 seconds
    Framerate: 25 FPS
    Number of cameras: 8

    AirSim 9:
    8 drones based on three types of custom model.
    Length: 20 seconds
    Framerate: 25 FPS
    Number of cameras: 8

    AirSim 10:
    2 drones based on a custom model.
    Length: 20 seconds
    Framerate: 25 FPS
    Number of cameras: 8

    HML 1:
    1 real drone (DJI Mavic 2). It is controlled in real-time.
    Length: 78 seconds
    Framerate: 25 FPS
    Number of cameras: 4

    HML 2:
    1 real drone. It is controlled in real-time.
    Length: 60 seconds
    Framerate: 25 FPS
    Number of cameras: 4

    HML 3:
    1 real drone. It is controlled in real-time.
    Length: 90 seconds
    Framerate: 25 FPS
    Number of cameras: 4

    =======================
    Additional information
    =======================

    AirSim 1-10 sets consist of two types of image data: RGB images and masks

    RGB images are compressed into AVI videos to save space; the AVI filename contains the camera name.
    The name of each mask directory is based on the drone whose masks it contains.
    The naming pattern of mask files is:
    - the camera that was used to capture
    - the frame ID
    For example, cam_1_230.jpeg was taken by camera 1 and is frame 230 of the sequence.
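
    A tiny sketch (not part of the dataset) for parsing that naming pattern:

    import re

    # cam_<camera>_<frame>.jpeg, e.g. cam_1_230.jpeg -> camera 1, frame 230
    m = re.match(r"cam_(\d+)_(\d+)\.jpeg", "cam_1_230.jpeg")
    camera_id, frame_id = int(m.group(1)), int(m.group(2))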

    HML 1-3 sets consist of two additional files:
    - c3d motion data file
    - xcp calibration file

    =======================
    Further information
    =======================
    For any questions, comments or other issues please contact Tomasz Krzeszowski

  13. potholes, cracks and openmanholes (Road Hazards)

    • kaggle.com
    Updated Feb 23, 2025
    Cite
    Sabid Rahman (2025). potholes, cracks and openmanholes (Road Hazards) [Dataset]. http://doi.org/10.34740/kaggle/dsv/10834063
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Feb 23, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Sabid Rahman
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F23345571%2F4471e4ade50676d782d4787f77aa08ad%2F1000_F_256252609_6WIHRGbpzSaVQwioubxwgXdSJTNONNcK.jpg?generation=1739209341333909&alt=media

    This dataset contains 2,700 images focused on detecting potholes, cracks, and open manholes on roads. It has been augmented to enhance the variety and robustness of the data. The images are organized into training and validation sets, with three distinct categories:

    • Potholes: class 0
    • Cracks: class 1
    • Open Manholes: class 2

    Included in the Dataset:
    • Bounding Box Annotations in YOLO Format (.txt files)
    • Format: YOLOv8 & YOLO11 compatible
    • Purpose: ready for training YOLO-based object detection models

    • Folder Structure Organized into:

      • train/ folder
      • valid/ folder
      • Class-specific folders
      • An all_classes/ folder for combined access. Benefit: easy access for training, validation, and augmentation tasks
    • Dual Format Support

      • COCO JSON annotations included, compatible with models like Faster R-CNN; enables flexibility across different object detection frameworks
    • Use Cases Targeted

      • Model training
      • Model testing
      • Custom data augmentation
      • Specific focus: Road safety and infrastructure detection

    Here's a clear breakdown of the folder structure:

    https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F23345571%2F023b40c98bf858c58394d6ed2393bfc3%2FScreenshot%202025-05-01%20202438.png?generation=1746109541780835&alt=media

  14. X 00ver0.001 Dataset

    • universe.roboflow.com
    zip
    Updated Aug 16, 2024
    Cite
    abdalhay-abass (2024). X 00ver0.001 Dataset [Dataset]. https://universe.roboflow.com/abdalhay-abass/x-00ver0.001/model/26
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 16, 2024
    Dataset authored and provided by
    abdalhay-abass
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Car Bounding Boxes
    Description

    Test dataset for training a model on a custom dataset.

  15. Custom_object_train Dataset

    • universe.roboflow.com
    zip
    Updated Mar 11, 2022
    Cite
    new-workspace-vnttx (2022). Custom_object_train Dataset [Dataset]. https://universe.roboflow.com/new-workspace-vnttx/custom_object_train
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 11, 2022
    Dataset authored and provided by
    new-workspace-vnttx
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Ball Bounding Boxes
    Description

    Custom_Object_Train

    ## Overview
    
    Custom_Object_Train is a dataset for object detection tasks - it contains Ball annotations for 1,475 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC0 1.0 Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
    
  16. Data from: Lightweight target detection for large-field ddPCR images based...

    • data.mendeley.com
    Updated Dec 30, 2024
    + more versions
    Cite
    星宇 靳 (2024). Lightweight target detection for large-field ddPCR images based on improved YOLOv5 [Dataset]. http://doi.org/10.17632/f6rjrn2w7g.3
    Explore at:
    Dataset updated
    Dec 30, 2024
    Authors
    星宇 靳
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset and code used in this study are crucial for advancing the accurate detection of positive microchambers in large-field ddPCR imaging. The provided dataset includes annotated ddPCR images in YOLO format, stored in the ddpcr320/ folder. The codebase features the improved YOLOv5 model, integrating BiFPN, GhostConv, C3Ghost modules, SimAM attention mechanism, and network pruning, among other custom modifications. The train.py and detect.py scripts handle training and detection tasks, while dataset.ipynb demonstrates the dataset creation and splitting processes, as well as dataset processing and augmentation. The graphical user interface, developed using PyQt5 and implemented in main_win.py, facilitates image processing and result analysis for users. The project structure, ddpcr_yolov5, is systematically organized, with detailed instructions provided in the README.md file.

  17. Ood Dataset

    • universe.roboflow.com
    zip
    Updated Dec 5, 2023
    Cite
    FPN (2023). Ood Dataset [Dataset]. https://universe.roboflow.com/fpn/ood-pbnro/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Dec 5, 2023
    Dataset authored and provided by
    FPN
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Objects Bounding Boxes
    Description

    Outdoor Obstacle Detection (OOD) is a custom dataset created to train models to detect 22 specific types of obstacles that can obstruct blind people walking in outdoor spaces. The dataset contains 10,000 images, 29,779 annotated instances, and 22 classes: person, car, tree, spherical_roadblock, warning_column, waste_container, street_light, fire_hydrant, traffic_light, stop_sign, pole, bench, curb, stairs, bicycle, motorcycle, dog, bus, truck, train, bus_stop, crutch.

  18. My Project Dataset

    • universe.roboflow.com
    zip
    Updated Dec 3, 2023
    Cite
    Thomas more university of applied sciences (2023). My Project Dataset [Dataset]. https://universe.roboflow.com/thomas-more-university-of-applied-sciences-fsaz3/my-project-bt79l
    Explore at:
    Available download formats: zip
    Dataset updated
    Dec 3, 2023
    Dataset authored and provided by
    Thomas more university of applied sciences
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Hornet Bees Bounding Boxes
    Description

    Goals:
    1. Create dataset and apply augmentations.
    2. Train custom YOLO model.
    3. Evaluate using still frames.
    4. Evaluate using video.

  19. Devnet Custom Mv Ppe Detection Dataset

    • universe.roboflow.com
    zip
    Updated Apr 3, 2025
    Cite
    Cisco Systems (2025). Devnet Custom Mv Ppe Detection Dataset [Dataset]. https://universe.roboflow.com/cisco-systems-c21fi/devnet-custom-mv-ppe-detection/model/4
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 3, 2025
    Dataset provided by
    Cisco (http://cisco.com/)
    Authors
    Cisco Systems
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Variables measured
    PPE Bounding Boxes
    Description

    DevNet Custom MV PPE Detection

    ## Overview
    
    DevNet Custom MV PPE Detection is a dataset for object detection tasks - it contains PPE annotations for 5,239 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [MIT license](https://opensource.org/licenses/MIT).
    
  20. Football Player Detection Dataset

    • universe.roboflow.com
    zip
    Updated Jul 11, 2024
    Cite
    Augmented Startups (2024). Football Player Detection Dataset [Dataset]. https://universe.roboflow.com/augmented-startups/football-player-detection-kucab/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 11, 2024
    Dataset authored and provided by
    Augmented Startups
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Track Players And Football Bounding Boxes
    Description

    Overview:

    Football (soccer) player and football (soccer) ball detection dataset from Augmented Startups.
    • Project Type: Object Detection
    • Labeled/Annotated with: Bounding boxes

    Classes:

    • football, player

    How to Use:

    This is a great starter-dataset for those wanting to test player and/or ball-tracking for football (soccer) games with the Deploy Tab, or the Deployment device and method of their choice.

    Images can also be Cloned to another project to continue iterating on the project and model. World Cup, Premier League, La Liga, Major League Soccer (MLS) and/or Champions League computer vision projects, anyone?

    Roboflow offers AutoML model training (Roboflow Train) and the ability to import and export up to 30 different annotation formats, leaving you the flexibility to deploy directly with a Roboflow Train model, or to use Roboflow to prepare and manage datasets and then train and deploy with the custom model architecture of your choice: https://github.com/roboflow-ai/notebooks.

    Tips for Model and Dataset Improvement:
