28 datasets found
  1. Custom Yolov7 On Kaggle On Custom Dataset

    • universe.roboflow.com
    zip
    Updated Jan 29, 2023
    Cite
    Owais Ahmad (2023). Custom Yolov7 On Kaggle On Custom Dataset [Dataset]. https://universe.roboflow.com/owais-ahmad/custom-yolov7-on-kaggle-on-custom-dataset-rakiq/dataset/2
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 29, 2023
    Dataset authored and provided by
    Owais Ahmad
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Person Car Bounding Boxes
    Description

    Custom Training with YOLOv7 🔥

    Some Important links

    Contact Information

    Objective

    To showcase custom object detection on the given dataset by training the model and running inference with the newly launched YOLOv7.

    Data Acquisition

    The goal of this task is to train a model that can localize and classify each instance of Person and Car as accurately as possible.

    from IPython.display import Markdown, display
    
    display(Markdown(filename="../input/Car-Person-v2-Roboflow/README.roboflow.txt"))  # render the README file contents, not just the path string
    

    Custom Training with YOLOv7 🔥

    In this notebook, I have processed the images with Roboflow because the COCO-formatted dataset had images of different dimensions and was not split into separate subsets. To train a custom YOLOv7 model we need to recognize the objects in the dataset. To do so I have taken the following steps:

    • Export the dataset to YOLOv7
    • Train YOLOv7 to recognize the objects in our dataset
    • Evaluate our YOLOv7 model's performance
    • Run test inference to view performance of YOLOv7 model at work

    📦 YOLOv7

    <img src="https://raw.githubusercontent.com/Owaiskhan9654/Yolo-V7-Custom-Dataset-Train-on-Kaggle/main/car-person-2.PNG" width=800>

    Image Credit - jinfagang

    Step 1: Install Requirements

    !git clone https://github.com/WongKinYiu/yolov7 # Downloading YOLOv7 repository and installing requirements
    %cd yolov7
    !pip install -qr requirements.txt
    !pip install -q roboflow
    

    Downloading the YOLOv7 starting checkpoint

    !wget "https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt"
    
    import os
    import glob
    import wandb
    import torch
    from roboflow import Roboflow
    from kaggle_secrets import UserSecretsClient
    from IPython.display import Image, clear_output, display # to display images
    
    
    
    print(f"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})")
    

    <img src="https://camo.githubusercontent.com/dd842f7b0be57140e68b2ab9cb007992acd131c48284eaf6b1aca758bfea358b/68747470733a2f2f692e696d6775722e636f6d2f52557469567a482e706e67">

    I will be integrating W&B for visualizations and logging artifacts and comparisons of different models!

    YOLOv7-Car-Person-Custom

    try:
        # Log in to W&B with the API key stored as a Kaggle secret
        user_secrets = UserSecretsClient()
        wandb_api_key = user_secrets.get_secret("wandb_api")
        wandb.login(key=wandb_api_key)
        anonymous = None
    except Exception:
        # Fall back to an anonymous W&B session when no secret is configured
        wandb.login(anonymous='must')
        print('To use your W&B account, go to Add-ons -> Secrets and provide your W&B access token. '
              'Use the label name WANDB. Get your W&B access token from here: https://wandb.ai/authorize')

    wandb.init(project="YOLOvR", name="7. YOLOv7-Car-Person-Custom-Run-7")
    

    Step 2: Assemble Our Dataset

    <img src="https://uploads-ssl.webflow.com/5f6bc60e665f54545a1e52a5/615627e5824c9c6195abfda9_computer-vision-cycle.png" alt="">

    In order to train our custom model, we need to assemble a dataset of representative images with bounding box annotations around the objects that we want to detect. And we need our dataset to be in YOLOv7 format.

    In Roboflow, we can choose between two paths:

    Version v2 (generated Aug 12, 2022) looks like this:

    <img src="https://raw.githubusercontent.com/Owaiskhan9654/Yolo-V7-Custom-Dataset-Train-on-Kaggle/main/Roboflow.PNG" alt="">

    user_secrets = UserSecretsClient()
    roboflow_api_key = user_secrets.get_secret("roboflow_api")
    
    rf = Roboflow(api_key=roboflow_api_key)
    project = rf.workspace("owais-ahmad").project("custom-yolov7-on-kaggle-on-custom-dataset-rakiq")
    dataset = project.version(2).download("yolov7")
    

    Step 3: Training a custom YOLOv7 model from pretrained weights

    Here, I am able to pass a number of arguments:
    • img: define input image size
    • batch: determine
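    A typical invocation of train.py with these arguments might look like the sketch below; the batch size, epoch count and run name are my assumptions, and dataset.location comes from the Roboflow download in Step 2.

    !python train.py --device 0 --batch-size 16 --epochs 55 --img 640 640 --data {dataset.location}/data.yaml --weights 'yolov7.pt' --name yolov7-car-person-custom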

  2. Kitti For Training Yolov7 Dataset

    • universe.roboflow.com
    zip
    Updated Dec 20, 2022
    Cite
    Disals Academia Workplace (2022). Kitti For Training Yolov7 Dataset [Dataset]. https://universe.roboflow.com/disals-academia-workplace/kitti-dataset-for-training-yolov7
    Explore at:
    Available download formats: zip
    Dataset updated
    Dec 20, 2022
    Dataset authored and provided by
    Disals Academia Workplace
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Vehicles Pedestrians Bounding Boxes
    Description

    KITTI Dataset For Training YOLOv7

    ## Overview
    
    KITTI Dataset For Training YOLOv7 is a dataset for object detection tasks - it contains Vehicles Pedestrians annotations for 7,481 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  3. Train Test Split For Freiburg In Yolov7 Format Dataset

    • universe.roboflow.com
    zip
    Updated Aug 4, 2023
    Cite
    Isaac H (2023). Train Test Split For Freiburg In Yolov7 Format Dataset [Dataset]. https://universe.roboflow.com/isaac-h/train-test-split-for-freiburg-dataset-in-yolov7-format
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 4, 2023
    Dataset authored and provided by
    Isaac H
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Freiburg im Breisgau
    Variables measured
    Groceries Bounding Boxes
    Description

    Train Test Split For Freiburg Dataset In YOLOv7 Format

    ## Overview
    
    Train Test Split For Freiburg Dataset In YOLOv7 Format is a dataset for object detection tasks - it contains Groceries annotations for 8,879 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  4. Yolo V7 Train Dataset

    • universe.roboflow.com
    zip
    Updated Oct 10, 2023
    Cite
    Taipei Tech Electronic Engineering (2023). Yolo V7 Train Dataset [Dataset]. https://universe.roboflow.com/taipei-tech-electronic-engineering/yolo-v7-train/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 10, 2023
    Dataset authored and provided by
    Taipei Tech Electronic Engineering
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Lego Bounding Boxes
    Description

    Yolo V7 Train

    ## Overview
    
    Yolo V7 Train is a dataset for object detection tasks - it contains Lego annotations for 1,823 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  5. Weights of the MegaWeeds dataset for YOLO

    • explore.openaire.eu
    Updated Jul 3, 2023
    Cite
    S. Wildeboer (2023). Weights of the MegaWeeds dataset for YOLO [Dataset]. http://doi.org/10.5281/zenodo.8107076
    Explore at:
    Dataset updated
    Jul 3, 2023
    Authors
    S. Wildeboer
    Description

    The MegaWeeds dataset has been used for pre-training YOLO from scratch. The weights obtained with the command below are MW_weights.pt (the empty --weights argument means training starts from randomly initialized weights, i.e. from scratch):

    !python train.py --device 0 --batch-size 8 --epochs 300 --img 864 864 --multi-scale --data data/custom_data.yaml --hyp data/hyp.scratch.custom.yaml --cfg cfg/training/yolov7-custom.yaml --weights '' --name save_name

  6. CNN training of satellite images for "Detection and tracking barchan dunes...

    • data.mendeley.com
    Updated Mar 25, 2024
    + more versions
    Cite
    Esteban Cunez (2024). CNN training of satellite images for "Detection and tracking barchan dunes using Artificial Intelligence" [Dataset]. http://doi.org/10.17632/v4yntwdnjk.2
    Explore at:
    Dataset updated
    Mar 25, 2024
    Authors
    Esteban Cunez
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is part of the dataset concerning the YOLO training of satellite images (barchan dunes). In this dataset you will find:

    • the "YOLOv8 train" folder, containing the structure and images obtained from HiRISE, the CTX global mosaic, Google Earth Pro, and Copernicus (the images were saved with the HiView, Google Earth Pro, and Copernicus software to train a CNN on images of barchan dunes);
    • the "Train Results" folder, containing the figures and weights of the YOLO detection of barchan dunes;
    • the "Earth detections" folder, containing some barchan dune detections at different locations on Earth;
    • the "Mars detections" folder, containing some barchan dune detections at different locations on Mars;
    • the "Code Files" folder, containing the scripts to detect barchan dunes, train a YOLOv8 model, convert masks to polygons (sketched below), and plot the resulting YOLO parameters.
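    The mask-to-polygon conversion can be illustrated with OpenCV; the snippet below is my own minimal sketch of that idea (not the authors' script), emitting normalized polygon coordinates in the YOLO segmentation label style:

    import cv2

    def mask_to_yolo_polygons(mask, class_id=0):
        """Convert a binary mask (H x W, uint8, 0/255) into YOLO-segmentation label lines."""
        h, w = mask.shape[:2]
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        lines = []
        for contour in contours:
            if len(contour) < 3:                      # a polygon needs at least 3 points
                continue
            coords = []
            for x, y in contour.reshape(-1, 2):
                coords += [x / w, y / h]              # normalize to [0, 1]
            lines.append(f"{class_id} " + " ".join(f"{c:.6f}" for c in coords))
        return lines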

  7. Yolov7 Seg Test Dataset

    • universe.roboflow.com
    zip
    Updated Mar 25, 2024
    + more versions
    Cite
    Train (2024). Yolov7 Seg Test Dataset [Dataset]. https://universe.roboflow.com/train-earve/yolov7-seg-test-fxt3v
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 25, 2024
    Dataset authored and provided by
    Train
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Person 07zm Polygons
    Description

    Yolov7 Seg Test

    ## Overview
    
    Yolov7 Seg Test is a dataset for instance segmentation tasks - it contains Person 07zm annotations for 338 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  8. Image datasets for training, validating and testing deep learning models to...

    • figshare.com
    bin
    Updated May 10, 2023
    Cite
    Kazuhide Mimura; Kentaro Nakamura; Kazutaka Yasukawa; Elizabeth C Sibert; Junichiro Ohta; Takahiro Kitazawa; Yasuhiro Kato (2023). Image datasets for training, validating and testing deep learning models to detect microfossil fish teeth and denticles called ichthyolith using YOLOv7 [Dataset]. http://doi.org/10.6084/m9.figshare.22736609.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    May 10, 2023
    Dataset provided by
    figshare
    Authors
    Kazuhide Mimura; Kentaro Nakamura; Kazutaka Yasukawa; Elizabeth C Sibert; Junichiro Ohta; Takahiro Kitazawa; Yasuhiro Kato
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains datasets to train, validate and test deep learning models to detect microfossil fish teeth and denticles called "ichthyoliths". All the datasets contain images of glass slides prepared from deep-sea sediment obtained from the Pacific Ocean, together with annotation label files in YOLO format.

    01_original_all: contains 12219 images and 6945 label files; 6945 images include at least one ichthyolith and 5274 images include none. The images and label files were randomly split into three subsets: "train" (9740 images, 5551 label files), "val" (1235 images, 695 label files) and "test" (1244 images, 699 label files). All the images were selected manually.

    02_original_selected: generated from 01_original_all by removing images without ichthyoliths. It contains 6945 images, each with at least one ichthyolith, and 6945 label files, split into "train" (5551 images and label files), "val" (695) and "test" (699).

    03_extended_all: generated from 01_original_all by adding 4463 images detected by deep learning models. It contains 16682 images and 9473 label files; 9473 images include at least one ichthyolith and 7209 include none. The images and label files were split into "train" (13332 images, 7594 label files), "val" (1690 images, 947 label files) and "test" (1660 images, 932 label files). Label files were checked manually.

    04_extended_selected: generated from 03_extended_all by removing images without ichthyoliths. It contains 9473 images, each with at least one ichthyolith, and 9473 label files, split into "train" (7594 images and label files), "val" (947) and "test" (932).
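    A quick way to sanity-check which variant was extracted is to count images and label files per split; a minimal sketch (the root path and image extensions are assumptions, adjust them to the archive you downloaded):

    import glob
    import os

    root = "01_original_all"                       # hypothetical extraction path
    for subset in ("train", "val", "test"):
        images = glob.glob(os.path.join(root, subset, "**", "*.jpg"), recursive=True) \
               + glob.glob(os.path.join(root, subset, "**", "*.png"), recursive=True)
        labels = glob.glob(os.path.join(root, subset, "**", "*.txt"), recursive=True)
        print(f"{subset}: {len(images)} images, {len(labels)} label files")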

  9. Multi-Altitude Aerial Vehicles Dataset

    • data.niaid.nih.gov
    • data.europa.eu
    Updated Apr 5, 2023
    Cite
    Panayiotis Kolios (2023). Multi-Altitude Aerial Vehicles Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7736335
    Explore at:
    Dataset updated
    Apr 5, 2023
    Dataset provided by
    Panayiotis Kolios
    Christos Kyrkou
    Rafael Makrigiorgis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Custom Multi-Altitude Aerial Vehicles Dataset:

    Created for publishing the results of the ICUAS 2023 paper "How High can you Detect? Improved accuracy and efficiency at varying altitudes for Aerial Vehicle Detection". The abstract of the paper follows.

    Abstract—Object detection in aerial images is a challenging task mainly because of two factors, the objects of interest being really small, e.g. people or vehicles, making them indistinguishable from the background; and the features of objects being quite different at various altitudes. Especially, when utilizing Unmanned Aerial Vehicles (UAVs) to capture footage, the need for increased altitude to capture a larger field of view is quite high. In this paper, we investigate how to find the best solution for detecting vehicles in various altitudes, while utilizing a single CNN model. The conditions for choosing the best solution are the following; higher accuracy for most of the altitudes and real-time processing ( > 20 Frames per second (FPS) ) on an Nvidia Jetson Xavier NX embedded device. We collected footage of moving vehicles from altitudes of 50-500 meters with a 50-meter interval, including a roundabout and rooftop objects as noise for high altitude challenges. Then, a YoloV7 model was trained on each dataset of each altitude along with a dataset including all the images from all the altitudes. Finally, by conducting several training and evaluation experiments and image resizes we have chosen the best method of training objects on multiple altitudes to be the mixup dataset with all the altitudes, trained on a higher image size resolution, and then performing the detection using a smaller image resize to reduce the inference performance. The main results

    The creation of a custom dataset was necessary for altitude evaluation as no other datasets were available. To fulfill the requirements, the footage was captured using a small UAV hovering above a roundabout near the University of Cyprus campus, where several structures and buildings with solar panels and water tanks were visible at varying altitudes. The data were captured during a sunny day, ensuring bright and shadowless images. Images were extracted from the footage, and all data were annotated with a single class labeled as 'Car'. The dataset covered altitudes ranging from 50 to 500 meters with a 50-meter step, and all images were kept at their original high resolution of 3840x2160, presenting challenges for object detection. The data were split into 3 sets for training, validation, and testing, with the number of vehicles increasing as altitude increased, which was expected due to the larger field of view of the camera. Each folder consists of an aerial vehicle dataset captured at the corresponding altitude. For each altitude, the dataset annotations are generated in YOLO, COCO, and VOC formats. The dataset consists of the following images and detection objects:

    | Data    | Subset | Images | Cars   |
    |---------|--------|--------|--------|
    | 50m     | Train  | 130    | 269    |
    | 50m     | Test   | 32     | 66     |
    | 50m     | Valid  | 33     | 73     |
    | 100m    | Train  | 246    | 937    |
    | 100m    | Test   | 61     | 226    |
    | 100m    | Valid  | 62     | 250    |
    | 150m    | Train  | 244    | 1691   |
    | 150m    | Test   | 61     | 453    |
    | 150m    | Valid  | 61     | 426    |
    | 200m    | Train  | 246    | 1753   |
    | 200m    | Test   | 61     | 445    |
    | 200m    | Valid  | 62     | 424    |
    | 250m    | Train  | 245    | 3326   |
    | 250m    | Test   | 61     | 821    |
    | 250m    | Valid  | 61     | 823    |
    | 300m    | Train  | 246    | 6250   |
    | 300m    | Test   | 61     | 1553   |
    | 300m    | Valid  | 62     | 1585   |
    | 350m    | Train  | 246    | 10741  |
    | 350m    | Test   | 61     | 2591   |
    | 350m    | Valid  | 62     | 2687   |
    | 400m    | Train  | 245    | 20072  |
    | 400m    | Test   | 61     | 4974   |
    | 400m    | Valid  | 61     | 4924   |
    | 450m    | Train  | 246    | 31794  |
    | 450m    | Test   | 61     | 7887   |
    | 450m    | Valid  | 61     | 7880   |
    | 500m    | Train  | 270    | 49782  |
    | 500m    | Test   | 67     | 12426  |
    | 500m    | Valid  | 68     | 12541  |
    | mix_alt | Train  | 2364   | 126615 |
    | mix_alt | Test   | 587    | 31442  |
    | mix_alt | Valid  | 593    | 31613  |

    It is advised to further enhance the dataset so that random augmentations are probabilistically applied to each image prior to adding it to the batch for training. Specifically, there are a number of possible transformations such as geometric (rotations, translations, horizontal axis mirroring, cropping, and zooming), as well as image manipulations (illumination changes, color shifting, blurring, sharpening, and shadowing).
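    One way to apply such probabilistic augmentations on the fly is a library like Albumentations (my choice for illustration, not something the dataset authors prescribe); a minimal sketch that keeps YOLO-format boxes consistent with each transformed image:

    import albumentations as A

    # Geometric and photometric transforms, each applied with some probability.
    transform = A.Compose(
        [
            A.HorizontalFlip(p=0.5),
            A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.2, rotate_limit=15, p=0.5),
            A.RandomBrightnessContrast(p=0.4),
            A.HueSaturationValue(p=0.3),
            A.Blur(blur_limit=3, p=0.2),
        ],
        bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
    )

    # image: H x W x 3 array; bboxes: list of (x_c, y_c, w, h) in YOLO format; class_labels: list of ints.
    # augmented = transform(image=image, bboxes=bboxes, class_labels=class_labels)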

  10. Segmented Dataset Based on YOLOv7 for Drone vs. Bird Identification for Deep...

    • data.mendeley.com
    Updated Feb 20, 2023
    + more versions
    Cite
    Aditya Srivastav (2023). Segmented Dataset Based on YOLOv7 for Drone vs. Bird Identification for Deep and Machine Learning Algorithms [Dataset]. http://doi.org/10.17632/6ghdz52pd7.3
    Explore at:
    Dataset updated
    Feb 20, 2023
    Authors
    Aditya Srivastav
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Unmanned aerial vehicles (UAVs) have become increasingly popular in recent years for both commercial and recreational purposes. Regrettably, the security of people and infrastructure is also clearly threatened by this increased demand. To address the current security challenge, much research has been carried out and several innovations have been made. Many faults still exist, however, including type or range detection failures and the mistaken identification of other airborne objects (for example, birds). A standard dataset that contains photos of drones and birds and on which the model might be trained for greater accuracy is needed to conduct experiments in this field. The supplied dataset is crucial since it will help train the model, giving it the ability to learn more accurately and make better decisions. The dataset that is being presented is comprised of a diverse range of images of birds and drones in motion. Pexel website's images and videos have been used to construct the dataset. Images were obtained from the frames of the recordings that were acquired, after which they were segmented and augmented with a range of circumstances. This would improve the machine-learning model's detection accuracy while increasing dataset training. The dataset has been formatted according to the YOLOv7 PyTorch specification. The test, train, and valid folders are contained within the given dataset. These folders each feature a plaintext file that corresponds to an associated image. Relevant metadata regarding the discovered object is described in the plaintext file. Images and labels are the two subfolders that constitute the folders. The collection consists of 20,925 images of birds and drones. The images have a 640 x 640 pixel resolution and are stored in JPEG format.
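    The plaintext file per image presumably follows the usual YOLO convention, one object per line as "<class_id> <x_center> <y_center> <width> <height>" with values normalized to [0, 1]; a minimal parsing sketch (the file path is hypothetical):

    def read_yolo_labels(label_path):
        """Read one YOLO-format label file into a list of (class_id, x_c, y_c, w, h) tuples."""
        boxes = []
        with open(label_path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 5:
                    boxes.append((int(parts[0]), *map(float, parts[1:5])))
        return boxes

    # e.g. boxes = read_yolo_labels("train/labels/drone_0001.txt")   # hypothetical path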

  11. Cane Train Licence Plate Dataset

    • universe.roboflow.com
    zip
    Updated May 8, 2025
    Cite
    yolov7 CaneTrain (2025). Cane Train Licence Plate Dataset [Dataset]. https://universe.roboflow.com/yolov7-canetrain/cane-train-licence-plate
    Explore at:
    Available download formats: zip
    Dataset updated
    May 8, 2025
    Dataset authored and provided by
    yolov7 CaneTrain
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Plate Bounding Boxes
    Description

    Cane Train Licence Plate

    ## Overview
    
    Cane Train Licence Plate is a dataset for object detection tasks - it contains Plate annotations for 1,931 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  12. Detection of Areas with Human Vulnerability Using Public Satellite Images...

    • zenodo.org
    zip
    Updated Sep 16, 2024
    Cite
    Flavio de Barros Vidal; Flavio de Barros Vidal (2024). Detection of Areas with Human Vulnerability Using Public Satellite Images and Deep Learning (Dataset) [Dataset]. http://doi.org/10.5281/zenodo.13768463
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 16, 2024
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Flavio de Barros Vidal; Flavio de Barros Vidal
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Mar 1, 2023
    Description

    Overview

    This repository contains the code and resources for the project titled "Detection of Areas with Human Vulnerability Using Public Satellite Images and Deep Learning". The goal of this project is to identify regions where individuals are living under precarious conditions and facing neglected basic needs, a situation often seen in Brazil. This concept is referred to as "human vulnerability" and is exemplified by families living in inadequate shelters or on the streets in both urban and rural areas.

    Focusing on the Federal District of Brazil as the research area, this project aims to develop two novel public datasets consisting of satellite images. The datasets contain imagery captured at 50m and 100m scales, covering regions of human vulnerability, traditional areas, and improperly disposed waste sites.

    The project also leverages these datasets for training deep learning models, including YOLOv7 and other state-of-the-art models, to perform image segmentation. A comparative analysis is conducted between the models using two training strategies: training from scratch with random weight initialization and fine-tuning using pre-trained weights through transfer learning.

    Key Achievements

    • Two new satellite image datasets focusing on human vulnerability and improperly disposed waste sites, available in public domains.
    • Comparison of image segmentation models, including YOLOv7 and Segmentation Models, with performance metrics.
    • Best F1-scores: 0.55 for YOLOv7 and 0.64 for Segmentation Models.

    This repository provides the code, models, and data pipelines used for training, evaluation, and performance comparison of these deep learning models.

    Citation (Bibtex)

    @TECHREPORT {TechReport-Julia-Laura-HumanVulnerability-2024,
      author      = "Julia Passos Pontes and Laura Maciel Neves Franco and Flavio De Barros Vidal",
      title    = "Detecção de Áreas com Atividades de Vulnerabilidade Humana utilizando Imagens Públicas de Satélites e Aprendizagem Profunda",
      institution = "University of Brasilia",
      year    = "2024",
      type    = "Undergraduate Thesis",
      address   = "Computer Science Department - University of Brasilia - Asa Norte - Brasilia - DF, Brazil",
      month    = "aug",
      note    = "People living in precarious conditions and with their basic needs neglected is an unfortunate reality in Brazil. This scenario will be approached in this work according to the concept of \"human vulnerability\" and can be exemplified through families who live in inadequate shelters, without basic structures and on the streets of urban or rural centers. Therefore, assuming the Federal District as the research scope, this project proposes to develop two new databases to be made available publicly, considering the map scales of 50m and 100m, and composed by satellite images of human vulnerability areas,
    regions treated as traditional and waste disposed inadequately. Furthermore, using these image bases, trainings were done with the YOLOv7 model and other deep learning models for image segmentation. By adopting an exploratory approach, this work compares the results of different image segmentation models and training strategies, using random weight initialization
    (from scratch) and pre-trained weights (transfer learning). Thus, the present work was able to reach maximum F1
    score values of 0.55 for YOLOv7 and 0.64 for other segmentation models."
    }
    

    License

    This project is licensed under the MIT License - see the LICENSE file for details.

  13. Fly and Mosquito detection using YOLO

    • data.mendeley.com
    Updated Jul 31, 2025
    Cite
    Shoeb Ahmad Shamim (2025). Fly and Mosquito detection using YOLO [Dataset]. http://doi.org/10.17632/77hr7mxd3h.1
    Explore at:
    Dataset updated
    Jul 31, 2025
    Authors
    Shoeb Ahmad Shamim
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset was created for fly and mosquito detection using YOLOv8. It contains 1764 image files, and each image file has a corresponding text file that carries the labels for that image.

  14. RGB Image Pine-seedling Dataset: Three Population with half-sib structure,...

    • figshare.com
    zip
    Updated Jun 19, 2025
    Cite
    Jiri Chuchlík; Jaroslav Čepl; Eva Neuwirthová; Jan Stejskal; Jiří Korecký (2025). RGB Image Pine-seedling Dataset: Three Population with half-sib structure, dataset for segmentation model training and data of mean seedlings' color [Dataset]. http://doi.org/10.6084/m9.figshare.28239326.v2
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 19, 2025
    Dataset provided by
    Figshare: http://figshare.com/
    Authors
    Jiri Chuchlík; Jaroslav Čepl; Eva Neuwirthová; Jan Stejskal; Jiří Korecký
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The datasets contain RGB photos of Scots pine seedlings of three populations from two different ecotypes originating in the Czech Republic:

    • Plasy - lowland ecotype
    • Trebon - lowland ecotype
    • Decin - upland ecotype

    The photos were taken in three different periods (September 10th 2021, October 23rd 2021, January 22nd 2022). The file dataset_for_YOLOv7_training.zip contains image data with annotations for training a YOLOv7 segmentation model (training and validation sets). The dataset also contains a table with the following information on individual Scots pine seedlings:

    • affiliation to parent tree (mum)
    • affiliation to population (site)
    • row and column in which the seedling was grown (row, col)
    • affiliation to the planter in which the seedling was grown (box)
    • mean RGB values of the pine seedling in three different periods (B_september, G_september, R_september, B_october, G_october, R_october, B_january, G_january, R_january)
    • mean HSV values of the pine seedling in three different periods (H_september, S_september, V_september, H_october, S_october, V_october, H_january, S_january, V_january)
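    Such per-seedling mean RGB and HSV values can, in principle, be reproduced by averaging pixels inside each seedling's segmentation mask; a minimal sketch with OpenCV (file names are hypothetical, and the authors' actual pipeline may differ):

    import cv2

    img = cv2.imread("seedling.jpg")                      # BGR photo of a seedling
    mask = cv2.imread("seedling_mask.png", 0) > 0         # boolean mask of seedling pixels

    mean_b, mean_g, mean_r = img[mask].mean(axis=0)       # mean B, G, R inside the mask
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mean_h, mean_s, mean_v = hsv[mask].mean(axis=0)       # mean H, S, V inside the mask

    print(mean_r, mean_g, mean_b, mean_h, mean_s, mean_v)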

  15. Fusion of Underwater Camera and Multibeam Sonar for Diver Detection and...

    • zenodo.org
    Updated Apr 24, 2025
    Cite
    Onurcan Köken; Bilal Wehbe; Bilal Wehbe; Onurcan Köken (2025). Fusion of Underwater Camera and Multibeam Sonar for Diver Detection and Tracking [Dataset]. http://doi.org/10.5281/zenodo.10220989
    Explore at:
    Dataset updated
    Apr 24, 2025
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Onurcan Köken; Bilal Wehbe; Bilal Wehbe; Onurcan Köken
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Dec 2023
    Description
    Context
    This dataset is related to previously published public dataset "Sonar-to-RGB Image Translation for Diver Monitoring in Poor Visibility Environments". https://zenodo.org/records/7728089
    It contains ZED-right camera and sonar images collected from Hemmoor Lake and DFKI Maritime Exploration Hall.
    Sensors: Low Frq (1.2MHz) Blueprint Oculus M1200d Sonar and ZED Right Camera
    Content
    The dataset is created for Diver Detection and Diver Tracking applications.
    For the Diver Detection part, the dataset is prepared to train, validate and test YOLOv7 model.
    7095 images are used for training data, and 3095 images are used for validation data. These sets are augmented from originally captured and sampled ZED camera images. Augmentation methods are not applied to the Test data, which contains 822 images. Train and validation contain images from both the DFKI pool and Hemmoor Lake, while the test data is only collected from the lake.
    To distinguish between the original image and the augmented image, check the name coding.
    Naming of object detection images:
    original_image_name.jpg
    if augmented:
    original_image_name_
    Object Detection Label Format:
    YOLO [(class), ((x_min + (x_max - x_min)/2) / image_width), ((y_min + (y_max - y_min)/2) / image_height), ((x_max - x_min) / image_width), ((y_max - y_min) / image_height)] (a conversion sketch is given at the end of this description)
    Class: "diver", represented by "0" in object detection labels.
    Resolution of Object Detection Camera Images: 640x640
    Resolution of Object Tracking Camera Images: 1280x720
    Resolution of Object Tracking Low Frequency Sonar: 932x514
    For the object tracking on sonar, the sampled data covers the part where the diver moves around the table and the platform.
    There are 4 cases shared in the dataset, which contain a sonar stream, and corresponding ZED-right camera images.
    Totally, 1193 points represent the diver on sonar images for the diver tracking application.
    For the tracking, "tracking_sonar_coordinates_
    And "image_sonar_
    Acknowledgements
    The data in this repository were collected as a joint effort between the German Center for Artificial Intelligence (DFKI), the German Federal Agency for technical Relief (THW), and Kraken Robotics GmbH. This work is part of the project DeeperSense that received funding from the European Commission. Program H2020-ICT-2020-2 ICT-47-2020 Project Number: 101016958.
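    The object-detection label format listed above maps pixel-space corner coordinates to normalized centre/width/height values; here is a minimal sketch of that conversion (my own helper, assuming corners are given in pixels):

    def corners_to_yolo(x_min, y_min, x_max, y_max, image_width, image_height):
        """Convert pixel-space corners to the normalized YOLO box used in the labels."""
        x_center = (x_min + (x_max - x_min) / 2) / image_width
        y_center = (y_min + (y_max - y_min) / 2) / image_height
        width = (x_max - x_min) / image_width
        height = (y_max - y_min) / image_height
        return x_center, y_center, width, height

    # e.g. a diver box on a 640x640 detection image (class 0 = "diver"):
    # label_line = "0 " + " ".join(f"{v:.6f}" for v in corners_to_yolo(100, 150, 220, 300, 640, 640))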

  16. Val Creation For Coco + Landing Pad Image Dataset Dataset

    • universe.roboflow.com
    zip
    Updated Feb 12, 2023
    Cite
    UWARG YOLOv7 (2023). Val Creation For Coco + Landing Pad Image Dataset Dataset [Dataset]. https://universe.roboflow.com/uwarg-yolov7/old-train-val-dataset-creation-for-coco-landing-pad-image-dataset/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 12, 2023
    Dataset authored and provided by
    UWARG YOLOv7
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    COCO And LandingPads Bounding Boxes
    Description

    Val Dataset Creation For COCO + Landing Pad Image Dataset

    ## Overview
    
    Val Dataset Creation For COCO + Landing Pad Image Dataset is a dataset for object detection tasks - it contains COCO And LandingPads annotations for 1,852 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  17. Safety Helmet and Reflective Jacket Dataset

    • datasetninja.com
    Updated Oct 25, 2023
    Cite
    Nirav B Naik (2023). Safety Helmet and Reflective Jacket Dataset [Dataset]. https://datasetninja.com/safety-helmet-and-reflective-jacket
    Explore at:
    Dataset updated
    Oct 25, 2023
    Dataset provided by
    Dataset Ninja
    Authors
    Nirav B Naik
    License

    Apache License 2.0: https://www.apache.org/licenses/LICENSE-2.0

    Description

    The Safety Helmet and Reflective Jacket dataset contains 10,500 images that have been annotated with bounding boxes for two vital object classes: safety_helmet and reflective_jacket. The main objective behind this dataset is to facilitate the training of an object detection model using the YOLOv7 architecture to accurately identify and locate safety equipment within a diverse array of settings and environments. To ensure effective model development and evaluation, the dataset has been divided into train, test, and val subsets, keeping 70% of the images for training and 15% each for testing and validation.
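    A minimal sketch of how such a 70/15/15 split can be produced from a list of image paths (an illustration of the ratio, not the authors' actual splitting code):

    import random

    def split_70_15_15(image_paths, seed=0):
        """Shuffle and split file paths into train (70%), test (15%) and val (15%) subsets."""
        paths = list(image_paths)
        random.Random(seed).shuffle(paths)
        n = len(paths)
        n_train, n_test = int(0.70 * n), int(0.15 * n)
        return paths[:n_train], paths[n_train:n_train + n_test], paths[n_train + n_test:]

    # train, test, val = split_70_15_15(all_image_paths)   # all_image_paths is hypothetical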

  18. Bd Step2 V1_3 Train Dataset

    • universe.roboflow.com
    zip
    Updated Mar 3, 2023
    + more versions
    Cite
    YOLOV7 (2023). Bd Step2 V1_3 Train Dataset [Dataset]. https://universe.roboflow.com/yolov7-twb3s/bd-step2-v1_3-train/model/2
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 3, 2023
    Dataset authored and provided by
    YOLOV7
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    3 Train (7 Av Express)
    Variables measured
    Coal Rock Polygons
    Description

    BD Step2 V1_3 Train

    ## Overview
    
    BD Step2 V1_3 Train is a dataset for instance segmentation tasks - it contains Coal Rock annotations for 755 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  19. Custom_dataset(person,car,bus,motorcycle) Dataset

    • universe.roboflow.com
    zip
    Updated Dec 21, 2023
    Cite
    Abdulghani M Abdulghani (2023). Custom_dataset(person,car,bus,motorcycle) Dataset [Dataset]. https://universe.roboflow.com/abdulghani-m-abdulghani/custom-dataset-for-pedestrians-and-automobile-detection/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Dec 21, 2023
    Dataset authored and provided by
    Abdulghani M Abdulghani
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Objects Collection Bounding Boxes
    Description

    This dataset contains over 10,000 images for four classes (Person, Car, Bus, Motorcycle), collected from the Open Images dataset. I used this dataset to train YOLOv7 and was able to reach mAP@50 of 76, precision of 71, and an F1-score of 74 after 80 epochs on a local Nvidia RTX 3080 Laptop GPU.
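    Since F1 is the harmonic mean of precision and recall, the reported precision and F1-score imply a recall of roughly 0.77; a quick check of that arithmetic:

    precision, f1 = 0.71, 0.74
    recall = f1 * precision / (2 * precision - f1)   # rearranged from F1 = 2PR / (P + R)
    print(round(recall, 2))                          # ~0.77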

  20. Effectiveness of the basic YOLOv7 model before data curation. The basic...

    • plos.figshare.com
    xls
    Updated Jul 3, 2025
    Cite
    Mohammed Aliy Mohammed; Esla Timothy Anzaku; Peter Kenneth Ward; Bruno Levecke; Janarthanan Krishnamoorthy; Wesley De Neve; Sofie Van Hoecke (2025). Effectiveness of the basic YOLOv7 model before data curation. The basic YOLOv7 model was fine-tuned and evaluated using the P1.5 dataset (before it underwent thorough data curation). This model was employed to subjectively inspect FP and FN predictions originating from the training, validation, and test sets. Both precision and recall values were calculated using an IoU threshold of 0.5, along with the confidence score corresponding to the highest F1-score. [Dataset]. http://doi.org/10.1371/journal.pntd.0013234.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    Jul 3, 2025
    Dataset provided by
    PLOS Neglected Tropical Diseases
    Authors
    Mohammed Aliy Mohammed; Esla Timothy Anzaku; Peter Kenneth Ward; Bruno Levecke; Janarthanan Krishnamoorthy; Wesley De Neve; Sofie Van Hoecke
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Effectiveness of the basic YOLOv7 model before data curation. The basic YOLOv7 model was fine-tuned and evaluated using the P1.5 dataset (before it underwent thorough data curation). This model was employed to subjectively inspect FP and FN predictions originating from the training, validation, and test sets. Both precision and recall values were calculated using an IoU threshold of 0.5, along with the confidence score corresponding to the highest F1-score.
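    The precision and recall values here depend on an IoU threshold of 0.5; the sketch below shows the underlying IoU computation for two boxes given as (x_min, y_min, x_max, y_max), which is my assumed box representation rather than anything specified in the table:

    def iou(box_a, box_b):
        """Intersection over union of two boxes in (x_min, y_min, x_max, y_max) form."""
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = inter_w * inter_h
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        return inter / union if union > 0 else 0.0

    # A detection counts as a true positive when iou(prediction, ground_truth) >= 0.5.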
