8 datasets found
  1. Object Detection Yolov7 New Labels 19.01.23 Dataset

    • universe.roboflow.com
    zip
    Updated Jan 27, 2023
    Cite
    IMMmaster (2023). Object Detection Yolov7 New Labels 19.01.23 Dataset [Dataset]. https://universe.roboflow.com/immmaster/object-detection-yolov7-new-labels-19.01.23/dataset/10
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 27, 2023
    Dataset authored and provided by
    IMMmaster
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Damages Bounding Boxes
    Description

    Object Detection YOLOv7 New Labels 19.01.23

    ## Overview
    
    Object Detection YOLOv7 New Labels 19.01.23 is a dataset for object detection tasks - it contains Damages annotations for 766 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
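    As a rough sketch of the Roboflow download path for this dataset (the workspace, project slug, and version number are inferred from the citation URL above; the API key below is a placeholder to replace with your own):

    from roboflow import Roboflow

    # Sketch: fetch this dataset in YOLOv7 format via the Roboflow Python package.
    # Workspace/project/version are inferred from the dataset URL above; the API key is a placeholder.
    rf = Roboflow(api_key="YOUR_ROBOFLOW_API_KEY")
    project = rf.workspace("immmaster").project("object-detection-yolov7-new-labels-19.01.23")
    dataset = project.version(10).download("yolov7")
    print(dataset.location)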
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  2. Label Ayam Dataset

    • universe.roboflow.com
    zip
    Updated Nov 29, 2022
    Cite
    Yolov7 (2022). Label Ayam Dataset [Dataset]. https://universe.roboflow.com/yolov7-nzkjh/label-ayam
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 29, 2022
    Dataset authored and provided by
    Yolov7
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    ST Bounding Boxes
    Description

    Label Ayam

    ## Overview
    
    Label Ayam is a dataset for object detection tasks - it contains ST annotations for 373 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  3. Custom Yolov7 On Kaggle On Custom Dataset

    • universe.roboflow.com
    zip
    Updated Jan 29, 2023
    Cite
    Owais Ahmad (2023). Custom Yolov7 On Kaggle On Custom Dataset [Dataset]. https://universe.roboflow.com/owais-ahmad/custom-yolov7-on-kaggle-on-custom-dataset-rakiq/dataset/2
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 29, 2023
    Dataset authored and provided by
    Owais Ahmad
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Person Car Bounding Boxes
    Description

    Custom Training with YOLOv7 🔥

    Some Important links

    Contact Information

    Objective

    To showcase custom object detection on the given dataset by training and running inference with the newly launched YOLOv7.

    Data Acquisition

    The goal of this task is to train a model that can localize and classify each instance of Person and Car as accurately as possible.

    from IPython.display import Markdown, display

    # Render the Roboflow-generated README that ships with the dataset (pass the path via filename=)
    display(Markdown(filename="../input/Car-Person-v2-Roboflow/README.roboflow.txt"))
    

    Custom Training with YOLOv7 🔥

    In this notebook, I have processed the images with Roboflow, because the COCO-formatted dataset had images of different dimensions and was not split into train/validation/test sets. To train a custom YOLOv7 model we need to recognize the objects in the dataset. To do so I have taken the following steps:

    • Export the dataset to YOLOv7
    • Train YOLOv7 to recognize the objects in our dataset
    • Evaluate our YOLOv7 model's performance
    • Run test inference to view performance of YOLOv7 model at work

    📦 YOLOv7

    (Image: https://raw.githubusercontent.com/Owaiskhan9654/Yolo-V7-Custom-Dataset-Train-on-Kaggle/main/car-person-2.PNG)

    Image Credit - jinfagang

    Step 1: Install Requirements

    !git clone https://github.com/WongKinYiu/yolov7 # Downloading YOLOv7 repository and installing requirements
    %cd yolov7
    !pip install -qr requirements.txt
    !pip install -q roboflow
    

    Downloading the YOLOv7 starting checkpoint

    !wget "https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt"
    
    import os
    import glob
    import wandb
    import torch
    from roboflow import Roboflow
    from kaggle_secrets import UserSecretsClient
    from IPython.display import Image, clear_output, display  # to display images

    print(f"Setup complete. Using torch {torch.__version__} "
          f"({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})")
    

    (Image: https://camo.githubusercontent.com/dd842f7b0be57140e68b2ab9cb007992acd131c48284eaf6b1aca758bfea358b/68747470733a2f2f692e696d6775722e636f6d2f52557469567a482e706e67)

    I will be integrating W&B for visualizations and logging artifacts and comparisons of different models!

    YOLOv7-Car-Person-Custom

    try:
        user_secrets = UserSecretsClient()
        wandb_api_key = user_secrets.get_secret("wandb_api")
        wandb.login(key=wandb_api_key)
        anonymous = None
    except Exception:
        wandb.login(anonymous='must')
        print('To use your W&B account, go to Add-ons -> Secrets and provide your W&B access token. '
              'Use the label name WANDB. '
              'Get your W&B access token from here: https://wandb.ai/authorize')

    wandb.init(project="YOLOvR", name="7. YOLOv7-Car-Person-Custom-Run-7")
    

    Step 2: Assemble Our Dataset

    (Image: https://uploads-ssl.webflow.com/5f6bc60e665f54545a1e52a5/615627e5824c9c6195abfda9_computer-vision-cycle.png)

    In order to train our custom model, we need to assemble a dataset of representative images with bounding box annotations around the objects that we want to detect. And we need our dataset to be in YOLOv7 format.

    In Roboflow, we can choose between two paths:

    Version v2 (Aug 12, 2022) looks like this:

    (Image: https://raw.githubusercontent.com/Owaiskhan9654/Yolo-V7-Custom-Dataset-Train-on-Kaggle/main/Roboflow.PNG)

    user_secrets = UserSecretsClient()
    roboflow_api_key = user_secrets.get_secret("roboflow_api")
    
    rf = Roboflow(api_key=roboflow_api_key)
    project = rf.workspace("owais-ahmad").project("custom-yolov7-on-kaggle-on-custom-dataset-rakiq")
    dataset = project.version(2).download("yolov7")
    

    Step 3: Training a custom YOLOv7 model from pretrained weights

    Here, I am able to pass a number of arguments:

    • img: define input image size
    • batch: determine
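    The excerpt ends mid-list; as a rough sketch only, a typical training cell for this workflow might look like the following (epoch count, batch size, and config path are illustrative assumptions, not the author's exact settings; dataset.location comes from the Roboflow download in Step 2):

    # Sketch of a typical YOLOv7 training invocation; hyperparameter values are illustrative.
    !python train.py --weights yolov7.pt --cfg cfg/training/yolov7.yaml --data {dataset.location}/data.yaml --img-size 640 640 --batch-size 16 --epochs 30 --device 0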

  4. Image datasets for training, validating and testing deep learning models to...

    • figshare.com
    bin
    Updated May 10, 2023
    Cite
    Kazuhide Mimura; Kentaro Nakamura; Kazutaka Yasukawa; Elizabeth C Sibert; Junichiro Ohta; Takahiro Kitazawa; Yasuhiro Kato (2023). Image datasets for training, validating and testing deep learning models to detect microfossil fish teeth and denticles called ichthyolith using YOLOv7 [Dataset]. http://doi.org/10.6084/m9.figshare.22736609.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    May 10, 2023
    Dataset provided by
    figshare
    Authors
    Kazuhide Mimura; Kentaro Nakamura; Kazutaka Yasukawa; Elizabeth C Sibert; Junichiro Ohta; Takahiro Kitazawa; Yasuhiro Kato
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains datasets to train, validate and test deep learning models to detect microfossil fish teeth and denticles called "ichthyoliths". All the datasets contain images of glass slides prepared from deep-sea sediment obtained from the Pacific Ocean, and annotation label files formatted for YOLO.

    01_original_all: contains 12219 images and 6945 label files. 6945 images include at least one ichthyolith and 5274 images include none. The images and label files were randomly split into three subsets: "train" (9740 images, 5551 label files), "val" (1235 images, 695 label files) and "test" (1244 images, 699 label files). All images were selected manually.

    02_original_selected: generated from 01_original_all by removing images without ichthyoliths. It contains 6945 images that include at least one ichthyolith and 6945 label files, split into "train" (5551 images and label files), "val" (695 images and label files) and "test" (699 images and label files).

    03_extended_all: generated from 01_original_all by adding 4463 images detected by deep learning models. It contains 16682 images and 9473 label files; 9473 images include at least one ichthyolith and 7209 include none. The images and label files were split into "train" (13332 images, 7594 label files), "val" (1690 images, 947 label files) and "test" (1660 images, 932 label files). Label files were checked manually.

    04_extended_selected: generated from 03_extended_all by removing images without ichthyoliths. It contains 9473 images that include at least one ichthyolith and 9473 label files, split into "train" (7594 images and label files), "val" (947 images and label files) and "test" (932 images and label files).
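    As a quick sanity check, the subset sizes above can be recounted after download. This is only a sketch: it assumes a conventional YOLO layout of images/ and labels/ subfolders inside each subset, which may differ from the actual archive structure.

    from pathlib import Path

    # Sketch: count images and YOLO label files per subset (the folder layout is an assumption).
    root = Path("01_original_all")
    for subset in ("train", "val", "test"):
        images = list((root / subset / "images").glob("*"))
        labels = list((root / subset / "labels").glob("*.txt"))
        print(f"{subset}: {len(images)} images, {len(labels)} label files")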

  5. Segmented Dataset Based on YOLOv7 for Drone vs. Bird Identification for Deep...

    • data.mendeley.com
    Updated Feb 20, 2023
    + more versions
    Cite
    Aditya Srivastav (2023). Segmented Dataset Based on YOLOv7 for Drone vs. Bird Identification for Deep and Machine Learning Algorithms [Dataset]. http://doi.org/10.17632/6ghdz52pd7.3
    Explore at:
    Dataset updated
    Feb 20, 2023
    Authors
    Aditya Srivastav
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Unmanned aerial vehicles (UAVs) have become increasingly popular in recent years for both commercial and recreational purposes. Regrettably, the security of people and infrastructure is also clearly threatened by this increased demand. To address the current security challenge, much research has been carried out and several innovations have been made. Many faults still exist, however, including type or range detection failures and the mistaken identification of other airborne objects (for example, birds). A standard dataset that contains photos of drones and birds, and on which a model can be trained for greater accuracy, is needed to conduct experiments in this field. The supplied dataset is crucial since it will help train the model, giving it the ability to learn more accurately and make better decisions.

    The dataset presented here comprises a diverse range of images of birds and drones in motion. Images and videos from the Pexels website were used to construct the dataset. Images were obtained from the frames of the acquired recordings, after which they were segmented and augmented under a range of conditions. This improves the machine-learning model's detection accuracy while enlarging the training set.

    The dataset has been formatted according to the YOLOv7 PyTorch specification. The test, train, and valid folders are contained within the given dataset. Each folder holds two subfolders, images and labels, and every image has a corresponding plaintext file that describes the metadata of the annotated object. The collection consists of 20,925 images of birds and drones. The images have a 640 x 640 pixel resolution and are stored in JPEG format.
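    Since the labels follow the YOLOv7 (PyTorch) convention, each plaintext file holds one object per line as a class index followed by normalized center coordinates and box size. A minimal parsing sketch, with a hypothetical file path:

    from pathlib import Path

    # Sketch: read one YOLO-format label file (the path is hypothetical).
    # Line format: <class_id> <x_center> <y_center> <width> <height>, all normalized to [0, 1].
    label_path = Path("train/labels/drone_0001.txt")
    for line in label_path.read_text().splitlines():
        class_id, x_c, y_c, w, h = line.split()
        print(f"class={class_id} center=({float(x_c):.3f}, {float(y_c):.3f}) size=({float(w):.3f}, {float(h):.3f})")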

  6. Blue-Ringed Octopus

    • kaggle.com
    Updated Dec 21, 2022
    Cite
    Yusuf Syam (2022). Blue-Ringed Octopus [Dataset]. https://www.kaggle.com/datasets/yusufsyam/blue-ringed-octopus-dataset
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Dec 21, 2022
    Dataset provided by
    Kaggle
    Authors
    Yusuf Syam
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    This dataset is for an object detection task targeting the blue-ringed octopus (one of the most venomous animals in the world). With this dataset, I hope people can become more familiar with the blue-ringed octopus and be aware of its dangers.

    I collected the images and labeled them myself (for a competition). I have little experience in collecting datasets, so I cannot guarantee the quality of this dataset. I trained a YOLOv7 object detection model with this data and got a mean average precision of 0.987 (at an IoU threshold of 0.5).

    About the Dataset:

    • Consists of 316 images, each labeled in Pascal VOC format (a parsing sketch follows this list)
    • No pre-processing or image augmentation
    • Not separated into train and test sets
    • To use it for image classification, just delete the XML label files
    • Made in August-September 2022
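    Since each image ships with a Pascal VOC XML annotation, a minimal parsing sketch (the file name below is hypothetical) looks like this:

    import xml.etree.ElementTree as ET

    # Sketch: parse one Pascal VOC annotation file (the path is hypothetical).
    root = ET.parse("blue_ringed_octopus_001.xml").getroot()
    width = int(root.find("size/width").text)
    height = int(root.find("size/height").text)

    for obj in root.findall("object"):
        name = obj.find("name").text
        box = obj.find("bndbox")
        xmin, ymin = int(box.find("xmin").text), int(box.find("ymin").text)
        xmax, ymax = int(box.find("xmax").text), int(box.find("ymax").text)
        print(f"{name}: ({xmin}, {ymin}) -> ({xmax}, {ymax}) in a {width}x{height} image")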

    How I Collected the Data:

    I didn't go into the field to take these images; instead I took them from Google, and some from screenshots of YouTube videos:

    • https://www.youtube.com/watch?v=MBHjo6UaHzk&t=62s
    • https://www.youtube.com/watch?v=c4BoYORmgSM
    • https://www.youtube.com/watch?v=DSdq8XFQdKo
    • https://www.youtube.com/watch?v=64mY1klkf4I&t=215s
    • https://www.youtube.com/watch?v=C0DOusbGWbU
    • https://www.youtube.com/watch?v=mTnmw5o4vRI
    • https://www.youtube.com/watch?v=bejKAB2Eazw&t=317s
    • https://www.youtube.com/watch?v=emisZUHJAEA
    • https://www.youtube.com/watch?v=6b_UYwyWI6E
    • https://www.youtube.com/watch?v=vVamzP52qwA
    • https://www.youtube.com/watch?v=3Bt1LvpZ1Oo

    I also played around with an AI text-to-image generator to create additional images and manually chose which ones were acceptable (r_blue_ringed_octopus_100 - r_blue_ringed_octopus_110; you can remove them if you want). After collecting the images, I did the labeling myself.

  7. Weapon Detection YOLOv7

    • ieee-dataport.org
    Updated Dec 13, 2022
    Cite
    Mohammad Zahrawi (2022). Weapon Detection YOLOv7 [Dataset]. https://ieee-dataport.org/documents/weapon-detection-yolov7
    Explore at:
    Dataset updated
    Dec 13, 2022
    Authors
    Mohammad Zahrawi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In total

  8. Fusion of Underwater Camera and Multibeam Sonar for Diver Detection and...

    • zenodo.org
    Updated Apr 24, 2025
    Cite
    Onurcan Köken; Bilal Wehbe; Bilal Wehbe; Onurcan Köken (2025). Fusion of Underwater Camera and Multibeam Sonar for Diver Detection and Tracking [Dataset]. http://doi.org/10.5281/zenodo.10220989
    Explore at:
    Dataset updated
    Apr 24, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Onurcan Köken; Bilal Wehbe; Bilal Wehbe; Onurcan Köken
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Dec 2023
    Description
    Context
    This dataset is related to the previously published public dataset "Sonar-to-RGB Image Translation for Diver Monitoring in Poor Visibility Environments": https://zenodo.org/records/7728089
    It contains ZED-right camera and sonar images collected from Hemmoor Lake and DFKI Maritime Exploration Hall.
    Sensors: low-frequency (1.2 MHz) Blueprint Oculus M1200d sonar and ZED right camera
    Content
    The dataset is created for Diver Detection and Diver Tracking applications.
    For the Diver Detection part, the dataset is prepared to train, validate and test a YOLOv7 model.
    7095 images are used for training and 3095 images for validation. These sets are augmented from the originally captured and sampled ZED camera images. Augmentation was not applied to the test data, which contains 822 images. Train and validation contain images from both the DFKI pool and Hemmoor Lake, while the test data was collected only at the lake.
    To distinguish between the original image and the augmented image, check the name coding.
    Naming of object detection images:
    original_image_name.jpg
    if augmented:
    original_image_name_
    Object Detection Label Format:
    YOLO [(class), ((x_min + (x_max - x_min)/2) / image_width), ((y_min + (y_max - y_min)/2) / image_height), ((x_max - x_min) / image_width), ((y_max - y_min) / image_height)]
    Class: "diver", represented by "0" in object detection labels.
    Resolution of Object Detection Camera Images: 640x640
    Resolution of Object Tracking Camera Images: 1280x720
    Resolution of Object Tracking Low Frequency Sonar: 932x514
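    Given the label format above, here is a minimal conversion sketch from corner coordinates to a YOLO label line (the function name and example values are illustrative):

    def corners_to_yolo(x_min, y_min, x_max, y_max, image_width, image_height, cls=0):
        # Implements the YOLO label formula quoted above: normalized center, width and height.
        x_center = (x_min + (x_max - x_min) / 2) / image_width
        y_center = (y_min + (y_max - y_min) / 2) / image_height
        width = (x_max - x_min) / image_width
        height = (y_max - y_min) / image_height
        return cls, x_center, y_center, width, height

    # Example: a diver box in a 640x640 detection image (values are illustrative).
    print(corners_to_yolo(100, 150, 300, 400, 640, 640))
    # -> (0, 0.3125, 0.4296875, 0.3125, 0.390625)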
    Regarding object tracking on sonar, the sampled data covers the part where the diver moves around the table and the platform.
    There are 4 cases shared in the dataset, each containing a sonar stream and corresponding ZED-right camera images.
    In total, 1193 points represent the diver on the sonar images for the diver tracking application.
    For the tracking, "tracking_sonar_coordinates_
    And "image_sonar_
    Acknowledgements
    The data in this repository were collected as a joint effort between the German Research Center for Artificial Intelligence (DFKI), the German Federal Agency for Technical Relief (THW), and Kraken Robotics GmbH. This work is part of the project DeeperSense, which received funding from the European Commission (program H2020-ICT-2020-2, ICT-47-2020, project number 101016958).

