16 datasets found
  1. Yolov7 For Ppe Detection Dataset

    • universe.roboflow.com
    zip
    Updated Mar 30, 2023
    Cite
    MMAI894College (2023). Yolov7 For Ppe Detection Dataset [Dataset]. https://universe.roboflow.com/mmai894college/yolov7-for-ppe-detection/model/2
    Available download formats: zip
    Dataset updated
    Mar 30, 2023
    Dataset authored and provided by
    MMAI894College
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Ppe Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Construction Site Safety Monitoring: The "yolov7 for ppe detection" model can be used in construction sites to ensure that all workers are wearing the necessary personal protective equipment (PPE). The model can alert safety officers when someone is detected without proper PPE.

    2. Factory Compliance Checking: Manufacturers can use this computer vision model to ensure all factory workers are complying with PPE rules. The model can detect workers not wearing necessary equipment, providing evidence for potential OSHA violations.

    3. Training Simulations: The model can be employed in virtual reality or augmented reality for training simulations. Here, trainees would be alerted whenever they fail to "wear" appropriate PPE in the simulation, teaching them the importance of safety procedures.

    4. Emergency Situations Analysis: In disaster response or accident investigation, it can be used to assess if rescue and emergency medical teams followed safety protocols by wearing the correct PPE, by analyzing photos or video footage.

    5. Insurances and Litigations: The CV model could prove invaluable in personal injury cases, where it can be used to establish, via media evidence, whether a worker was wearing proper PPE at the time of an accident, potentially affecting compensations or liability claims.

  2. Keyboards detection dataset

    • kaggle.com
    zip
    Updated Jan 8, 2023
    Cite
    LorencJan (2023). Keyboards detection dataset [Dataset]. https://www.kaggle.com/lorencjan/keyboards-detection-dataset
    Available download formats: zip (42823053648 bytes)
    Dataset updated
    Jan 8, 2023
    Authors
    LorencJan
    License

    GNU GPL 2.0: http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html

    Description

    For my thesis, I need to recognize keyboards in an image. I gathered 615 images of keyboards from internet search engines and personal devices, split them into train/val/test sets (70:20:10), and augmented the training images into 20k images. A simplified and resized version for YOLOv7 training can be found here. For my next task, recognizing keys on the keyboard, I use the Characters detection dataset.
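
    As a rough illustration of the 70:20:10 split described above, a minimal sketch; the directory layout, file extension, and random seed are assumptions, not details of this dataset:

        import random
        from pathlib import Path

        # Hypothetical source directory holding the 615 raw keyboard images.
        images = sorted(Path("keyboards/raw").glob("*.jpg"))
        random.seed(0)  # assumed seed, only for reproducibility of the split
        random.shuffle(images)

        n = len(images)
        n_train, n_val = int(0.7 * n), int(0.2 * n)
        splits = {
            "train": images[:n_train],
            "val":   images[n_train:n_train + n_val],
            "test":  images[n_train + n_val:],  # remaining ~10%
        }
        for name, files in splits.items():
            print(name, len(files))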

  3. Segmented Dataset Based on YOLOv7 for Drone vs. Bird Identification for Deep and Machine Learning Algorithms

    • data.mendeley.com
    Updated Feb 20, 2023
    Cite
    Aditya Srivastav (2023). Segmented Dataset Based on YOLOv7 for Drone vs. Bird Identification for Deep and Machine Learning Algorithms [Dataset]. http://doi.org/10.17632/6ghdz52pd7.3
    Dataset updated
    Feb 20, 2023
    Authors
    Aditya Srivastav
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Unmanned aerial vehicles (UAVs) have become increasingly popular in recent years for both commercial and recreational purposes. Regrettably, this increased demand also poses a clear threat to the security of people and infrastructure. To address this security challenge, much research has been carried out and several innovations have been made. Many faults still exist, however, including type or range detection failures and the mistaken identification of other airborne objects (for example, birds). A standard dataset that contains photos of drones and birds, on which models can be trained for greater accuracy, is needed to conduct experiments in this field. The supplied dataset is crucial since it will help train models, giving them the ability to learn more accurately and make better decisions.

    The dataset comprises a diverse range of images of birds and drones in motion. Images and videos from the Pexels website were used to construct the dataset. Images were extracted from frames of the acquired recordings, then segmented and augmented to simulate a range of conditions. This improves a machine-learning model's detection accuracy while enlarging the training set.

    The dataset has been formatted according to the YOLOv7 PyTorch specification and contains test, train, and valid folders. Each image has an associated plaintext file describing relevant metadata about the detected objects. Images and labels are the two subfolders that constitute each of these folders. The collection consists of 20,925 images of birds and drones, stored in JPEG format at a resolution of 640 x 640 pixels.
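
    The label files described above follow the usual YOLO plaintext convention (one line per object: class index, then normalised centre coordinates and box size). A minimal parsing sketch, assuming that convention; the 640 x 640 resolution is taken from the description, the file path is hypothetical:

        from pathlib import Path

        IMG_W = IMG_H = 640  # image resolution stated in the description

        def read_yolo_labels(label_file):
            """Parse one YOLO label file: 'class xc yc w h', all normalised to [0, 1]."""
            boxes = []
            for line in Path(label_file).read_text().splitlines():
                cls, xc, yc, w, h = line.split()
                # convert to pixel-space corner coordinates for inspection
                x1 = (float(xc) - float(w) / 2) * IMG_W
                y1 = (float(yc) - float(h) / 2) * IMG_H
                x2 = (float(xc) + float(w) / 2) * IMG_W
                y2 = (float(yc) + float(h) / 2) * IMG_H
                boxes.append((int(cls), x1, y1, x2, y2))
            return boxes

        # e.g. read_yolo_labels("train/labels/example_0001.txt")  # hypothetical path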

  4. Incorrect masked face YOLOv7 format

    • kaggle.com
    zip
    Updated Jul 15, 2022
    Cite
    FrankShi9 (2022). Incorrect masked face YOLOv7 format [Dataset]. https://www.kaggle.com/datasets/frankshi9/incorrect-masked-face-yolov7-format
    Available download formats: zip (102196254 bytes)
    Dataset updated
    Jul 15, 2022
    Authors
    FrankShi9
    License

    CC0 1.0 (Public Domain Dedication): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Subset from MaskedFace Net

    Preprocessing:
    - Auto-Orient: Applied
    - Resize: Stretch to 416x416

    Augmentations:
    - Outputs per training example: 3
    - Flip: Horizontal, Vertical
    - Shear: ±15° Horizontal, ±15° Vertical
    - Noise: Up to 5% of pixels
    - Cutout: 3 boxes with 10% size each
    - Bounding Box Blur: Up to 10px
    - Bounding Box Noise: Up to 5% of pixels
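
    Geometric augmentations such as the flips listed above also have to be applied to the YOLO-format bounding boxes. A minimal sketch of a horizontal flip; the function name and box tuple layout are assumptions, not part of the dataset:

        from PIL import Image, ImageOps

        def hflip_with_yolo_boxes(img: Image.Image, boxes):
            """Mirror an image horizontally and adjust YOLO boxes.

            Each box is (class_id, x_center, y_center, width, height) with
            coordinates normalised to [0, 1]; only x_center changes under the flip.
            """
            flipped = ImageOps.mirror(img)
            new_boxes = [(c, 1.0 - xc, yc, w, h) for c, xc, yc, w, h in boxes]
            return flipped, new_boxes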

  5. DataSheet_1_Multi-scenario pear tree inflorescence detection based on improved YOLOv7 object detection algorithm.xlsx

    • frontiersin.figshare.com
    xlsx
    Updated Jan 22, 2024
    Cite
    Zhen Zhang; Xiaohui Lei; Kai Huang; Yuanhao Sun; Jin Zeng; Tao Xyu; Quanchun Yuan; Yannan Qi; Andreas Herbst; Xiaolan Lyu (2024). DataSheet_1_Multi-scenario pear tree inflorescence detection based on improved YOLOv7 object detection algorithm.xlsx [Dataset]. http://doi.org/10.3389/fpls.2023.1330141.s001
    Available download formats: xlsx
    Dataset updated
    Jan 22, 2024
    Dataset provided by
    Frontiers
    Authors
    Zhen Zhang; Xiaohui Lei; Kai Huang; Yuanhao Sun; Jin Zeng; Tao Xyu; Quanchun Yuan; Yannan Qi; Andreas Herbst; Xiaolan Lyu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Efficient and precise thinning during the orchard blossom period is a crucial factor in enhancing both fruit yield and quality. The accurate recognition of inflorescence is the cornerstone of intelligent blossom equipment. To advance the process of intelligent blossom thinning, this paper addresses the issue of suboptimal performance of current inflorescence recognition algorithms in detecting dense inflorescence at a long distance. It introduces an inflorescence recognition algorithm, YOLOv7-E, based on the YOLOv7 neural network model. YOLOv7 incorporates an efficient multi-scale attention mechanism (EMA) to enable cross-channel feature interaction through parallel processing strategies, thereby maximizing the retention of pixel-level features and positional information on the feature maps. Additionally, the SPPCSPC module is optimized to preserve target area features as much as possible under different receptive fields, and the Soft-NMS algorithm is employed to reduce the likelihood of missing detections in overlapping regions. The model is trained on a diverse dataset collected from real-world field settings. Upon validation, the improved YOLOv7-E object detection algorithm achieves an average precision and recall of 91.4% and 89.8%, respectively, in inflorescence detection under various time periods, distances, and weather conditions. The detection time for a single image is 80.9 ms, and the model size is 37.6 Mb. In comparison to the original YOLOv7 algorithm, it boasts a 4.9% increase in detection accuracy and a 5.3% improvement in recall rate, with a mere 1.8% increase in model parameters. The YOLOv7-E object detection algorithm presented in this study enables precise inflorescence detection and localization across an entire tree at varying distances, offering robust technical support for differentiated and precise blossom thinning operations by thinning machinery in the future.
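
    The Soft-NMS step mentioned above replaces hard suppression with a score decay for overlapping boxes. A minimal NumPy sketch of the common Gaussian variant, not the authors' implementation; parameter values are illustrative:

        import numpy as np

        def iou_one_to_many(box, others):
            """IoU between one (x1, y1, x2, y2) box and an (N, 4) array of boxes."""
            x1 = np.maximum(box[0], others[:, 0])
            y1 = np.maximum(box[1], others[:, 1])
            x2 = np.minimum(box[2], others[:, 2])
            y2 = np.minimum(box[3], others[:, 3])
            inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
            area_a = (box[2] - box[0]) * (box[3] - box[1])
            area_b = (others[:, 0] - others[:, 0] + others[:, 2] - others[:, 0]) * (others[:, 3] - others[:, 1])
            area_b = (others[:, 2] - others[:, 0]) * (others[:, 3] - others[:, 1])
            return inter / (area_a + area_b - inter + 1e-9)

        def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
            """Gaussian Soft-NMS: instead of discarding overlapping boxes,
            decay their scores by exp(-IoU^2 / sigma) and keep iterating."""
            scores = scores.astype(float).copy()
            remaining = np.arange(len(scores))
            keep = []
            while remaining.size > 0:
                best = remaining[np.argmax(scores[remaining])]
                keep.append(best)
                remaining = remaining[remaining != best]
                if remaining.size == 0:
                    break
                overlaps = iou_one_to_many(boxes[best], boxes[remaining])
                scores[remaining] *= np.exp(-(overlaps ** 2) / sigma)  # Gaussian penalty
                remaining = remaining[scores[remaining] > score_thresh]
            return keep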

  6. Multi-Altitude Aerial Vehicles Dataset

    • data.niaid.nih.gov
    • data.europa.eu
    Updated Apr 5, 2023
    Cite
    Rafael Makrigiorgis; Christos Kyrkou; Panayiotis Kolios (2023). Multi-Altitude Aerial Vehicles Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7736335
    Dataset updated
    Apr 5, 2023
    Dataset provided by
    KIOS Research and Innovation Center of Excellence, University of Cyprus
    Authors
    Rafael Makrigiorgis; Christos Kyrkou; Panayiotis Kolios
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Custom Multi-Altitude Aerial Vehicles Dataset:

    Created to publish the results of the ICUAS 2023 paper "How High can you Detect? Improved accuracy and efficiency at varying altitudes for Aerial Vehicle Detection". The paper's abstract follows.

    Abstract—Object detection in aerial images is a challenging task mainly because of two factors, the objects of interest being really small, e.g. people or vehicles, making them indistinguishable from the background; and the features of objects being quite different at various altitudes. Especially, when utilizing Unmanned Aerial Vehicles (UAVs) to capture footage, the need for increased altitude to capture a larger field of view is quite high. In this paper, we investigate how to find the best solution for detecting vehicles in various altitudes, while utilizing a single CNN model. The conditions for choosing the best solution are the following; higher accuracy for most of the altitudes and real-time processing ( > 20 Frames per second (FPS) ) on an Nvidia Jetson Xavier NX embedded device. We collected footage of moving vehicles from altitudes of 50-500 meters with a 50-meter interval, including a roundabout and rooftop objects as noise for high altitude challenges. Then, a YoloV7 model was trained on each dataset of each altitude along with a dataset including all the images from all the altitudes. Finally, by conducting several training and evaluation experiments and image resizes we have chosen the best method of training objects on multiple altitudes to be the mixup dataset with all the altitudes, trained on a higher image size resolution, and then performing the detection using a smaller image resize to reduce the inference performance. The main results

    The creation of a custom dataset was necessary for altitude evaluation as no other datasets were available. To fulfill the requirements, the footage was captured using a small UAV hovering above a roundabout near the University of Cyprus campus, where several structures and buildings with solar panels and water tanks were visible at varying altitudes. The data were captured during a sunny day, ensuring bright and shadowless images. Images were extracted from the footage, and all data were annotated with a single class labeled as 'Car'. The dataset covered altitudes ranging from 50 to 500 meters with a 50-meter step, and all images were kept at their original high resolution of 3840x2160, presenting challenges for object detection. The data were split into 3 sets for training, validation, and testing, with the number of vehicles increasing as altitude increased, which was expected due to the larger field of view of the camera. Each folder consists of an aerial vehicle dataset captured at the corresponding altitude. For each altitude, the dataset annotations are generated in YOLO, COCO, and VOC formats. The dataset consists of the following images and detection objects:

        Data      Subset   Images     Cars
        50m       Train       130      269
        50m       Test         32       66
        50m       Valid        33       73
        100m      Train       246      937
        100m      Test         61      226
        100m      Valid        62      250
        150m      Train       244     1691
        150m      Test         61      453
        150m      Valid        61      426
        200m      Train       246     1753
        200m      Test         61      445
        200m      Valid        62      424
        250m      Train       245     3326
        250m      Test         61      821
        250m      Valid        61      823
        300m      Train       246     6250
        300m      Test         61     1553
        300m      Valid        62     1585
        350m      Train       246    10741
        350m      Test         61     2591
        350m      Valid        62     2687
        400m      Train       245    20072
        400m      Test         61     4974
        400m      Valid        61     4924
        450m      Train       246    31794
        450m      Test         61     7887
        450m      Valid        61     7880
        500m      Train       270    49782
        500m      Test         67    12426
        500m      Valid        68    12541
        mix_alt   Train      2364   126615
        mix_alt   Test        587    31442
        mix_alt   Valid       593    31613

    It is advised to further enhance the dataset so that random augmentations are probabilistically applied to each image prior to adding it to the batch for training. Specifically, there are a number of possible transformations such as geometric (rotations, translations, horizontal axis mirroring, cropping, and zooming), as well as image manipulations (illumination changes, color shifting, blurring, sharpening, and shadowing).
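
    A minimal sketch of that advice, applying each augmentation independently with some probability before an image enters a training batch; probabilities and magnitudes are illustrative, and geometric transforms would additionally require updating the bounding-box annotations:

        import random
        from PIL import Image, ImageEnhance, ImageFilter, ImageOps

        def randomly_augment(img: Image.Image) -> Image.Image:
            """Apply each transform independently with a fixed probability."""
            if random.random() < 0.5:
                img = ImageOps.mirror(img)                               # horizontal mirroring
            if random.random() < 0.3:
                img = img.rotate(random.uniform(-10, 10), expand=False)  # small rotation
            if random.random() < 0.3:
                img = ImageEnhance.Brightness(img).enhance(random.uniform(0.7, 1.3))  # illumination change
            if random.random() < 0.2:
                img = img.filter(ImageFilter.GaussianBlur(random.uniform(0.0, 2.0)))  # blurring
            return img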

  7. Fusion of Underwater Camera and Multibeam Sonar for Diver Detection and Tracking

    • data.europa.eu
    unknown
    Cite
    Zenodo, Fusion of Underwater Camera and Multibeam Sonar for Diver Detection and Tracking [Dataset]. https://data.europa.eu/data/datasets/oai-zenodo-org-10220989?locale=lv
    Available download formats: unknown (279657200 bytes)
    Dataset authored and provided by
    Zenodo (http://zenodo.org/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Context: This dataset is related to the previously published public dataset "Sonar-to-RGB Image Translation for Diver Monitoring in Poor Visibility Environments" (https://zenodo.org/records/7728089). It contains ZED right-camera and sonar images collected from Hemmoor Lake and the DFKI Maritime Exploration Hall. Sensors: low-frequency (1.2 MHz) Blueprint Oculus M1200d sonar and ZED right camera.

    Content: The dataset was created for diver detection and diver tracking applications. For the diver detection part, the dataset is prepared to train, validate, and test a YOLOv7 model: 7095 images are used for training and 3095 images for validation. These sets are augmented from originally captured and sampled ZED camera images. No augmentation is applied to the test data, which contain 822 images. Train and validation contain images from both the DFKI pool and Hemmoor Lake, while the test data were collected only from the lake. To distinguish between original and augmented images, check the name coding. Naming of object detection images: original_image_name.jpg; if augmented: original_image_name_

  8. Spit Bridge Cones Dataset

    • universe.roboflow.com
    zip
    Updated Apr 8, 2023
    Cite
    Yolov7 (2023). Spit Bridge Cones Dataset [Dataset]. https://universe.roboflow.com/yolov7-1cb06/spit-bridge-cones/dataset/1
    Available download formats: zip
    Dataset updated
    Apr 8, 2023
    Dataset authored and provided by
    Yolov7
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Cones Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Traffic Management Systems: The "Spit Bridge Cones" model can be used to detect cones and markers on roads or construction sites, providing real-time data for improving traffic flow and maintaining safety standards.

    2. Drone Navigation: Drones could use this model to identify marked cones on terrain during aerial mapping or surveying, making flight navigation more accurate and reliable.

    3. Autonomous Vehicles: This model can be employed by autonomous driving solutions to recognize and react to cones and markers on the road, improving their ability to navigate safely.

    4. Road Maintenance Detection: Authorities and road management teams can use the model to identify cones that mark road hazards or maintenance work, providing important information for planning road repair operations.

    5. AR Games Development: Developers of augmented reality driving- or navigation-based games could use the "Spit Bridge Cones" model to map real-world cone locations into the game environment for enhanced realism.

  9. Experimental results of YOLOv8+WIOU.

    • plos.figshare.com
    xls
    Updated Mar 21, 2024
    Cite
    Meiling Shi; Dongling Zheng; Tianhao Wu; Wenjing Zhang; Ruijie Fu; Kailiang Huang (2024). Experimental results of YOLOv8+WIOU. [Dataset]. http://doi.org/10.1371/journal.pone.0299902.t006
    Available download formats: xls
    Dataset updated
    Mar 21, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Meiling Shi; Dongling Zheng; Tianhao Wu; Wenjing Zhang; Ruijie Fu; Kailiang Huang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Accurate identification of small tea buds is a key technology for tea harvesting robots, which directly affects tea quality and yield. However, due to the complexity of the tea plantation environment and the diversity of tea buds, accurate identification remains an enormous challenge. Current methods based on traditional image processing and machine learning fail to effectively extract subtle features and morphology of small tea buds, resulting in low accuracy and robustness. To achieve accurate identification, this paper proposes a small object detection algorithm called STF-YOLO (Small Target Detection with Swin Transformer and Focused YOLO), which integrates the Swin Transformer module and the YOLOv8 network to improve the detection ability of small objects. The Swin Transformer module extracts visual features based on a self-attention mechanism, which captures global and local context information of small objects to enhance feature representation. The YOLOv8 network is an object detector based on deep convolutional neural networks, offering high speed and precision. Based on the YOLOv8 network, modules including Focus and Depthwise Convolution are introduced to reduce computation and parameters, increase receptive field and feature channels, and improve feature fusion and transmission. Additionally, the Wise Intersection over Union loss is utilized to optimize the network. Experiments conducted on a self-created dataset of tea buds demonstrate that the STF-YOLO model achieves outstanding results, with an accuracy of 91.5% and a mean Average Precision of 89.4%. These results are significantly better than other detectors. Results show that, compared to mainstream algorithms (YOLOv8, YOLOv7, YOLOv5, and YOLOx), the model improves accuracy and F1 score by 5-20.22 percentage points and 0.03-0.13, respectively, proving its effectiveness in enhancing small object detection performance. This research provides technical means for the accurate identification of small tea buds in complex environments and offers insights into small object detection. Future research can further optimize model structures and parameters for more scenarios and tasks, as well as explore data augmentation and model fusion methods to improve generalization ability and robustness.
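
    The depthwise convolution mentioned above trades a dense 3x3 convolution for a per-channel 3x3 convolution followed by a 1x1 pointwise convolution, which is where the parameter and computation savings come from. A generic PyTorch sketch, not the authors' STF-YOLO code; channel counts and the SiLU activation are assumptions:

        import torch.nn as nn

        class DepthwiseSeparableConv(nn.Module):
            """3x3 depthwise conv (one filter per input channel, groups=in_ch)
            followed by a 1x1 pointwise conv that mixes channels."""
            def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
                super().__init__()
                self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                           padding=1, groups=in_ch, bias=False)
                self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
                self.bn = nn.BatchNorm2d(out_ch)
                self.act = nn.SiLU()

            def forward(self, x):
                return self.act(self.bn(self.pointwise(self.depthwise(x))))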

  10. Bird vs Drone

    • kaggle.com
    zip
    Updated Feb 24, 2025
    Cite
    Locked_in_hell (2025). Bird vs Drone [Dataset]. https://www.kaggle.com/datasets/stealthknight/bird-vs-drone
    Available download formats: zip (1129073439 bytes)
    Dataset updated
    Feb 24, 2025
    Authors
    Locked_in_hell
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    YOLO-based Segmented Dataset for Drone vs. Bird Detection for Deep and Machine Learning Algorithms

    Unmanned aerial vehicles (UAVs), or drones, have witnessed a sharp rise in both commercial and recreational use, but this surge has brought about significant security concerns. Drones, when misidentified or undetected, can pose risks to people, infrastructure, and air traffic, especially when confused with other airborne objects, such as birds. To overcome this challenge, accurate detection systems are essential. However, a reliable dataset for distinguishing between drones and birds has been lacking, hindering the progress of effective models in this field.

    This dataset is designed to fill this gap, enabling the development and fine-tuning of models to better identify drones and birds in various environments. The dataset comprises a diverse collection of images, sourced from Pexel’s website, representing birds and drones in motion. These images were captured from video frames and are segmented, augmented, and pre-processed to simulate different environmental conditions, enhancing the model's training process.

    Formatted in accordance with the YOLOv7 PyTorch specification, the dataset is organized into three folders: Test, Train, and Valid. Each folder contains two sub-folders, Images and Labels, with the Labels folder including the associated metadata in plaintext format. This metadata provides valuable information about the detected objects within each image, allowing the model to accurately learn and detect drones and birds in varying circumstances. The dataset contains a total of 20,925 images, all with a resolution of 640 x 640 pixels in JPEG format, providing comprehensive training and validation opportunities for machine learning models.

    • Test Folder: Contains 889 images (both drone and bird images). The folder has sub-categories marked as BT (Bird Test Images) and DT (Drone Test Images).

    • Train Folder: With a total of 18,323 images, this folder includes both drone and bird images, also categorized as BT and DT.

    • Valid Folder: Consisting of 1,740 images, the images in this folder are similarly categorized into BT and DT.

    This dataset is essential for training more accurate models that can differentiate between drones and birds in real-time applications, thereby improving the reliability of drone detection systems for enhanced security and efficiency.
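
    Given the Test/Train/Valid layout with Images and Labels sub-folders described above, a small sketch to verify that every image has a matching label file; the dataset root and the file extensions are assumptions:

        from pathlib import Path

        ROOT = Path("bird-vs-drone")  # hypothetical dataset root

        for split in ("Train", "Valid", "Test"):
            images = sorted((ROOT / split / "Images").glob("*.jpg"))
            missing = [p.name for p in images
                       if not (ROOT / split / "Labels" / f"{p.stem}.txt").exists()]
            print(f"{split}: {len(images)} images, {len(missing)} without labels")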

  11. Counter Strike 2 Body and Head Classification

    • kaggle.com
    zip
    Updated Jan 7, 2024
    Cite
    Ömer Faruk Günaydın (2024). Counter Strike 2 Body and Head Classification [Dataset]. https://www.kaggle.com/datasets/merfarukgnaydn/counter-strike-2-body-and-head-classification
    Available download formats: zip (1478346482 bytes)
    Dataset updated
    Jan 7, 2024
    Authors
    Ömer Faruk Günaydın
    Description

    https://github.com/siromermer/CS2-CSGO-Yolov8-Yolov7-ObjectDetection

    1. ct_body
    2. ct_head
    3. t_body
    4. t_head

    In the .yaml file there are 5 classes, but the actual number of classes is 4. When annotating the images I mistakenly left a blank line in the classes.txt file, which produced an empty class (class 0 in this case). It won't cause any problems; I just wanted to inform Kaggle users. The dataset is a little small for now, but I will increase the number of images as soon as possible.
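
    To confirm that the spurious class 0 never actually appears in the annotations, one can scan the YOLO label files and count the class indices that are used. A minimal sketch; the labels directory and file extension are assumptions:

        from collections import Counter
        from pathlib import Path

        used = Counter()
        for label_file in Path("labels").rglob("*.txt"):   # hypothetical labels directory
            for line in label_file.read_text().splitlines():
                if line.strip():
                    used[int(line.split()[0])] += 1        # first field is the class index

        print(used)   # expected: indices for the 4 real classes, no class 0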

  12. Comparison of YOLO-DSTD model with other models.

    • figshare.com
    xls
    Updated Oct 28, 2025
    Cite
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain (2025). Comparison of YOLO-DSTD model with other models. [Dataset]. http://doi.org/10.1371/journal.pone.0334333.t005
    Available download formats: xls
    Dataset updated
    Oct 28, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Traditional manual inspection approaches face challenges due to the reliance on the experience and alertness of operators, which limits their ability to meet the growing demands for efficiency and precision in modern manufacturing processes. Deep learning techniques, particularly in object detection, have shown significant promise for various applications. This paper proposes an improved YOLOv11-based method for surface defect detection in electronic products, aiming to address the limitations of existing YOLO models in handling complex backgrounds and small target defects. By introducing the MD-C2F module, DualConv module, and Inner_MPDIoU loss function, the improved YOLOv11 model has achieved significant improvements in precision, recall rate, detection speed, and other aspects. The improved YOLOv11 model demonstrates notable improvements in performance, with a precision increase from 90.9% to 93.1%, and a recall rate improvement from 77.0% to 84.6%. Furthermore, it shows a 4.6% rise in mAP50, from 84.0% to 88.6%. When compared to earlier YOLO versions such as YOLOv7, YOLOv8, and YOLOv9, the improved YOLOv11 achieves a significantly higher precision of 89.3% in resistor detection, surpassing YOLOv7’s 54.3% and YOLOv9’s 88.0%. In detecting defects like LED lights and capacitors, the improved YOLOv11 reaches mAP50 values of 77.8% and 85.3%, respectively, both outperforming the other models. Additionally, in the generalization tests conducted on the PKU-Market-PCB dataset, the model’s detection accuracy improved from 91.4% to 94.6%, recall from 82.2% to 91.2%, and mAP50 from 91.8% to 95.4%. These findings emphasize that the proposed YOLOv11 model successfully tackles the challenges of detecting small defects in complex backgrounds and across varying scales. It significantly enhances detection accuracy, recall, and generalization ability, offering a dependable automated solution for defect detection in electronic product manufacturing.
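
    For reference, the plain MPDIoU that Inner_MPDIoU builds on combines IoU with the normalised squared distances between corresponding box corners. A sketch under that standard formulation; the paper's inner-box rescaling is not reproduced here:

        def mpdiou(pred, gt, img_w, img_h):
            """Plain MPDIoU between two (x1, y1, x2, y2) boxes.

            IoU is penalised by the squared distances between the top-left and
            bottom-right corners, normalised by the squared image diagonal."""
            ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
            ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
            inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
            area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
            area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
            iou = inter / (area_p + area_g - inter + 1e-9)
            d1 = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2  # top-left corners
            d2 = (pred[2] - gt[2]) ** 2 + (pred[3] - gt[3]) ** 2  # bottom-right corners
            norm = img_w ** 2 + img_h ** 2
            return iou - d1 / norm - d2 / norm                    # the loss is typically 1 - MPDIoU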

  13. Comparison with other mainstream improved models.

    • figshare.com
    xls
    Updated Oct 28, 2025
    Cite
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain (2025). Comparison with other mainstream improved models. [Dataset]. http://doi.org/10.1371/journal.pone.0334333.t006
    Available download formats: xls
    Dataset updated
    Oct 28, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Traditional manual inspection approaches face challenges due to the reliance on the experience and alertness of operators, which limits their ability to meet the growing demands for efficiency and precision in modern manufacturing processes. Deep learning techniques, particularly in object detection, have shown significant promise for various applications. This paper proposes an improved YOLOv11-based method for surface defect detection in electronic products, aiming to address the limitations of existing YOLO models in handling complex backgrounds and small target defects. By introducing the MD-C2F module, DualConv module, and Inner_MPDIoU loss function, the improved YOLOv11 model has achieved significant improvements in precision, recall rate, detection speed, and other aspects. The improved YOLOv11 model demonstrates notable improvements in performance, with a precision increase from 90.9% to 93.1%, and a recall rate improvement from 77.0% to 84.6%. Furthermore, it shows a 4.6% rise in mAP50, from 84.0% to 88.6%. When compared to earlier YOLO versions such as YOLOv7, YOLOv8, and YOLOv9, the improved YOLOv11 achieves a significantly higher precision of 89.3% in resistor detection, surpassing YOLOv7’s 54.3% and YOLOv9’s 88.0%. In detecting defects like LED lights and capacitors, the improved YOLOv11 reaches mAP50 values of 77.8% and 85.3%, respectively, both outperforming the other models. Additionally, in the generalization tests conducted on the PKU-Market-PCB dataset, the model’s detection accuracy improved from 91.4% to 94.6%, recall from 82.2% to 91.2%, and mAP50 from 91.8% to 95.4%. These findings emphasize that the proposed YOLOv11 model successfully tackles the challenges of detecting small defects in complex backgrounds and across varying scales. It significantly enhances detection accuracy, recall, and generalization ability, offering a dependable automated solution for defect detection in electronic product manufacturing.

  14. Test results for four types of defect detection.

    • figshare.com
    xls
    Updated Oct 28, 2025
    Cite
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain (2025). Test results for four types of defect detection. [Dataset]. http://doi.org/10.1371/journal.pone.0334333.t003
    Available download formats: xls
    Dataset updated
    Oct 28, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Traditional manual inspection approaches face challenges due to the reliance on the experience and alertness of operators, which limits their ability to meet the growing demands for efficiency and precision in modern manufacturing processes. Deep learning techniques, particularly in object detection, have shown significant promise for various applications. This paper proposes an improved YOLOv11-based method for surface defect detection in electronic products, aiming to address the limitations of existing YOLO models in handling complex backgrounds and small target defects. By introducing the MD-C2F module, DualConv module, and Inner_MPDIoU loss function, the improved YOLOv11 model has achieved significant improvements in precision, recall rate, detection speed, and other aspects. The improved YOLOv11 model demonstrates notable improvements in performance, with a precision increase from 90.9% to 93.1%, and a recall rate improvement from 77.0% to 84.6%. Furthermore, it shows a 4.6% rise in mAP50, from 84.0% to 88.6%. When compared to earlier YOLO versions such as YOLOv7, YOLOv8, and YOLOv9, the improved YOLOv11 achieves a significantly higher precision of 89.3% in resistor detection, surpassing YOLOv7’s 54.3% and YOLOv9’s 88.0%. In detecting defects like LED lights and capacitors, the improved YOLOv11 reaches mAP50 values of 77.8% and 85.3%, respectively, both outperforming the other models. Additionally, in the generalization tests conducted on the PKU-Market-PCB dataset, the model’s detection accuracy improved from 91.4% to 94.6%, recall from 82.2% to 91.2%, and mAP50 from 91.8% to 95.4%. These findings emphasize that the proposed YOLOv11 model successfully tackles the challenges of detecting small defects in complex backgrounds and across varying scales. It significantly enhances detection accuracy, recall, and generalization ability, offering a dependable automated solution for defect detection in electronic product manufacturing.

  15. Results of ablation experiment.

    • plos.figshare.com
    xls
    Updated Oct 28, 2025
    Cite
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain (2025). Results of ablation experiment. [Dataset]. http://doi.org/10.1371/journal.pone.0334333.t004
    Available download formats: xls
    Dataset updated
    Oct 28, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Traditional manual inspection approaches face challenges due to the reliance on the experience and alertness of operators, which limits their ability to meet the growing demands for efficiency and precision in modern manufacturing processes. Deep learning techniques, particularly in object detection, have shown significant promise for various applications. This paper proposes an improved YOLOv11-based method for surface defect detection in electronic products, aiming to address the limitations of existing YOLO models in handling complex backgrounds and small target defects. By introducing the MD-C2F module, DualConv module, and Inner_MPDIoU loss function, the improved YOLOv11 model has achieved significant improvements in precision, recall rate, detection speed, and other aspects. The improved YOLOv11 model demonstrates notable improvements in performance, with a precision increase from 90.9% to 93.1%, and a recall rate improvement from 77.0% to 84.6%. Furthermore, it shows a 4.6% rise in mAP50, from 84.0% to 88.6%. When compared to earlier YOLO versions such as YOLOv7, YOLOv8, and YOLOv9, the improved YOLOv11 achieves a significantly higher precision of 89.3% in resistor detection, surpassing YOLOv7’s 54.3% and YOLOv9’s 88.0%. In detecting defects like LED lights and capacitors, the improved YOLOv11 reaches mAP50 values of 77.8% and 85.3%, respectively, both outperforming the other models. Additionally, in the generalization tests conducted on the PKU-Market-PCB dataset, the model’s detection accuracy improved from 91.4% to 94.6%, recall from 82.2% to 91.2%, and mAP50 from 91.8% to 95.4%. These findings emphasize that the proposed YOLOv11 model successfully tackles the challenges of detecting small defects in complex backgrounds and across varying scales. It significantly enhances detection accuracy, recall, and generalization ability, offering a dependable automated solution for defect detection in electronic product manufacturing.

  16. Summary of the fish detection dataset.

    • plos.figshare.com
    xls
    Updated Jun 17, 2023
    Cite
    Hassaan Malik; Ahmad Naeem; Shahzad Hassan; Farman Ali; Rizwan Ali Naqvi; Dong Keon Yon (2023). Summary of the fish detection dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0284992.t001
    Available download formats: xls
    Dataset updated
    Jun 17, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Hassaan Malik; Ahmad Naeem; Shahzad Hassan; Farman Ali; Rizwan Ali Naqvi; Dong Keon Yon
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Regular monitoring of the number of various fish species in a variety of habitats is essential for marine conservation efforts and marine biology research. To address the shortcomings of existing manual underwater video fish sampling methods, a plethora of computer-based techniques are proposed. However, there is no perfect approach for the automated identification and categorizing of fish species. This is primarily due to the difficulties inherent in capturing underwater videos, such as ambient changes in luminance, fish camouflage, dynamic environments, watercolor, poor resolution, shape variation of moving fish, and tiny differences between certain fish species. This study has proposed a novel Fish Detection Network (FD_Net) for the detection of nine different types of fish species using a camera-captured image that is based on the improved YOLOv7 algorithm by exchanging Darknet53 for MobileNetv3 and depthwise separable convolution for 3 x 3 filter size in the augmented feature extraction network bottleneck attention module (BNAM). The mean average precision (mAP) is 14.29% higher than it was in the initial version of YOLOv7. The network that is utilized in the method for the extraction of features is an improved version of DenseNet-169, and the loss function is an Arcface Loss. Widening the receptive field and improving the capability of feature extraction are achieved by incorporating dilated convolution into the dense block, removing the max-pooling layer from the trunk, and incorporating the BNAM into the dense block of the DenseNet-169 neural network. The results of several experiments comparisons and ablation experiments demonstrate that our proposed FD_Net has a higher detection mAP than YOLOv3, YOLOv3-TL, YOLOv3-BL, YOLOv4, YOLOv5, Faster-RCNN, and the most recent YOLOv7 model, and is more accurate for target fish species detection tasks in complex environments.
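
    The receptive-field widening described above comes from placing dilated 3x3 convolutions inside the dense block. A generic PyTorch sketch of one such dense layer, not the authors' FD_Net code; channel counts and the dilation rate are assumptions:

        import torch
        import torch.nn as nn

        class DilatedDenseLayer(nn.Module):
            """Dense layer whose 3x3 conv uses dilation to widen the receptive
            field; padding=dilation keeps the spatial size unchanged."""
            def __init__(self, in_ch: int, growth: int, dilation: int = 2):
                super().__init__()
                self.bn = nn.BatchNorm2d(in_ch)
                self.conv = nn.Conv2d(in_ch, growth, kernel_size=3,
                                      padding=dilation, dilation=dilation, bias=False)

            def forward(self, x):
                out = self.conv(torch.relu(self.bn(x)))
                return torch.cat([x, out], dim=1)  # dense connectivity: concatenate features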
