9 datasets found
  1. Input and output datasets for training the AI-based vine segmentation model (YOLOv9)

    Input and output datasets for training the AI-based vine segmentation model...

    • data.niaid.nih.gov
    Updated Jan 7, 2025
    Cite
    Marengo, Ilaria; Sirsat, Manisha (2025). Input and output datasets for training the AI-based vine segmentation model (YOLOv9) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_14605848
    Dataset updated
    Jan 7, 2025
    Authors
    Marengo, Ilaria; Sirsat, Manisha
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Description: This dataset contains the following data, to be used as INPUT for the segmentation model, stored in two distinct folders:

    1. Folder orthomosaics: RGB orthomosaics at six time points (2024-05-28, 2024-06-24, 2024-07-22, 2024-08-06, 2024-08-23, 2024-09-04). The orthomosaics have been warped, masked, and georeferenced so that they overlap one another.
    2. Folder cell_5x5metres: 751 vector files (.gpkg format) representing individual 5x5-metre cells for the vineyards AB01, AB02 and TR01 in the Reynolds study area. These cells are used to mask the orthomosaics in order to "augment" the number of images as required by the AI-based model.
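
    The 5x5 m cells above ship as .gpkg vector files; as a rough illustration of how such a grid is laid out, here is a minimal pure-Python sketch that generates cell bounds over a hypothetical extent (the real cells are georeferenced to the study area and read with a GIS library, not generated like this):

```python
# Sketch: generate 5 x 5 m grid-cell bounds over a vineyard extent, mirroring
# the cell_5x5metres layer described above. The extent is hypothetical.
def grid_cells(xmin, ymin, xmax, ymax, size=5.0):
    """Return (xmin, ymin, xmax, ymax) bounds for each size x size cell."""
    cells = []
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            # Edge cells are clipped to the extent rather than overhanging it.
            cells.append((x, y, min(x + size, xmax), min(y + size, ymax)))
            x += size
        y += size
    return cells

cells = grid_cells(0.0, 0.0, 25.0, 15.0)  # 5 columns x 3 rows = 15 cells
```

    Each cell's bounds can then be used to clip ("mask") an orthomosaic into one training image.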

    The dataset also includes examples of the OUTPUTS obtained from testing the AI-based segmentation model. These are stored in three distinct folders:

    1. Folder multi_vines: individual JSON files representing the segmented vines, generated from the YOLO txt files.
    2. Folder merged_vines: vector shapefiles obtained by merging the single JSON files and representing all the segmented vines.
    3. Folder vegetation_indices: raster files (.TIF) representing vegetation indices (NDVI, GNDVI and NDRE) calculated for each segmented vine.
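
    The multi_vines JSON files are generated from YOLO segmentation txt output; a minimal sketch of that conversion might look like the following (the output keys are illustrative, not the dataset's actual schema):

```python
import json

# Sketch: convert one YOLO segmentation label line (class id followed by
# normalised x/y pairs) into a JSON polygon in pixel coordinates, roughly
# what a per-vine JSON file could store.
def yolo_seg_to_json(line, img_w, img_h):
    parts = line.split()
    cls = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    polygon = [(coords[i] * img_w, coords[i + 1] * img_h)
               for i in range(0, len(coords), 2)]
    return json.dumps({"class": cls, "polygon": polygon})

vine = yolo_seg_to_json("0 0.25 0.5 0.75 0.5 0.5 0.25", 1000, 1000)
```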

    Possible applications: the dataset can be used by anyone interested in testing and improving the YOLOv9 model, or other AI-based models, for segmentation of individual vines or vine rows.


  2. HelmetViolations

    • kaggle.com
    zip
    Updated Dec 12, 2024
    Cite
    Parisa Karimi Darabi (2024). HelmetViolations [Dataset]. https://www.kaggle.com/datasets/pkdarabi/helmet
    Available download formats: zip (211,756,563 bytes)
    Dataset updated
    Dec 12, 2024
    Authors
    Parisa Karimi Darabi
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    HelmetViolations (2024-12-08)

    This dataset, HelmetViolations, focuses on identifying and classifying motorcycle riders based on helmet usage and detecting motorcycle license plates from a top-view perspective. Exported via Roboflow on December 8, 2024, this dataset is designed for YOLOv9-based object detection tasks. It is particularly valuable for projects aimed at improving road safety and enforcing helmet laws through automated systems.

    Dataset Overview

    • Total Images: 1,004 (including augmented versions)
    • Annotations: YOLOv9 format with three classes:
      • Plate
      • WithHelmet
      • WithoutHelmet
    • Pre-processing:
      • Images resized to 640x640 resolution.
      • Auto-orientation applied (with EXIF stripping).
      • Grayscale conversion (CRT phosphor effect).
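
    As a sketch of how the YOLOv9-format labels can be decoded, assuming the class-id order follows the listing above (verify against the export's data.yaml before relying on it):

```python
# Sketch: decode one YOLO-format label line for this dataset into a class
# name and a pixel-space bounding box. The class-id order is an assumption.
CLASSES = ["Plate", "WithHelmet", "WithoutHelmet"]

def decode_label(line, img_w=640, img_h=640):
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    # YOLO stores the normalised centre and size; convert to corner coords.
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    return CLASSES[int(cls)], (x1, y1, x1 + w * img_w, y1 + h * img_h)

name, box = decode_label("1 0.5 0.5 0.25 0.25")
```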

    Augmentation Details

    To enhance diversity and improve model generalization, the following augmentations were applied to create 3 versions of each source image:
    - 50% probability of horizontal flip
    - Random rotation between -15° and +15°
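
    The two augmentations can be sketched with Pillow as follows; Roboflow applied them at export time, so this only illustrates the recipe:

```python
import random
from PIL import Image, ImageOps

# Sketch of the augmentation recipe above: 50% chance of a horizontal flip,
# then a random rotation between -15 and +15 degrees.
def augment(img, rng=random):
    if rng.random() < 0.5:
        img = ImageOps.mirror(img)        # horizontal flip
    angle = rng.uniform(-15.0, 15.0)      # degrees
    return img.rotate(angle)              # default expand=False keeps 640x640

src = Image.new("RGB", (640, 640))
versions = [augment(src) for _ in range(3)]  # 3 versions per source image
```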

    Data Splits

    • Training Set: 363 images (original + augmented).
    • Validation Set: 53 images.
    • Test Set: Included in the export for model evaluation.

    Use Cases

    This dataset is ideal for:
    - Helmet compliance monitoring systems.
    - License plate detection and recognition tasks.
    - General object detection research focusing on motorcycle-related scenarios.

    Roboflow Integration

    This dataset was created and managed using Roboflow, an end-to-end computer vision platform for dataset annotation, augmentation, and export.

  3. Thermal Solar PV Anomaly Detection Dataset

    • kaggle.com
    zip
    Updated Apr 11, 2025
    Cite
    Parisa Karimi Darabi (2025). Thermal Solar PV Anomaly Detection Dataset [Dataset]. https://www.kaggle.com/datasets/pkdarabi/solarpanel
    Available download formats: zip (383,505,596 bytes)
    Dataset updated
    Apr 11, 2025
    Authors
    Parisa Karimi Darabi
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    📡 ThermoSolar-PV: A Curated Thermal Imagery Dataset for Anomaly Detection in Photovoltaic Modules

    By: P.K.Darabi
    🔗 ResearchGate Profile

    🔍 Abstract

    Automated fault detection in photovoltaic (PV) systems is a critical research area in modern energy science. This article introduces ThermoSolar-PV, a high-quality and curated thermal imagery dataset designed to advance the state of the art in AI-powered PV module diagnostics. The dataset is openly available to support research in computer vision, anomaly detection, and smart energy systems.

    🌞 Motivation

    With the global push toward clean energy, solar power plants are growing rapidly. However, fault detection in PV modules remains highly manual, expensive, and error-prone — especially across large installations. Thermal imagery captured via drones offers a scalable monitoring solution. When paired with deep learning, it enables powerful anomaly detection with minimal human supervision.

    Despite its potential, the research community still lacks a standardized, open-source dataset of thermal PV imagery. ThermoSolar-PV addresses this gap.

    📦 Dataset Overview

    • Images: 2,723 original thermal images
    • Annotations: 7,772 labeled objects
    • Format: YOLOv9 (TXT + image)
    • Classes: 8 distinct thermal anomalies
    • Augmented Samples: ~7,500 (flip, rotate, shear, brightness, hue, exposure)
    • Resolution: 640×640 (grayscale)

    🔍 Detected Anomalies:

    1. Single Hotspot
    2. Multi Hotspots
    3. Single Diode
    4. Multi Diode
    5. Single Bypassed Substring
    6. Multi Bypassed Substring
    7. String (Open Circuit)
    8. String (Reversed Polarity)
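
    For reference, the eight classes can be held as a YOLO-style id-to-name map; the id order below is assumed to follow the listing above, so verify it against the dataset's data.yaml:

```python
# Sketch: assumed id -> name map for the eight thermal anomaly classes.
ANOMALIES = {
    0: "Single Hotspot",
    1: "Multi Hotspots",
    2: "Single Diode",
    3: "Multi Diode",
    4: "Single Bypassed Substring",
    5: "Multi Bypassed Substring",
    6: "String (Open Circuit)",
    7: "String (Reversed Polarity)",
}

def names(class_ids):
    """Map predicted class ids to human-readable anomaly names."""
    return [ANOMALIES[i] for i in class_ids]
```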

    🔧 Preprocessing Pipeline

    All images were preprocessed with:
    - Orientation correction via EXIF
    - Resizing to 640×640
    - Grayscale normalization
    - Data augmentation to improve generalization

    Annotations were prepared using Roboflow and follow the YOLOv9 format, enabling plug-and-play usage for most modern object detection frameworks.

    🤖 End-to-End Project (with API)

    To demonstrate the dataset’s real-world applicability, we also developed a complete end-to-end anomaly detection pipeline using the YOLOv9 model, exposed via a RESTful API (FastAPI). This enables direct inference on new drone imagery, returning both bounding boxes and anomaly classes.
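
    A response from such an API might be assembled as below; the schema (keys and box format) is hypothetical, since the actual FastAPI service in the linked repository defines its own:

```python
import json

# Sketch: serialise detections into a JSON payload an inference API could
# return. The field names here are illustrative assumptions.
def to_payload(detections):
    """detections: list of (class_name, confidence, (x1, y1, x2, y2))."""
    return json.dumps({
        "detections": [
            {"class": c, "confidence": round(p, 3), "box": list(b)}
            for c, p, b in detections
        ]
    })

payload = to_payload([("Single Hotspot", 0.91, (120, 80, 180, 140))])
```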

    Researchers can:
    - Train their models using the dataset
    - Benchmark against our baseline (YOLOv9, mAP@0.5 = 78%)
    - Reuse the trained model or API in downstream solar diagnostics tasks

    🧪 How to Use the Dataset

    You can download the dataset and pre-trained YOLOv9 model from:

    • 📄 Model & Code: Available on GitHub

    You are free to use this dataset in your research, publications, and real-world projects. If you use ThermoSolar-PV, please cite this article and credit:

    🔗 Related Project

    You can explore a full open-source implementation built upon this dataset, including model training, image preprocessing, YOLOv9 inference, and an API endpoint for real-time anomaly detection:

    📎 ThermalDetector – GitHub Repository

    This end-to-end project allows researchers and developers to plug the dataset directly into a real-world application pipeline.

    🧾 Citation & Research

    If you use this dataset in your research, academic writing, or published projects, please cite the related work on ResearchGate:

    📎 ResearchGate Project Link, DOI: 10.13140/RG.2.2.12595.54564

    This supports our mission of open collaboration and helps us track the academic impact of this contribution.

    📜 License

    MIT License — Free for academic and commercial use. Attribution to the author is kindly appreciated.

    ✉️ Contact & Collaboration

    For academic questions, implementation help, or collaborative research opportunities, feel free to reach out:

    • 📧 Email: P.K.Darai@gmail.com
    • 🌐 LinkedIn: LinkedIn

    ⚡ Empowering solar energy systems through artificial intelligence and thermal vision.

  4. Comparison of YOLO-DSTD model with other models

    Comparison of YOLO-DSTD model with other models.

    • figshare.com
    xls
    Updated Oct 28, 2025
    Cite
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain (2025). Comparison of YOLO-DSTD model with other models. [Dataset]. http://doi.org/10.1371/journal.pone.0334333.t005
    Available download formats: xls
    Dataset updated
    Oct 28, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Traditional manual inspection approaches face challenges due to the reliance on the experience and alertness of operators, which limits their ability to meet the growing demands for efficiency and precision in modern manufacturing processes. Deep learning techniques, particularly in object detection, have shown significant promise for various applications.

    This paper proposes an improved YOLOv11-based method for surface defect detection in electronic products, aiming to address the limitations of existing YOLO models in handling complex backgrounds and small target defects. By introducing the MD-C2F module, DualConv module, and Inner_MPDIoU loss function, the improved YOLOv11 model has achieved significant improvements in precision, recall rate, detection speed, and other aspects.

    The improved YOLOv11 model demonstrates notable improvements in performance, with a precision increase from 90.9% to 93.1%, and a recall rate improvement from 77.0% to 84.6%. Furthermore, it shows a 4.6% rise in mAP50, from 84.0% to 88.6%. When compared to earlier YOLO versions such as YOLOv7, YOLOv8, and YOLOv9, the improved YOLOv11 achieves a significantly higher precision of 89.3% in resistor detection, surpassing YOLOv7’s 54.3% and YOLOv9’s 88.0%. In detecting defects like LED lights and capacitors, the improved YOLOv11 reaches mAP50 values of 77.8% and 85.3%, respectively, both outperforming the other models.

    Additionally, in the generalization tests conducted on the PKU-Market-PCB dataset, the model’s detection accuracy improved from 91.4% to 94.6%, recall from 82.2% to 91.2%, and mAP50 from 91.8% to 95.4%. These findings emphasize that the proposed YOLOv11 model successfully tackles the challenges of detecting small defects in complex backgrounds and across varying scales. It significantly enhances detection accuracy, recall, and generalization ability, offering a dependable automated solution for defect detection in electronic product manufacturing.

  5. Comparison with other mainstream improved models

    Comparison with other mainstream improved models.

    • figshare.com
    xls
    Updated Oct 28, 2025
    + more versions
    Cite
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain (2025). Comparison with other mainstream improved models. [Dataset]. http://doi.org/10.1371/journal.pone.0334333.t006
    Available download formats: xls
    Dataset updated
    Oct 28, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Comparison with other mainstream improved models.

  6. Precious Gemstone Identification

    • kaggle.com
    zip
    Updated Mar 28, 2024
    Cite
    GauravKamath02 (2024). Precious Gemstone Identification [Dataset]. https://www.kaggle.com/datasets/gauravkamath02/precious-gemstone-identification
    Available download formats: zip (7,743,109,183 bytes)
    Dataset updated
    Mar 28, 2024
    Authors
    GauravKamath02
    License

    MIT License, https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Precious Gemstone Identification

    Description: This comprehensive dataset comprises annotated images of a diverse range of precious gemstones, meticulously curated for gemstone identification tasks. With 87 unique gemstone classes for classification, including Chalcedony Blue, Amber, Aventurine Yellow, Dumortierite, Pearl, Aventurine Green, and many others, this dataset serves as a valuable resource for training and evaluating machine learning models in gemstone recognition.

    Gemstone Variety: The dataset encompasses a wide spectrum of precious gemstones, ranging from well-known varieties like Emerald, Ruby, Sapphire, and Diamond to lesser-known gems such as Benitoite, Larimar, and Sphene.

    Dataset Split:
    - Train Set: 92% (46,404 images)
    - Validation Set: 4% (1,932 images)
    - Test Set: 4% (1,932 images)

    Preprocessing: Images in the dataset have been preprocessed to ensure consistency and quality:

    • Auto-Orient: Applied to correct orientation inconsistencies.
    • Resize: Images are uniformly resized to 640x640 pixels.
    • Tiling: Organized into a grid of 3 rows x 2 columns for efficient processing.
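
    The tiling step can be sketched as computing crop boxes for a 3-row x 2-column grid over a 640x640 image; edge tiles absorb the rounding remainder here, though the exact tiler the export used may differ:

```python
# Sketch: crop boxes (x1, y1, x2, y2) for a rows x cols tiling of an image.
def tile_boxes(w=640, h=640, rows=3, cols=2):
    xs = [round(c * w / cols) for c in range(cols + 1)]
    ys = [round(r * h / rows) for r in range(rows + 1)]
    return [(xs[c], ys[r], xs[c + 1], ys[r + 1])
            for r in range(rows) for c in range(cols)]

boxes = tile_boxes()  # 6 boxes covering the full image
```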

    Augmentations: To enhance model robustness and generalization, each training example has been augmented with various transformations:

    • Flip: Horizontal and Vertical flips are applied.
    • Rotation: Random rotation between -15° and +15°.
    • Shear: Horizontal and Vertical shearing with a range of ±10°.
    • Saturation: Adjusted randomly between -15% and +15%.
    • Brightness: Random brightness adjustment between -10% and +10%.

    File Formats Available:

    • COCO Segmentation: COCO (Common Objects in Context) Segmentation format is commonly used for semantic segmentation tasks. It provides annotations for object segmentation, where each object instance is labeled with a mask indicating its outline.
    • COCO: COCO format is a widely used standard for object detection and instance segmentation tasks. It includes annotations for bounding boxes around objects, along with corresponding class labels and segmentation masks if applicable.
    • TensorFlow: TensorFlow format typically refers to a data format compatible with TensorFlow, a popular deep learning framework. It often includes annotations in a format suitable for training object detection and segmentation models using TensorFlow.
    • VOC: VOC (Visual Object Classes) format is a standard format for object detection and classification tasks. It includes annotations for bounding boxes around objects, along with class labels and metadata, following the PASCAL VOC dataset format.
    • YOLOv8-obb: YOLOv8-obb format is specific to the YOLO (You Only Look Once) object detection model architecture. It typically includes annotations for object bounding boxes in YOLO format, where each bounding box is defined by its center coordinates, width, height, and class label.
    • YOLOv9 Segmentation: YOLOv9 Segmentation format is tailored for semantic segmentation tasks using the YOLOv9 architecture. It provides annotations for pixel-wise segmentation masks corresponding to object instances, enabling accurate segmentation of objects in images.
    • Server Benchmark: The Server Benchmark format is used for annotated images with bounding boxes for object detection tasks. Each annotation entry in the JSON-like structure contains details about a specific object instance within an image.
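
    As a sketch of the COCO format listed above, here is how bounding boxes can be read from a COCO annotation file; the tiny inline document stands in for a real annotation export:

```python
import json

# Sketch: extract (class name, bbox) pairs from a COCO-format annotation
# document. COCO bboxes are [x, y, width, height] in pixels.
doc = json.loads("""{
  "images": [{"id": 1, "file_name": "ruby_001.jpg"}],
  "annotations": [{"image_id": 1, "category_id": 3, "bbox": [10, 20, 100, 50]}],
  "categories": [{"id": 3, "name": "Ruby"}]
}""")

cats = {c["id"]: c["name"] for c in doc["categories"]}
boxes = [(cats[a["category_id"]], a["bbox"]) for a in doc["annotations"]]
```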

    Disclaimer:

    The images included in this dataset were sourced from various online platforms, primarily the minerals.net and www.rasavgems.com websites, as well as other online datasets. We have curated and annotated these images for the purpose of gemstone identification and made them available in different formats. We do not claim ownership of the original images; any trademarks, logos, or copyrighted materials belong to their respective owners.

    Researchers, enthusiasts and developers interested in gemstone identification, machine learning, and computer vision applications will find this dataset invaluable for training and benchmarking gemstone recognition algorithms.

  7. Test results for four types of defect detection

    Test results for four types of defect detection.

    • figshare.com
    xls
    Updated Oct 28, 2025
    Cite
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain (2025). Test results for four types of defect detection. [Dataset]. http://doi.org/10.1371/journal.pone.0334333.t003
    Available download formats: xls
    Dataset updated
    Oct 28, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Test results for four types of defect detection.

  8. Results of ablation experiment.

    • plos.figshare.com
    xls
    Updated Oct 28, 2025
    Cite
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain (2025). Results of ablation experiment. [Dataset]. http://doi.org/10.1371/journal.pone.0334333.t004
    Available download formats: xls
    Dataset updated
    Oct 28, 2025
    Dataset provided by
    PLOS, http://plos.org/
    Authors
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Results of ablation experiment.

  9. Comparison of MD-C2F module with other channel and spatial attention fusion modules

    • plos.figshare.com
    xls
    Updated Oct 28, 2025
    Cite
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain (2025). Comparison of MD-C2F module with other channel and spatial attention fusion modules. [Dataset]. http://doi.org/10.1371/journal.pone.0334333.t007
    Available download formats: xls
    Dataset updated
    Oct 28, 2025
    Dataset provided by
    PLOS, http://plos.org/
    Authors
    Jianming Meng; Longjian Guo; Wei Hao; Deepak Kumar Jain
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Comparison of MD-C2F module with other channel and spatial attention fusion modules.
