15 datasets found
  1. fencing-scoreboard-yolov8

    • huggingface.co
    Cite
    Mike Stefanov, fencing-scoreboard-yolov8 [Dataset]. https://huggingface.co/datasets/mastefan/fencing-scoreboard-yolov8
    Authors
    Mike Stefanov
    Description

    FENCING SCOREBOARD DATASET (YOLOv8 FORMAT)

    Project: CMU Fencing Classification Project
    Author: Michael Stefanov (Carnegie Mellon University)
    License: MIT
    Date: 2025

      Description:
    

    Labeled images of fencing scoreboards in lit and unlit states, used to train the YOLOv8 detection model. Includes augmented samples and negatives for robust learning.

      Dataset Summary:
    

    Total Images: ~2000
    Splits: train (1600), valid (400)
    Classes: 1 ("scoreboard")
    Format: YOLOv8… See the full description on the dataset page: https://huggingface.co/datasets/mastefan/fencing-scoreboard-yolov8.

  2. Comparison between the improved model and the original YOLOv8 classification...

    • plos.figshare.com
    xls
    Updated Nov 7, 2025
    Cite
    Qingtao Zeng; Lixin Zhang; Jiefeng Zhao; Anping Xu; Yali Qi; Liqin Yu; Wenjing Li; Haochang Xia (2025). Comparison between the improved model and the original YOLOv8 classification model. [Dataset]. http://doi.org/10.1371/journal.pone.0331011.t006
    Available download formats: xls
    Dataset updated
    Nov 7, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Qingtao Zeng; Lixin Zhang; Jiefeng Zhao; Anping Xu; Yali Qi; Liqin Yu; Wenjing Li; Haochang Xia
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Comparison between the improved model and the original YOLOv8 classification model.

  3. Leukemia Classification Dataset

    • universe.roboflow.com
    zip
    Updated May 30, 2025
    Cite
    Zaky Ashari (2025). Leukemia Classification Dataset [Dataset]. https://universe.roboflow.com/zaky-ashari-qeoml/leukemia-classification-qffot
    Available download formats: zip
    Dataset updated
    May 30, 2025
    Dataset authored and provided by
    Zaky Ashari
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Acute Lymphocytic Leukemia ALL
    Description

    Project Overview – Automated B-ALL Cell Classification Pipeline

    1. Motivation

    Manual differentiation of benign hematogones vs. B-ALL malignant sub-types (Pre-B, Pro-B) on bone-marrow smears is time-consuming and error-prone. We built an end-to-end computer-vision workflow that segments single cells, balances class counts, and benchmarks seven modern CNN / YOLO classifiers.

    2. Data & Pre-processing

    Step | Purpose | Key details
    Exploration | quantify imbalance | 3,242 images, 4 original classes (benign 512, pre-B 955, pro-B 796, early-pre-B 979)
    Segmentation | isolate blast regions | LAB A-channel → K-means (k = 2) → morphology → size ≥ 500 px
    Standardisation | model-ready tensors | 224 × 224 resize → float [0–1] normalisation
    Pairing & Splitting | dual-input & leakage-free split | original + mask pairs, 85/10/5 % (train/val/test)
    Augmentation | balance classes | flips to 811 pairs per class (training only)
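    The segmentation recipe in the table (A-channel → K-means with k = 2 → size ≥ 500 px filter) can be sketched in plain NumPy. This is an illustrative reconstruction under my own naming (`kmeans_1d`, `blast_mask`), not the authors' pipeline, and it omits the morphology step:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    # Minimal 1-D k-means, enough to split the LAB A-channel into 2 groups.
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return centers, labels

def blast_mask(a_channel, min_area=500):
    # Keep the higher-A (redder-staining) cluster, then apply the
    # size >= 500 px filter; returns an all-False mask if too small.
    flat = a_channel.reshape(-1).astype(float)
    centers, labels = kmeans_1d(flat, k=2)
    mask = (labels == np.argmax(centers)).reshape(a_channel.shape)
    return mask if mask.sum() >= min_area else np.zeros_like(mask)
```

    In the real pipeline the mask would additionally be cleaned with morphological opening/closing before the area filter.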

    3. Model Zoo

    • Mask-only single-stream CNNs
      • EfficientNet-B0, MobileNetV2, NASNet-Mobile
    • Dual-channel CNN
      • MobileNetV2 backbone shared for original + mask
    • Ultralytics YOLO classifiers
      • v8-n, v11-n (C3k2 + C2PSA), v12-n (A2C2f attention)

    All CNN backbones are frozen; heads are trained for 30 epochs, LR = 1e-3, batch = 32.

    4. Final Test Performance

    Model | Top-1 Acc | Params | Input
    YOLOv8-n | 100 % | 1.82 M | mask
    YOLOv11-n | 100 % | 1.63 M | mask
    YOLOv12-n | 100 % | 1.82 M | mask
    MobileNetV2 | 99.1 % | 2.59 M (0.33 M trainable) | mask
    Dual-channel MobileNetV2 | 99.1 % | 3.70 M (1.44 M trainable) | ori + mask
    NASNet-Mobile | 96.4 % | 4.54 M (0.27 M trainable) | mask
    EfficientNet-B0 | 35.1 % | 4.38 M (0.33 M trainable) | mask

    Best performer: YOLOv8-n – 100 % accuracy with lightweight 1.8 M parameters.

    5. Quick sanity test (3 unseen slides)

    • YOLO family classified all three samples correctly (conf > 0.91).
    • The MobileNetV2 variants matched the YOLO results; the only miss was one pre-B false negative from EfficientNet-B0.

    6. Take-aways & Next Steps

    • Segmentation + mask-only input already captures nearly all diagnostic cues – dual-channel adds no gain.
    • YOLOv11-n offers the best accuracy-to-parameter ratio → good for edge devices.
    • To strengthen clinical validity we will
      1. split data at patient level,
      2. add richer augmentations (rotations, brightness, elastic),
      3. fine-tune last backbone blocks,
      4. provide Grad-CAM heat-maps for explainability,
      5. re-evaluate on a multi-centre external cohort.

    Bottom line: A compact YOLO classifier, fed only segmented cell masks, can achieve perfect subtype recognition on our internal dataset—promising for real-time, point-of-care B-ALL decision support.

  4. Scoliosis X-ray Dataset (YOLOv5 Format) disks

    • kaggle.com
    zip
    Updated Nov 7, 2025
    Cite
    Muhammad Salman (2025). Scoliosis X-ray Dataset (YOLOv5 Format) disks [Dataset]. https://www.kaggle.com/datasets/salmankey/scoliosis-x-ray-dataset-yolov5-format-disks
    Available download formats: zip (236170694 bytes)
    Dataset updated
    Nov 7, 2025
    Authors
    Muhammad Salman
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    🩻 Scoliosis Spine Detection Dataset (YOLOv5 Ready)

    This dataset is a curated and preprocessed version of a Scoliosis Spine X-ray dataset, designed specifically for deep learning–based object detection and classification tasks using frameworks like YOLOv5, YOLOv8, and TensorFlow Object Detection API.

    It contains annotated spinal X-ray images categorized into three classes, representing different spinal conditions.

    🧩 Dataset Configuration

    train: scoliosis2.v16i.tensorflow/images/train
    val: scoliosis2.v16i.tensorflow/images/valid
    test: scoliosis2.v16i.tensorflow/images/test
    
    nc: 3
    names: ['Vertebra', 'scoliosis spine', 'normal spine']
    

    ⚙️ Data Details

    • Train Set: /images/train
    • Validation Set: /images/valid
    • Test Set: /images/test
    • Total Classes: 3
    • Annotations: YOLO format (.txt files with class, x_center, y_center, width, height)
    • Image Format: .jpg / .png
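    The YOLO annotation convention above (one line per box: class index plus center/size coordinates normalized to [0, 1]) can be decoded in a few lines of Python; a generic sketch, not code shipped with the dataset:

```python
def parse_yolo_label(line, img_w, img_h):
    # One YOLO label line: "class x_center y_center width height" (normalized).
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    x1, y1 = (xc - w / 2) * img_w, (yc - h / 2) * img_h
    x2, y2 = (xc + w / 2) * img_w, (yc + h / 2) * img_h
    return int(cls), (round(x1), round(y1), round(x2), round(y2))

names = ['Vertebra', 'scoliosis spine', 'normal spine']  # from the data.yaml above
cls, box = parse_yolo_label("0 0.5 0.5 0.25 0.5", img_w=640, img_h=640)
# names[cls] == 'Vertebra'; box == (240, 160, 400, 480)
```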

    Classes Description:

    1. Vertebra — Labeled vertebral regions used for bone localization.
    2. Scoliosis Spine — X-rays showing curvature or deformity in the spinal structure.
    3. Normal Spine — Healthy, straight spinal alignment without scoliosis signs.

    🧠 Augmentations Applied

    To enhance diversity and model robustness, the dataset was augmented using:

    • Rotation
    • Brightness and contrast adjustment
    • Horizontal flip
    • Random zoom and cropping
    • Gaussian noise

    🎯 Use Cases

    This dataset is ideal for:

    • Scoliosis detection and classification research
    • Vertebra localization and spine anomaly detection
    • Medical object detection experiments (YOLOv5, YOLOv8, EfficientDet)
    • Transfer learning on medical X-ray datasets
    • Explainable AI and model comparison studies

    📊 Source

    The dataset was preprocessed and labeled using Roboflow, then manually refined and balanced for research use. Originally derived from a spinal X-ray dataset and adapted for deep learning object detection.


    🧾 License

    CC BY 4.0 — Free to use, modify, and share with attribution.


  5. Pseudocode of the CPPDE-YOLO Model Workflow.

    • plos.figshare.com
    xls
    Updated Nov 7, 2025
    Cite
    Qingtao Zeng; Lixin Zhang; Jiefeng Zhao; Anping Xu; Yali Qi; Liqin Yu; Wenjing Li; Haochang Xia (2025). Pseudocode of the CPPDE-YOLO Model Workflow. [Dataset]. http://doi.org/10.1371/journal.pone.0331011.t001
    Available download formats: xls
    Dataset updated
    Nov 7, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Qingtao Zeng; Lixin Zhang; Jiefeng Zhao; Anping Xu; Yali Qi; Liqin Yu; Wenjing Li; Haochang Xia
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Book covers typically contain a wealth of information. With the annual increase in the number of books published, deep learning has been utilised to achieve automatic identification and classification of book covers. This approach overcomes the inefficiency of traditional manual classification and enhances the management efficiency of modern book retrieval systems.

    In the realm of computer vision, the YOLO algorithm has garnered significant attention owing to its excellent performance across various visual tasks. This study therefore introduces the CPPDE-YOLO model, a novel dual-convolution adaptive-focus neural network that integrates the PConv (partial convolution) and PWConv (point-wise convolution) operators alongside dynamic sampling and efficient multi-scale attention. By incorporating these enhancements, the original YOLOv8 framework is optimised for superior performance in book cover classification, with the aim of significantly improving image classification accuracy. Effective book cover classification must account for complex global feature information while managing computational costs. To address this, we propose a hybrid model that integrates partial convolution and point-wise convolution within the backbone network, incorporating them into the DualConv framework to capture complex feature information. Moreover, we integrate the efficient multi-scale attention mechanism into each cross-stage-partial fusion residual block in the head section to focus on learning key features for more precise classification, and we employ a dynamic sampling method in place of the traditional upsampling method to overcome its inherent limitations.

    Finally, experimental results on real datasets validate the performance improvement of the proposed CPPDE-YOLO network over the original YOLOv8 classification structure, with Top-1 and Top-5 accuracy gains of 1.1% and 1.0%, respectively. This underscores the effectiveness of the proposed algorithm for book genre classification.

  6. EyeOnWater training dataset for assessing the inclusion of water images

    • zenodo.org
    • data.europa.eu
    zip
    Updated Mar 15, 2024
    + more versions
    Cite
    Tjerk Krijger (2024). EyeOnWater training dataset for assessing the inclusion of water images [Dataset]. http://doi.org/10.5281/zenodo.10777441
    Available download formats: zip
    Dataset updated
    Mar 15, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Tjerk Krijger
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Feb 1, 2024
    Description

    Training dataset

    The EyeOnWater app is designed to assess ocean water quality using images captured by ordinary citizens. To help determine whether an image meets the criteria for inclusion in the app, a YOLOv8 image-classification model is employed, and every uploaded picture is assessed with it. If the model deems a water image unsuitable, it is excluded from the app's online database. Training this model requires a dataset containing a large pool of varied images. The training dataset includes 12,357 'good' and 10,019 'bad' water-quality images that were submitted to the EyeOnWater app.

    Technical details

    Data preprocessing

    To create a larger training dataset, the set of original images (1,700 in total) was augmented by rotating, displacing, and resizing, using the following settings:

    • Maximum rotation of 45 degrees in both directions
    • Maximum displacement of 20% times the width or height
    • Horizontal and vertical flip
    • Maximum shear range of 20% times the width
    • Pixel range of 10 units

    Data splitting

    80% of the dataset is used for training, 10% for validation, and 10% for testing.

    Classes, labels and annotations

    The training dataset contains 2 classes, labelled 'good' and 'bad'. The 'good' images are water images suited to determining water quality on the Forel-Ule scale. The 'bad' images may, for example, show too much surface reflection, a visible bottom, obstructing objects, or no water at all.

    Parameters

    From the images the water quality can be obtained by comparing the water color to the 21 colors in the Forel-Ule scale.

    Parameter: http://vocab.nerc.ac.uk/collection/P01/current/CLFORULE/
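    Scoring an image against the Forel-Ule scale amounts to nearest-neighbour matching of the water colour against the 21 reference colours. A minimal sketch; the three RGB triples below are illustrative placeholders, not the calibrated FU values:

```python
def forel_ule_index(pixel, scale):
    # 1-based index of the scale colour nearest to `pixel`
    # by squared Euclidean distance in RGB.
    dists = [sum((p - s) ** 2 for p, s in zip(pixel, colour)) for colour in scale]
    return dists.index(min(dists)) + 1

# Stand-in for the 21-colour FU scale: deep blue -> green -> brown.
fu_scale = [(33, 88, 188), (73, 153, 122), (140, 120, 54)]
print(forel_ule_index((70, 150, 125), fu_scale))  # -> 2 (greenish water)
```

    In practice the matched colour would be averaged over a water region of the image rather than taken from a single pixel.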

    Data sources

    The images are taken by citizen scientists, often with a smartphone.

    Data quality

    As the images are taken with smartphones, image quality can be low. In addition, the images are taken outdoors in unconfined spaces, so poor lighting, reflections, and other problems can occur. The images therefore need to be checked before they can be included in the app.

    Image resolution

    Larger images are resized to 256px by 256px, smaller images are excluded from the training dataset.

    Spatial coverage

    Images are taken on a global scale.

    Contact information

    For more information on the training dataset and/or the app, you can contact tjerk@maris.nl.

  7. Counter Strike 2 Body and Head Classification

    • kaggle.com
    zip
    Updated Jan 7, 2024
    Cite
    Ömer Faruk Günaydın (2024). Counter Strike 2 Body and Head Classification [Dataset]. https://www.kaggle.com/datasets/merfarukgnaydn/counter-strike-2-body-and-head-classification
    Available download formats: zip (1478346482 bytes)
    Dataset updated
    Jan 7, 2024
    Authors
    Ömer Faruk Günaydın
    Description

    https://github.com/siromermer/CS2-CSGO-Yolov8-Yolov7-ObjectDetection

    1. ct_body
    2. ct_head
    3. t_body
    4. t_head

    The .yaml file lists 5 classes, but the actual number of classes is 4. While annotating the images I mistakenly left a blank line in the classes.txt file, which created an empty class (class 0 in this case). It won't cause any problems; I just wanted to inform Kaggle users. The dataset is a little small for now, but I will increase the image count as soon as possible.

  8. Compare different datasets using the same algorithm.

    • plos.figshare.com
    xls
    Updated Nov 7, 2025
    Cite
    Qingtao Zeng; Lixin Zhang; Jiefeng Zhao; Anping Xu; Yali Qi; Liqin Yu; Wenjing Li; Haochang Xia (2025). Compare different datasets using the same algorithm. [Dataset]. http://doi.org/10.1371/journal.pone.0331011.t007
    Available download formats: xls
    Dataset updated
    Nov 7, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Qingtao Zeng; Lixin Zhang; Jiefeng Zhao; Anping Xu; Yali Qi; Liqin Yu; Wenjing Li; Haochang Xia
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Compare different datasets using the same algorithm.

  9. Accuracy values of 30 categories according to CPPDE-YOLO model...

    • plos.figshare.com
    xls
    Updated Nov 7, 2025
    Cite
    Qingtao Zeng; Lixin Zhang; Jiefeng Zhao; Anping Xu; Yali Qi; Liqin Yu; Wenjing Li; Haochang Xia (2025). Accuracy values of 30 categories according to CPPDE-YOLO model classification. [Dataset]. http://doi.org/10.1371/journal.pone.0331011.t005
    Available download formats: xls
    Dataset updated
    Nov 7, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Qingtao Zeng; Lixin Zhang; Jiefeng Zhao; Anping Xu; Yali Qi; Liqin Yu; Wenjing Li; Haochang Xia
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Accuracy values of 30 categories according to CPPDE-YOLO model classification.

  10. Precious Gemstone Identification

    • kaggle.com
    zip
    Updated Mar 28, 2024
    Cite
    GauravKamath02 (2024). Precious Gemstone Identification [Dataset]. https://www.kaggle.com/datasets/gauravkamath02/precious-gemstone-identification
    Available download formats: zip (7743109183 bytes)
    Dataset updated
    Mar 28, 2024
    Authors
    GauravKamath02
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Precious Gemstone Identification

    Description: This comprehensive dataset comprises annotated images of a diverse range of precious gemstones, meticulously curated for gemstone identification tasks. With 87 gemstone classes, including unique varieties such as Chalcedony Blue, Amber, Aventurine Yellow, Dumortierite, Pearl, and Aventurine Green, it serves as a valuable resource for training and evaluating machine learning models in gemstone recognition.

    Gemstone Variety: The dataset encompasses a wide spectrum of precious gemstones, ranging from well-known varieties like Emerald, Ruby, Sapphire, and Diamond to lesser-known gems such as Benitoite, Larimar, and Sphene.

    Dataset Split:

    • Train Set: 92% (46,404 images)
    • Validation Set: 4% (1,932 images)
    • Test Set: 4% (1,932 images)

    Preprocessing: Images in the dataset have been preprocessed to ensure consistency and quality:

    • Auto-Orient: Applied to correct orientation inconsistencies.
    • Resize: Images are uniformly resized to 640x640 pixels.
    • Tiling: Organized into a grid of 3 rows x 2 columns for efficient processing.
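    The 3-rows × 2-columns tiling step can be expressed as simple array slicing (an illustrative sketch; Roboflow's actual tiling may handle non-divisible dimensions differently, while this version simply drops leftover pixels):

```python
import numpy as np

def tile_image(img, rows=3, cols=2):
    # Split an image array into a rows x cols grid of equal tiles,
    # dropping any remainder pixels on dimensions that do not divide evenly.
    h, w = img.shape[:2]
    th, tw = h // rows, w // cols
    return [img[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            for r in range(rows) for c in range(cols)]

tiles = tile_image(np.zeros((640, 640, 3)), rows=3, cols=2)
# 6 tiles of shape (213, 320, 3); 640 = 3 * 213 + 1, so 1 px of height is unused
```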

    Augmentations: To enhance model robustness and generalization, each training example has been augmented with various transformations:

    • Flip: Horizontal and Vertical flips are applied.
    • Rotation: Random rotation between -15° and +15°.
    • Shear: Horizontal and Vertical shearing with a range of ±10°.
    • Saturation: Adjusted randomly between -15% and +15%.
    • Brightness: Random brightness adjustment between -10% and +10%.

    File Formats Available:

    • COCO Segmentation: COCO (Common Objects in Context) Segmentation format is commonly used for semantic segmentation tasks. It provides annotations for object segmentation, where each object instance is labeled with a mask indicating its outline.
    • COCO: COCO format is a widely used standard for object detection and instance segmentation tasks. It includes annotations for bounding boxes around objects, along with corresponding class labels and segmentation masks if applicable.
    • TensorFlow : TensorFlow format typically refers to a data format compatible with TensorFlow, a popular deep learning framework. It often includes annotations in a format suitable for training object detection and segmentation models using TensorFlow.
    • VOC: VOC (Visual Object Classes) format is a standard format for object detection and classification tasks. It includes annotations for bounding boxes around objects, along with class labels and metadata, following the PASCAL VOC dataset format.
    • YOLOv8-obb: YOLOv8-obb format is specific to Ultralytics YOLOv8 oriented-bounding-box (OBB) detection. It includes annotations for rotated object bounding boxes, each defined by its center coordinates, width, height, rotation, and class label.
    • YOLOv9 Segmentation: YOLOv9 Segmentation format is tailored for semantic segmentation tasks using the YOLOv9 architecture. It provides annotations for pixel-wise segmentation masks corresponding to object instances, enabling accurate segmentation of objects in images.
    • Server Benchmark: The Server Benchmark format is used for annotated images with bounding boxes for object detection tasks. Each annotation entry in the JSON-like structure contains details about a specific object instance within an image.

    Disclaimer:

    The images included in this dataset were sourced from various online platforms, primarily the minerals.net and www.rasavgems.com websites, as well as other online datasets. We have curated and annotated them for the purpose of gemstone identification and made them available in different formats. We do not claim ownership of the original images; any trademarks, logos, or copyrighted materials belong to their respective owners.

    Researchers, enthusiasts and developers interested in gemstone identification, machine learning, and computer vision applications will find this dataset invaluable for training and benchmarking gemstone recognition algorithms.

  11. Possible configuration schemes for multiple PConv and PWConv.

    • figshare.com
    • plos.figshare.com
    xls
    Updated Nov 7, 2025
    Cite
    Qingtao Zeng; Lixin Zhang; Jiefeng Zhao; Anping Xu; Yali Qi; Liqin Yu; Wenjing Li; Haochang Xia (2025). Possible configuration schemes for multiple PConv and PWConv. [Dataset]. http://doi.org/10.1371/journal.pone.0331011.t003
    Available download formats: xls
    Dataset updated
    Nov 7, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Qingtao Zeng; Lixin Zhang; Jiefeng Zhao; Anping Xu; Yali Qi; Liqin Yu; Wenjing Li; Haochang Xia
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Possible configuration schemes for multiple PConv and PWConv.

  12. Data from: Space Debris Detection Dataset

    • kaggle.com
    zip
    Updated Jun 28, 2024
    Cite
    Muhammad Zakria2001 (2024). Space Debris Detection Dataset [Dataset]. https://www.kaggle.com/datasets/muhammadzakria2001/space-debris-detection-dataset-for-yolov8
    Available download formats: zip (75664742 bytes)
    Dataset updated
    Jun 28, 2024
    Authors
    Muhammad Zakria2001
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Space Debris Detection Dataset

    Overview

    This dataset is designed for the detection and classification of space debris, aiming to enhance space situational awareness and contribute to the mitigation of space debris hazards. It provides annotated images suitable for training machine learning models in object detection tasks.

    Dataset Details

    • Total Images: 1,265
    • Classes: 11 ('cheops', 'debris', 'double_start', 'earth_observation_sat_1', 'lisa_pathfinder', 'proba_2', 'proba_3_csc', 'proba_3_ocs', 'smart_1', 'soho', 'xmm_newton')
    • Annotations: Each image is annotated with bounding boxes corresponding to the debris and satellite classes.

    Data Splits

    • Training Set: 88% (1,107 images)
    • Validation Set: 8% (106 images)
    • Test Set: 4% (52 images)

    Preprocessing and Augmentation

    • Preprocessing:

      • Auto-orientation applied to ensure consistent image orientation.
      • Images resized to 640x640 pixels using stretch resizing.
    • Augmentations:

      • Horizontal flips
      • 90° rotations (clockwise and counter-clockwise)
      • Addition of noise affecting up to 1.09% of pixels
      • Each training example generates 3 augmented versions to enhance model robustness.
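    The pixel-noise augmentation (noise on up to 1.09 % of pixels) can be approximated in NumPy. A hedged sketch under my own naming (`add_pixel_noise`); Roboflow's exact noise model may differ:

```python
import numpy as np

def add_pixel_noise(img, frac=0.0109, seed=0):
    # Overwrite up to `frac` of pixel positions with random values
    # (salt-and-pepper style); index collisions make it "up to", not exactly, frac.
    rng = np.random.default_rng(seed)
    out = img.copy()
    n = int(round(frac * img.shape[0] * img.shape[1]))
    ys = rng.integers(0, img.shape[0], n)
    xs = rng.integers(0, img.shape[1], n)
    out[ys, xs] = rng.integers(0, 256, (n,) + img.shape[2:], dtype=img.dtype)
    return out
```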

    Applications

    This dataset is suitable for developing and evaluating object detection models focused on identifying space debris and satellites. Potential applications include:

    • Space Situational Awareness: Enhancing the tracking and monitoring of space objects to prevent collisions.
    • Autonomous Navigation: Assisting spacecraft in detecting and avoiding debris.
    • Research and Development: Serving as a benchmark for testing new algorithms in object detection and space debris identification.

    Citation

    If you utilize this dataset in your research or projects, please cite it as follows:

    @misc{space-debris-and-satellite-dataset,
     title = {Space Debris and Satellite Dataset},
     type = {Open Source Dataset},
     author = {Mahmoud},
     howpublished = {\url{https://universe.roboflow.com/mahmoud-xm4kv/space-debris-and-satilite}},
     url = {https://universe.roboflow.com/mahmoud-xm4kv/space-debris-and-satilite},
     journal = {Roboflow Universe},
     publisher = {Roboflow},
     year = {2024},
     month = {sep},
     note = {visited on 2024-10-04}
    }
    
  13. Balanced Scoliosis X-ray Dataset (YOLOv5 Format)

    • kaggle.com
    zip
    Updated Oct 9, 2025
    Cite
    Muhammad Salman (2025). Balanced Scoliosis X-ray Dataset (YOLOv5 Format) [Dataset]. https://www.kaggle.com/datasets/salmankey/balanced-scoliosis-x-ray-dataset-yolov5-format
    Available download formats: zip (496021086 bytes)
    Dataset updated
    Oct 9, 2025
    Authors
    Muhammad Salman
    License

    CC0 1.0 Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This dataset is a balanced and augmented version of the original Scoliosis Detection Dataset designed for deep learning and computer vision tasks, particularly spinal curvature classification using YOLOv5.

    It contains spine X-ray images categorized into four classes based on the severity of scoliosis:

    1-derece → Mild scoliosis

    2-derece → Moderate scoliosis

    3-derece → Severe scoliosis

    saglikli → Healthy (no scoliosis)

    ⚙️ Data Details

    Train set: ../train/images

    Validation set: ../valid/images

    Test set: ../test/images

    Total Classes: 4

    Balanced Samples: Each class contains approximately 1259 images and labels

    Augmentations Applied:

    Rotation

    Brightness and contrast adjustment

    Horizontal flip

    Random zoom and cropping

    Gaussian noise

    These augmentations were used to improve model robustness and reduce class imbalance.

    🎯 Use Cases

    This dataset is ideal for:

    Scoliosis detection and classification research

    Object detection experiments (YOLOv5, YOLOv8, EfficientDet)

    Transfer learning on medical image datasets

    Model comparison and explainability studies

    📊 Source

    Originally sourced and preprocessed using Roboflow, then restructured and balanced manually for research and experimentation.

    Roboflow Project Link: 🔗 View on Roboflow

    🧠 License

    CC BY 4.0 — Free to use and share with attribution.

  14. Scoliosis YOLOv5 Annotated Spine X-ray Dataset

    • kaggle.com
    zip
    Updated Nov 7, 2025
    Cite
    Muhammad Salman (2025). Scoliosis YOLOv5 Annotated Spine X-ray Dataset [Dataset]. https://www.kaggle.com/datasets/salmankey/scoliosis-yolov5-annotated-spine-x-ray-dataset
    Available download formats: zip (236099766 bytes)
    Dataset updated
    Nov 7, 2025
    Authors
    Muhammad Salman
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    🩻 Scoliosis YOLOv5 — Annotated Spine X-ray Dataset

    This dataset is a curated and preprocessed collection of spinal X-ray images for deep learning–based scoliosis and vertebra detection using YOLOv5, YOLOv8, or other object detection frameworks.

    It contains high-quality annotated X-rays featuring multiple bounding boxes per image — each representing different spinal regions and conditions.

    🧩 Dataset Configuration

    train: scoliosis yolov5/train/images
    val: scoliosis yolov5/valid/images
    test: scoliosis yolov5/test/images
    
    nc: 3
    names: ['Vertebra', 'scoliosis spine', 'normal spine']
    

    ⚙️ Data Details

    • Train Set: /train/images
    • Validation Set: /valid/images
    • Test Set: /test/images
    • Total Classes: 3
    • Annotations: YOLOv5 format (.txt with class, x_center, y_center, width, height)
    • Image Format: .jpg / .png

    Classes Description:

    1. Vertebra — Individual vertebral structures localized across the spine.
    2. Scoliosis Spine — Spinal X-rays with visible curvature or deformation.
    3. Normal Spine — Straight, healthy spinal alignment with no abnormality.

    🧠 Augmentations Applied

    To improve model generalization and balance the dataset, the following augmentations were used:

    • Random rotation
    • Brightness and contrast adjustment
    • Horizontal flipping
    • Random zoom and cropping
    • Gaussian noise injection

    🎯 Use Cases

    This dataset is ideal for:

    • Scoliosis detection and classification
    • Vertebra localization and segmentation
    • Object detection model benchmarking (YOLOv5/YOLOv8)
    • Transfer learning on medical image datasets
    • Explainable AI research in healthcare

    📊 Source

    The dataset was processed and annotated using Roboflow, then refined and organized into YOLOv5 format for seamless training. Each image includes verified bounding boxes for vertebral and scoliosis regions.


    🧾 License

    CC BY 4.0 — Free to use, modify, and redistribute with proper attribution.

  15. YOLOv8

    • kaggle.com
    zip
    Updated Nov 12, 2025
    Cite
    Chenjie (2025). YOLOv8 [Dataset]. https://www.kaggle.com/datasets/chenjiexu/yolov8
    Available download formats: zip (2513634 bytes)
    Dataset updated
    Nov 12, 2025
    Authors
    Chenjie
    Description

    Ultralytics YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification and pose estimation tasks.

    We hope that the resources here will help you get the most out of YOLOv8. Please browse the YOLOv8 Docs for details, raise an issue on GitHub for support, and join our Discord community for questions and discussions!
