26 datasets found
  1. Object Detection For Mstar Imagery Dataset

    • universe.roboflow.com
    zip
    Updated Nov 21, 2024
    Cite
    Corn (2024). Object Detection For Mstar Imagery Dataset [Dataset]. https://universe.roboflow.com/corn-y933v/object-detection-for-mstar-imagery/model/3
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 21, 2024
    Dataset authored and provided by
    Corn
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Armored Vehicles Bounding Boxes
    Description

    Exploring Object Detection Techniques for MSTAR IU Mixed Targets Dataset

    Introduction: The rapid advancements in machine learning and computer vision have significantly improved object detection capabilities. In this project, we aim to explore and develop object detection techniques specifically tailored to the MSTAR IU Mixed Targets. This dataset, provided by the Sensor Data Management System, offers a valuable resource for training and evaluating object detection models for synthetic aperture radar (SAR) imagery.

    Objective: Our primary objective is to develop an efficient and accurate object detection model that can identify and localize various targets within the MSTAR IU Mixed Targets dataset. By achieving this, we aim to enhance the understanding and applicability of SAR imagery in real-world scenarios, such as surveillance, reconnaissance, and military applications.

    Ethics: As responsible researchers, we recognize the importance of ethics in conducting our project. We are committed to ensuring the ethical use of data and adhering to privacy guidelines. The MSTAR IU Mixed Targets dataset provided by the Sensor Data Management System will be used solely for academic and research purposes. Any personal information or sensitive data within the dataset will be handled with utmost care and confidentiality.

    Data Attribution and Giving Credit: We deeply appreciate the Sensor Data Management System for providing the MSTAR IU Mixed Targets dataset. We understand the effort and resources invested in curating and maintaining this valuable dataset, which forms the foundation of our project. To acknowledge and give credit to the Sensor Data Management System, we will prominently mention their contribution in all project publications, reports, and presentations. We will provide appropriate citations and include a statement recognizing their dataset as the source of our training and evaluation data.

    Methodology:

    1. Data Preprocessing: We will preprocess the MSTAR IU Mixed Targets dataset to improve its compatibility with the YOLOv8 object detection algorithm. This will involve resizing, normalizing, and augmenting the images.

    2. Training and Evaluation: The selected model will be trained on the preprocessed dataset, utilizing appropriate loss functions and optimization techniques. We will extensively evaluate the model's performance using standard evaluation metrics such as precision, recall, and mean average precision (mAP).

    3. Fine-tuning and Optimization: We will fine-tune the model on the MSTAR IU Mixed Targets dataset to enhance its accuracy and adaptability to SAR-specific features. Additionally, we will explore techniques such as transfer learning and data augmentation to further improve the model's performance.

    4. Results and Analysis: The final model's performance will be analyzed in terms of detection accuracy, computational efficiency, and generalization capability. We will conduct comprehensive experiments and provide visualizations to showcase the model's object detection capabilities on the MSTAR IU Mixed Targets dataset.

    5. Model Selection and Re-evaluation: We will evaluate and compare state-of-the-art object detection models to identify the most suitable architecture for SAR imagery. This will involve researching and implementing models such as Faster R-CNN, other YOLO versions, or SSD, considering their performance, speed, and adaptability to the MSTAR dataset.
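
The evaluation in step 2 rests on IoU-based matching of predicted boxes against ground truth. As a minimal, self-contained sketch (illustrative only, not this project's actual evaluation code), precision and recall at a fixed IoU threshold can be computed like this:

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(preds: List[Box], gts: List[Box], thr: float = 0.5):
    """Greedily match each prediction to its best unmatched ground truth;
    a match counts as a true positive when IoU >= thr."""
    matched = set()
    tp = 0
    for p in preds:
        best, best_j = 0.0, -1
        for j, g in enumerate(gts):
            if j in matched:
                continue
            v = iou(p, g)
            if v > best:
                best, best_j = v, j
        if best >= thr:
            matched.add(best_j)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall
```

mAP extends this idea by averaging precision over recall levels and IoU thresholds per class.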

    Conclusion: This project aims to contribute to the field of object detection in SAR imagery by leveraging the valuable MSTAR IU Mixed Targets dataset provided by the Sensor Data Management System. We will ensure ethical use of the data and give proper credit to the dataset's source. By developing an accurate and efficient object detection model, we hope to advance the understanding and application of SAR imagery in various domains.

    Note: This project description serves as an overview and can be expanded upon in terms of specific methodologies, experiments, and evaluation techniques as the project progresses.

  2. Cookbook Fine Tuning Dataset

    • universe.roboflow.com
    zip
    Updated Aug 4, 2023
    Cite
    PTG (2023). Cookbook Fine Tuning Dataset [Dataset]. https://universe.roboflow.com/ptg/cookbook-fine-tuning
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 4, 2023
    Dataset authored and provided by
    PTG
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Cooking Tools Bounding Boxes
    Description

    Cookbook Fine Tuning

    ## Overview
    
    Cookbook Fine Tuning is a dataset for object detection tasks - it contains Cooking Tools annotations for 4,399 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  3. Fine Tuning Yolov5 Dataset

    • universe.roboflow.com
    zip
    Updated Oct 5, 2024
    Cite
    Hrithik Mhatre (2024). Fine Tuning Yolov5 Dataset [Dataset]. https://universe.roboflow.com/hrithik-mhatre-phujj/fine-tuning-yolov5
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 5, 2024
    Dataset authored and provided by
    Hrithik Mhatre
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Vehices Bounding Boxes
    Description

    Fine Tuning Yolov5

    ## Overview
    
    Fine Tuning Yolov5 is a dataset for object detection tasks - it contains Vehices annotations for 3,001 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  4. Fine Tuning Yolo Model Dataset

    • universe.roboflow.com
    zip
    Updated Mar 26, 2025
    Cite
    Datafors (2025). Fine Tuning Yolo Model Dataset [Dataset]. https://universe.roboflow.com/datafors/fine-tuning-yolo-model/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 26, 2025
    Dataset authored and provided by
    Datafors
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Firearm Age Smoke Cigarette Bounding Boxes
    Description

    Fine Tuning Yolo Model

    ## Overview
    
    Fine Tuning Yolo Model is a dataset for object detection tasks - it contains Firearm Age Smoke Cigarette annotations for 1,657 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  5. Fine Tuning Dataset

    • universe.roboflow.com
    zip
    Updated Sep 18, 2024
    Cite
    labeling (2024). Fine Tuning Dataset [Dataset]. https://universe.roboflow.com/labeling-2vg0y/fine-tuning-pw8a8/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 18, 2024
    Dataset authored and provided by
    labeling
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Animals Bounding Boxes
    Description

    Fine Tuning

    ## Overview
    
    Fine Tuning is a dataset for object detection tasks - it contains Animals annotations for 881 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  6. Signature

    • kaggle.com
    Updated Feb 19, 2025
    Cite
    Ultralytics (2025). Signature [Dataset]. http://doi.org/10.34740/kaggle/dsv/10792607
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Feb 19, 2025
    Dataset provided by
    Kaggle
    Authors
    Ultralytics
    License

    GNU AGPL-3.0: http://www.gnu.org/licenses/agpl-3.0.html

    Description

    This dataset focuses on detecting human written signatures within documents. It includes a variety of document types with annotated signatures, providing valuable insights for applications in document verification and fraud detection. Essential for training computer vision algorithms, this dataset aids in identifying signatures in various document formats, supporting research and practical applications in document analysis.

    Dataset Structure

    The signature detection dataset is split into three subsets:

    • Training set: Contains 143 images, each with corresponding annotations.
    • Validation set: Includes 35 images, each with paired annotations.

    Applications

    This dataset can be applied in various computer vision tasks such as object detection, object tracking, and document analysis. Specifically, it can be used to train and evaluate models for identifying signatures in documents, which can have applications in document verification, fraud detection, and archival research. Additionally, it can serve as a valuable resource for educational purposes, enabling students and researchers to study and understand the characteristics and behaviors of signatures in different document types.

    Usage

    Train

    To train a YOLO11n model on the signature dataset for 100 epochs with an image size of 640, use the following examples. For detailed arguments, refer to the model's Training page.

    ```python
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolo11n.pt")  # load a pretrained model (recommended for training)

    # Train the model
    results = model.train(data="signature.yaml", epochs=100, imgsz=640)
    ```

    Predict

    ```python
    from ultralytics import YOLO

    # Load a model
    model = YOLO("path/to/best.pt")  # load a fine-tuned model

    # Inference using the model
    results = model.predict("https://ultralytics.com/assets/signature-s.mp4")
    ```
    
    Learn more ➡️ https://docs.ultralytics.com/datasets/detect/signature/
    
  7. Habitat Maps for the Cache Creek Settling Basin, Yolo County, California

    • catalog.data.gov
    • data.usgs.gov
    • +1more
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Habitat Maps for the Cache Creek Settling Basin, Yolo County, California [Dataset]. https://catalog.data.gov/dataset/habitat-maps-for-the-cache-creek-settling-basin-yolo-county-california-810f2
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Yolo County, California, Cache Creek Settling Basin
    Description

    The geospatial data presented here as ArcGIS layers denote landcover/landuse classifications to support field sampling efforts that occurred within the Cache Creek Settling Basin (CCSB) from 2010-2019. Manual photointerpretation of a National Agriculture Imagery Program (NAIP) dataset collected in 2012 was used to characterize landcover/landuse categories (hereafter habitat classes). Initially 9 categories were assigned based on vegetation structure (Vegtype1). These were then parsed into two levels of habitat classes that were chosen for their representativeness and use for statistical analyses of field sampling. At the coarsest level (Landcover 1), five habitat classes were assigned: Agriculture, Riparian, Floodplain, Open Water, and Road. At the more refined level (Landcover 2), ten habitat classes were nested within these five categories. Agriculture was not further refined within Landcover 2, as little consistency was expected between years as fields rotated between corn, pumpkin, tomatoes, and other row crops. Riparian habitat, marked by large canopy trees (such as Populus fremontii (cottonwood)) neighboring stream channels, also was not further refined. Floodplain habitat was separated into two categories: Mixed NonWoody (which included both Mowed and Barren habitats) and Mixed Woody. This separation of the floodplain habitat class (Landcover1) into Woody and NonWoody was performed with a 100 m2 moving window analysis in ArcGIS, where habitats were designated as either ≥50% shrub or tree cover (Woody) or <50%, and thus dominated by herbaceous vegetation cover (NonWoody). Open Water habitat was refined to consider both agricultural Canal (created) and Stream (natural) habitats. Road habitat was refined to separate Levee Roads (which included both the drivable portion and the apron on either side) and Interior roads, which were less managed. The map was tested for errors of omission and commission on the initial 9 categories during November 2014. 
    Random points (n=100) were predetermined, and a total of 80 were selected for field verification. Type 1 (false positive) and Type 2 (false negative) errors were assessed. The survey indicated several corrections necessary in the final version of the map. 1) We noted the presence of woody species in “NonWoody” habitats, especially Baccharis salicifolia (mulefat); habitats were thus classified as “Woody” only with ≥50% presence of canopy species (e.g., tamarisk, black willow). 2) Riparian sites were over-characterized, and thus constrained back to “near stream channels only”; walnut (Juglans spp.) and willow stands alongside fields and irrigation canals were changed to Mixed Woody Floodplain. Fine-tuning the final habitat distributions was thus based on field reconnaissance, scalar needs for classifying field data (sediment, water, bird, and fish collections), and validation of data categories using species observations from scientist field notes. Calibration was made using point data from the random survey and scientist field notes, to remove all sources of error and reach accuracy of 100%.

    The coverage “CCSB_Habitat_2012” is provided as an ArcGIS shapefile based on a suite of 7 interconnected ArcGIS files coded with the suffixes cpg, dbf, sbn, sbx, shp, shx, and prj. Each file provides a component of the coverage (such as database or projection), and all files are necessary to open the “CCSB_Habitat_2012.shp” file with full functionality. CCSB_Basin_Map.png represents the CCSB study area color-coded by the four primary habitat types identified in this study.
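
The ≥50% moving-window rule used to split Woody from NonWoody floodplain can be illustrated on a toy raster where each cell is 1 (shrub/tree cover) or 0 (herbaceous). This is only a schematic of the idea, not the ArcGIS moving-window workflow the authors used:

```python
def classify_window(grid, row, col, radius=1):
    """Label a cell 'Woody' if >=50% of cells in the surrounding window
    (a (2*radius+1)^2 neighborhood, clipped at the grid edges) are woody."""
    cells = []
    for r in range(max(0, row - radius), min(len(grid), row + radius + 1)):
        for c in range(max(0, col - radius), min(len(grid[0]), col + radius + 1)):
            cells.append(grid[r][c])
    woody_fraction = sum(cells) / len(cells)
    return "Woody" if woody_fraction >= 0.5 else "NonWoody"

# Toy 4x4 cover raster: 1 = shrub/tree cover, 0 = herbaceous
cover = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
```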

  8. LVIS Fruits And Vegetables Dataset

    • kaggle.com
    Updated Jun 13, 2024
    Cite
    Henning Heyen (2024). LVIS Fruits And Vegetables Dataset [Dataset]. https://www.kaggle.com/datasets/henningheyen/lvis-fruits-and-vegetables-dataset
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jun 13, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Henning Heyen
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    The Dataset

    • The dataset is a subset of the LVIS dataset which consists of 160k images and 1203 classes for object detection. It is originally COCO-formatted (.json based).
    • The dataset has been converted from COCO format (.json) to YOLO format (.txt based)
    • All images that do not contain any fruits or vegetables have been removed, resulting in 8,221 images and 63 classes (6,721 train, 1,500 validation). An additional 180 test images have been manually labelled with Roboflow.
    • The classes cover the most common fruits and vegetables
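
The COCO-to-YOLO conversion mentioned above amounts to turning absolute corner-plus-size boxes into normalized center coordinates. A minimal sketch (an illustration, not the converter actually used for this dataset; `yolo_line` is a hypothetical helper):

```python
def coco_to_yolo(bbox, img_w, img_h):
    """Convert a COCO box [x_min, y_min, width, height] (in pixels) to a
    YOLO box [x_center, y_center, width, height] normalized to [0, 1]."""
    x, y, w, h = bbox
    return [(x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h]

def yolo_line(class_id, bbox, img_w, img_h):
    """Format one line of a YOLO .txt label file: '<class_id> <xc> <yc> <w> <h>'."""
    xc, yc, w, h = coco_to_yolo(bbox, img_w, img_h)
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```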

    Other Resources

    • The LVIS-Fruits-And-Vegetables-Dataset has also been uploaded to

    • Three YOLOv8 baseline models have been fine-tuned on this dataset. You can test them online here


    Dataset Classes

    0: almond 1: apple 2: apricot 3: artichoke 4: asparagus 5: avocado 6: banana 7: bean curd/tofu 8: bell pepper/capsicum 9: blackberry 10: blueberry 11: broccoli 12: brussels sprouts 13: cantaloup/cantaloupe 14: carrot 15: cauliflower 16: cayenne/cayenne spice/cayenne pepper/cayenne pepper spice/red pepper/red pepper 17: celery 18: cherry 19: chickpea/garbanzo 20: chili/chili vegetable/chili pepper/chili pepper vegetable/chilli/chilli vegetable/chilly/chilly 21: clementine 22: coconut/cocoanut 23: edible corn/corn/maize 24: cucumber/cuke 25: date/date fruit 26: eggplant/aubergine 27: fig/fig fruit 28: garlic/ail 29: ginger/gingerroot 30: Strawberry 31: gourd 32: grape 33: green bean 34: green onion/spring onion/scallion 35: Tomato 36: kiwi fruit 37: lemon 38: lettuce 39: lime 40: mandarin orange 41: melon 42: mushroom 43: onion 44: orange/orange fruit 45: papaya 46: pea/pea food 47: peach 48: pear 49: persimmon 50: pickle 51: pineapple 52: potato 53: prune 54: pumpkin 55: radish/daikon 56: raspberry 57: strawberry 58: sweet potato 59: tomato 60: turnip 61: watermelon 62: zucchini/courgette


  9. Labeled 17 Hardwood Species and 55 Genotypes of Populus Stomatal Images...

    • data.niaid.nih.gov
    Updated Aug 21, 2023
    Cite
    Wang, Jiaxin (2023). Labeled 17 Hardwood Species and 55 Genotypes of Populus Stomatal Images Datasets [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8266240
    Explore at:
    Dataset updated
    Aug 21, 2023
    Dataset provided by
    Renninger, Heidi
    Wang, Jiaxin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Research has indicated the potential of using machine learning algorithms to detect and measure stomata automatically. However, the current limitation for further improving and fine-tuning machine learning-based stomatal study methods is due to the small, inconsistent, and monotypic nature of stomatal datasets, which are also not easily accessible. To address this issue, our collection comprises about 11,000 unique images of hardwood leaf stomata gathered from projects conducted between 2015 and 2020-2022. The dataset includes over 7,000 images of 17 frequently encountered hardwood species, including oak, maple, ash, elm, and hickory, as well as over 3,000 images of 55 genotypes from seven Populus taxa (as detailed in Table 1). Each image has been labeled as either stomata (stomatal aperture only) or whole_stomata (stomatal aperture and guard cells) and has a corresponding YOLO label file that can be transformed to other annotation formats. These images and labels are publicly available, making it easier to train machine-learning models and examine leaf stomatal traits. By utilizing our dataset, users can (1) use state-of-the-art machine learning models to identify, count, and quantify leaf stomata; (2) investigate the diverse range of stomatal characteristics across different types of hardwood trees; and (3) create new indices for measuring stomata.
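
Transforming a YOLO label file to another annotation format starts with recovering pixel-space boxes from the normalized values. A minimal sketch, assuming the standard YOLO line layout (`parse_yolo_label` is an illustrative helper, not part of the dataset):

```python
def parse_yolo_label(line, img_w, img_h):
    """Parse one YOLO label line '<class_id> <xc> <yc> <w> <h>' (normalized)
    into (class_id, (x_min, y_min, x_max, y_max)) in pixel coordinates."""
    parts = line.split()
    cls = int(parts[0])
    xc, yc, w, h = (float(v) for v in parts[1:5])
    x_min = (xc - w / 2) * img_w
    y_min = (yc - h / 2) * img_h
    x_max = (xc + w / 2) * img_w
    y_max = (yc + h / 2) * img_h
    return cls, (x_min, y_min, x_max, y_max)
```

From the corner format, conversion to COCO or Pascal VOC style boxes is straightforward.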

  10. OA-SLAM data/weights

    • entrepot.recherche.data.gouv.fr
    bin, txt
    Updated Jan 19, 2023
    Cite
    Matthieu Zins; Matthieu Zins (2023). OA-SLAM data/weights [Dataset]. http://doi.org/10.12763/2CZWJP
    Explore at:
    Available download formats: bin(84849890), bin(84003585), bin(248280568), bin(85152299), bin(42806829), bin(83701176), bin(388170162), bin(42210477), bin(231954202), txt(35146), bin(225385074), txt(400), txt(250), bin(130390099), bin(213792319)
    Dataset updated
    Jan 19, 2023
    Dataset provided by
    Recherche Data Gouv
    Authors
    Matthieu Zins; Matthieu Zins
    License

    Etalab-2.0: https://spdx.org/licenses/etalab-2.0.html

    Description

    Test sequences of two indoor scenes used to evaluate semantic visual SLAM (Simultaneous Localization And Mapping). This repository also contains Yolo v5 weights for object detections, either pretrained on COCO dataset or fine-tuned on statues and museum objects. This data can be used to run OA-SLAM (Object-Aided SLAM), available at https://gitlab.inria.fr/tangram/oa-slam.

  11. Falldetection Dataset

    • universe.roboflow.com
    zip
    Updated Mar 6, 2025
    Cite
    yadvenderresearch (2025). Falldetection Dataset [Dataset]. https://universe.roboflow.com/yadvenderresearch/falldetection-3cm2z/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 6, 2025
    Dataset authored and provided by
    yadvenderresearch
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    People Bounding Boxes
    Description

    This dataset was created to fine-tune a YOLO model to detect falls. It has 7 labels: bending, fall, kneeling, near-fall, sitting, standing, and walking.

  12. Receipt_ocr_fine_tuning Dataset

    • universe.roboflow.com
    zip
    Updated Nov 23, 2024
    Cite
    workspace (2024). Receipt_ocr_fine_tuning Dataset [Dataset]. https://universe.roboflow.com/workspace-ukpte/receipt_ocr_fine_tuning/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 23, 2024
    Dataset authored and provided by
    workspace
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Store Bounding Boxes
    Description

    Receipt_OCR_fine_tuning

    ## Overview
    
    Receipt_OCR_fine_tuning is a dataset for object detection tasks - it contains Store annotations for 200 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  13. Brain-tumor

    • kaggle.com
    Updated Dec 25, 2024
    Cite
    Ultralytics (2024). Brain-tumor [Dataset]. http://doi.org/10.34740/kaggle/dsv/10294189
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Dec 25, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Ultralytics
    License

    GNU AGPL-3.0: http://www.gnu.org/licenses/agpl-3.0.html

    Description

    A brain tumor detection dataset consists of medical images from MRI or CT scans, containing information about brain tumor presence, location, and characteristics. This dataset is essential for training computer vision algorithms to automate brain tumor identification, aiding in early diagnosis and treatment planning.

    Dataset Structure

    The brain tumor dataset is divided into two subsets:

    • Training set: Consisting of 893 images, each accompanied by corresponding annotations.
    • Testing set: Comprising 223 images, with annotations paired for each one.

    Applications

    The application of brain tumor detection using computer vision enables early diagnosis, treatment planning, and monitoring of tumor progression. By analyzing medical imaging data like MRI or CT scans, computer vision systems assist in accurately identifying brain tumors, aiding in timely medical intervention and personalized treatment strategies.

    Usage

    Train

    To train a YOLO11n model on the brain tumor dataset for 100 epochs with an image size of 640, utilize the provided code snippets. For a detailed list of available arguments, consult the model's Training page.

    ```python
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolo11n.pt")  # load a pretrained model (recommended for training)

    # Train the model
    results = model.train(data="brain-tumor.yaml", epochs=100, imgsz=640)
    ```

    Predict

    ```python
    from ultralytics import YOLO

    # Load a model
    model = YOLO("path/to/best.pt")  # load a brain-tumor fine-tuned model

    # Inference using the model
    results = model.predict("https://ultralytics.com/assets/brain-tumor-sample.jpg")
    ```

    Learn more ➡️ https://docs.ultralytics.com/datasets/detect/brain-tumor/

  14. Swir Rgb Dataset

    • universe.roboflow.com
    zip
    Updated Feb 4, 2025
    Cite
    BLIP fine tune (2025). Swir Rgb Dataset [Dataset]. https://universe.roboflow.com/blip-fine-tune/swir-rgb
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 4, 2025
    Dataset authored and provided by
    BLIP fine tune
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    CAR TRUCK PEOPLE BIKE Bounding Boxes
    Description

    SWIR RGB

    ## Overview
    
    SWIR RGB is a dataset for object detection tasks - it contains CAR TRUCK PEOPLE BIKE annotations for 665 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  15. Hollow Knight Dataset

    • universe.roboflow.com
    zip
    Updated Jan 23, 2025
    Cite
    Hollow Knight Dataset (2025). Hollow Knight Dataset [Dataset]. https://universe.roboflow.com/hollow-knight-dataset/hollow-knight/dataset/3
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 23, 2025
    Dataset authored and provided by
    Hollow Knight Dataset
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Mana Health_bar Player... Bounding Boxes
    Description

    Project Description for Roboflow

    Project Title: Hollow Knight Object Detection for Reinforcement Learning Agent

    Description:
    This project focuses on developing an object detection model tailored to the popular game Hollow Knight. The goal is to detect and classify various in-game elements in real-time to create a dataset that powers a reinforcement learning (RL) agent. This agent will use the detected objects as inputs to interact with the game environment, make decisions, and achieve specific objectives such as defeating enemies, collecting items, and progressing through the game.

    The object detection model will classify key elements in the game into the following 10 classes:

    1. Mana: Represents the player's magic reserve, displayed as a circular indicator that fills with white. Additionally, smaller circles may appear, representing extra magic capacity.
    2. Health Bar: Displays the player’s health as a series of masks that fill with white. Each mask corresponds to one unit of health.
    3. HK (Hollow Knight): The main character's position, representing the player’s sprite on the screen.
    4. Enemy: Any hostile character or entity that can attack or be attacked by the player. Includes grounded, flying, and stationary enemies.
    5. Collectible Item: Objects that can be picked up or interacted with to provide rewards such as Geo (currency), life fountains, or station stops.
    6. Bench: Resting spots where the player can save progress and heal.
    7. Upgrade Item: Rare collectibles that permanently enhance abilities or stats, such as health or mana upgrades.
    8. Key Item: Special objects necessary for game progression, such as keys or crests.
    9. Exit: Doorways, breakable barriers, or hidden passages that transition the player to new areas.
    10. NPC: Non-hostile characters that the player can interact with for trading, quests, or story progression.

    The object detection system will allow the RL agent to process and interpret the game environment, supporting intelligent decision-making.

    Purpose and Objectives:

    1. Object Detection:
      Develop a robust YOLO-based object detection model to identify and classify game elements from video frames.

    2. Reinforcement Learning (RL):
      Utilize the outputs of the object detection system (e.g., bounding boxes and class predictions) as the state inputs for an RL algorithm. The RL agent will learn to perform tasks such as:

      • Navigating through the game world.
      • Interacting with NPCs.
      • Avoiding or defeating enemies.
      • Collecting items and managing resources like health and mana.
    3. Dynamic Adaptation:
      Begin training the RL agent with a limited dataset of annotated images, gradually expanding the dataset to improve model performance and adaptability as more scenarios are introduced.

    4. Automation:
      The ultimate goal is to automate the gameplay of Hollow Knight, enabling the agent to mimic human-like decision-making.

    Integration Plan:

    1. Object Detection Training:
      Use Roboflow for data preprocessing, annotation, augmentation, and model training. Generate a YOLO-compatible dataset and fine-tune the model for detecting the 10 classes.

    2. Reinforcement Learning Agent:
      Implement a deep RL algorithm (e.g., Deep Q-Networks (DQN) or Proximal Policy Optimization (PPO)).

      • State Input: The bounding boxes and class probabilities from the object detection model.
      • Actions: Movement (e.g., left, right, jump), interactions (e.g., attack, heal, cast spells), and resource management.
      • Rewards: Positive rewards for objectives like collecting items or defeating enemies, and negative rewards for losing health or failing objectives.
    3. Feedback Loop:
      The RL agent's actions will be fed back into the game, generating new frames that the object detection model processes, creating a closed loop for training and evaluation.
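
    The hand-off at the start of this loop, where detector output becomes the RL state input, can be sketched as follows. The (class_id, confidence, x, y, w, h) detection format, the per-frame detection cap, and the flattening scheme are illustrative assumptions, not the project's actual interface:

    ```python
    # Minimal sketch of turning detector output into a fixed-size RL state
    # vector. Detections are assumed to be (class_id, confidence, x, y, w, h)
    # tuples with normalized coordinates; the 10-class count follows the
    # class list above, but the exact detector output format is an assumption.
    NUM_CLASSES = 10
    MAX_DETECTIONS = 8  # illustrative cap on detections kept per frame

    def detections_to_state(detections):
        """Keep the highest-confidence detections and flatten them into a
        fixed-length vector: one slot of (one-hot class, confidence, box)
        per detection, zero-padded when fewer detections are present."""
        slot = NUM_CLASSES + 5  # one-hot class + confidence + x, y, w, h
        state = [0.0] * (MAX_DETECTIONS * slot)
        best = sorted(detections, key=lambda d: d[1], reverse=True)[:MAX_DETECTIONS]
        for i, (cls, conf, x, y, w, h) in enumerate(best):
            base = i * slot
            state[base + cls] = 1.0
            state[base + NUM_CLASSES:base + slot] = [conf, x, y, w, h]
        return state
    ```

    A fixed-length, zero-padded vector like this is one common way to feed a variable number of detections into a DQN- or PPO-style network with a fixed input size.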

  16. R

    Palm Fruit Ripeness Detection Dataset

    • universe.roboflow.com
    zip
    Updated Apr 9, 2024
    + more versions
    Cite
    Fine Tuning (2024). Palm Fruit Ripeness Detection Dataset [Dataset]. https://universe.roboflow.com/fine-tuning/palm-fruit-ripeness-detection-f6sac
    Explore at:
    zipAvailable download formats
    Dataset updated
    Apr 9, 2024
    Dataset authored and provided by
    Fine Tuning
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Palm Fruit Bounding Boxes
    Description

    Palm Fruit Ripeness Detection

    ## Overview
    
    Palm Fruit Ripeness Detection is a dataset for object detection tasks - it contains Palm Fruit annotations for 4,160 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  17. R

    Transfer Learning Stage 1 : From Green Dataset

    • universe.roboflow.com
    zip
    Updated Feb 23, 2024
    Cite
    fine tuning data (2024). Transfer Learning Stage 1 : From Green Dataset [Dataset]. https://universe.roboflow.com/fine-tuning-data/transfer-learning-stage-1-from-green-dataset
    Explore at:
    zipAvailable download formats
    Dataset updated
    Feb 23, 2024
    Dataset authored and provided by
    fine tuning data
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Microorganisms Bounding Boxes
    Description

    We upload the EMDS-7 dataset of microscopy images of environmental microorganisms, which is publicly available here: https://figshare.com/articles/dataset/EMDS-7_DataSet/16869571. The accompanying research article, which explains what computer vision algorithms achieved when applied to the dataset, is available at https://arxiv.org/abs/2110.07723: "EMDS-7: Environmental Microorganism Image Dataset Seventh Version for Multiple Object Detection Evaluation", Hechen Y. et al. We are not claiming any credit for the dataset; we only retrieved it from the research team's public media.

    We were not able to find a dictionary mapping the label names to the raw classes (there are 42 classes, including a class "unknown"). This did not matter to us, as our aim is to transfer-learn with EMDS-7 and then apply a second-stage fine-tuning on an extremely small dataset annotated manually by ourselves.

    Note that this dataset may bias your computer vision model toward detecting objects when the background is greenish. Also, some of the images include a scale bar, which might bias models toward detecting objects when a scale bar appears in test or future images. If you have any remarks or copyright issues, we are reachable here: thomas.sadigh@gmail.com

  18. R

    Supermarket Items (yolov7) Dataset

    • universe.roboflow.com
    zip
    Updated Jan 25, 2025
    Cite
    EndeXspace (2025). Supermarket Items (yolov7) Dataset [Dataset]. https://universe.roboflow.com/endexspace/supermarket-items-yolov7
    Explore at:
    zipAvailable download formats
    Dataset updated
    Jan 25, 2025
    Dataset authored and provided by
    EndeXspace
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Grocery_items Bounding Boxes
    Description

    This dataset will be used to fine-tune a YOLOv7 model to identify grocery store items in users' fridges and cupboards.

    Images and annotations in this project were not taken or made by me but rather sourced from:

      • https://universe.roboflow.com/northumbria-university-newcastle/smart-refrigerator-zryjr/dataset/1
      • https://www.kaggle.com/datasets/surendraallam/refrigerator-contents?resource=download

    This dataset was curated for an assignment of my Introduction to Machine Learning course.

  19. R

    Fyp Dataset

    • universe.roboflow.com
    zip
    Updated Oct 14, 2022
    Cite
    Wen Yang Lim (2022). Fyp Dataset [Dataset]. https://universe.roboflow.com/wen-yang-lim/fyp-lbrhe/dataset/1
    Explore at:
    zipAvailable download formats
    Dataset updated
    Oct 14, 2022
    Dataset authored and provided by
    Wen Yang Lim
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Letters Bounding Boxes
    Description

    A dataset and fine-tuned model for recognizing identifiers on container trucks. Combine with an OCR (optical character recognition) package to ID vehicles passing a checkpoint via a security camera feed or traffic cam.
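
    A minimal sketch of the detection-to-OCR hand-off described above, assuming PIL-style frame objects and a hypothetical `read_text` OCR callback (for example, a wrapper around pytesseract):

    ```python
    # Minimal sketch of the detection -> OCR hand-off. The box is padded and
    # clamped before cropping, since tight detector boxes often clip
    # characters; `read_text` is a hypothetical stand-in for whatever OCR
    # package you pair with the model (e.g. pytesseract).
    def crop_region(box, frame_w, frame_h, pad=4):
        """Expand a (x1, y1, x2, y2) box by `pad` pixels, clamped to the frame."""
        x1, y1, x2, y2 = box
        return (max(0, x1 - pad), max(0, y1 - pad),
                min(frame_w, x2 + pad), min(frame_h, y2 + pad))

    def identify_vehicle(frame, box, read_text):
        """Crop the detected identifier region and run OCR on it."""
        x1, y1, x2, y2 = crop_region(box, frame.width, frame.height)
        return read_text(frame.crop((x1, y1, x2, y2)))  # PIL-style crop assumed
    ```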

    The project includes several exported versions, and a fine-tuned model that can be used in the cloud or on an edge device.

  20. R

    Mosquito Dataset

    • universe.roboflow.com
    zip
    Updated Jun 16, 2025
    Cite
    Luis Augusto Silva (2025). Mosquito Dataset [Dataset]. https://universe.roboflow.com/luis-augusto-silva-bq4bv/mosquito-suh0p/model/2
    Explore at:
    zipAvailable download formats
    Dataset updated
    Jun 16, 2025
    Dataset authored and provided by
    Luis Augusto Silva
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Mosquito Breeding Bounding Boxes
    Description

    Automatic Detection of Mosquito Breeding Grounds using YOLOv7 and Transformer Prediction Head. This project implements a deep learning algorithm for the automatic detection of mosquito breeding grounds in images and videos. The algorithm is based on a pre-trained YOLOv7 model with a Transformer Prediction Head, fine-tuned on a custom dataset of mosquito breeding sites. The dataset was created from a subset of the SMT Lab (UFRJ) dataset and includes 5,094 annotated images. The algorithm can detect mosquito breeding sites in real-world scenarios with various lighting and weather conditions, making it a useful tool for public health officials and researchers. This project was developed as part of a challenge for a conference on the detection of mosquito breeding grounds.


Object Detection For Mstar Imagery Dataset

Description

Exploring Object Detection Techniques for MSTAR IU Mixed Targets Dataset


Data Attribution and Giving Credit: We deeply appreciate the Sensor Data Management System for providing the MSTAR IU Mixed Targets dataset. We understand the effort and resources invested in curating and maintaining this valuable dataset, which forms the foundation of our project. To acknowledge and give credit to the Sensor Data Management System, we will prominently mention their contribution in all project publications, reports, and presentations. We will provide appropriate citations and include a statement recognizing their dataset as the source of our training and evaluation data.

Methodology:

  1. Data Preprocessing: We will preprocess the MSTAR IU Mixed Targets dataset to enhance its compatibility with the YOLOv8 object detection algorithm. This will involve resizing, normalizing, and augmenting the images.

  2. Training and Evaluation: The selected model will be trained on the preprocessed dataset, utilizing appropriate loss functions and optimization techniques. We will extensively evaluate the model's performance using standard evaluation metrics such as precision, recall, and mean average precision (mAP).

  3. Fine-tuning and Optimization: We will fine-tune the model on the MSTAR IU Mixed Targets dataset to enhance its accuracy and adaptability to SAR-specific features. Additionally, we will explore techniques such as transfer learning and data augmentation to further improve the model's performance.

  4. Results and Analysis: The final model's performance will be analyzed in terms of detection accuracy, computational efficiency, and generalization capability. We will conduct comprehensive experiments and provide visualizations to showcase the model's object detection capabilities on the MSTAR IU Mixed Targets dataset.

  5. Model Selection and Re-evaluation: We will evaluate and compare state-of-the-art object detection models to identify the most suitable architecture for SAR imagery. This will involve researching and implementing models such as Faster R-CNN, other YOLO versions, or SSD, considering their performance, speed, and adaptability to the MSTAR dataset.
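
The precision and recall figures in step 2 rest on IoU-based matching of predicted boxes to ground truth. A minimal sketch, assuming (x1, y1, x2, y2) boxes and the common 0.5 IoU threshold (a convention, not a project requirement):

```python
# Minimal sketch of the IoU-based matching underlying precision/recall/mAP.
def iou(a, b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(preds, truths, thresh=0.5):
    """Greedily match each prediction to at most one ground-truth box."""
    unmatched = list(truths)
    tp = 0
    for p in preds:
        hit = next((t for t in unmatched if iou(p, t) >= thresh), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(truths) if truths else 0.0
    return precision, recall
```

mAP additionally sweeps the confidence threshold per class and averages the resulting precision-recall curves; the matching primitive above stays the same.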

Conclusion: This project aims to contribute to the field of object detection in SAR imagery by leveraging the valuable MSTAR IU Mixed Targets dataset provided by the Sensor Data Management System. We will ensure ethical use of the data and give proper credit to the dataset's source. By developing an accurate and efficient object detection model, we hope to advance the understanding and application of SAR imagery in various domains.

Note: This project description serves as an overview and can be expanded upon in terms of specific methodologies, experiments, and evaluation techniques as the project progresses.
