2 datasets found
  1. Symbols Labelbox Json To Yolov7 Pytorch Dataset

    • universe.roboflow.com
    Updated Nov 12, 2022
    Cite
    Zakarias Hedenfalk (2022). Symbols Labelbox Json To Yolov7 Pytorch Dataset [Dataset]. https://universe.roboflow.com/zakarias-hedenfalk/symbols-labelbox-json-to-yolov7-pytorch
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 12, 2022
    Dataset authored and provided by
    Zakarias Hedenfalk
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Symbols Bounding Boxes
    Description

    Symbols Labelbox JSON To YoloV7 PyTorch

    ## Overview
    
    Symbols Labelbox JSON To YoloV7 PyTorch is a dataset for object detection tasks - it contains Symbols annotations for 560 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
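The dataset's name describes a Labelbox JSON to YOLOv7 PyTorch conversion. As a rough illustration of what that conversion involves (a sketch under assumptions, not this dataset's actual pipeline: the Labelbox field names `top`/`left`/`height`/`width` and the single class id are assumed), pixel-space bounding boxes are renormalized into YOLO's `class x_center y_center width height` text format:

```python
# Sketch: convert one Labelbox-style pixel bounding box to a YOLO label line.
# Assumed Labelbox export fields: {"top", "left", "height", "width"} in pixels;
# YOLO labels are "class x_center y_center width height", normalized to [0, 1].

def labelbox_bbox_to_yolo(bbox, img_w, img_h, class_id=0):
    x_center = (bbox["left"] + bbox["width"] / 2) / img_w
    y_center = (bbox["top"] + bbox["height"] / 2) / img_h
    w = bbox["width"] / img_w
    h = bbox["height"] / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {w:.6f} {h:.6f}"

# Example: a 100x50 px box at (left=200, top=100) in a 640x480 image.
line = labelbox_bbox_to_yolo(
    {"top": 100, "left": 200, "height": 50, "width": 100}, 640, 480
)
print(line)  # "0 0.390625 0.260417 0.156250 0.104167"
```

In the usual YOLO layout, each image gets a `.txt` file named after it containing one such line per annotated box.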
    
  2. Data from: Efficient imaging and computer vision detection of two cell...

    • datasets.ai
    • agdatacommons.nal.usda.gov
    • +1more
    Updated Sep 6, 2023
    + more versions
    Cite
    Department of Agriculture (2023). Data from: Efficient imaging and computer vision detection of two cell shapes in young cotton fibers [Dataset]. https://datasets.ai/datasets/data-from-efficient-imaging-and-computer-vision-detection-of-two-cell-shapes-in-young-cott-6c5dd
    Explore at:
    Available download formats: 57, 23
    Dataset updated
    Sep 6, 2023
    Dataset authored and provided by
    Department of Agriculture
    Description

    Methods

    Cotton plants were grown in a well-controlled greenhouse in the NC State Phytotron as described previously (Pierce et al., 2019). Flowers were tagged on the day of anthesis and harvested three days post anthesis (3 DPA). The distinct fiber shapes had already formed by 2 DPA (Stiff and Haigler, 2016; Graham and Haigler, 2021), and fibers were still relatively short at 3 DPA, which facilitated the visualization of multiple fiber tips in one image.

    Cotton fiber sample preparation, digital image collection, and image analysis:

    Ovules with attached fiber were fixed in the greenhouse. The fixative previously used (Histochoice) (Stiff and Haigler, 2016; Pierce et al., 2019; Graham and Haigler, 2021) is obsolete, which led to testing and validation of another low-toxicity, formalin-free fixative (#A5472; Sigma-Aldrich, St. Louis, MO; Fig. S1). The boll wall was removed without damaging the ovules. (Using a razor blade, cut away the top 3 mm of the boll. Make about 1 mm deep longitudinal incisions between the locule walls, and finally cut around the base of the boll.) All of the ovules with attached fiber were lifted out of the locules and fixed (1 h, RT, 1:10 tissue:fixative ratio) prior to optional storage at 4°C. Immediately before imaging, ovules were examined under a stereo microscope (incident light, black background, 31X) to select three vigorous ovules from each boll while avoiding drying. Ovules were rinsed (3 x 5 min) in buffer [0.05 M PIPES, 12 mM EGTA, 5 mM EDTA, and 0.1% (w/v) Tween 80, pH 6.8], which had lower osmolarity than a microtubule-stabilizing buffer used previously for aldehyde-fixed fibers (Seagull, 1990; Graham and Haigler, 2021). While steadying an ovule with forceps, one to three small pieces of its chalazal end with attached fibers were dissected away using a small knife (#10055-12; Fine Science Tools, Foster City, CA). Each ovule piece was placed in a single well of a 24-well slide (#63430-04; Electron Microscopy Sciences, Hatfield, PA) containing a single drop of buffer prior to applying and sealing a 24 x 60 mm coverslip with Vaseline.

    Samples were imaged with brightfield optics and default settings for the 2.83 mega-pixel, color, CCD camera of the Keyence BZ-X810 imaging system (www.keyence.com; housed in the Cellular and Molecular Imaging Facility of NC State). The location of each sample in the 24-well slides was identified visually using a 2X objective and mapped using the navigation function of the integrated Keyence software. Using the 10X objective lens (plan-apochromatic; NA 0.45) and 60% closed condenser aperture setting, a region with many fiber apices was selected for imaging using the multi-point and z-stack capture functions. The precise location was recorded by the software prior to visual setting of the limits of the z-plane range (1.2 µm step size). Typically, three 24-sample slides (representing three accessions) were set up in parallel prior to automatic image capture. The captured z-stacks for each sample were processed into one two-dimensional image using the full-focus function of the software. (Occasional samples contained too much debris for computer vision to be effective, and these were reimaged.)


    Resources in this dataset:

    • Resource Title: Deltapine 90 - Manually Annotated Training Set.

      File Name: GH3 DP90 Keyence 1_45 JPEG.zip

      Resource Description: These images were manually annotated in Labelbox.


    • Resource Title: Deltapine 90 - AI-Assisted Annotated Training Set.

      File Name: GH3 DP90 Keyence 46_101 JPEG.zip

      Resource Description: These images were AI-labeled and then manually reviewed in Roboflow.


    • Resource Title: Deltapine 90 - Manually Annotated Training-Validation Set.

      File Name: GH3 DP90 Keyence 102_125 JPEG.zip

      Resource Description: These images were manually labeled in Labelbox and then used for training-validation of the machine learning model.


    • Resource Title: Phytogen 800 - Evaluation Test Images.

      File Name: Gb cv Phytogen 800.zip

      Resource Description: These images were used to validate the machine learning model. They were manually annotated in ImageJ.


    • Resource Title: Pima 3-79 - Evaluation Test Images.

      File Name: Gb cv Pima 379.zip

      Resource Description: These images were used to validate the machine learning model. They were manually annotated in ImageJ.


    • Resource Title: Pima S-7 - Evaluation Test Images.

      File Name: Gb cv Pima S7.zip

      Resource Description: These images were used to validate the machine learning model. They were manually annotated in ImageJ.


    • Resource Title: Coker 312 - Evaluation Test Images.

      File Name: Gh cv Coker 312.zip

      Resource Description: These images were used to validate the machine learning model. They were manually annotated in ImageJ.


    • Resource Title: Deltapine 90 - Evaluation Test Images.

      File Name: Gh cv Deltapine 90.zip

      Resource Description: These images were used to validate the machine learning model. They were manually annotated in ImageJ.


    • Resource Title: Half and Half - Evaluation Test Images.

      File Name: Gh cv Half and Half.zip

      Resource Description: These images were used to validate the machine learning model. They were manually annotated in ImageJ.


    • Resource Title: Fiber Tip Annotations - Manual.

      File Name: manual_annotations.coco_.json

      Resource Description: Annotations in COCO.json format for fibers. Manually annotated in Labelbox.


    • Resource Title: Fiber Tip Annotations - AI-Assisted.

      File Name: ai_assisted_annotations.coco_.json

      Resource Description: Annotations in COCO.json format for fibers. AI-annotated with human review in Roboflow.


    • Resource Title: Model Weights (iteration 600).

      File Name: model_weights.zip

      Resource Description: The final model, provided as a zipped PyTorch .pth file. It was selected at training iteration 600. The model weights can be imported in Python to run the fiber tip type detection neural network.

      Resource Software Recommended: Google Colab (https://research.google.com/colaboratory/)
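Two of the annotation resources above (manual_annotations.coco_.json and ai_assisted_annotations.coco_.json) are in COCO JSON format. A minimal standard-library sketch of reading such a file and grouping boxes by image is shown below; the tiny inline record and the `tapered_tip` category name are made-up placeholders, not values taken from this dataset:

```python
from collections import defaultdict

def boxes_per_image(coco):
    """Group (category_name, bbox) pairs by image id from a COCO-format dict."""
    cats = {c["id"]: c["name"] for c in coco["categories"]}
    by_image = defaultdict(list)
    for ann in coco["annotations"]:
        # COCO bbox convention: [x_min, y_min, width, height] in pixels
        by_image[ann["image_id"]].append((cats[ann["category_id"]], ann["bbox"]))
    return dict(by_image)

# For the real files: coco = json.load(open("manual_annotations.coco_.json"))
# Hypothetical stand-in record with the same top-level COCO keys:
coco = {
    "images": [{"id": 1, "file_name": "fiber_001.jpg"}],
    "annotations": [{"image_id": 1, "category_id": 7, "bbox": [10, 20, 30, 40]}],
    "categories": [{"id": 7, "name": "tapered_tip"}],
}
print(boxes_per_image(coco))  # {1: [('tapered_tip', [10, 20, 30, 40])]}
```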

