100+ datasets found
  1. UNet[Training model] tracking with WandB

    • kaggle.com
    Updated Jun 3, 2022
    Cite
    Jitendra Sharma (2022). UNet[Training model] tracking with WandB [Dataset]. https://www.kaggle.com/datasets/jitensharma597/unettraining-model-tracking-with-wandb
    Explore at:
Croissant — a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Jun 3, 2022
    Dataset provided by
Kaggle (http://kaggle.com/)
    Authors
    Jitendra Sharma
    Description

    Dataset

    This dataset was created by Jitendra Sharma

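Several Kaggle entries in this list expose Croissant metadata. As a hedged sketch (not verified against this specific dataset), the mlcroissant package can load such metadata and stream records; the URL below is a placeholder:

```python
# Hedged sketch: load Croissant metadata with mlcroissant
# (pip install mlcroissant). The URL is a placeholder -- use the
# dataset page's actual Croissant link.
import mlcroissant as mlc

ds = mlc.Dataset(jsonld="https://example.com/croissant-metadata.json")  # placeholder
print(ds.metadata.name)

# Stream a few records from the first declared record set. Depending on
# the mlcroissant version, records() may expect the record set's id/uuid.
first = ds.metadata.record_sets[0]
for i, record in enumerate(ds.records(record_set=first.name)):
    print(record)
    if i >= 2:
        break
```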

  2. Training Data - Plant root segmentation using 3D-Unet

    • zenodo.org
    bin, tiff
    Updated Nov 19, 2024
    Cite
    Richard Harwood; Richard Harwood (2024). Training Data - Plant root segmentation using 3D-Unet [Dataset]. http://doi.org/10.5281/zenodo.14183802
    Explore at:
Available download formats: tiff, bin
    Dataset updated
    Nov 19, 2024
    Dataset provided by
Zenodo (http://zenodo.org/)
    Authors
    Richard Harwood; Richard Harwood
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Oct 24, 2024
    Description

CT data and corresponding mask images used to train a 3D U-Net.

3. CT TRAINING AND VALIDATION SERIES FOR 3D AUTOMATED SEGMENTATION OF INNER EAR USING U-NET ARCHITECTURE DEEP-LEARNING MODEL

    • ieee-dataport.org
    Updated Oct 19, 2023
    Cite
    Jonathan Lim (2023). CT TRAINING AND VALIDATION SERIES FOR 3D AUTOMATED SEGMENTATION OF INNER EAR USING U-NET ARCHITECTURE DEEP-LEARNING MODEL [Dataset]. https://ieee-dataport.org/documents/ct-training-and-validation-series-3d-automated-segmentation-inner-ear-using-u-net
    Explore at:
    Dataset updated
    Oct 19, 2023
    Authors
    Jonathan Lim
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

This data set contains:

- Training dataset: 271 CT scans of inner ears used for optimization and training of the model.
- Validation dataset: 70 CT scans of inner ears used for external validation.
- The U-Net architecture deep-learning model's weights after optimized training.
- All manual segmentations performed for both datasets.
- All post-processed automated segmentations performed by the model for both datasets.

  4. U-Net Training Area Datasets

    • figshare.com
    zip
    Updated Apr 15, 2025
    Cite
    Shaun Williams (2025). U-Net Training Area Datasets [Dataset]. http://doi.org/10.6084/m9.figshare.28746281.v6
    Explore at:
Available download formats: zip
    Dataset updated
    Apr 15, 2025
    Dataset provided by
Figshare (http://figshare.com/)
    Authors
    Shaun Williams
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Download our Sentinel-2 imagery dataset to facilitate your model training processes. This high-resolution, multispectral collection is ideal for applications in emergency response, transportation, and wildfire detection. The training area covers the communities of Corona, Ventura, and Pala Mesa, California. RS_Data_MetaData contains the exact subset of imagery used for our "Wildfire Threat Detection for Transportation Infrastructure using U-Net for Semantic Segmentation" notebook.

5. Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for CoastTrain water/other segmentation of RGB 768x768 orthomosaic images

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 12, 2024
    + more versions
    Cite
    Buscombe, Daniel (2024). Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for CoastTrain water/other segmentation of RGB 768x768 orthomosaic images [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7574783
    Explore at:
    Dataset updated
    Jul 12, 2024
    Dataset authored and provided by
    Buscombe, Daniel
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for CoastTrain water/other segmentation of RGB 768x768 orthomosaic images

    These Residual-UNet model data are based on Coast Train images and associated labels. https://coasttrain.github.io/CoastTrain/docs/Version%201:%20March%202022/data

    Models have been created using Segmentation Gym* using the following dataset**: https://doi.org/10.1038/s41597-023-01929-2

    Image size used by model: 768 x 768 x 3 pixels

    classes: 1. Water 2. Other

    File descriptions

    For each model, there are 5 files with the same root name:

    1. '.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.

2. '.h5' weights file: this is the file that was created by the Segmentation Gym* function train_model.py. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function seg_images_in_folder.py. Models may be ensembled.

3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the config file (described above) that contains the instructions for the model training and implementation. The model card file is not used by the program but is important metadata, so it is important to keep it with the other files that collectively make up the model, and as such it is considered part of the model.

    4. '_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function train_model.py

    5. '.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training. A subset of data inside the .npz file. It is created by the Segmentation Gym function train_model.py

    Additionally, BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU
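As a quick illustration of how the '_model_history.npz' file described above can be inspected, here is a hedged Python sketch (the filename is a placeholder, and array names are discovered at runtime rather than assumed):

```python
# Hedged sketch: inspect a Segmentation Gym '_model_history.npz' archive.
# Array names are not documented here, so discover them at runtime.
import numpy as np
import matplotlib.pyplot as plt

history = np.load("model_root_model_history.npz")  # placeholder filename
print(history.files)  # e.g. training/validation losses and metrics

fig, ax = plt.subplots()
for key in history.files:
    arr = history[key]
    if arr.ndim == 1:          # only plot per-epoch 1-D curves
        ax.plot(arr, label=key)
ax.set_xlabel("epoch")
ax.legend()
fig.savefig("training_history.png")
```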

    References *Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym

    **Buscombe, D., Wernette, P., Fitzpatrick, S. et al. A 1.2 Billion Pixel Human-Labeled Dataset for Data-Driven Classification of Coastal Environments. Sci Data 10, 46 (2023). https://doi.org/10.1038/s41597-023-01929-2

  6. UWMGI: Unet [Train] [PyTorch] ds

    • kaggle.com
    Updated Apr 27, 2022
    Cite
    Awsaf (2022). UWMGI: Unet [Train] [PyTorch] ds [Dataset]. https://www.kaggle.com/datasets/awsaf49/uwmgi-unet-train-pytorch-ds/data
    Explore at:
Croissant — a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Apr 27, 2022
    Dataset provided by
Kaggle (http://kaggle.com/)
    Authors
    Awsaf
    Description

    Dataset

    This dataset was created by Awsaf


7. Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for FloodNet/10-class segmentation of RGB 768x512 UAV images

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 12, 2024
    Cite
    Buscombe, Daniel (2024). Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for FloodNet/10-class segmentation of RGB 768x512 UAV images [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7566809
    Explore at:
    Dataset updated
    Jul 12, 2024
    Dataset authored and provided by
    Buscombe, Daniel
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for FloodNet/10-class segmentation of RGB 768x512 UAV images

    These Residual-UNet model data are based on FloodNet images and associated labels.

    Models have been created using Segmentation Gym* using the following dataset**: https://github.com/BinaLab/FloodNet-Challenge-EARTHVISION2021

    Image size used by model: 768 x 512 x 3 pixels

    classes: 1. Background 2. Building-flooded 3. Building-non-flooded 4. Road-flooded 5. Road-non-flooded 6. Water 7. Tree 8. Vehicle 9. Pool 10. Grass

    File descriptions

    For each model, there are 5 files with the same root name:

    1. '.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.

2. '.h5' weights file: this is the file that was created by the Segmentation Gym* function train_model.py. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function seg_images_in_folder.py. Models may be ensembled.

3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the config file (described above) that contains the instructions for the model training and implementation. The model card file is not used by the program but is important metadata, so it is important to keep it with the other files that collectively make up the model, and as such it is considered part of the model.

    4. '_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function train_model.py

    5. '.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training. A subset of data inside the .npz file. It is created by the Segmentation Gym function train_model.py

    Additionally, BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU

    images.zip and labels.zip contain the images and labels, respectively, used to train the model

    References *Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym

    ** Rahnemoonfar, M., Chowdhury, T., Sarkar, A., Varshney, D., Yari, M. and Murphy, R.R., 2021. Floodnet: A high resolution aerial imagery dataset for post flood scene understanding. IEEE Access, 9, pp.89644-89654.

  8. Fully annotated Human Breast carcinoma cells for 3D segmentation training

    • zenodo.org
    • explore.openaire.eu
    zip
    Updated Jun 21, 2022
    Cite
    Varun Kapoor; Varun Kapoor (2022). Fully annotated Human Breast carcinoma cells for 3D segmentation training [Dataset]. http://doi.org/10.5281/zenodo.5904082
    Explore at:
Available download formats: zip
    Dataset updated
    Jun 21, 2022
    Dataset provided by
Zenodo (http://zenodo.org/)
    Authors
    Varun Kapoor; Varun Kapoor
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Fully annotated dataset for training 3D segmentation models. We provide the raw image patches in the Raw directory and instance segmentation labels in the RealMask directory; the semantic segmentation masks are provided in the BinaryMask directory. Manually curated by the team at Kapoorlabs from the dataset originally published at http://celltrackingchallenge.net/3d-datasets/.
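As an illustration of how such a directory layout might be consumed in Python, here is a minimal sketch that assumes matching filenames between the Raw and RealMask folders (an assumption; adjust to the actual archive):

```python
# Hedged sketch: pair Raw image volumes with RealMask instance labels.
# Assumes matching filenames across the two directories (an assumption).
from pathlib import Path

import tifffile

raw_dir, mask_dir = Path("Raw"), Path("RealMask")

for raw_path in sorted(raw_dir.glob("*.tif*")):
    mask_path = mask_dir / raw_path.name
    if not mask_path.exists():
        continue
    raw = tifffile.imread(raw_path)    # 3D volume, e.g. (z, y, x)
    mask = tifffile.imread(mask_path)  # integer instance labels
    print(raw_path.name, raw.shape, mask.shape, int(mask.max()), "instances")
```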

9. Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for OpenEarthMap/9-class segmentation of RGB 512x512 high-res. images

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 12, 2024
    Cite
    Buscombe, Daniel (2024). Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for OpenEarthMap/9-class segmentation of RGB 512x512 high-res. images [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7576893
    Explore at:
    Dataset updated
    Jul 12, 2024
    Dataset authored and provided by
    Buscombe, Daniel
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for OpenEarthMap/9-class segmentation of RGB 512x512 high-res. images

    These Residual-UNet model data are based on the OpenEarthMap dataset

    Models have been created using Segmentation Gym* using the following dataset**: https://zenodo.org/record/7223446#.Y9gtWHbMIuV

    Image size used by model: 512 x 512 x 3 pixels

    classes: 1. bareland 2. rangeland 3. development 4. road 5. tree 6. water 7. agricultural 8. building 9. nodata

    File descriptions

    For each model, there are 5 files with the same root name:

    1. '.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.

2. '.h5' weights file: this is the file that was created by the Segmentation Gym* function train_model.py. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function seg_images_in_folder.py. Models may be ensembled.

3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the config file (described above) that contains the instructions for the model training and implementation. The model card file is not used by the program but is important metadata, so it is important to keep it with the other files that collectively make up the model, and as such it is considered part of the model.

    4. '_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function train_model.py

    5. '.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training. A subset of data inside the .npz file. It is created by the Segmentation Gym function train_model.py

    Additionally, BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU

    References *Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym

    **Xia, Yokoya, Adriano, & Broni-Bediako. (2022). OpenEarthMap: A Benchmark Dataset for Global High-Resolution Land Cover Mapping [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7223446

10. Training set for label-free nuclei segmentation

    • figshare.com
    application/gzip
    Updated Nov 23, 2020
    Cite
    Allan Sauvat (2020). Training set for label-free nuclei segmentation [Dataset]. http://doi.org/10.6084/m9.figshare.13273277.v1
    Explore at:
Available download formats: application/gzip
    Dataset updated
    Nov 23, 2020
    Dataset provided by
    figshare
    Authors
    Allan Sauvat
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

An R object (list) containing two matrices (tensors) corresponding to the untiled brightfield images and their computed masks.

11. ZeroCostDL4Mic / DeepBacs - Multi-label U-Net training dataset (Bacillus subtilis) and pretrained model

    • data.niaid.nih.gov
    Updated Jul 17, 2024
    Cite
    Holden, Séamus (2024). ZeroCostDL4Mic / DeepBacs - Multi-label U-Net training dataset (Bacillus subtilis) and pretrained model [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5639252
    Explore at:
    Dataset updated
    Jul 17, 2024
    Dataset provided by
    Conduit, Mia
    Holden, Séamus
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Training and test images of live B. subtilis cells expressing FtsZ-GFP for the task of segmentation.

Additional information can be found on the project's GitHub wiki.

    The example shows the fluorescence widefield image of live B. subtilis cells expressing FtsZ-GFP, the manually annotated instance segmentation mask and the corresponding 2-label semantic segmentation mask used for model training.

    Training and test dataset

    Data type: Paired fluorescence and segmented mask images

    Microscopy data type: 2D widefield images (fluorescence)

    Microscope: Custom-built 100x inverted microscope bearing a 100x TIRF objective (Nikon CFI Apochromat TIRF 100XC Oil); images were captured on a Prime BSI sCMOS camera (Teledyne Photometrics)

    Cell type: B. subtilis strain SH130 grown under agarose pads

    File format: .tiff (8-bit)

    Image size: 1024 x 1024 px² (Pixel size: 65 nm)

    Image preprocessing: Images were denoised using PureDenoise and resulting 32-bit images were converted into 8-bit images after normalizing to 1% and 99.98% percentiles. Images were manually annotated using the Labkit Fiji plugin and mask images with labeled cytosol and cell boundaries were created using a custom Fiji macro (see our github repository).
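For illustration, the percentile normalization described above might look like the following NumPy sketch (a reconstruction, not the authors' code):

```python
# Reconstruction (not the authors' code) of the stated normalization:
# rescale to the [1%, 99.98%] intensity window, then convert to 8-bit.
import numpy as np

def to_uint8(img: np.ndarray, lo_pct: float = 1.0, hi_pct: float = 99.98) -> np.ndarray:
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    scaled = np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return (scaled * 255).astype(np.uint8)
```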

    Multi-label U-Net model:

The U-Net (2D) multilabel model was generated using the ZeroCostDL4Mic platform (Chamier & Laine et al., 2021). It was trained from scratch for 200 epochs on 733 paired image patches (image dimensions: 1024 x 1024 px², patch size: 256 x 256 px²) with a batch size of 8 and a categorical_crossentropy loss function, using the U-Net (2D) multilabel ZeroCostDL4Mic notebook (v 1) (Chamier & Laine et al., 2021). Key Python packages used include tensorflow (v 0.1.12), Keras (v 2.3.1), numpy (v 1.19.5), and cuda (v 11.1.105). The training was accelerated using a Tesla P100 GPU.
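As a hedged sketch of this training setup (not the actual ZeroCostDL4Mic notebook code), the following Keras snippet compiles a deliberately tiny stand-in U-Net with the stated loss, batch size, and epoch count; `build_tiny_unet` is illustrative only:

```python
# Illustrative stand-in, not the ZeroCostDL4Mic notebook: a deliberately
# tiny 2-level U-Net compiled with the hyperparameters stated above.
import tensorflow as tf
from tensorflow.keras import layers

def build_tiny_unet(num_classes: int, size: int = 256) -> tf.keras.Model:
    inp = layers.Input((size, size, 1))
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D()(c2)
    c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(
        layers.Concatenate()([u1, c1]))
    out = layers.Conv2D(num_classes, 1, activation="softmax")(c3)
    return tf.keras.Model(inp, out)

model = build_tiny_unet(num_classes=3)  # e.g. background / cytosol / boundary
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# With X: (N, 256, 256, 1) patches and Y: (N, 256, 256, 3) one-hot masks:
# model.fit(X, Y, batch_size=8, epochs=200, validation_split=0.1)
```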

Author(s): Mia Conduit (1,2), Séamus Holden (1,3)

    Contact email: Seamus.Holden@newcastle.ac.uk

    Affiliation:

    1) Centre for Bacterial Cell Biology, Biosciences Institute, Newcastle University, NE2 4AX UK

    2) ORCID: 0000-0002-7169-907X

    Associated publications: Whitley et al., 2021, Nature Communications, https://doi.org/10.15252/embj.201696235

12. Training dataset for semantic segmentation (U-Net) of structural conservation practices

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 23, 2020
    Cite
    Vitor Souza Martins (2020). Training dataset for semantic segmentation (U-Net) of structural conservation practices [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3762369
    Explore at:
    Dataset updated
    Jul 23, 2020
    Dataset authored and provided by
    Vitor Souza Martins
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

In this research, the best management practices include vegetative/structural conservation practices (SCP) across crop fields, such as grassed waterways and terraces. This reference dataset includes 500,000 patch pairs, each consisting of a false-color image (B1: NIR, B2: Red, B3: Green) and a binary label (SCP: yes [1] or no [0]). The training samples were randomly extracted from the Iowa BMP project (https://www.gis.iastate.edu/gisf/projects/conservation-practices); 90% of the patches contain SCP areas and 10% do not. The patch dimension is 256 x 256 pixels at 2-m resolution. Due to file size, the images were uploaded as separate *.rar files (imagem_0_200k.rar, imagem_200_400k.rar, imagem_400_500k.rar); users should download all of them and merge the contents into the same folder. The corresponding labels are all in the "class_bin.rar" file.
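For illustration, pairing the extracted images with their labels might look like the following Python sketch (the folder names "images" and "class_bin" are assumptions based on the archive names above):

```python
# Hedged sketch: iterate over image/label patch pairs after extraction.
# Folder names and matching filenames are assumptions based on the
# archive names above ('imagem_*.rar' -> images/, 'class_bin.rar' -> class_bin/).
from pathlib import Path

import numpy as np
from PIL import Image

image_dir, label_dir = Path("images"), Path("class_bin")

for img_path in sorted(image_dir.iterdir()):
    lbl_path = label_dir / img_path.name
    if not lbl_path.exists():
        continue
    img = np.asarray(Image.open(img_path))  # 256 x 256 x 3 false-color (NIR, Red, Green)
    lbl = np.asarray(Image.open(lbl_path))  # 256 x 256 binary mask (1 = SCP)
    print(img_path.name, img.shape, lbl.shape)
```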

    Application: These pair images are useful for conservation practitioners interested in the classification of vegetative/structural SCPs using deep-learning semantic segmentation methods.

Further information will be made available in the future.

  13. UWMGI:Train[Unet(inceptionresnetv2)][v1-Fold0]

    • kaggle.com
    Updated Jun 14, 2022
    Cite
    Pushpa Pandey (2022). UWMGI:Train[Unet(inceptionresnetv2)][v1-Fold0] [Dataset]. https://www.kaggle.com/datasets/pushpapandey/uwmgitrainunetinceptionresnetv2v1fold0
    Explore at:
Croissant — a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Jun 14, 2022
    Dataset provided by
Kaggle (http://kaggle.com/)
    Authors
    Pushpa Pandey
    Description

    Dataset

    This dataset was created by Pushpa Pandey


14. Membrane Segmentation using Unet with Grad-CAM based Heatmap

    • data.mendeley.com
    Updated Aug 14, 2020
    Cite
    Duc Chung Tran (2020). Membrane Segmentation using Unet with Grad-CAM based Heatmap [Dataset]. http://doi.org/10.17632/6whw7rx6b6.1
    Explore at:
    Dataset updated
    Aug 14, 2020
    Authors
    Duc Chung Tran
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

This dataset visualizes Grad-CAM based heatmaps applied to membrane segmentation results obtained with a U-Net. The training data are in the "train" folder, in which:

- "checkpoint" folder: stores checkpoint files for 3 training lengths: 100, 500, and 5,000 epochs.
- "image" folder: holds training images.
- "label" folder: stores labelled membrane images.

The testing results are stored in "test_xxx" folders for the same 3 epoch counts: 100, 500, and 5,000.
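For readers wanting to reproduce this kind of visualization, here is a hedged Grad-CAM sketch for a Keras segmentation model (not the dataset authors' code; the layer name is a placeholder):

```python
# Hedged Grad-CAM sketch for a Keras segmentation model (not the dataset
# authors' code). "conv_bottleneck" is a placeholder layer name -- pick a
# real convolutional layer from model.summary().
import numpy as np
import tensorflow as tf

def grad_cam(model: tf.keras.Model, x: np.ndarray,
             layer_name: str = "conv_bottleneck") -> np.ndarray:
    """x: one input batch (1, H, W, C); returns a normalized (h, w) heatmap."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(x)
        score = tf.reduce_mean(preds[..., -1])  # mean foreground probability
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))           # (1, channels)
    cam = tf.einsum("bhwc,bc->bhw", conv_out, weights)[0]  # weighted feature maps
    cam = tf.nn.relu(cam)                                  # keep positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-12)).numpy()
```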

15. Pecha Line Segmentation Datasets

    • data.niaid.nih.gov
    Updated Jan 24, 2020
    Cite
    Tenzin (2020). Pecha Line Segmentation Datasets [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3244837
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset authored and provided by
    Tenzin
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Training data for line segmentation of Tibetan woodblock prints. This version of the dataset contains images randomly picked from LOC scans of the Derge Kangyur.

A U-Net model was trained on this dataset using the fastai library; the result was unsatisfactory due to insufficient data.

The dataset contains:

Images/: original pecha images

Labels/: mask for each image

valid.txt: list of validation filenames

code.txt: list of object names

16. Doodleverse/Segmentation Zoo Res-UNet model for NOAA ERI/4-class segmentation of RGB 512x512 images

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 12, 2024
    Cite
    Goldstein, Evan B. (2024). Doodleverse/Segmentation Zoo Res-UNet model for NOAA ERI/4-class segmentation of RGB 512x512 images [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7628732
    Explore at:
    Dataset updated
    Jul 12, 2024
    Dataset authored and provided by
    Goldstein, Evan B.
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

This Residual-UNet model is trained on 1,179 pairs of human-generated segmentation labels and images from Emergency Response Imagery (ERI) collected by the US National Oceanic and Atmospheric Administration (NOAA) after Hurricanes Barry, Delta, Dorian, Florence, Ida, Laura, Michael, Sally, and Zeta, and Tropical Storm Gordon. The dataset is available here: https://doi.org/10.5281/zenodo.7268082

    Models have been created using Segmentation Gym:

    Code - https://github.com/Doodleverse/segmentation_gym

    Paper - https://doi.org/10.1029/2022EA002332

    The model takes input images that are 512 x 512 x 3 pixels, and the output is 512 x 512 x 4, corresponding to 4 classes:

    water

    bare sediment

    vegetation

    development (roads, buildings, power lines, parking lots, etc.)

    Included here are 6 files with the same root name:

    '.json' config file: this is the file that was used by Segmentation Gym to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction.

'.h5' weights file: this is the file that was created by the Segmentation Gym function train_model.py. It contains the trained model's parameter weights. It can be called by the Segmentation Gym function seg_images_in_folder.py.

    '_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function train_model.py

    '.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training. A subset of data inside the .npz file. It is created by the Segmentation Gym function train_model.py

    '.zip' of the model in the Tensorflow ‘saved model’ format. It is created by the Segmentation Gym function utils/gen_saved_model.py

'_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the config file (described above) that contains the instructions for the model training and implementation. The model card file is not used by the program but is important metadata, so it is important to keep it with the other files that collectively make up the model, and as such it is considered part of the model.

    Additionally, BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU
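As a usage illustration for the TensorFlow 'saved model' format mentioned above, a minimal inference sketch might look like this (paths are placeholders, and the class ordering is assumed from the list above):

```python
# Hedged sketch: inference with the unzipped TensorFlow 'saved model'.
# The path is a placeholder; inputs must be 512 x 512 x 3.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("noaa_eri_saved_model", compile=False)

rgb = np.zeros((1, 512, 512, 3), dtype=np.float32)  # stand-in image batch
probs = model.predict(rgb)                          # (1, 512, 512, 4)
# Class ordering assumed from the list above:
# 0=water, 1=bare sediment, 2=vegetation, 3=development.
class_map = np.argmax(probs[0], axis=-1)
```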

17. Doodleverse/Segmentation Gym Res-UNet models for 2-class (water, other) segmentation of CoastCam runup timestack imagery

    • data.niaid.nih.gov
    Updated Jul 12, 2024
    Cite
    Buscombe, Daniel (2024). Doodleverse/Segmentation Gym Res-UNet models for 2-class (water, other) segmentation of CoastCam runup timestack imagery [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7921970
    Explore at:
    Dataset updated
    Jul 12, 2024
    Dataset authored and provided by
    Buscombe, Daniel
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Doodleverse/Segmentation Gym Res-UNet models for 2-class (water, other) segmentation of CoastCam runup timestack imagery

    This model release is part of the Doodleverse: https://github.com/Doodleverse

    These Residual-UNet model data are based on RGB (red, green, and blue) images of coasts and associated labels.

    Models have been created using Segmentation Gym* using an as-yet unpublished dataset of images and associated label images. See https://github.com/Doodleverse for more information about how this model was trained, and how to use it for inference

    Classes: {0=other, 1=water}

    File descriptions

    There are two models; v7 has been trained from scratch, and v8 has been fine-tuned using hyperparameter adjustment. For each model, there are 5 files with the same root name:

    1. '.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.

2. '.h5' weights file: this is the file that was created by the Segmentation Gym* function train_model.py. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function seg_images_in_folder.py. Models may be ensembled.

3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the config file (described above) that contains the instructions for the model training and implementation. The model card file is not used by the program but is important metadata, so it is important to keep it with the other files that collectively make up the model, and as such it is considered part of the model.

    4. '_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function train_model.py

    5. '.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training. A subset of data inside the .npz file. It is created by the Segmentation Gym function train_model.py

    Additionally,

    1. BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU
    2. sample_images.zip contains a few example input files, for model testing
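Since the notes above mention that models may be ensembled, here is a minimal sketch of one common approach, averaging per-pixel class probabilities across the v7 and v8 models (an illustration, not Segmentation Gym's exact implementation):

```python
# Hedged sketch of probability-averaging ensembling (an illustration,
# not Segmentation Gym's exact implementation). `models` would hold the
# v7 and v8 Keras models, rebuilt from their configs and weights.
import numpy as np

def ensemble_predict(models, batch):
    """batch: (N, H, W, 3) images; returns (N, H, W) class maps."""
    probs = np.mean([m.predict(batch) for m in models], axis=0)
    return np.argmax(probs, axis=-1)  # 0=other, 1=water
```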

    References

    *Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym

  18. U-Net Pretrained Model

    • figshare.com
    zip
    Updated Apr 11, 2025
    Cite
    Shaun Williams (2025). U-Net Pretrained Model [Dataset]. http://doi.org/10.6084/m9.figshare.28774712.v2
    Explore at:
Available download formats: zip
    Dataset updated
    Apr 11, 2025
    Dataset provided by
Figshare (http://figshare.com/)
    Authors
    Shaun Williams
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

The model was trained on over 3,000 image-mask pairs using Sentinel-2 imagery and CAL FIRE's Wildland Fire Threat layer, specifically from the 2017 Lilac Fire in San Diego County, California. The training data were curated from the Pala Mesa region, an area significantly impacted by wildfire activity and containing critical road and rail networks. The model outputs multi-class segmentation masks that classify areas into low-moderate, high, very high, and extreme wildfire-threat categories, which may support emergency preparedness and response and infrastructure risk analysis. This pretrained version allows users to test the model and generate inferences without requiring a long training runtime, making it ideal for rapid evaluation, demonstration, or integration into spatial AI workflows.

Note: this model was trained on a region-specific dataset (Lilac Fire, 2017), and generalization to other fires or regions may require fine-tuning. Masks are aligned with CAL FIRE's threat classification and were processed using 1,000-foot buffers around infrastructure features.

Data provided by: California Department of Forestry and Fire Protection (CAL FIRE); European Space Agency (Sentinel-2 via the Copernicus Program); U.S. Census Bureau (TIGER/Line Roads); Federal Railroad Administration / Bureau of Transportation Statistics.

19. Doodleverse/Segmentation Zoo Res-UNet models for 4-class (water, whitewater, sediment and other) segmentation of Sentinel-2 and Landsat-7/8 3-band (RGB) images of coasts

    • zenodo.org
    bin, json, png, txt
    Updated Jul 16, 2024
    + more versions
    Cite
    Daniel Buscombe; Daniel Buscombe (2024). Doodleverse/Segmentation Zoo Res-UNet models for 4-class (water, whitewater, sediment and other) segmentation of Sentinel-2 and Landsat-7/8 3-band (RGB) images of coasts. [Dataset]. http://doi.org/10.5281/zenodo.6950472
    Explore at:
Available download formats: json, txt, png, bin
    Dataset updated
    Jul 16, 2024
    Dataset provided by
Zenodo (http://zenodo.org/)
    Authors
    Daniel Buscombe; Daniel Buscombe
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Doodleverse/Segmentation Zoo Res-UNet models for 4-class (water, whitewater, sediment and other) segmentation of Sentinel-2 and Landsat-7/8 3-band (RGB) images of coasts.

    These Residual-UNet model data are based on RGB (red, green, and blue) images of coasts and associated labels.

    Models have been created using Segmentation Gym* using the following dataset**: https://doi.org/10.5281/zenodo.7335647

    Classes: {0=water, 1=whitewater, 2=sediment, 3=other}

    File descriptions

    For each model, there are 5 files with the same root name:

    1. '.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.

2. '.h5' weights file: this is the file that was created by the Segmentation Gym* function `train_model.py`. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function `seg_images_in_folder.py`. Models may be ensembled.

3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the `config` file (described above) that contains the instructions for the model training and implementation. The model card file is not used by the program but is important metadata, so it is important to keep it with the other files that collectively make up the model, and as such it is considered part of the model.

    4. '_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function `train_model.py`

    5. '.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training. A subset of data inside the .npz file. It is created by the Segmentation Gym function `train_model.py`

    Additionally, BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU
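As an illustration of how BEST_MODEL.txt and the per-model files described above might be used together, here is a hedged Python sketch (paths are placeholders; rebuilding the architecture from the config is left to Segmentation Gym):

```python
# Hedged sketch: locate the best model via BEST_MODEL.txt and read its
# config; paths are placeholders, and rebuilding the network from the
# config is left to Segmentation Gym itself.
import json
from pathlib import Path

root = Path("model_release")  # hypothetical unzip location
best = (root / "BEST_MODEL.txt").read_text().strip()

config = json.loads((root / best).with_suffix(".json").read_text())
print(best, "->", len(config), "config fields")

# model = rebuild_from_config(config)   # done by Segmentation Gym's scripts
# model.load_weights((root / best).with_suffix(".h5"))
```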

    References

    *Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym

    ** Buscombe, Daniel, Goldstein, Evan, Bernier, Julie, Bosse, Stephen, Colacicco, Rosa, Corak, Nick, Fitzpatrick, Sharon, del Jesús González Guillén, Anais, Ku, Venus, Paprocki, Julie, Platt, Lindsay, Steele, Bethel, Wright, Kyle, & Yasin, Brandon. (2022). Images and 4-class labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts (water, whitewater, sediment, other) (v1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7335647

20. U-Net GIS Data

    • figshare.com
    zip
    Updated Apr 13, 2025
    Cite
    Shaun Williams (2025). U-Net GIS Data [Dataset]. http://doi.org/10.6084/m9.figshare.28755017.v3
    Explore at:
Available download formats: zip
    Dataset updated
    Apr 13, 2025
    Dataset provided by
    figshare
    Authors
    Shaun Williams
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

The GIS database used in this project serves as a centralized repository for all spatial datasets required for wildfire threat analysis and model training. It includes CAL FIRE's Wildland Fire Threat layer, which provides pixel-based classifications of wildfire potential across California, as well as transportation infrastructure layers, including primary and secondary roads and railways.

To support impact analysis, 1,000-foot buffer zones were generated around each infrastructure feature to define zones of interest for wildfire segmentation. The database is structured for integration into both machine learning workflows and GIS environments, enabling seamless overlay, visualization, and spatial querying within platforms such as ArcGIS Pro or QGIS.
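For illustration, the 1,000-foot buffering step described above could be reproduced with GeoPandas roughly as follows (file, layer, and CRS choices are assumptions, not the project's actual workflow):

```python
# Hedged sketch of the 1,000-foot buffering step with GeoPandas. File,
# layer, and CRS choices are assumptions, not the project's workflow.
import geopandas as gpd

FEET_TO_M = 0.3048

roads = gpd.read_file("infrastructure.gpkg", layer="primary_roads")  # placeholder
roads_m = roads.to_crs(epsg=3310)  # California Albers (metres)

buffers = roads_m.geometry.buffer(1000 * FEET_TO_M)  # 1,000-ft zones of interest
gpd.GeoDataFrame(geometry=buffers, crs=roads_m.crs).to_file("road_buffers.gpkg")
```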
