9 datasets found
  1. Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for DeepGlobe/7-class segmentation of RGB 512x512 high-res. images

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 12, 2024
    Cite
    Buscombe, Daniel (2024). Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for DeepGlobe/7-class segmentation of RGB 512x512 high-res. images [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7576897
    Dataset updated
    Jul 12, 2024
    Dataset authored and provided by
    Buscombe, Daniel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for DeepGlobe/7-class segmentation of RGB 512x512 high-res. images

    These Residual-UNet model data are based on the DeepGlobe dataset.

    Models were created with Segmentation Gym* using the following dataset**: https://www.kaggle.com/datasets/balraj98/deepglobe-land-cover-classification-dataset

    Image size used by model: 512 x 512 x 3 pixels

    Classes: 1. urban, 2. agricultural, 3. rangeland, 4. forest, 5. water, 6. bare, 7. unknown
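
    For orientation, a minimal sketch of a label map for these classes. The zero-based ordering is an assumption, not something stated above; verify the class order against the model's '.json' config file before relying on it:

      # Hypothetical label map for the 7 DeepGlobe classes listed above.
      # Zero-based ordering is an assumption; check the model's '.json'
      # config file for the authoritative class order.
      DEEPGLOBE_CLASSES = {
          0: "urban",
          1: "agricultural",
          2: "rangeland",
          3: "forest",
          4: "water",
          5: "bare",
          6: "unknown",
      }

      def label_name(index: int) -> str:
          """Return the class name for a predicted integer label."""
          return DEEPGLOBE_CLASSES.get(index, "invalid")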

    File descriptions

    For each model, there are 5 files with the same root name:

    1. '.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It records how the model was built and which data it used, as well as how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.

    2. '.h5' weights file: this is the file that was created by the Segmentation Gym* function train_model.py. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function seg_images_in_folder.py. Models may be ensembled.

    3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model's origins, training choices, and the dataset that the model is based upon. There is some redundancy between this file and the config file (described above), which contains the instructions for model training and implementation. The model card file is not used by the program, but it is important metadata, so it should be kept with the other files that collectively make up the model; as such, it is considered part of the model.

    4. '_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function train_model.py (see the sketch after this list for one way to inspect it).

    5. '.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training, a subset of the data inside the .npz file. It is created by the Segmentation Gym function train_model.py.

    Additionally, BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU.
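
    A minimal sketch of inspecting the '_model_history.npz' archive with numpy; the array names inside the archive are an assumption here, so list them with .files first and adapt:

      import numpy as np
      import matplotlib.pyplot as plt

      # 'example_model_history.npz' is a hypothetical file name; substitute
      # the root name of the model you downloaded.
      hist = np.load("example_model_history.npz")
      print(hist.files)  # actual array names; the keys below are assumed

      plt.plot(hist["loss"], label="training loss")        # assumed key
      plt.plot(hist["val_loss"], label="validation loss")  # assumed key
      plt.xlabel("epoch")
      plt.ylabel("loss")
      plt.legend()
      plt.savefig("loss_history.png")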

    References

    *Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym

    **Demir, I., Koperski, K., Lindenbaum, D., Pang, G., Huang, J., Basu, S., Hughes, F., Tuia, D. and Raskar, R., 2018. Deepglobe 2018: A challenge to parse the earth through satellite images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 172-181).

  2. DeepGlobe Land Cover Classification Challenge

    • kaggle.com
    zip
    Updated Jul 2, 2019
    Cite
    hehe (2019). DeepGlobe Land Cover Classification Challenge [Dataset]. https://www.kaggle.com/bhaikopath/deepglobe-land-cover-classification-challenge
    Available download formats: zip (0 bytes)
    Dataset updated
    Jul 2, 2019
    Authors
    hehe
    Description

    Dataset

    This dataset was created by hehe

    Released under Data files © Original Authors


  3. DeepGlobe

    • opendatalab.com
    • paperswithcode.com
    zip
    Updated Mar 13, 2018
    Cite
    DigitalGlobe (2018). DeepGlobe [Dataset]. https://opendatalab.com/OpenDataLab/DeepGlobe
    Available download formats: zip
    Dataset updated
    Mar 13, 2018
    Dataset provided by
    DigitalGlobe (http://www.digitalglobe.com/)
    Wageningen University
    Facebook
    License

    http://deepglobe.org/resources.html

    Description

    We observe that satellite imagery is a powerful source of information, as it contains more structured and uniform data compared to traditional images. Although the computer vision community has been accomplishing hard tasks on everyday image datasets using deep learning, satellite images have only recently gained attention for maps and population analysis. This workshop aims at bringing together a diverse set of researchers to advance the state-of-the-art in satellite image analysis.

    To direct more attention to such approaches, we propose the DeepGlobe Satellite Image Understanding Challenge, structured around three different satellite image understanding tasks. The datasets created and released for this competition may serve as reference benchmarks for future research in satellite image analysis. Furthermore, since the challenge tasks will involve "in the wild" forms of classic computer vision problems, these datasets have the potential to become valuable testbeds for the design of robust vision algorithms, beyond the area of remote sensing.

  4. Accuracy comparisons in form of mIoU/OA on test set.

    • plos.figshare.com
    xls
    Updated May 14, 2024
    Cite
    Xin Li; Hejing Zhao; Dan Wu; Qixing Liu; Rui Tang; Linyang Li; Zhennan Xu; Xin Lyu (2024). Accuracy comparisons in form of mIoU/OA on test set. [Dataset]. http://doi.org/10.1371/journal.pone.0301134.t006
    Available download formats: xls
    Dataset updated
    May 14, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Xin Li; Hejing Zhao; Dan Wu; Qixing Liu; Rui Tang; Linyang Li; Zhennan Xu; Xin Lyu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Accuracy comparisons in form of mIoU/OA on test set.

  5. Comparative methods.

    • plos.figshare.com
    xls
    Updated May 14, 2024
    + more versions
    Cite
    Xin Li; Hejing Zhao; Dan Wu; Qixing Liu; Rui Tang; Linyang Li; Zhennan Xu; Xin Lyu (2024). Comparative methods. [Dataset]. http://doi.org/10.1371/journal.pone.0301134.t003
    Available download formats: xls
    Dataset updated
    May 14, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Xin Li; Hejing Zhao; Dan Wu; Qixing Liu; Rui Tang; Linyang Li; Zhennan Xu; Xin Lyu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Land cover classification (LCC) is of paramount importance for assessing environmental changes in remote sensing images (RSIs) as it involves assigning categorical labels to ground objects. The growing availability of multi-source RSIs presents an opportunity for intelligent LCC through semantic segmentation, offering a comprehensive understanding of ground objects. Nonetheless, the heterogeneous appearances of terrains and objects contribute to significant intra-class variance and inter-class similarity at various scales, adding complexity to this task. In response, we introduce SLMFNet, an innovative encoder-decoder segmentation network that adeptly addresses this challenge. To mitigate the sparse and imbalanced distribution of RSIs, we incorporate selective attention modules (SAMs) aimed at enhancing the distinguishability of learned representations by integrating contextual affinities within spatial and channel domains through a compact number of matrix operations. Precisely, the selective position attention module (SPAM) employs spatial pyramid pooling (SPP) to resample feature anchors and compute contextual affinities. In tandem, the selective channel attention module (SCAM) concentrates on capturing channel-wise affinity. Initially, feature maps are aggregated into fewer channels, followed by the generation of pairwise channel attention maps between the aggregated channels and all channels. To harness fine-grained details across multiple scales, we introduce a multi-level feature fusion decoder with data-dependent upsampling (MLFD) to meticulously recover and merge feature maps at diverse scales using a trainable projection matrix. Empirical results on the ISPRS Potsdam and DeepGlobe datasets underscore the superior performance of SLMFNet compared to various state-of-the-art methods. Ablation studies affirm the efficacy and precision of SAMs in the proposed model.
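
    Since the abstract describes SCAM only at a high level, the following is a rough sketch of the channel-attention idea it outlines (aggregate the feature maps into fewer channels, then form pairwise attention between the aggregated channels and all channels). This is not the authors' implementation; the reduction factor and the residual sum are assumptions:

      import torch
      import torch.nn as nn

      class SelectiveChannelAttentionSketch(nn.Module):
          # Rough sketch of the SCAM idea described in the abstract above.
          # Not the SLMFNet code; 'reduced' and the residual sum are assumed.
          def __init__(self, channels: int, reduced: int = 8):
              super().__init__()
              self.aggregate = nn.Conv2d(channels, reduced, kernel_size=1)

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              b, c, h, w = x.shape
              agg = self.aggregate(x).flatten(2)   # (B, K, HW): aggregated channels
              full = x.flatten(2)                  # (B, C, HW): all channels
              # Pairwise channel attention between all C channels and the K aggregated ones.
              affinity = torch.softmax(full @ agg.transpose(1, 2), dim=-1)  # (B, C, K)
              out = (affinity @ agg).view(b, c, h, w)  # re-weighted features
              return x + out                           # residual connection (assumed)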

  6. EuroSAT Dataset

    • paperswithcode.com
    • opendatalab.com
    • +1 more
    Updated Feb 15, 2022
    + more versions
    Cite
    Patrick Helber; Benjamin Bischke; Andreas Dengel; Damian Borth (2022). EuroSAT Dataset [Dataset]. https://paperswithcode.com/dataset/eurosat
    Dataset updated
    Feb 15, 2022
    Authors
    Patrick Helber; Benjamin Bischke; Andreas Dengel; Damian Borth
    Description

    EuroSAT is a dataset and deep learning benchmark for land use and land cover classification. The dataset is based on Sentinel-2 satellite images covering 13 spectral bands and consists of 10 classes with a total of 27,000 labeled and geo-referenced images.
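
    As a minimal sketch, recent torchvision releases ship a built-in wrapper for the RGB variant of EuroSAT (note: this is the 3-band RGB release, not all 13 spectral bands):

      from torchvision.datasets import EuroSAT

      # Downloads the RGB EuroSAT release into ./data on first use.
      dataset = EuroSAT(root="data", download=True)
      image, label = dataset[0]             # PIL image and integer class label
      print(len(dataset), dataset.classes)  # 27,000 images across 10 classes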

  7. Data from: A Comprehensive Analysis of Weakly-Supervised Semantic Segmentation in Different Image Domains

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jun 21, 2020
    Cite
    Lyndon Chan; Mahdi S. Hosseini; Konstantinos N. Plataniotis (2020). A Comprehensive Analysis of Weakly-Supervised Semantic Segmentation in Different Image Domains [Dataset]. http://doi.org/10.5281/zenodo.3902506
    Available download formats: zip
    Dataset updated
    Jun 21, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Lyndon Chan; Mahdi S. Hosseini; Konstantinos N. Plataniotis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Content

    This repository contains pre-trained computer vision models, data labels, and images used in the pre-print publication "A Comprehensive Analysis of Weakly-Supervised Semantic Segmentation in Different Image Domains":

    1. ADPdevkit: a folder containing the 50-image validation ("tuning") set and the 50-image evaluation ("segtest") set from the Atlas of Digital Pathology database, formatted in the VOC2012 style (see the sketch after this list); the full database of 17,668 images is available for download from the original website
    2. VOCdevkit: a folder containing the relevant files for the PASCAL VOC2012 Segmentation dataset, with both the trainaug and test sets
    3. DGdevkit: a folder containing the 803 test images of the DeepGlobe Land Cover challenge dataset formatted in the VOC2012 style
    4. cues: a folder containing the pre-generated weak cues for ADP, VOC2012, and DeepGlobe datasets, as required for the SEC and DSRG methods
    5. models_cnn: a folder containing the pre-trained CNN models
    6. models_wsss: a folder containing the pre-trained SEC, DSRG, and IRNet models, along with dense CRF settings
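
    Because several of these folders follow the VOC2012 layout, here is a minimal sketch of reading one image/mask pair; the file names are hypothetical, and the exact directory structure should be checked against the downloaded devkit:

      from PIL import Image

      # Hypothetical paths following the standard VOC2012 convention
      # (JPEGImages/ for inputs, SegmentationClass/ for label masks).
      image = Image.open("DGdevkit/VOC2012/JPEGImages/000001.jpg")
      mask = Image.open("DGdevkit/VOC2012/SegmentationClass/000001.png")  # palette-indexed labels
      print(image.size, mask.size)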

    More information

    For more information, please refer to the following article. Please cite this article when using the data set.

    @misc{chan2019comprehensive,
    title={A Comprehensive Analysis of Weakly-Supervised Semantic Segmentation in Different Image Domains},
    author={Lyndon Chan and Mahdi S. Hosseini and Konstantinos N. Plataniotis},
    year={2019},
    eprint={1912.11186},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
    }

    For the full code released on GitHub, please visit the repository at: https://github.com/lyndonchan/wsss-analysis

    Contact

    For questions, please contact:
    Lyndon Chan
    lyndon.chan@mail.utoronto.ca
    http://orcid.org/0000-0002-1185-7961

  8. Pretraining data of SkySense++

    • zenodo.org
    bin
    Updated Mar 18, 2025
    Cite
    Kang Wu (2025). Pretraining data of SkySense++ [Dataset]. http://doi.org/10.5281/zenodo.14994430
    Available download formats: bin
    Dataset updated
    Mar 18, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Kang Wu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Mar 9, 2024
    Description

    This repository contains the data description and processing for the paper titled "SkySense++: A Semantic-Enhanced Multi-Modal Remote Sensing Foundation Model for Earth Observation." The code is available here.

    📢 Latest Updates

    🔥🔥🔥 Last Updated on 2024.03.14 🔥🔥🔥

    Pretrain Data

    RS-Semantic Dataset

    We conduct semantic-enhanced pretraining on the RS-Semantic dataset, which consists of 13 datasets with pixel-level annotations. Below are the specifics of these datasets.

    Dataset              Modalities         GSD (m)  Size                    Categories
    Five Billion Pixels  Gaofen-2           4        6800x7200               24
    Potsdam              Airborne           0.05     6000x6000               5
    Vaihingen            Airborne           0.05     2494x2064               5
    Deepglobe            WorldView          0.5      2448x2448               6
    iSAID                Multiple Sensors   -        800x800 to 4000x13000   15
    LoveDA               Spaceborne         0.3      1024x1024               7
    DynamicEarthNet      WorldView          0.3      1024x1024               7
                         Sentinel-2*        10       32x32
                         Sentinel-1*        10       32x33
    Pastis-MM            WorldView          0.3      1024x1024               18
                         Sentinel-2*        10       32x32
                         Sentinel-1*        10       32x33
    C2Seg-AB             Sentinel-2*        10       128x128                 13
                         Sentinel-1*        10       128x128
    FLAIR                Spot-5             0.2      512x512                 12
                         Sentinel-2*        10       40x40
    DFC20                Sentinel-2         10       256x256                 9
                         Sentinel-1         10       256x256
    S2-naip              NAIP               1        512x512                 32
                         Sentinel-2*        10       64x64
                         Sentinel-1*        10       64x64
    JL-16                Jilin-1            0.72     512x512                 16
                         Sentinel-1*        10       40x40

    * for time-series data.

    EO Benchmark

    We evaluate our SkySense++ on 12 typical Earth Observation (EO) tasks across 7 domains: agriculture, forestry, oceanography, atmosphere, biology, land surveying, and disaster management. The detailed information about the datasets used for evaluation is as follows.

    Domain               Task type                    Dataset                Modalities              GSD (m)  Image size
    Agriculture          Crop classification          Germany                Sentinel-2*             10       24x24
    Forestry             Tree species classification  TreeSatAI-Time-Series  Airborne                0.2      304x304
                                                                             Sentinel-2*             10       6x6
                                                                             Sentinel-1*             10       6x6
                         Deforestation segmentation   Atlantic               Sentinel-2              10       512x512
    Oceanography         Oil spill segmentation       SOS                    Sentinel-1              10       256x256
    Atmosphere           Air pollution regression     3pollution             Sentinel-2              10       200x200
                                                                             Sentinel-5P             2600     120x120
    Biology              Wildlife detection           Kenya                  Airborne                -        3068x4603
    Land surveying       LULC mapping                 C2Seg-BW               Gaofen-6                10       256x256
                                                                             Gaofen-3                10       256x256
                         Change detection             dsifn-cd               GoogleEarth             0.3      512x512
    Disaster management  Flood monitoring             Flood-3i               Airborne                0.05     256x256
                                                      C2SMSFloods            Sentinel-2, Sentinel-1  10       512x512
                         Wildfire monitoring          CABUAR                 Sentinel-2              10       5490x5490
                         Landslide mapping            GVLM                   GoogleEarth             0.3      1748x1748 ~ 10808x7424
                         Building damage assessment   xBD                    WorldView               0.3      1024x1024

    * for time-series data.

  9. Quantitative evaluation of results on DeepGlobe and Massachusetts dataset.

    • plos.figshare.com
    xls
    Updated Jul 18, 2024
    Cite
    Wei Lu; Xiaoying Shi; Zhiping Lu (2024). Quantitative evaluation of results on DeepGlobe and Massachusetts dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0305933.t003
    Available download formats: xls
    Dataset updated
    Jul 18, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Wei Lu; Xiaoying Shi; Zhiping Lu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Massachusetts
    Description

    Quantitative evaluation of results on DeepGlobe and Massachusetts dataset.

