CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
The DeepGlobe Land Cover Classification Dataset is widely used in remote sensing and computer vision for training machine learning models to classify different land cover types such as urban areas, forests, water, and agriculture.
Automatic categorization and segmentation of land cover is of great importance for sustainable development, autonomous agriculture, and urban planning. The DeepGlobe Land Cover Classification Challenge poses the task of automatic classification of land cover types, defined as a multi-class segmentation problem: detecting areas of urban, agriculture, rangeland, forest, water, barren, and unknown land cover.
This dataset was obtained from the Land Cover Classification Track in the DeepGlobe Challenge. For more details on the dataset, refer to the related publication: DeepGlobe 2018: A Challenge to Parse the Earth through Satellite Images.
Any work based on the dataset should cite:
@InProceedings{DeepGlobe18,
author = {Demir, Ilke and Koperski, Krzysztof and Lindenbaum, David and Pang, Guan and Huang, Jing and Basu, Saikat and Hughes, Forest and Tuia, Devis and Raskar, Ramesh},
title = {DeepGlobe 2018: A Challenge to Parse the Earth Through Satellite Images},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2018}
}
The DeepGlobe Land Cover Classification Challenge, and hence the dataset, are governed by the DeepGlobe Rules, DigitalGlobe's Internal Use License Agreement, and the Annotation License Agreement.
Each satellite image is paired with a mask image for land cover annotation. The mask is an RGB image with 7 classes of labels, using the following (R, G, B) color coding:
- urban: (0, 255, 255)
- agriculture: (255, 255, 0)
- rangeland: (255, 0, 255)
- forest: (0, 255, 0)
- water: (0, 0, 255)
- barren: (255, 255, 255)
- unknown: (0, 0, 0)
File names for the satellite images and corresponding masks are id_sat.jpg and id_mask.png, where id is a randomized integer.
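For training, these RGB masks are typically converted to per-pixel class indices. A minimal sketch using NumPy, assuming the color coding listed above (verify the mapping against the dataset documentation; the file name is a hypothetical example):

```python
import numpy as np
from PIL import Image

# DeepGlobe class colors (R, G, B) -> class index, per the table above
COLOR_TO_CLASS = {
    (0, 255, 255): 0,    # urban
    (255, 255, 0): 1,    # agriculture
    (255, 0, 255): 2,    # rangeland
    (0, 255, 0): 3,      # forest
    (0, 0, 255): 4,      # water
    (255, 255, 255): 5,  # barren
    (0, 0, 0): 6,        # unknown
}

def mask_to_labels(mask_path):
    """Convert an RGB mask image to a 2D array of class indices."""
    rgb = np.array(Image.open(mask_path).convert("RGB"))
    labels = np.full(rgb.shape[:2], 6, dtype=np.uint8)  # default: unknown
    for color, idx in COLOR_TO_CLASS.items():
        labels[np.all(rgb == color, axis=-1)] = idx
    return labels

labels = mask_to_labels("1234_mask.png")  # hypothetical id
```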
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for DeepGlobe/7-class segmentation of RGB 512x512 high-res. images
These Residual-UNet model data are based on the [DeepGlobe dataset](https://www.kaggle.com/datasets/balraj98/deepglobe-land-cover-classification-dataset)
Models were created with Segmentation Gym*, using the following dataset**: https://www.kaggle.com/datasets/balraj98/deepglobe-land-cover-classification-dataset
Image size used by model: 512 x 512 x 3 pixels
Classes:
1. urban
2. agricultural
3. rangeland
4. forest
5. water
6. bare
7. unknown
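To prepare an input for these models, an image must match the 512 x 512 x 3 shape. A minimal sketch, assuming plain bilinear resizing and 0-1 scaling; Segmentation Gym's own preprocessing is defined by the config file and may differ:

```python
import numpy as np
from PIL import Image

CLASS_NAMES = ["urban", "agricultural", "rangeland",
               "forest", "water", "bare", "unknown"]

def load_input(image_path, size=(512, 512)):
    """Resize an RGB image to the model's input size and scale to [0, 1]."""
    img = Image.open(image_path).convert("RGB").resize(size, Image.BILINEAR)
    x = np.asarray(img, dtype=np.float32) / 255.0
    return x[None, ...]  # add batch dimension -> (1, 512, 512, 3)
```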
File descriptions
For each model, there are 5 files with the same root name:
1. '.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.
2. '.h5' weights file: this is the file that was created by the Segmentation Gym* function `train_model.py`. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function `seg_images_in_folder.py`. Models may be ensembled.
3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the `config` file (described above) that contains the instructions for the model training and implementation. The model card file is not used by the program, but it is important metadata, so it should be kept with the other files that collectively make up the model; as such, it is considered part of the model.
4. '_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function `train_model.py`.
5. '.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training (a subset of the data inside the .npz file). It is created by the Segmentation Gym function `train_model.py`.
Additionally, BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU.
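As an illustration, the training history archive can be inspected directly with NumPy. A hedged sketch; the file's root name and the array names ('loss', 'val_loss') are assumptions, so list `history.files` first to see what `train_model.py` actually stored:

```python
import numpy as np
import matplotlib.pyplot as plt

history = np.load("mymodel_model_history.npz")  # hypothetical root name
print(history.files)  # inspect the actual array names stored in the archive

# 'loss' and 'val_loss' are assumed key names, for illustration only
fig, ax = plt.subplots()
ax.plot(history["loss"], label="training loss")
ax.plot(history["val_loss"], label="validation loss")
ax.set_xlabel("epoch")
ax.set_ylabel("loss")
ax.legend()
fig.savefig("training_history.png")
```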
References
*Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym
**Demir, I., Koperski, K., Lindenbaum, D., Pang, G., Huang, J., Basu, S., Hughes, F., Tuia, D. and Raskar, R., 2018. Deepglobe 2018: A challenge to parse the earth through satellite images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 172-181).
In disaster zones, especially in developing countries, maps and accessibility information are crucial for crisis response. The DeepGlobe Road Extraction Challenge poses the challenge of automatically extracting roads and street networks from satellite images.
This dataset was obtained from the Road Extraction Challenge Track in the DeepGlobe Challenge. For more details on the dataset, refer to the related publication: DeepGlobe 2018: A Challenge to Parse the Earth through Satellite Images.
Any work based on the dataset should cite:
@InProceedings{DeepGlobe18,
author = {Demir, Ilke and Koperski, Krzysztof and Lindenbaum, David and Pang, Guan and Huang, Jing and Basu, Saikat and Hughes, Forest and Tuia, Devis and Raskar, Ramesh},
title = {DeepGlobe 2018: A Challenge to Parse the Earth Through Satellite Images},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2018}
}
The DeepGlobe Road Extraction Challenge, and hence the dataset, are governed by the DeepGlobe Rules, DigitalGlobe's Internal Use License Agreement, and the Annotation License Agreement.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Basic Information of the DeepGlobe Land Cover Classification Dataset and COCO Dataset: Displaying dataset type, size, description, and their application in landscape design generation tasks.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Content
This repository contains pre-trained computer vision models, data labels, and images used in the pre-print publication "A Comprehensive Analysis of Weakly-Supervised Semantic Segmentation in Different Image Domains":
ADPdevkit: a folder containing the 50-image validation ("tuning") set and the 50-image evaluation ("segtest") set from the Atlas of Digital Pathology database, formatted in the VOC2012 style; the full database of 17,668 images is available for download from the original website
VOCdevkit: a folder containing the relevant files for the PASCAL VOC2012 Segmentation dataset, with both the trainaug and test sets
DGdevkit: a folder containing the 803 test images of the DeepGlobe Land Cover challenge dataset formatted in the VOC2012 style
cues: a folder containing the pre-generated weak cues for ADP, VOC2012, and DeepGlobe datasets, as required for the SEC and DSRG methods
models_cnn: a folder containing the pre-trained CNN models
models_wsss: a folder containing the pre-trained SEC, DSRG, and IRNet models, along with dense CRF settings
More information
For more information, please refer to the following article. Please cite this article when using the dataset.
@misc{chan2019comprehensive,
author = {Lyndon Chan and Mahdi S. Hosseini and Konstantinos N. Plataniotis},
title = {A Comprehensive Analysis of Weakly-Supervised Semantic Segmentation in Different Image Domains},
year = {2019},
eprint = {1912.11186},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
For the full code released on GitHub, please visit the repository at: https://github.com/lyndonchan/wsss-analysis
Contact
For questions, please contact: Lyndon Chan lyndon.chan@mail.utoronto.ca http://orcid.org/0000-0002-1185-7961
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
High-resolution remote sensing is an efficient and low-cost space-to-earth observation strategy that can monitor large-scale areas simultaneously, and it offers advantages that ground-based monitoring solutions cannot match. Traditional road extraction methods are mainly based on image processing techniques; these methods usually use only one or a few image features, which makes it difficult to handle the full variety of real-world roads. This work proposes a two-step network for road extraction. First, we optimize a pix2pix model for image translation to obtain the required map-style image. Images output by the optimized model are rich in road features and can relieve occlusion issues; they intuitively reflect information such as the position, shape, and size of the road. After that, we propose a new FusionLinkNet model, which achieves strong stability in road information by fusing DenseNet, ResNet, and LinkNet. Experiments show that accuracy and convergence are improved. The MIoU (Mean Intersection Over Union) of the proposed model in road extraction is over 80% on both the DeepGlobe and Massachusetts road datasets. The figures are available from https://github.com/jsit-luwei/training-dataset.
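For reference, MIoU averages the per-class intersection over union between the predicted and ground-truth masks. A minimal sketch for integer-labeled masks (binary in the road-extraction case):

```python
import numpy as np

def mean_iou(pred, truth, num_classes=2):
    """Mean intersection over union for integer-labeled segmentation masks."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))
```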
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Performance Comparison of CBS3-LandGen Model and Ablated Models on DeepGlobe and COCO Datasets: Impact of Module Removal on Generation Quality, Text Consistency, and Multi-Modal Fusion.