Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for DeepGlobe/7-class segmentation of RGB 512x512 high-res. images
These Residual-UNet model data are based on the [DeepGlobe dataset](https://www.kaggle.com/datasets/balraj98/deepglobe-land-cover-classification-dataset)
Models were created with Segmentation Gym* using the following dataset**: https://www.kaggle.com/datasets/balraj98/deepglobe-land-cover-classification-dataset
Image size used by model: 512 x 512 x 3 pixels
Classes (see the colorization sketch after this list):
1. urban
2. agricultural
3. rangeland
4. forest
5. water
6. bare
7. unknown
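For illustration, here is a minimal sketch of mapping the class indices above to names and display colors when visualizing a predicted label image. The 0-based indexing and the colors (the usual DeepGlobe color coding) are assumptions; check the model's config file for the class order actually used in training.

```python
import numpy as np

# Class names in the order listed above (assumed 0-based in the predicted label image;
# verify against the number of classes and class ordering recorded in the config file).
CLASS_NAMES = ["urban", "agricultural", "rangeland", "forest", "water", "bare", "unknown"]

# Display colors (RGB) following the usual DeepGlobe color coding; purely illustrative.
CLASS_COLORS = np.array([
    [0, 255, 255],    # urban        - cyan
    [255, 255, 0],    # agricultural - yellow
    [255, 0, 255],    # rangeland    - magenta
    [0, 255, 0],      # forest       - green
    [0, 0, 255],      # water        - blue
    [255, 255, 255],  # bare         - white
    [0, 0, 0],        # unknown      - black
], dtype=np.uint8)

def colorize(label_image: np.ndarray) -> np.ndarray:
    """Map an (H, W) array of class indices to an (H, W, 3) RGB image."""
    return CLASS_COLORS[label_image]
```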
File descriptions
For each model, there are 5 files with the same root name (a short loading sketch follows this list):
1. '.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.
2. '.h5' weights file: this is the file that was created by the Segmentation Gym* function `train_model.py`. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function `seg_images_in_folder.py`. Models may be ensembled.
3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the `config` file (described above) that contains the instructions for model training and implementation. The model card file is not used by the program, but it is important metadata, so it should be kept with the other files that collectively make up the model and, as such, is considered part of the model.
4. '_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym* function `train_model.py`.
5. '.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training (a subset of the data inside the .npz file). It is created by the Segmentation Gym* function `train_model.py`.
Additionally, BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU.
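As a quick illustration, here is a minimal sketch (assuming a placeholder root name) of inspecting the config and training-history files outside of Segmentation Gym; the normal workflow is simply to point Segmentation Gym's scripts at the config and weights files.

```python
import json
import numpy as np

root = "deepglobe_7class_resunet"  # hypothetical root name shared by the five files

# '.json' config: the settings Segmentation Gym used to build and train the model.
with open(root + ".json") as f:
    config = json.load(f)
print(sorted(config))  # list the available settings (image size, number of classes, ...)

# '_model_history.npz': per-epoch training/validation losses and metrics as numpy arrays.
history = np.load(root + "_model_history.npz")
print(history.files)   # names of the stored arrays

# The '.h5' file holds weights only; it is intended to be loaded by Segmentation Gym's
# own scripts (e.g. `seg_images_in_folder.py`), which rebuild the model from the config
# before loading the weights into it.
```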
References
*Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym
**Demir, I., Koperski, K., Lindenbaum, D., Pang, G., Huang, J., Basu, S., Hughes, F., Tuia, D. and Raskar, R., 2018. DeepGlobe 2018: A challenge to parse the earth through satellite images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 172-181).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This repository contains the data description and processing for the paper "SkySense++: A Semantic-Enhanced Multi-Modal Remote Sensing Foundation Model for Earth Observation." The code is available here.
🔥🔥🔥 Last Updated on 2025.03.14 🔥🔥🔥
We conduct semantic-enhanced pretraining on the RS-Semantic dataset, which consists of 13 datasets with pixel-level annotations. Below are the specifics of these datasets; a small catalog sketch follows the table.
| Dataset | Modalities | GSD (m) | Size | Categories | Download Link |
|---|---|---|---|---|---|
| Five Billion Pixels | Gaofen-2 | 4 | 6800x7200 | 24 | Download |
| Potsdam | Airborne | 0.05 | 6000x6000 | 5 | Download |
| Vaihingen | Airborne | 0.05 | 2494x2064 | 5 | Download |
| DeepGlobe | WorldView | 0.5 | 2448x2448 | 6 | Download |
| iSAID | Multiple Sensors | - | 800x800 to 4000x13000 | 15 | Download |
| LoveDA | Spaceborne | 0.3 | 1024x1024 | 7 | Download |
| DynamicEarthNet | WorldView | 0.3 | 1024x1024 | 7 | Download |
| | Sentinel-2* | 10 | 32x32 | | |
| | Sentinel-1* | 10 | 32x33 | | |
| Pastis-MM | WorldView | 0.3 | 1024x1024 | 18 | Download |
| | Sentinel-2* | 10 | 32x32 | | |
| | Sentinel-1* | 10 | 32x33 | | |
| C2Seg-AB | Sentinel-2* | 10 | 128x128 | 13 | Download |
| | Sentinel-1* | 10 | 128x128 | | |
| FLAIR | Spot-5 | 0.2 | 512x512 | 12 | Download |
| | Sentinel-2* | 10 | 40x40 | | |
| DFC20 | Sentinel-2 | 10 | 256x256 | 9 | Download |
| | Sentinel-1 | 10 | 256x256 | | |
| S2-naip | NAIP | 1 | 512x512 | 32 | Download |
| | Sentinel-2* | 10 | 64x64 | | |
| | Sentinel-1* | 10 | 64x64 | | |
| JL-16 | Jilin-1 | 0.72 | 512x512 | 16 | Download |
| | Sentinel-1* | 10 | 40x40 | | |
* for time-series data.
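For orientation only (this is not code from the paper), here is a minimal sketch of how the catalog above could be represented programmatically, e.g. to iterate over datasets and their per-modality patch sizes; the two entries are transcribed from the table.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Modality:
    sensor: str
    gsd_m: float               # ground sample distance in metres
    patch_size: str            # image/patch size as listed in the table
    time_series: bool = False  # True for the entries marked with * above

@dataclass
class RSSemanticDataset:
    name: str
    categories: int
    modalities: List[Modality] = field(default_factory=list)

# Two illustrative entries from the RS-Semantic table.
CATALOG = [
    RSSemanticDataset("Potsdam", 5, [Modality("Airborne", 0.05, "6000x6000")]),
    RSSemanticDataset("Pastis-MM", 18, [
        Modality("WorldView", 0.3, "1024x1024"),
        Modality("Sentinel-2", 10, "32x32", time_series=True),
        Modality("Sentinel-1", 10, "32x33", time_series=True),
    ]),
]

for ds in CATALOG:
    print(ds.name, ds.categories, [m.sensor for m in ds.modalities])
```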
We evaluate our SkySense++ on 12 typical Earth Observation (EO) tasks across 7 domains: agriculture, forestry, oceanography, atmosphere, biology, land surveying, and disaster management. Detailed information about the datasets used for evaluation is given below; a generic metric sketch follows the table.
| Domain | Task type | Dataset | Modalities | GSD (m) | Image size | Download Link | Notes |
|---|---|---|---|---|---|---|---|
| Agriculture | Crop classification | Germany | Sentinel-2* | 10 | 24x24 | Download | |
| Forestry | Tree species classification | TreeSatAI-Time-Series | Airborne | 0.2 | 304x304 | Download | |
| | | | Sentinel-2* | 10 | 6x6 | | |
| | | | Sentinel-1* | 10 | 6x6 | | |
| | Deforestation segmentation | Atlantic | Sentinel-2 | 10 | 512x512 | Download | |
| Oceanography | Oil spill segmentation | SOS | Sentinel-1 | 10 | 256x256 | Download | |
| Atmosphere | Air pollution regression | 3pollution | Sentinel-2 | 10 | 200x200 | Download | |
| | | | Sentinel-5P | 2600 | 120x120 | | |
| Biology | Wildlife detection | Kenya | Airborne | - | 3068x4603 | Download | |
| Land surveying | LULC mapping | C2Seg-BW | Gaofen-6 | 10 | 256x256 | Download | |
| | | | Gaofen-3 | 10 | 256x256 | | |
| | Change detection | dsifn-cd | GoogleEarth | 0.3 | 512x512 | Download | |
| Disaster management | Flood monitoring | Flood-3i | Airborne | 0.05 | 256x256 | Download | |
| | | C2SMSFloods | Sentinel-2, Sentinel-1 | 10 | 512x512 | Download | |
| | Wildfire monitoring | CABUAR | Sentinel-2 | 10 | 5490x5490 | Download | |
| | Landslide mapping | GVLM | GoogleEarth | 0.3 | 1748x1748 ~ 10808x7424 | Download | |
| | Building damage assessment | xBD | WorldView | 0.3 | 1024x1024 | Download | |
* for time-series data.
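Several of the tasks above (deforestation, oil spill, flood, wildfire, landslide, and LULC segmentation) are typically scored with mean intersection-over-union. The following is a generic sketch of that metric for integer label masks, not code taken from the SkySense++ evaluation.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union over classes present in either mask."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:           # class absent from both masks: skip it
            continue
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious)) if ious else float("nan")
```

The detection, regression, and classification tasks in the table would use their own standard metrics (e.g. mAP, RMSE, accuracy).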