This dataset was created by prosper chuks
PACE NetCDF images, 8-day composites.
"PACE's data will help us better understand how the ocean and atmosphere exchange carbon dioxide. In addition, it will reveal how aerosols might fuel phytoplankton growth in the surface ocean. Novel uses of PACE data will benefit our economy and society. For example, it will help identify the extent and duration of harmful algal blooms. PACE will extend and expand NASA's long-term observations of our living planet. By doing so, it will take Earth's pulse in new ways for decades to come."
PACE NetCDF images dataset:
- source: https://oceancolor.gsfc.nasa.gov/l3/order/
- start date: 2024-03-05
- end date: 2024-10-05
- sensor: PACE-OCI
- product: Phytoplankton Carbon
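A minimal sketch of inspecting one of these Level-3 NetCDF files with xarray; the file name and the variable name `carbon_phyto` are assumptions and should be checked against the actual product metadata.

```python
# Minimal sketch: inspect a PACE-OCI Level-3 NetCDF file with xarray.
# The file name and the variable name "carbon_phyto" are assumptions;
# list ds.data_vars to find the actual phytoplankton-carbon variable.
import xarray as xr

ds = xr.open_dataset("PACE_OCI.20240305_20240312.L3m.8D.CARBON.nc")  # hypothetical file name
print(ds.data_vars)                      # see which variables the product actually contains
carbon = ds["carbon_phyto"]              # assumed variable name for phytoplankton carbon
print(carbon.dims, carbon.shape)         # typically (lat, lon) for a mapped L3 product
print(float(carbon.mean(skipna=True)))   # global mean, ignoring fill values
```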
All rights and licenses belong to the original data provider: NASA
This data was collected during the NASA Space Apps Challenge 2024.
This dataset was created by Rob WebsterGSI
https://creativecommons.org/publicdomain/zero/1.0/
This dataset was created primarily to build a data generator for the original dataset. The generator can be used to classify the images and then detect ships in the test images.
This dataset consists of 2 directories, no-ship and ship, with each containing images as specified in the original dataset.
The original dataset was made by Kaggle user rhammel and is titled Ships in Satellite Imagery. The banner image was obtained from
Subtitle: "Mapping Satellite Images: A Comprehensive Dataset"
About Dataset:
This dataset is designed for the task of mapping satellite images to corresponding map representations using advanced techniques like pix2pix GANs. It is structured to facilitate training and validation for machine learning models, providing a robust foundation for image-to-image translation projects.
maps
This dataset is ideal for developing and testing models that perform image translation from satellite photos to map images, supporting various applications in remote sensing, urban planning, and geographic information systems (GIS).
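As a rough illustration, a sketch of preparing one training pair, assuming the common pix2pix "maps" layout in which each file stores the satellite photo and its map rendering side by side; the directory layout, file name, and left/right ordering are assumptions, not a confirmed description of this dataset.

```python
# Minimal sketch: split one combined image into (satellite, map) halves for pix2pix training.
# Assumes the common maps layout where each file holds both images side by side;
# adjust if this dataset stores the pairs in separate folders instead.
from PIL import Image
import numpy as np

def load_pair(path):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 127.5 - 1.0  # scale to [-1, 1]
    w = img.shape[1] // 2
    satellite, map_img = img[:, :w], img[:, w:]   # left half = input, right half = target (assumed order)
    return satellite, map_img

sat, tgt = load_pair("maps/train/1.jpg")  # hypothetical path
print(sat.shape, tgt.shape)
```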
This dataset was created by hehe
Released under Data files © Original Authors
This dataset was created by Mohammed
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset was created by marci0903
Released under MIT
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
All images were cropped from Google Maps and show the Algarve region in Portugal.
This dataset was created to further test Deep Learning models trained to detect swimming pools in satellite images. The idea was to have a "real" set of samples to test the limits of these models.
We ask you to cite this paper in any research based on this dataset!
It is composed of an images folder with 289 images, 173 of which contain swimming pools; the remaining 116 images are negative samples. The images vary in size, zoom level, and quality.
In the labels folder, each image containing swimming pools has a corresponding PASCAL VOC annotation file.
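As a small illustration, the PASCAL VOC annotation files can be parsed with the Python standard library; the file path below is hypothetical.

```python
# Minimal sketch: read swimming-pool bounding boxes from a PASCAL VOC XML annotation file.
# The file name is hypothetical; each annotated image has one such XML file.
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_path):
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        xmin, ymin, xmax, ymax = (int(float(bb.findtext(t))) for t in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, xmin, ymin, xmax, ymax))
    return boxes

print(read_voc_boxes("labels/image_001.xml"))  # hypothetical path
```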
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Example observation image: https://vision.eng.au.dk/wp-content/uploads/2020/07/example_obs-1024x206-1024x206.jpg
The CloudCast dataset contains 70,080 cloud-labeled satellite images with 10 different cloud types corresponding to multiple layers of the atmosphere. The raw satellite images come from a satellite constellation in geostationary orbit centred at zero degrees longitude and arrive in 15-minute intervals from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT). The resolution of these images is 3712×3712 pixels for the full disk of the Earth, which means that every pixel corresponds to an area of 3×3 km. This is the highest possible resolution from European geostationary satellites when including infrared channels. EUMETSAT also applies some pre- and post-processing to the raw satellite images before releasing them to the public, such as removing airplanes. We collect all the raw multispectral satellite images and annotate them individually on a pixel level using a segmentation algorithm. The full dataset has a spatial resolution of 928×1530 pixels recorded at 15-minute intervals for the period 2017-2018, where each pixel represents an area of 3×3 km. To enable standardized datasets for benchmarking computer vision methods, this includes a full-resolution grey-scaled dataset centred and projected over Europe (128×128 pixels).
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
If you use this dataset in your research or elsewhere, please cite/reference the following paper: CloudCast: A Satellite-Based Dataset and Baseline for Forecasting Clouds
There are 24 folders in the dataset containing the following information:
| File | Definition | Note |
| --- | --- | --- |
| X.npy | Numpy encoded array containing the actual 128x128 image with pixel values as labels (see below). | |
| GEO.npz | Numpy array containing geo coordinates where the image was taken (latitude and longitude). | |
| TIMESTAMPS.npy | Numpy array containing timestamps for each captured image. | Images are captured in 15-minute intervals. |
- 0 = No clouds or missing data
- 1 = Very low clouds
- 2 = Low clouds
- 3 = Mid-level clouds
- 4 = High opaque clouds
- 5 = Very high opaque clouds
- 6 = Fractional clouds
- 7 = High semitransparent thin clouds
- 8 = High semitransparent moderately thick clouds
- 9 = High semitransparent thick clouds
- 10 = High semitransparent above low or medium clouds
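A minimal sketch of loading one folder's arrays with NumPy, using the file names from the table above; the folder path is illustrative.

```python
# Minimal sketch: load one CloudCast folder using the file names from the table above.
# The folder path is illustrative; label values follow the 0-10 scheme listed above.
import numpy as np

folder = "CloudCast/2017M01"                                    # hypothetical folder name
frames = np.load(f"{folder}/X.npy")                             # labeled 128x128 frames, values 0-10
times = np.load(f"{folder}/TIMESTAMPS.npy", allow_pickle=True)  # one timestamp per frame (15-min steps)
geo = np.load(f"{folder}/GEO.npz")                              # latitude/longitude grids

print(frames.shape, times.shape, list(geo.keys()))
print("cloud classes present:", np.unique(frames))              # should be a subset of 0-10
```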
Example images: https://i.ibb.co/NFv55QW/cloudcast4.png, https://i.ibb.co/3FhHzMT/cloudcast3.png, https://i.ibb.co/9wCsJhR/cloudcast2.png, https://i.ibb.co/9T5dbSH/cloudcast1.png
This dataset was created by Levrex
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Dataset Description
This dataset consists of paired high-resolution (HR) and low-resolution (LR) satellite images designed for 4x super-resolution tasks. The images are organized into two directories, one per resolution.
All images are geographically aligned and cover the same regions, ensuring pixel-to-pixel correspondence between LR and HR pairs.
Recommended Dataset Split
To ensure robust model training and evaluation, we propose the following 75-15-10 split:
- Training Set (75%): used to train the super-resolution model
- Validation Set (15%): used for hyperparameter tuning
- Test Set (10%): reserved for final evaluation (unseen data to measure model generalization)
Split Methodology:
- Stratified Sampling: if images represent diverse terrains (urban, rural, water), ensure each subset reflects this distribution.
- Non-overlapping Regions: prevent data leakage by splitting across geographically distinct areas (e.g., tiles from different zones), as in the sketch below.
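A minimal sketch of one way to implement such a split, assuming each LR/HR pair can be tagged with a geographic zone identifier so that whole zones, not individual tiles, are assigned to a subset; the function and field names are illustrative.

```python
# Minimal sketch: 75/15/10 split by geographic zone to avoid leakage between subsets.
# Assumes each image pair carries a zone identifier (e.g. parsed from its file name);
# whole zones, not individual tiles, are assigned to train/val/test.
import random
from collections import defaultdict

def split_by_zone(pairs, seed=0):
    """pairs: list of (lr_path, hr_path, zone_id) tuples."""
    zones = sorted({z for _, _, z in pairs})
    random.Random(seed).shuffle(zones)
    n = len(zones)
    train_z = set(zones[: int(0.75 * n)])
    val_z = set(zones[int(0.75 * n): int(0.90 * n)])
    splits = defaultdict(list)
    for lr, hr, z in pairs:
        key = "train" if z in train_z else "val" if z in val_z else "test"
        splits[key].append((lr, hr))
    return splits
```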
S2Looking is a building change detection dataset that contains large-scale side-looking satellite images captured at varying off-nadir angles. The S2Looking dataset consists of 5,000 registered bitemporal image pairs (1024×1024 pixels, 0.5-0.8 m/pixel) of rural areas throughout the world and more than 65,920 annotated change instances. We provide two label maps for each sample that separately indicate the newly built and demolished building regions. We establish a benchmark task based on this dataset, i.e., identifying pixel-level building changes in the bitemporal images.
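As a rough illustration of the benchmark task, a sketch of scoring a predicted change mask against the two label maps (newly built and demolished); the file names and the choice to merge the two maps into one binary change mask are assumptions, not the official evaluation protocol.

```python
# Rough sketch: pixel-level change-detection scoring for one sample.
# File names are hypothetical; the "newly built" and "demolished" label maps
# are merged here into a single binary change mask, which is one possible choice.
import numpy as np
from PIL import Image

def load_mask(path):
    return np.asarray(Image.open(path).convert("L")) > 0

built = load_mask("label1/0001.png")       # hypothetical: newly built buildings
demolished = load_mask("label2/0001.png")  # hypothetical: demolished buildings
truth = built | demolished                 # any building change
pred = load_mask("pred/0001.png")          # model output

tp = np.sum(pred & truth)
precision = tp / max(np.sum(pred), 1)
recall = tp / max(np.sum(truth), 1)
f1 = 2 * precision * recall / max(precision + recall, 1e-8)
print(f"precision={precision:.3f} recall={recall:.3f} F1={f1:.3f}")
```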
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset is used in a video tutorial on enhancing the quality of imagery: https://youtu.be/FepNl8FTrh4
https://creativecommons.org/publicdomain/zero/1.0/
Detection of clouds is an important step in many remote sensing applications based on optical imagery. The 95-Cloud dataset is an extensive dataset for this task, intended to help researchers evaluate their deep learning-based cloud segmentation models.
The 95-Cloud dataset is an extension of our previous 38-Cloud dataset. 95-Cloud adds 57 more Landsat 8 scenes for training, which are uploaded here. The rest of the training scenes and the test scenes can be downloaded from here.
More information about the dataset can be found at: https://github.com/SorourMo/95-Cloud-An-Extension-to-38-Cloud-Dataset https://github.com/SorourMo/38-Cloud-A-Cloud-Segmentation-Dataset https://github.com/SorourMo/Cloud-Net-A-semantic-segmentation-CNN-for-cloud-detection
This dataset has been prepared by Laboratory for Robotics Vision (LRV) at School of Engineering Science, Simon Fraser University, Vancouver, Canada.
This dataset was created by SerhiiShchus
This dataset was created by ishiryish
Released under Data files © Original Authors
This dataset was created by Witold Nowogórski
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset was created by Joe Filfli
Released under MIT
http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
PASTIS is a benchmark dataset for panoptic and semantic segmentation of agricultural parcels from satellite image time series. It is composed of 2,433 one-square-kilometer patches in the French metropolitan territory, for which sequences of satellite observations are assembled into a four-dimensional spatio-temporal tensor. The dataset contains both semantic and instance annotations, assigning to each pixel a semantic label and an instance id. An official 5-fold split is provided in the dataset's metadata.
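A rough sketch of how one might read a single patch tensor and its semantic annotation and look up its fold assignment; the file names, the assumed T × C × H × W layout, and the metadata fields are illustrative assumptions, not the dataset's official loader.

```python
# Rough sketch: read one PASTIS-style patch and its fold assignment.
# File names, the assumed (T, C, H, W) tensor layout, and the metadata schema
# are illustrative; consult the dataset's metadata for the real structure.
import json
import numpy as np

meta = json.load(open("metadata.json"))                    # hypothetical metadata file with fold ids
patch_id = "10000"                                         # hypothetical patch identifier
x = np.load(f"DATA_S2/S2_{patch_id}.npy")                  # spatio-temporal tensor, assumed (T, C, H, W)
semantic = np.load(f"ANNOTATIONS/TARGET_{patch_id}.npy")   # per-pixel semantic labels
print("time steps:", x.shape[0], "channels:", x.shape[1], "patch size:", x.shape[2:])
print("fold:", meta.get(patch_id, {}).get("fold"))         # assumed 1-5 fold id from metadata
```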
This dataset was created by prosper chuks