SpaceNet launched in August 2016 as an open innovation project offering a repository of freely available imagery with co-registered map features. Before SpaceNet, computer vision researchers had minimal options to obtain free, precision-labeled, and high-resolution satellite imagery. Today, SpaceNet hosts datasets developed by its own team, along with datasets from projects like IARPA’s Functional Map of the World (fMoW).
SpaceNet 2: Building Detection v2 is a dataset for building footprint detection in geographically diverse settings from very high-resolution satellite images. It contains 302,701 building footprints and 3-band and 8-band WorldView-3 satellite imagery at 0.3 m pixel resolution across five cities (Rio de Janeiro, Las Vegas, Paris, Shanghai, Khartoum), covering areas that are both urban and suburban in nature. The dataset was split 60%/20%/20% into train/test/validation sets.
The main use case for building footprint detection from satellite imagery is to aid foundational mapping.
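As a rough, hedged sketch of how one image tile and its co-registered building footprints could be loaded together, the snippet below uses rasterio and geopandas; the file names are hypothetical placeholders, not actual SpaceNet paths.

```python
# Minimal sketch: load one satellite image tile and its building footprint labels.
# The file names are hypothetical placeholders; rasterio/geopandas are one
# reasonable tooling choice, not the only way to read the data.
import rasterio
import geopandas as gpd

image_path = "example_tile.tif"             # placeholder for an 8-band WorldView-3 tile
labels_path = "example_buildings.geojson"   # placeholder for co-registered footprints

with rasterio.open(image_path) as src:
    image = src.read()          # array of shape (bands, height, width)
    transform = src.transform   # affine mapping from pixel to geographic coordinates

footprints = gpd.read_file(labels_path)     # one polygon per building
print(image.shape, len(footprints), "building footprints")
```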
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
SpaceNet is a dataset for classification tasks; it contains SpaceNet annotations for 9,000 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Satellite imagery analytics have numerous human development and disaster response applications, particularly when time series methods are involved. For example, quantifying population statistics is fundamental to 67 of the 232 United Nations Sustainable Development Goal indicators, but the World Bank estimates that more than 100 countries currently lack effective Civil Registration systems. The SpaceNet 7 Multi-Temporal Urban Development Challenge aims to help address this deficit and develop novel computer vision methods for non-video time series data. In this challenge, participants will identify and track buildings in satellite imagery time series collected over rapidly urbanizing areas. The competition centers around a new open source dataset of Planet satellite imagery mosaics, which includes 24 images (one per month) covering ~100 unique geographies. The dataset will comprise over 40,000 square kilometers of imagery and exhaustive polygon labels of building footprints in the imagery, totaling over 10 million individual annotations. Challenge participants will be asked to track building construction over time, thereby directly assessing urbanization.
The use of machine learning for remote sensing has matured alongside an increase in the availability and resolution of satellite imagery, enabling advances in such tasks as land use classification, natural risk estimation, disaster damage assessment, and agricultural forecasting.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Discover the SpaceNet Comprehensive Astronomical Dataset, featuring 12,900 high-resolution images of planets, galaxies, asteroids, nebulae, comets, black holes, stars, and constellations.
This dataset was created by oliver
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This dataset is based on the original SpaceNet 7 dataset, with a few modifications.
The original dataset consisted of Planet satellite imagery mosaics, which include 24 images (one per month) covering ~100 unique geographies. It comprised over 40,000 square kilometers of imagery and exhaustive polygon labels of building footprints in the imagery, totaling over 10 million individual annotations.
This dataset builds upon the original dataset by segmenting each image into 64 x 64 pixel chips, making it easier to train a model.
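A minimal sketch of this chipping step is shown below; the 64-pixel chip size follows the description above, while the array names and the handling of edge pixels are illustrative assumptions.

```python
# Minimal sketch: split a large image array into non-overlapping 64 x 64 chips.
# `mosaic` is assumed to be a NumPy array of shape (height, width, channels);
# edge pixels that do not fill a whole chip are simply dropped here.
import numpy as np

def chip_image(mosaic: np.ndarray, chip_size: int = 64) -> list[np.ndarray]:
    chips = []
    height, width = mosaic.shape[:2]
    for y in range(0, height - chip_size + 1, chip_size):
        for x in range(0, width - chip_size + 1, chip_size):
            chips.append(mosaic[y:y + chip_size, x:x + chip_size])
    return chips

# Example: a synthetic 1024 x 1024 x 3 mosaic yields (1024 // 64) ** 2 = 256 chips.
chips = chip_image(np.zeros((1024, 1024, 3), dtype=np.uint8))
print(len(chips))  # 256
```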
(Figure: chips.png)
The dataset also captures the changes between the images from each month, such that an image taken in month 1 is compared with the images taken in months 2, 3, ..., 24. This is done by taking pairwise differences across the Cartesian product of the monthly images. For more information on how this is done, check out the following notebook.
The differences between the images are captured in the output mask, and the two images being compared are stacked, which means that our input images have dimensions of 64 x 64 x 6 and our output mask has dimensions of 64 x 64 x 1. The input images have 6 channels because, as mentioned earlier, they are two 3-channel images stacked together. See the image below for more details:
(Figure: difference.png)
The image above shows the masks for each of the original satellite images and what the difference between the two looks like. For more information on how the original data was explored, check out this notebook.
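To make the pairing concrete, here is a hedged sketch of how two monthly chips could be stacked into a single 6-channel input alongside a binary change mask; the placeholder arrays and the exact mask computation are illustrative assumptions, not the precise procedure from the notebooks referenced above.

```python
# Minimal sketch: build one (64, 64, 6) input from two monthly 3-channel chips
# and a (64, 64, 1) change mask from their per-month building masks.
# All arrays below are synthetic placeholders standing in for real chips.
import numpy as np
from itertools import combinations

chip_month_1 = np.zeros((64, 64, 3), dtype=np.uint8)  # placeholder chip, month 1
chip_month_2 = np.zeros((64, 64, 3), dtype=np.uint8)  # placeholder chip, month 2
mask_month_1 = np.zeros((64, 64, 1), dtype=np.uint8)  # placeholder building mask, month 1
mask_month_2 = np.zeros((64, 64, 1), dtype=np.uint8)  # placeholder building mask, month 2

# Stack the two chips along the channel axis -> the (64, 64, 6) model input.
model_input = np.concatenate([chip_month_1, chip_month_2], axis=-1)

# The change mask marks pixels whose building label differs between the two months.
change_mask = (mask_month_1 != mask_month_2).astype(np.uint8)  # (64, 64, 1)

# One way to enumerate the month pairs described above: compare each month
# with every later month across the 24 monthly images.
month_pairs = list(combinations(range(1, 25), 2))
print(model_input.shape, change_mask.shape, len(month_pairs))  # (64, 64, 6) (64, 64, 1) 276
```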
The data is structured as follows:
chip_dataset
└── change_detection
└── fname
├── chips
│ └── year1_month1_year2_month2
│ └── global_monthly_year1_month1_year2_month2_chip_x###_y###_fname.tif
└── masks
└── year1_month1_year2_month2
└── global_monthly_year1_month1_year2_month2_chip_x###_y###_fname_blank.tif
The _blank suffix in the mask chip filenames indicates whether the mask is a blank mask or not.
For more information on how the data was structured and augmented, check out the following notebook.
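As a rough sketch of how the layout above could be traversed to pair chip files with their mask files (the glob pattern and the filename handling are assumptions inferred from the tree shown above):

```python
# Minimal sketch: pair each chip .tif with its mask .tif by walking the layout above.
# The glob pattern and the "_blank" filename handling are assumptions.
from pathlib import Path

root = Path("chip_dataset/change_detection")

pairs = []
for chip_path in root.glob("*/chips/*/*.tif"):
    # Masks live in a parallel "masks" tree under the same date-pair folder name.
    mask_dir = chip_path.parents[2] / "masks" / chip_path.parent.name
    # Per the note above, mask file names may or may not carry the "_blank" marker.
    for name in (chip_path.name.replace(".tif", "_blank.tif"), chip_path.name):
        mask_path = mask_dir / name
        if mask_path.exists():
            pairs.append((chip_path, mask_path))
            break

print(f"found {len(pairs)} chip/mask pairs")
```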
All credit goes to the team at SpaceNet for collecting, annotating, and formatting the original dataset.
An open source Multi-View Overhead Imagery dataset with 27 unique looks from a broad range of viewing angles (-32.5 degrees to 54.0 degrees). Each of these images covers the same 665 square km geographic extent and is annotated with 126,747 building footprint labels, enabling direct assessment of the impact of viewpoint perturbation on model performance.
Description:
SpaceNet: A Comprehensive Astronomical Dataset, obtained via a novel double-stage augmentation framework called FLARE, is a hierarchically structured and high-quality astronomical image dataset. It is meticulously designed for both fine-grained and macro classification tasks. Comprising approximately 12,900 samples, SpaceNet incorporates lower (LR) to higher resolution (HR) conversion with standard augmentations and a diffusion approach for synthetic sample generation. This comprehensive dataset enables superior generalization on various recognition tasks, including classification.
Key Features
High-Resolution Images: The dataset includes high-quality images that facilitate accurate analysis and classification.
Hierarchical Structure: The dataset is hierarchically organized to support both macro and fine-grained classification tasks.
Advanced Augmentation Techniques: Utilizes the FLARE framework for double-stage augmentation, enhancing the dataset’s diversity and robustness.
Synthetic Sample Generation: Employs a diffusion approach to create synthetic samples, boosting the dataset’s size and variability.
Usage
SpaceNet is ideal for:
Training and Evaluation: Developing and testing machine learning models for fine-grained and macro astronomical classification tasks.
Research: Exploring hierarchical classification approaches within the astronomy domain.
Model Development: Creating robust models capable of generalizing across both in-domain and out-of-domain datasets.
Educational Purposes: Providing a rich dataset for educational projects in astronomy and machine learning.
This dataset is sourced from Kaggle.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Monthly mean Sentinel-1 SAR and cloud-free Sentinel-2 MSI images for the SpaceNet 7 training and test sites. Our dataset also includes monthly rasterized built-up area labels for the 60 training sites.
Alcif36001/roof-spacenet-mini dataset hosted on Hugging Face and contributed by the HF Datasets community
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview
The dataset contains fully annotated electric transmission and distribution infrastructure for approximately 321 sq km of high-resolution satellite and aerial imagery from around the world. The imagery and associated infrastructure annotations span 14 cities and 5 continents, and were selected to represent diversity in human settlement density (i.e. rural vs. urban), terrain type, and development index. This dataset may be of particular interest to those looking to train machine learning algorithms to automatically identify energy infrastructure in satellite imagery or to those working on domain adaptation for computer vision. Automated algorithms for identifying electricity infrastructure in satellite imagery may assist policy makers in identifying the best pathway to electrification for unelectrified areas.
Data Sources
This dataset contains data sourced from the LINZ Data Service licensed for reuse under CC BY 4.0. This dataset also contains extracts from the SpaceNet dataset: SpaceNet on Amazon Web Services (AWS). "Datasets." The SpaceNet Catalog. Last modified April 30, 2018 (link below). Other imagery data included in this dataset are from the Connecticut Department of Energy and Environmental Protection and the U.S. Geological Survey. Links to each of the imagery data sources are provided below, as well as the link to the annotation tool and the GitHub repository that provides tools for using these data.
Acknowledgements
This dataset was created as part of the Duke University Data+ project, "Energy Infrastructure Map of the World" (link below), in collaboration with the Information Initiative at Duke and the Duke University Energy Initiative.
Description:
SpaceNet is a hierarchically structured and high-quality astronomical image dataset, created using a novel double-stage augmentation process. This dataset, comprising approximately 12,900 images, is designed for both fine-grained and macro classification tasks. SpaceNet incorporates a range of resolutions from lower (LR) to higher resolution (HR) images, using standard augmentations and a diffusion approach for generating synthetic samples. This allows for superior generalization across various recognition tasks such as classification. The dataset also includes diverse celestial objects, making it a valuable resource for both academic research and practical applications in astronomy and astrophysics.
Dataset Structure:
Fine-Grained Classes: The dataset includes 8 distinct classes: planets, galaxies, asteroids, nebulae, comets, black holes, stars, and constellations.
Dataset Composition:
Total Samples: Approximately 12,900 images
Fine-Grained Class Distribution:
Asteroid: 283 images
Black Hole: 656 images
Comet: 416 images
Constellation: 1,552 images
Galaxy: 3,984 images
Nebula: 1,192 images
Planet: 1,472 images
Star: 3,269 images
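Because the fine-grained class counts above are heavily imbalanced (3,984 galaxy images versus 283 asteroid images), the short sketch below derives inverse-frequency sampling weights from those counts; the weighting scheme itself is just one illustrative way to handle the imbalance, not part of the dataset.

```python
# Minimal sketch: inverse-frequency class weights from the counts listed above.
class_counts = {
    "asteroid": 283, "black_hole": 656, "comet": 416, "constellation": 1552,
    "galaxy": 3984, "nebula": 1192, "planet": 1472, "star": 3269,
}

total = sum(class_counts.values())  # 12,824 samples across the eight classes
weights = {name: total / count for name, count in class_counts.items()}

# Rarer classes receive proportionally larger weights.
for name, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name:>13s}: {weight:6.1f}")
```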
Usage: SpaceNet is ideal for:
Training and evaluating machine learning models on fine-grained and macro astronomical classification tasks.
Conducting research on hierarchical classification methods within the astronomy field.
Developing robust models that demonstrate excellent generalization across both in-domain and out-of-domain datasets.
This dataset is sourced from Kaggle.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
Detailed information about the IP Maintainer SPACENET-MNT.
This chipped training dataset is over Shanghai and includes 30 cm high-resolution imagery (.tif format) and corresponding building footprint vector labels (.geojson format) in 256 x 256 or smaller pixel tile/label pairs. This dataset is a ramp Tier 1 dataset, meaning it has been thoroughly reviewed and improved. This dataset was used in developing the ramp baseline model and contains 3,574 tiles and 7,118 buildings. The original dataset was sourced from the SpaceNet 2 Dataset before the imagery was tiled down from 650 x 650 pixel chips and the labels were revised to be consistent with the ramp datasets' notion of the rooftop as the building footprint. Dataset keywords: Urban, Dense.
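As a hedged illustration of how one of these tile/label pairs could be turned into a training mask (file names are hypothetical placeholders, rasterio/geopandas are an assumed tooling choice, and the labels are assumed to share the tile's coordinate reference system):

```python
# Minimal sketch: rasterize a tile's .geojson rooftop polygons into a binary mask
# aligned with the imagery tile. File names are hypothetical placeholders.
import rasterio
from rasterio import features
import geopandas as gpd

tile_path = "example_tile.tif"       # placeholder 30 cm imagery tile
label_path = "example_tile.geojson"  # placeholder rooftop footprint labels

with rasterio.open(tile_path) as src:
    height, width = src.height, src.width
    transform = src.transform

footprints = gpd.read_file(label_path)  # assumed to be in the same CRS as the tile
mask = features.rasterize(
    ((geom, 1) for geom in footprints.geometry),
    out_shape=(height, width),
    transform=transform,
    fill=0,
    dtype="uint8",
)  # 1 where a rooftop polygon covers the pixel, 0 elsewhere
print(mask.shape, mask.sum(), "rooftop pixels")
```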
https://whoisdatacenter.com/terms-of-use/
Explore the historical Whois records related to spacenet.security (Domain). Get insights into ownership history and changes over time.
https://whoisdatacenter.com/terms-of-use/
Explore the historical Whois records related to spacenet.online (Domain). Get insights into ownership history and changes over time.
This chipped training dataset is over Paris and includes 30 cm high-resolution imagery (.tif format) and corresponding building footprint vector labels (.geojson format) in 256 x 256 or smaller pixel tile/label pairs. This dataset is a ramp Tier 1 dataset, meaning it has been thoroughly reviewed and improved. This dataset was used in developing the ramp baseline model and contains 1,027 tiles and 3,468 buildings. The original dataset was sourced from the SpaceNet 2 Dataset before the imagery was tiled down from 650 x 650 pixel chips and the labels were revised to be consistent with the ramp datasets' notion of the rooftop as the building footprint. Dataset keywords: Urban, Dense.
The net cash of Sidus Space, headquartered in the United States, amounted to -15.83 million U.S. dollars in 2024. The reported fiscal year ends on December 31. Compared to the earliest depicted value from 2020, this is a total decrease of approximately 14.24 million U.S. dollars. The trend from 2020 to 2024 shows, however, that this decrease did not happen continuously.