Subtitle: "Mapping Satellite Images: A Comprehensive Dataset"
About Dataset:
This dataset is designed for the task of mapping satellite images to corresponding map representations using advanced techniques like pix2pix GANs. It is structured to facilitate training and validation for machine learning models, providing a robust foundation for image-to-image translation projects.
maps
This dataset is ideal for developing and testing models that perform image translation from satellite photos to map images, supporting various applications in remote sensing, urban planning, and geographic information systems (GIS).
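Datasets of this kind are typically distributed as single images with the satellite photo and the rendered map side by side. A minimal sketch of splitting such a pair, assuming the common 1200 × 600 layout (satellite left, map right) used by the original pix2pix "maps" release; verify against a few samples before training:

```python
from PIL import Image

def split_pair(img):
    """Split a combined satellite|map sample into (satellite, map) halves.

    Assumes the satellite photo occupies the left half and the map the
    right half -- an assumption about this archive's layout, not a spec.
    """
    w, h = img.size
    half = w // 2
    satellite = img.crop((0, 0, half, h))
    map_img = img.crop((half, 0, w, h))
    return satellite, map_img

# Demonstration with a synthetic placeholder of the expected size:
combined = Image.new("RGB", (1200, 600))
sat, mp = split_pair(combined)
```

In a training pipeline the same function would be applied to each file in the train and validation folders before feeding the halves to the generator and discriminator.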
Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0), https://creativecommons.org/licenses/by-nc-sa/3.0/
License information was derived automatically
Learn state-of-the-art skills to build compelling, useful, and fun Web GIS apps easily, with no programming experience required. Building on the foundation of the previous three editions, Getting to Know Web GIS, fourth edition, features the latest advances in Esri's entire Web GIS platform, from the cloud server side to the client side. Discover and apply what's new in ArcGIS Online, ArcGIS Enterprise, Map Viewer, Esri StoryMaps, Web AppBuilder, ArcGIS Survey123, and more. Learn about recent Web GIS products such as ArcGIS Experience Builder, ArcGIS Indoors, and ArcGIS QuickCapture. Understand updates in mobile GIS such as ArcGIS Collector and AuGeo, and then build your own web apps. Further your knowledge and skills with detailed sections and chapters on ArcGIS Dashboards, ArcGIS Analytics for the Internet of Things, online spatial analysis, image services, 3D web scenes, ArcGIS API for JavaScript, and best practices in Web GIS.
Each chapter is written for immediate productivity, with a good balance of principles and hands-on exercises, and includes:
a conceptual discussion section to give you the big picture and principles,
a detailed tutorial section with step-by-step instructions,
a Q/A section to answer common questions,
an assignment section to reinforce your comprehension, and
a list of resources with more information.
Ideal for classroom lab work and on-the-job training for GIS students, instructors, GIS analysts, managers, web developers, and other professionals, Getting to Know Web GIS, fourth edition, uses a holistic approach to systematically teach the breadth of the Esri Geospatial Cloud.
AUDIENCE
Professional and scholarly. College/higher education. General/trade.
AUTHOR BIO
Pinde Fu leads the ArcGIS Platform Engineering team at Esri Professional Services and teaches at universities including Harvard University Extension School. His specialties include web and mobile GIS technologies and applications in various industries. Several of his projects have won special achievement awards. Fu is the lead author of Web GIS: Principles and Applications (Esri Press, 2010).
Pub Date: Print: 7/21/2020; Digital: 6/16/2020
Format: Trade paper
ISBN: Print: 9781589485921; Digital: 9781589485938
Trim: 7.5 x 9 in.
Price: Print: $94.99 USD; Digital: $94.99 USD
Pages: 490
TABLE OF CONTENTS
Preface
Foreword
1 Get started with Web GIS
2 Hosted feature layers and storytelling with GIS
3 Web AppBuilder for ArcGIS and ArcGIS Experience Builder
4 Mobile GIS
5 Tile layers and on-premises Web GIS
6 Spatial temporal data and real-time GIS
7 3D web scenes
8 Spatial analysis and geoprocessing
9 Image service and online raster analysis
10 Web GIS programming with ArcGIS API for JavaScript
Pinde Fu | Interview with Esri Press | 2020-07-10 | 15:56 | Link.
Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context and Aim
Deep learning in Earth Observation requires large image archives with highly reliable labels for model training and testing. However, a preferable quality standard for forest applications in Europe has not yet been determined. The TreeSatAI consortium investigated numerous sources for annotated datasets as an alternative to manually labeled training datasets.
We found that the federal forest inventory of Lower Saxony, Germany, represents an untapped trove of annotated samples for training data generation. The respective 20-cm colour-infrared (CIR) imagery, which is used for forestry management through visual interpretation, constitutes an excellent baseline for deep learning tasks such as image segmentation and classification.
Description
The data archive is highly suitable for benchmarking, as it represents the real-world data situation of many German forest management services. On the one hand, it offers a large number of samples backed by high-resolution aerial imagery. On the other hand, it presents challenges, including class label imbalances between the different forest stand types.
The TreeSatAI Benchmark Archive contains:
50,381 image triplets (aerial, Sentinel-1, Sentinel-2)
synchronized time steps and locations
all original spectral bands/polarizations from the sensors
20 species classes (single labels)
12 age classes (single labels)
15 genus classes (multi labels)
60 m and 200 m patches
fixed split for train (90%) and test (10%) data
additional single labels such as English species name, genus, forest stand type, foliage type, land cover
The GeoTIFF and GeoJSON files are readable in any GIS software, such as QGIS. For further information, we refer to the PDF document in the archive and the publications in the reference section.
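Because GeoJSON is plain JSON, the sample locations can also be inspected without GIS software. A minimal sketch using Python's standard library and a hypothetical point feature (the actual property names in the archive may differ):

```python
import json

# Hypothetical stand-in for one feature from the archive's GeoJSON files;
# real files would be loaded with json.load(open(path)).
sample_geojson = """{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "geometry": {"type": "Point", "coordinates": [9.5, 52.8]},
     "properties": {"species": "Abies alba"}}
  ]
}"""

collection = json.loads(sample_geojson)

# Collect the coordinates of all point features.
points = [f["geometry"]["coordinates"]
          for f in collection["features"]
          if f["geometry"]["type"] == "Point"]
```

The same pattern extends to the 60 m and 200 m bounding-box features, whose geometries are polygons rather than points.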
Version history
v1.0.0 - First release
Citation
Ahlswede et al. (in prep.)
GitHub
Full code examples and pre-trained models from the dataset article (Ahlswede et al. 2022) using the TreeSatAI Benchmark Archive are published on the GitHub repositories of the Remote Sensing Image Analysis (RSiM) Group (https://git.tu-berlin.de/rsim/treesat_benchmark). Code examples for the sampling strategy can be made available by Christian Schulz via email request.
Folder structure
We refer to the proposed folder structure in the PDF file.
Folder “aerial” contains the aerial imagery patches derived from summertime orthophotos of the years 2011 to 2020. Patches are available in 60 x 60 m (304 x 304 pixels). Band order is near-infrared, red, green, and blue. Spatial resolution is 20 cm.
Folder “s1” contains the Sentinel-1 imagery patches derived from summertime mosaics of the years 2015 to 2020. Patches are available in 60 x 60 m (6 x 6 pixels) and 200 x 200 m (20 x 20 pixels). Band order is VV, VH, and VV/VH ratio. Spatial resolution is 10 m.
Folder “s2” contains the Sentinel-2 imagery patches derived from summertime mosaics of the years 2015 to 2020. Patches are available in 60 x 60 m (6 x 6 pixels) and 200 x 200 m (20 x 20 pixels). Band order is B02, B03, B04, B08, B05, B06, B07, B8A, B11, B12, B01, and B09. Spatial resolution is 10 m.
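The band orderings described above can be captured as name-to-index maps, so that downstream model code selects bands by name rather than hard-coded positions. This is a convenience sketch, not part of the archive itself:

```python
# Band index maps matching the documented orderings of the three folders.
AERIAL_BANDS = {"NIR": 0, "R": 1, "G": 2, "B": 3}          # 20 cm aerial
S1_BANDS = {"VV": 0, "VH": 1, "VV/VH": 2}                  # Sentinel-1
S2_BANDS = {name: i for i, name in enumerate(              # Sentinel-2
    ["B02", "B03", "B04", "B08", "B05", "B06",
     "B07", "B8A", "B11", "B12", "B01", "B09"])}

def band_index(sensor_bands, name):
    """Return the array index of a band by its name."""
    return sensor_bands[name]
```

For example, `band_index(S2_BANDS, "B8A")` gives the position of the red-edge band B8A in a Sentinel-2 patch array.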
The folder “labels” contains a JSON string which was used for multi-labeling of the training patches. For example, an image sample with proportions of approximately 94% Abies and 6% Larix is encoded as: "Abies_alba_3_834_WEFL_NLF.tif": [["Abies", 0.93771], ["Larix", 0.06229]]
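A label entry of this form could be parsed into a binary multi-label vector as follows. The two-class list and the 0.05 presence threshold below are illustrative assumptions, not part of the archive specification:

```python
import json

# Truncated class list for illustration; the archive defines 15 genus classes.
GENUS_CLASSES = ["Abies", "Larix"]

labels_json = ('{"Abies_alba_3_834_WEFL_NLF.tif": '
               '[["Abies", 0.93771], ["Larix", 0.06229]]}')

def to_multi_hot(entries, classes, threshold=0.05):
    """Convert [genus, proportion] pairs into a binary multi-label vector.

    A genus is marked present if its crown-cover proportion reaches the
    threshold (0.05 is an assumed cut-off, not the archive's rule).
    """
    vec = [0] * len(classes)
    for genus, proportion in entries:
        if proportion >= threshold:
            vec[classes.index(genus)] = 1
    return vec

labels = json.loads(labels_json)
vec = to_multi_hot(labels["Abies_alba_3_834_WEFL_NLF.tif"], GENUS_CLASSES)
```

With the threshold applied, the example patch is labeled positive for both Abies and Larix.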
The two files “test_filenames.lst” and “train_filenames.lst” define the filenames used for the train (90%) and test (10%) split. We refer to this fixed split for better reproducibility and comparability.
The folder “geojson” contains GeoJSON files with all the samples chosen for training patch generation (point, 60 m bounding box, 200 m bounding box).
CAUTION: As we could not upload the aerial patches as a single zip file to Zenodo, you need to download the 20 single-species files (aerial_60m_…zip) separately. Then unzip them into a folder named “aerial” with a subfolder named “60m”. This structure is recommended for better reproducibility and comparability with the experimental results of Ahlswede et al. (2022).
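The unzip step could be scripted. The sketch below uses Python's standard zipfile module to build the recommended aerial/60m layout; a synthetic archive stands in for a real aerial_60m_…zip download:

```python
import tempfile
import zipfile
from pathlib import Path

def unzip_species_archives(zip_paths, target_root):
    """Extract the per-species aerial archives into <target_root>/aerial/60m/.

    zip_paths: paths of the downloaded aerial_60m_...zip files.
    """
    out_dir = Path(target_root) / "aerial" / "60m"
    out_dir.mkdir(parents=True, exist_ok=True)
    for zp in zip_paths:
        with zipfile.ZipFile(zp) as zf:
            zf.extractall(out_dir)
    return out_dir

# Demonstration with a synthetic zip standing in for a downloaded archive:
tmp = tempfile.mkdtemp()
fake_zip = Path(tmp) / "aerial_60m_example.zip"
with zipfile.ZipFile(fake_zip, "w") as zf:
    zf.writestr("Abies_alba_3_834_WEFL_NLF.tif", b"")
out = unzip_species_archives([fake_zip], tmp)
```

In practice you would pass the 20 downloaded zip files and your dataset root directory.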
Join the archive
Model training, benchmarking, algorithm development… many applications are possible! Feel free to add samples from other regions in Europe or even worldwide. Additional remote sensing data from lidar, UAVs, or aerial imagery from different time steps are very welcome. This helps the research community develop better deep learning and machine learning models for forest applications. If you have questions or want to share code, results, or publications using the archive, feel free to contact the authors.
Project description
This work was part of the project TreeSatAI (Artificial Intelligence with Satellite data and Multi-Source Geodata for Monitoring of Trees at Infrastructures, Nature Conservation Sites and Forests). Its overall aim is the development of AI methods for the monitoring of forests and woody features on a local, regional and global scale. Based on freely available geodata from different sources (e.g., remote sensing, administration maps, and social media), prototypes will be developed for the deep learning-based extraction and classification of tree- and tree stand features. These prototypes deal with real cases from the monitoring of managed forests, nature conservation and infrastructures. The development of the resulting services by three enterprises (liveEO, Vision Impulse and LUP Potsdam) will be supported by three research institutes (German Research Center for Artificial Intelligence, TU Remote Sensing Image Analysis Group, TUB Geoinformation in Environmental Planning Lab).
Publications
Ahlswede et al. (2022, in prep.): TreeSatAI Dataset Publication
Ahlswede S., Nimisha, T.M., and Demir, B. (2022, in revision): Embedded Self-Enhancement Maps for Weakly Supervised Tree Species Mapping in Remote Sensing Images. IEEE Trans Geosci Remote Sens
Schulz et al. (2022, in prep.): Phenoprofiling
Conference contributions
S. Ahlswede, N. T. Madam, C. Schulz, B. Kleinschmit and B. Demir, "Weakly Supervised Semantic Segmentation of Remote Sensing Images for Tree Species Classification Based on Explanation Methods", IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 2022.
C. Schulz, M. Förster, S. Vulova, T. Gränzig and B. Kleinschmit, “Exploring the temporal fingerprints of mid-European forest types from Sentinel-1 RVI and Sentinel-2 NDVI time series”, IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 2022.
C. Schulz, M. Förster, S. Vulova and B. Kleinschmit, “The temporal fingerprints of common European forest types from SAR and optical remote sensing data”, AGU Fall Meeting, New Orleans, USA, 2021.
B. Kleinschmit, M. Förster, C. Schulz, F. Arias, B. Demir, S. Ahlswede, A. K. Aksoy, T. Ha Minh, J. Hees, C. Gava, P. Helber, B. Bischke, P. Habelitz, A. Frick, R. Klinke, S. Gey, D. Seidel, S. Przywarra, R. Zondag and B. Odermatt, “Artificial Intelligence with Satellite data and Multi-Source Geodata for Monitoring of Trees and Forests”, Living Planet Symposium, Bonn, Germany, 2022.
C. Schulz, M. Förster, S. Vulova, T. Gränzig and B. Kleinschmit, (2022, submitted): “Exploring the temporal fingerprints of sixteen mid-European forest types from Sentinel-1 and Sentinel-2 time series”, ForestSAT, Berlin, Germany, 2022.
The Atlas of the Biosphere is a product of the Center for Sustainability and the Global Environment (SAGE), part of the Gaylord Nelson Institute for Environmental Studies at the University of Wisconsin - Madison. The goal is to provide more information about the environment, and human interactions with the environment, than any other source.
The Atlas provides maps of an ever-growing number of environmental variables, under the following categories:
Human Impacts (Humans and the environment from a socio-economic perspective; i.e., Population, Life Expectancy, Literacy Rates);
Land Use (How humans are using the land; i.e., Croplands, Pastures, Urban Lands);
Ecosystems (The natural ecosystems of the world; i.e., Potential Vegetation, Temperature, Soil Texture); and
Water Resources (Water in the biosphere; i.e., Runoff, Precipitation, Lakes and Wetlands).
Map coverages are global and regional in spatial extent. Users can download map images (jpg) and data (a GIS grid of the data in ESRI ArcView Format), and can view metadata online.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Please note that this dataset is not an official City of Toronto land use dataset. It was created for personal and academic use using the City of Toronto Land Use Maps (2019) found on the City of Toronto Official Plan website at https://www.toronto.ca/city-government/planning-development/official-plan-guidelines/official-plan/official-plan-maps-copy, along with the City of Toronto parcel fabric (Property Boundaries) found at https://open.toronto.ca/dataset/property-boundaries/ and Statistics Canada Census Dissemination Block level boundary files (2016). The property boundaries used were dated November 11, 2021. Further detail about the City of Toronto's Official Plan, consolidation of the information presented in its online form, and considerations for its interpretation can be found at https://www.toronto.ca/city-government/planning-development/official-plan-guidelines/official-plan/
Data Creation Documentation and Procedures
Software Used
The spatial vector data were created using ArcGIS Pro 2.9.0 in December 2021.
PDF File Conversions
Using Adobe Acrobat Pro DC software, the following downloaded PDF map images were converted to TIF format:
9028-cp-official-plan-Map-14_LandUse_AODA.pdf
9042-cp-official-plan-Map-22_LandUse_AODA.pdf
9070-cp-official-plan-Map-20_LandUse_AODA.pdf
908a-cp-official-plan-Map-13_LandUse_AODA.pdf
978e-cp-official-plan-Map-17_LandUse_AODA.pdf
97cc-cp-official-plan-Map-15_LandUse_AODA.pdf
97d4-cp-official-plan-Map-23_LandUse_AODA.pdf
97f2-cp-official-plan-Map-19_LandUse_AODA.pdf
97fe-cp-official-plan-Map-18_LandUse_AODA.pdf
9811-cp-official-plan-Map-16_LandUse_AODA.pdf
982d-cp-official-plan-Map-21_LandUse_AODA.pdf
Georeferencing and Reprojecting Data Files
The original projection of the PDF maps is unknown, but they were most likely published using MTM Zone 10 (EPSG 2019), as with many of the City of Toronto's datasets. They could also have been published in UTM Zone 17 (EPSG 26917). The TIF images were georeferenced in ArcGIS Pro using this projection with very good results, matched against the City of Toronto's Centreline dataset. The resulting TIF files and their supporting spatial files include:
TOLandUseMap13.tfwx, TOLandUseMap13.tif, TOLandUseMap13.tif.aux.xml, TOLandUseMap13.tif.ovr
TOLandUseMap14.tfwx, TOLandUseMap14.tif, TOLandUseMap14.tif.aux.xml, TOLandUseMap14.tif.ovr
TOLandUseMap15.tfwx, TOLandUseMap15.tif, TOLandUseMap15.tif.aux.xml, TOLandUseMap15.tif.ovr
TOLandUseMap16.tfwx, TOLandUseMap16.tif, TOLandUseMap16.tif.aux.xml, TOLandUseMap16.tif.ovr
TOLandUseMap17.tfwx, TOLandUseMap17.tif, TOLandUseMap17.tif.aux.xml, TOLandUseMap17.tif.ovr
TOLandUseMap18.tfwx, TOLandUseMap18.tif, TOLandUseMap18.tif.aux.xml, TOLandUseMap18.tif.ovr
TOLandUseMap19.tif, TOLandUseMap19.tif.aux.xml, TOLandUseMap19.tif.ovr
TOLandUseMap20.tfwx, TOLandUseMap20.tif, TOLandUseMap20.tif.aux.xml, TOLandUseMap20.tif.ovr
TOLandUseMap21.tfwx, TOLandUseMap21.tif, TOLandUseMap21.tif.aux.xml, TOLandUseMap21.tif.ovr
TOLandUseMap22.tfwx, TOLandUseMap22.tif, TOLandUseMap22.tif.aux.xml, TOLandUseMap22.tif.ovr
TOLandUseMap23.tfwx, TOLandUseMap23.tif, TOLandUseMap23.tif.aux.xml, TOLandUseMap23.tif.ov
Ground control points were saved for all georeferenced images. The files are the following:
map13.txt, map14.txt, map15.txt, map16.txt, map17.txt, map18.txt, map19.txt, map21.txt, map22.txt, map23.txt
The City of Toronto's Property Boundaries shapefile, "property_bnds_gcc_wgs84.zip", was unzipped and also reprojected to EPSG 26917 (UTM Zone 17) into a new shapefile, "Property_Boundaries_UTM.shp".
Mosaicing Images
Once georeferenced, all images were mosaiced into one image file, "LandUseMosaic20211220v01", within the project-generated geodatabase, "Landuse.gdb", and exported to TIF, "LandUseMosaic20211220.tif".
Reclassifying Images
Because the original images were of low quality and the conversion to TIF made the image colours even more inconsistent, a method was required to reclassify the images so that the different land use classes could be identified. Using deep learning objects, the images were reclassified into useful, consistent colours.
Deep Learning Objects and Training
The resulting mosaic was then prepared for reclassification using the Label Objects for Deep Learning tool in ArcGIS Pro. A training sample, "LandUseTrainingSamples20211220", was created in the geodatabase for all land use types as follows:
Neighbourhoods
Institutional
Natural Areas
Core Employment Areas
Mixed Use Areas
Apartment Neighbourhoods
Parks
Roads
Utility Corridors
Other Open Spaces
General Employment Areas
Regeneration Areas
Lettering (not a land use type, but an image colour (black) used to label streets)
Identifying the letters made it easier to clean the reclassification and vectorization results of the unnecessary clutter caused by street labels.
Reclassification
Once the training samples were created and saved, the raster was reclassified using the Image Classification Wizard tool in ArcGIS Pro, using the Support...
This layer presents a nighttime view of the Earth that provides an informational and educational view of our planet at night. The image was produced by mosaicking Defense Meteorological Satellite Program (DMSP) Operational Linescan System (OLS) satellite images. This system was originally designed to view clouds by moonlight and to map the locations of permanent lights on the Earth's surface. These data are derived from 9 months of observations superimposed on a darkened land surface. ESRI georeferenced these data to a real-world coordinate system. The layer is suitable for display to a largest scale of 1:18,500,000, though this map over-samples it down to a ~1:4,500,000 scale for context purposes. The layer does not display at all below that scale. The web map also includes a Boundaries and Places layer that can be turned on to see country boundaries and major cities. To access the source imagery used to publish this map service, you can use the Earth at Night Layer Package that is published by Esri.
Accurate and consistent mangrove extent at the national scale over time is a necessity for proper conservation planning for mangroves, including conservation and restoration. Many analyses conducted in Madagascar have accurately assessed mangroves, but have not been repeated over time. In addition, changing datasets and technologies result in differing approaches which, when applied, cannot be accurately combined or compared. For this reason, WWF-Germany has undertaken the first consistent mangrove assessment over a 30-year time period, using Landsat satellite imagery and automated processing in Google Earth Engine. The application of standardized and automated methods has allowed for a consistent view of mangrove extent and change from 1995 to 2018, at 30 m resolution. Results are compared with existing analyses, and complemented by a biomass assessment, hotspots of change, and a first look at mangrove degradation. The full report is available here; the data can be downloaded here.
Cloud-free Landsat image composites combining the best pixels from a calendar year were created for each time pivot: 1995, 2000, 2005, 2010, 2015 and 2018. When not enough observations met the cloud criteria, the time frame was expanded to include 6 months from the previous and following calendar year, so that some composites required 2 years of data. In the case of 1990, not enough satellite images were available to create complete coverage. All compositing and mapping was performed in Google Earth Engine.
Starting with the latest-date composite, a supervised classification using the RandomForest classifier was executed using available information and visual interpretation of Google Earth imagery. Polygons of known mangrove and non-mangrove areas were digitized, and a random point sample selected in these two types to provide training and validation for the classification. The final mangrove layer was visually cleaned to remove any extraneous mangrove areas or errors. All training data for previous years were then automatically created using the later-year mangrove map. This was performed using a dilate function to restrict random point selection to the core area of large mangrove patches, enabling an almost entirely automatic process. Accuracy assessment was performed using mangrove extent mapped from 2.5 m SPOT imagery for the year 2012, on the Landsat-derived map for the year 2010. Additional training points were acquired from Google Earth high-resolution imagery.
All mangrove map layers were then analyzed in a GIS. Mangrove extent for each time period was overlaid to assign transitions between mangrove/not mangrove over time (i.e., loss 1995; gain 2010). A dynamic class was assigned to any pixels which had at least 2 observations as mangrove over the 6 time steps and underwent multiple gains and losses with no clear trend. Additionally, temporal filtering was used to remove any pixels classified with low confidence (i.e., a single year classified as mangrove but all other years before and after non-mangrove, and vice versa). Mangrove extent and change (gain, loss, and net change) was quantified by district, region and protected area in hectares.
The accuracy assessment was conducted using an area-weighted stratified random sampling design. As current field validation data were not available, the mangrove classification result of OCEA (2015) was used to sample control points. This classification was generated for the year 2013 from SPOT 2 m resolution data. Control points for mangrove areas to compare with the Landsat 2015 map were set in areas of dense mangrove (attribute “COUVERT” = dense) with at least 2 ha area and at least 30 m away from the mangrove edge to avoid spatial resolution errors. Non-mangrove control points were set in areas of bare land (attribute “COUVERT” = “nu”) with at least 2 ha area and 30 m away from the mangrove edge. In total, 4,000 control points for each of the two classes were placed randomly, at least 30 m apart from each other.
The change map was also evaluated using an area-weighted stratified random sampling design. As field validation data were not available, current and historic satellite images in Google Earth were used to sample control points, after creating 396 stratified random sample points from the classification results of each of the four classes, placed randomly and area-weighted at least 30 m apart from each other. For this assessment the AcATaMa toolbox was used in QGIS.
The estimates of mangrove area at the national scale, along with estimated annual % change, are shown in Table 1. More detailed information is provided in the Annex.
Table 1. Mangrove extent (ha) estimated for Madagascar and annual % change, calculated from the prior mangrove extent.
Year | Mangrove (ha) | Annual % change
1995 | 310,452 | -
2000 | 294,387 | -1.03
2005 | 279,618 | -1.0
2010 | 258,340 | -1.52
2015 | 239,152 | -1.49
2018 | 236,402 | -0.38*
*As the analysis was completed mid-2018, these results are expected to be underestimated.
The figures provided by this study lie within the range of existing published values, and are most similar to Giri & Muhlhausen (2008), who use similar data and a similar approach.
Table 3. Producer and user accuracy and errors (standard calculation).
Class | PA | UA | Omission error | Commission error
loss | 98.75% | 61.24% | 1.25% | 38.76%
dynamic | 77.78% | 93.33% | 22.22% | 6.67%
stable | 81.41% | 99.10% | 18.59% | 0.90%
gain | 100.00% | 87.50% | 0.00% | 12.50%
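The annual % change column in Table 1 is consistent with simple linear annualization of each interval's relative change, i.e. (A_t − A_{t−1}) / (A_{t−1} · years) × 100. This is an inference from the published figures, not a formula stated in the report:

```python
def annual_pct_change(prev_ha, curr_ha, years):
    """Linear annualized % change between two mangrove extents (ha)."""
    return (curr_ha - prev_ha) / prev_ha / years * 100

# National extents from Table 1.
extent = {1995: 310452, 2000: 294387, 2005: 279618,
          2010: 258340, 2015: 239152, 2018: 236402}

change_2000 = annual_pct_change(extent[1995], extent[2000], 5)  # 5-year interval
change_2018 = annual_pct_change(extent[2015], extent[2018], 3)  # 3-year interval
```

Rounding these values to two decimals reproduces the -1.03 and -0.38 figures reported for 2000 and 2018.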