100+ datasets found
  1. GIS Resource Compilation Map Package - Applications of Machine Learning Techniques to Geothermal Play Fairway Analysis in the Great Basin Region, Nevada

    • gdr.openei.org
    • data.openei.org
    • +3 more
    Updated Jun 1, 2021
    + more versions
    Cite
    Stephen Brown; Michael Fehler; Mark Coolbaugh; Sven Treitel; James Faulds; Bridget Ayling; Cary Lindsey; Rachel Micander; Eli Mlawsky; Connor Smith; John Queen; Chen Gu; John Akerley; Jacob DeAngelo; Jonathan Glen; Drew Siler; Erick Burns; Ian Warren (2021). GIS Resource Compilation Map Package - Applications of Machine Learning Techniques to Geothermal Play Fairway Analysis in the Great Basin Region, Nevada [Dataset]. http://doi.org/10.15121/1897037
    Explore at:
    Dataset updated
    Jun 1, 2021
    Dataset provided by
    USDOE Office of Energy Efficiency and Renewable Energy (EERE), Renewable Power Office. Geothermal Technologies Program (EE-4G)
    Geothermal Data Repository
    Nevada Bureau of Mines and Geology
    Authors
    Stephen Brown; Michael Fehler; Mark Coolbaugh; Sven Treitel; James Faulds; Bridget Ayling; Cary Lindsey; Rachel Micander; Eli Mlawsky; Connor Smith; John Queen; Chen Gu; John Akerley; Jacob DeAngelo; Jonathan Glen; Drew Siler; Erick Burns; Ian Warren
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Great Basin, Nevada
    Description

    This submission contains an ESRI map package (.mpk) with an embedded geodatabase for GIS resources used or derived in the Nevada Machine Learning project, meant to accompany the final report. The package includes layer descriptions, layer grouping, and symbology. Layer groups include: new/revised datasets (paleo-geothermal features, geochemistry, geophysics, heat flow, slip and dilation, potential structures, geothermal power plants, positive and negative test sites), machine learning model input grids, machine learning models (Artificial Neural Network (ANN), Extreme Learning Machine (ELM), Bayesian Neural Network (BNN), Principal Component Analysis (PCA/PCAk), Non-negative Matrix Factorization (NMF/NMFk) - supervised and unsupervised), original NV Play Fairway data and models, and NV cultural/reference data.

    See layer descriptions for additional metadata. Smaller GIS resource packages (by category) can be found in the related datasets section of this submission. A submission linking the full codebase for generating machine learning output models is available through the "Related Datasets" link on this page, and contains results beyond the top picks present in this compilation.

  2. Esri Maps for Public Policy

    • climate-center-lincolninstitute.hub.arcgis.com
    • hub-lincolninstitute.hub.arcgis.com
    Updated Oct 1, 2019
    Cite
    Esri (2019). Esri Maps for Public Policy [Dataset]. https://climate-center-lincolninstitute.hub.arcgis.com/datasets/esri::esri-maps-for-public-policy
    Explore at:
    Dataset updated
    Oct 1, 2019
    Dataset authored and provided by
    Esri (http://esri.com/)
    Description

    OVERVIEW: This site is dedicated to raising the level of spatial and data literacy used in public policy. We invite you to explore curated content, training, best practices, and datasets that can provide a baseline for your research, analysis, and policy recommendations. Learn about emerging policy questions and how GIS can be used to help come up with solutions to those questions.

    EXPLORE: Go to your area of interest and explore hundreds of maps about various topics such as social equity, economic opportunity, public safety, and more. Browse and view the maps, or collect them and share via a simple URL. Sharing a collection of maps is an easy way to use maps as a tool for understanding. Help policymakers and stakeholders use data as a driving factor for policy decisions in your area.

    ISSUES: Browse different categories to find data layers, maps, and tools. Use this set of content as a driving force for your GIS workflows related to policy.

    RESOURCES: To maximize your experience with the Policy Maps, we've assembled education, training, best practices, and industry perspectives that help raise your data literacy, provide you with models, and connect you with the work of your peers.

  3. Geospatial Deep Learning Seminar Online Course

    • ckan.americaview.org
    Updated Nov 2, 2021
    Cite
    ckan.americaview.org (2021). Geospatial Deep Learning Seminar Online Course [Dataset]. https://ckan.americaview.org/dataset/geospatial-deep-learning-seminar-online-course
    Explore at:
    Dataset updated
    Nov 2, 2021
    Dataset provided by
    CKAN (https://ckan.org/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This seminar is an applied study of deep learning methods for extracting information from geospatial data, such as aerial imagery, multispectral imagery, digital terrain data, and other digital cartographic representations. We first provide an introduction and conceptualization of artificial neural networks (ANNs). Next, we explore appropriate loss and assessment metrics for different use cases, followed by the tensor data model, which is central to applying deep learning methods. Convolutional neural networks (CNNs) are then conceptualized with scene classification use cases. Lastly, we explore semantic segmentation, object detection, and instance segmentation. The primary focus of this course is semantic segmentation for pixel-level classification.

    The associated GitHub repo provides a series of applied examples. We hope to continue to add examples as methods and technologies further develop. These examples make use of a variety of datasets (e.g., SAT-6, topoDL, Inria, LandCover.ai, vfillDL, and wvlcDL). Please see the repo for links to the data and associated papers. All examples have associated videos that walk through the process, which are also linked in the repo. A variety of deep learning architectures are explored, including UNet, UNet++, DeepLabv3+, and Mask R-CNN. Currently, two examples use ArcGIS Pro and require no coding. The remaining five examples require coding and make use of PyTorch, Python, and R within the RStudio IDE. It is assumed that you have prior knowledge of coding in the Python and R environments. If you do not have experience coding, please take a look at our Open-Source GIScience and Open-Source Spatial Analytics (R) courses, which explore coding in Python and R, respectively.

    After completing this seminar you will be able to:

    - explain how ANNs work, including weights, bias, activation, and optimization.
    - describe and explain different loss and assessment metrics and determine appropriate use cases.
    - use the tensor data model to represent data as input for deep learning.
    - explain how CNNs work, including convolutional operations/layers, kernel size, stride, padding, max pooling, activation, and batch normalization.
    - use PyTorch, Python, and R to prepare data, produce and assess scene classification models, and infer to new data.
    - explain common semantic segmentation architectures, how these methods allow for pixel-level classification, and how they differ from traditional CNNs.
    - use PyTorch, Python, and R (or ArcGIS Pro) to prepare data, produce and assess semantic segmentation models, and infer to new data.
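
    To make the seminar's tensor data model concrete, here is a minimal, hedged PyTorch sketch (illustrative only, not code from the course repo): a random batch of 6-band image chips is represented as a (batch, channels, height, width) tensor and passed through a tiny fully convolutional block with the same input/output shapes as a semantic segmentation model.

```python
# Minimal sketch of the tensor data model for geospatial deep learning.
# The 6-band chips and tiny network are illustrative stand-ins, not the
# seminar's actual examples.
import torch
import torch.nn as nn

# A batch of 4 image chips, each with 6 spectral bands and 256 x 256 pixels:
# shape is (batch, channels, height, width).
chips = torch.rand(4, 6, 256, 256)

# A tiny fully convolutional head mapping 6 bands to 2 classes per pixel,
# mimicking the input/output shapes of a semantic segmentation model.
model = nn.Sequential(
    nn.Conv2d(6, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.Conv2d(16, 2, kernel_size=1),
)

logits = model(chips)          # (4, 2, 256, 256): per-pixel class scores
pred = logits.argmax(dim=1)    # (4, 256, 256): predicted class per pixel
print(pred.shape)
```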

  4. Inform E-learning GIS Course

    • tuvalu-data.sprep.org
    • fsm-data.sprep.org
    • +13 more
    pdf
    Updated Feb 20, 2025
    Cite
    SPREP (2025). Inform E-learning GIS Course [Dataset]. https://tuvalu-data.sprep.org/dataset/inform-e-learning-gis-course
    Explore at:
    Available download formats: pdf(1335336), pdf(587295), pdf(658923), pdf(501586)
    Dataset updated
    Feb 20, 2025
    Dataset provided by
    Pacific Regional Environment Programme (https://www.sprep.org/)
    License

    Public Domain Mark 1.0: https://creativecommons.org/publicdomain/mark/1.0/
    License information was derived automatically

    Area covered
    Pacific Region
    Description

    This dataset holds all the materials for the Inform E-learning GIS course.

  5. Dataset of books called Learning GIS using open source software : an applied guide for geo-spatial analysis

    • workwithdata.com
    Updated Apr 17, 2025
    Cite
    Work With Data (2025). Dataset of books called Learning GIS using open source software : an applied guide for geo-spatial analysis [Dataset]. https://www.workwithdata.com/datasets/books?f=1&fcol0=book&fop0=%3D&fval0=Learning+GIS+using+open+source+software+%3A+an+applied+guide+for+geo-spatial+analysis
    Explore at:
    Dataset updated
    Apr 17, 2025
    Dataset authored and provided by
    Work With Data
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is about books. It has 1 row and is filtered where the book is Learning GIS using open source software : an applied guide for geo-spatial analysis. It features 7 columns including author, publication date, language, and book publisher.

  6. Caribou Crashes

    • maine.hub.arcgis.com
    Updated Jun 13, 2024
    Cite
    State of Maine (2024). Caribou Crashes [Dataset]. https://maine.hub.arcgis.com/datasets/7fd04f27cbda46b8ae7afdbf3715ef40
    Explore at:
    Dataset updated
    Jun 13, 2024
    Dataset authored and provided by
    State of Maine
    Description

    This crash dataset includes crashes from 2023, up until near the middle of July, that have been reviewed and loaded into the Maine DOT Asset Warehouse. The dataset is static and was put together as an example showing the clustering functionality in ArcGIS Online. In addition, the dataset was designed with columns that include data items at the Unit and Persons levels of a crash. The feature layer visualization by default will show the crashes aggregated by the predominant crash type along the corridor. The aggregation settings can be toggled off if desired, and crashes can then be viewed by the type of crash. Both the aggregation and standard Feature Layer configurations include popup settings that have been configured.

    As mentioned above, the Feature Layer itself has been configured to include a standard unique value renderer based on Crash Type, and the layer also includes clustering aggregation configurations that can be toggled on or off if the user adds this layer to a new ArcGIS Online Map. Clustering and aggregation options in ArcGIS Online provide functionality that is not yet available in the latest version of ArcGIS Pro (<=3.1). This additional configuration includes how to show the popup content for a cluster of crashes. Users interested in learning more about clustering and aggregation in ArcGIS Online, and some more advanced options, should see the following ESRI article: https://www.esri.com/arcgis-blog/products/arcgis-online/mapping/summarize-and-explore-point-clusters-with-arcade-in-popups/.

    Popups have been configured for both the clusters and the individual crashes. The individual crashes include multiple tables within a single text element. The bottom table includes data items that pertain to at most three units for a crash. If a crash includes just one unit, this bottom table will include only 2 columns. For each additional unit involved in a crash, an additional column will appear listing the data items that pertain to that unit, up to a maximum of 3 units. There are crashes that involve more than 3 units, and information for these additional units is not currently included in the dataset. The crash data items available in this Feature Layer representation include many of the same data items from the Crash Layer (10 Years) that are available for use in Maine DOT's Public Map Viewer application (https://www.maine.gov/mdot/mapviewer/?added=Crashes%20-%2010%20Years). However, this crash data includes data items that are not yet available in other GIS crash datasets currently used in visualizations by the department. These additional data items can be aggregated using other presentation types such as a chart, but could also be filtered in the map. Users should refer to the unit count associated with each crash and be aware that a unit's information may not be visible in situations where four or more units are involved in a crash.

  7. Data from: Segment Anything Model (SAM)

    • hub.arcgis.com
    • uneca.africageoportal.com
    Updated Apr 17, 2023
    Cite
    Esri (2023). Segment Anything Model (SAM) [Dataset]. https://hub.arcgis.com/content/9b67b441f29f4ce6810979f5f0667ebe
    Explore at:
    Dataset updated
    Apr 17, 2023
    Dataset authored and provided by
    Esri (http://esri.com/)
    Description

    Segmentation models perform a pixel-wise classification by classifying the pixels into different classes. The classified pixels correspond to different objects or regions in the image. These models have a wide variety of use cases across multiple domains. When used with satellite and aerial imagery, these models can help to identify features such as building footprints, roads, water bodies, crop fields, etc.

    Generally, every segmentation model needs to be trained from scratch using a dataset labeled with the objects of interest. This can be an arduous and time-consuming task. Meta's Segment Anything Model (SAM) is aimed at creating a foundational model that can be used to segment (as the name suggests) anything using zero-shot learning and generalize across domains without additional training. SAM is trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks. This makes the model highly robust in identifying object boundaries and differentiating between various objects across domains, even though it might have never seen them before. Use this model to extract masks of various objects in any image.

    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

    Fine-tuning the model: This model can be fine-tuned using the SamLoRA architecture in ArcGIS. Follow the guide and refer to this sample notebook to fine-tune this model.

    Input: 8-bit, 3-band imagery.

    Output: Feature class containing masks of various objects in the image.

    Applicable geographies: The model is expected to work globally.

    Model architecture: This model is based on the open-source Segment Anything Model (SAM) by Meta.

    Training data: This model has been trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks.

    Sample results: Here are a few results from the model.
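
    Outside of ArcGIS, the same foundation model can be driven from Meta's open-source segment-anything package. The sketch below is a hedged illustration, not Esri's packaged workflow; the checkpoint file and image path are assumptions.

```python
# Hedged sketch: zero-shot mask extraction with Meta's open-source
# segment-anything package (pip install segment-anything). The checkpoint
# and image paths are assumptions for illustration.
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a ViT-B SAM checkpoint (downloaded separately from Meta).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")

# Read an 8-bit, 3-band image as an RGB numpy array (H x W x 3).
image = np.array(Image.open("aerial_tile.png").convert("RGB"))

# Generate masks for everything the model finds, with no prompts or training.
mask_generator = SamAutomaticMaskGenerator(sam)
masks = mask_generator.generate(image)

# Each mask is a dict with a boolean 'segmentation' array, 'area', 'bbox', etc.
print(len(masks), "masks; largest area:", max(m["area"] for m in masks))
```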

  8. OpenStreetMap (Blueprint)

    • catalog.data.gov
    • gimi9.com
    • +14 more
    Updated Jun 8, 2024
    + more versions
    Cite
    Esri (2024). OpenStreetMap (Blueprint) [Dataset]. https://catalog.data.gov/dataset/openstreetmap-blueprint-653c6
    Explore at:
    Dataset updated
    Jun 8, 2024
    Dataset provided by
    Esri (http://esri.com/)
    Description

    This web map features a vector basemap of OpenStreetMap (OSM) data created and hosted by Esri. Esri produced this vector tile basemap in ArcGIS Pro from a live replica of OSM data, hosted by Esri, and rendered it using a creative cartographic style emulating a blueprint technical drawing. The vector tiles are updated every few weeks with the latest OSM data. This vector basemap is freely available for any user or developer to build into their web maps or web mapping apps.

    OpenStreetMap (OSM) is an open collaborative project to create a free editable map of the world. Volunteers gather location data using GPS, local knowledge, and other free sources of information and upload it. The resulting free map can be viewed and downloaded from the OpenStreetMap site: www.OpenStreetMap.org. Esri is a supporter of the OSM project and is excited to make this new vector basemap available to the OSM, GIS, and developer communities.

  9. Optical Character Recognition

    • hub.arcgis.com
    • sdiinnovation-geoplatform.hub.arcgis.com
    Updated May 18, 2023
    Cite
    Esri (2023). Optical Character Recognition [Dataset]. https://hub.arcgis.com/content/8b56ed53e34b4304a5b8b826a7512ab0
    Explore at:
    Dataset updated
    May 18, 2023
    Dataset authored and provided by
    Esri (http://esri.com/)
    Description

    Text labels are an integral part of cadastral maps and floor plans. Text is also prevalent in natural scenes around us in the form of road signs, billboards, house numbers and place names. Extracting this text can provide additional context and details about the places the text describes and the information it conveys. Digitization of documents and extracting text from them helps in retrieving and archiving important information.

    This deep learning model is based on the MMOCR model and uses optical character recognition (OCR) technology to detect text in images. This model was trained on a large dataset of different types and styles of text with diverse backgrounds and contexts, allowing for precise text extraction. It can be applied to various tasks such as automatically detecting and reading text from documents, sign boards, scanned maps, etc., thereby converting images containing text to actionable data.

    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

    Fine-tuning the model: This model cannot be fine-tuned using ArcGIS tools.

    Input: High-resolution, 3-band street-level imagery/oriented imagery, scanned maps, or documents, with medium to large size text.

    Output: A feature layer with the recognized text and a bounding box around it.

    Model architecture: This model is based on the open-source MMOCR model by MMLab.

    Sample results: Here are a few results from the model.
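
    For readers who want to experiment outside of ArcGIS, the underlying MMOCR toolkit can be called directly in Python. This is a hedged sketch, not the packaged Esri model; the detector/recognizer choices and image path are assumptions.

```python
# Hedged sketch: detecting and reading text with the open-source MMOCR
# toolkit (pip install mmocr), on which this model is based. Model names
# and the image path are illustrative assumptions.
from mmocr.apis import MMOCRInferencer

# Pair a text detector with a recognizer; MMOCR downloads weights on first use.
infer = MMOCRInferencer(det="DBNet", rec="CRNN")

# Run OCR on a scanned map or street-level photo.
result = infer("scanned_map.png", return_vis=False)

# Each prediction carries detected polygons and the recognized strings.
pred = result["predictions"][0]
for text, score in zip(pred["rec_texts"], pred["rec_scores"]):
    print(f"{text} (confidence {score:.2f})")
```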

  10. Sentinel-2 10m Land Use/Land Cover Time Series

    • opendata.rcmrd.org
    • cacgeoportal.com
    • +10 more
    Updated Oct 19, 2022
    + more versions
    Cite
    Esri (2022). Sentinel-2 10m Land Use/Land Cover Time Series [Dataset]. https://opendata.rcmrd.org/datasets/cfcb7609de5f478eb7666240902d4d3d
    Explore at:
    Dataset updated
    Oct 19, 2022
    Dataset authored and provided by
    Esri (http://esri.com/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This layer displays a global map of land use/land cover (LULC) derived from ESA Sentinel-2 imagery at 10m resolution. Each year is generated with Impact Observatory's deep learning AI land classification model, trained using billions of human-labeled image pixels from the National Geographic Society. The global maps are produced by applying this model to the Sentinel-2 Level-2A image collection on Microsoft's Planetary Computer, processing over 400,000 Earth observations per year.

    The algorithm generates LULC predictions for nine classes, described in detail below. The year 2017 has a land cover class assigned for every pixel, but its class is based upon fewer images than the other years. The years 2018-2024 are based upon a more complete set of imagery. For this reason, the year 2017 may have less accurate land cover class assignments than the years 2018-2024.

    Key Properties

    - Variable mapped: Land use/land cover in 2017, 2018, 2019, 2020, 2021, 2022, 2023, 2024
    - Source Data Coordinate System: Universal Transverse Mercator (UTM) WGS84
    - Service Coordinate System: Web Mercator Auxiliary Sphere WGS84 (EPSG:3857)
    - Extent: Global
    - Source imagery: Sentinel-2 L2A
    - Cell Size: 10 meters
    - Type: Thematic
    - Attribution: Esri, Impact Observatory
    - Analysis: Optimized for analysis

    Class Definitions

    Value | Name | Description
    1 | Water | Areas where water was predominantly present throughout the year; may not cover areas with sporadic or ephemeral water; contains little to no sparse vegetation, no rock outcrop nor built up features like docks; examples: rivers, ponds, lakes, oceans, flooded salt plains.
    2 | Trees | Any significant clustering of tall (~15 feet or higher) dense vegetation, typically with a closed or dense canopy; examples: wooded vegetation, clusters of dense tall vegetation within savannas, plantations, swamp or mangroves (dense/tall vegetation with ephemeral water or canopy too thick to detect water underneath).
    4 | Flooded vegetation | Areas of any type of vegetation with obvious intermixing of water throughout a majority of the year; seasonally flooded area that is a mix of grass/shrub/trees/bare ground; examples: flooded mangroves, emergent vegetation, rice paddies and other heavily irrigated and inundated agriculture.
    5 | Crops | Human planted/plotted cereals, grasses, and crops not at tree height; examples: corn, wheat, soy, fallow plots of structured land.
    7 | Built Area | Human made structures; major road and rail networks; large homogenous impervious surfaces including parking structures, office buildings and residential housing; examples: houses, dense villages / towns / cities, paved roads, asphalt.
    8 | Bare ground | Areas of rock or soil with very sparse to no vegetation for the entire year; large areas of sand and deserts with no to little vegetation; examples: exposed rock or soil, desert and sand dunes, dry salt flats/pans, dried lake beds, mines.
    9 | Snow/Ice | Large homogenous areas of permanent snow or ice, typically only in mountain areas or highest latitudes; examples: glaciers, permanent snowpack, snow fields.
    10 | Clouds | No land cover information due to persistent cloud cover.
    11 | Rangeland | Open areas covered in homogenous grasses with little to no taller vegetation; wild cereals and grasses with no obvious human plotting (i.e., not a plotted field); examples: natural meadows and fields with sparse to no tree cover, open savanna with few to no trees, parks/golf courses/lawns, pastures. Mix of small clusters of plants or single plants dispersed on a landscape that shows exposed soil or rock; scrub-filled clearings within dense forests that are clearly not taller than trees; examples: moderate to sparse cover of bushes, shrubs and tufts of grass, savannas with very sparse grasses, trees or other plants.

    NOTE: Land use focus does not provide the spatial detail of a land cover map. As such, for the built area classification, yards, parks, and groves will appear as built area rather than trees or rangeland classes.

    Usage Information and Best Practices

    Processing Templates: This layer includes a number of preconfigured processing templates (raster function templates) to provide on-the-fly data rendering and class isolation for visualization and analysis. Each processing template includes labels and descriptions to characterize the intended usage, which may be visualization, analysis, or both.

    Visualization: The default rendering on this layer displays all classes. There are a number of on-the-fly renderings/processing templates designed specifically for data visualization. By default, the most recent year is displayed. To discover and isolate specific years for visualization in Map Viewer, try using the Image Collection Explorer.

    Analysis: In order to leverage the optimization for analysis, the capability must be enabled by your ArcGIS organization administrator. More information on enabling this feature can be found in the 'Regional data hosting' section of this help doc. Optimized for analysis means this layer does not have size constraints for analysis and is recommended for multisource analysis with other layers optimized for analysis. See this group for a complete list of imagery layers optimized for analysis. Prior to running analysis, users should always provide some form of data selection, either with a layer filter (e.g., for a specific date range, cloud cover percent, mission, etc.) or by selecting specific images. To discover and isolate specific images for analysis in Map Viewer, try using the Image Collection Explorer. Zonal Statistics is a common tool used for understanding the composition of a specified area by reporting the total estimates for each of the classes.

    General: If you are new to Sentinel-2 LULC, the Sentinel-2 Land Cover Explorer provides a good introductory user experience for working with this imagery layer. For more information, see this Quick Start Guide. Global land use/land cover maps provide information on conservation planning, food security, and hydrologic modeling, among other things. This dataset can be used to visualize land use/land cover anywhere on Earth.

    Classification Process: These maps include Version 003 of the global Sentinel-2 land use/land cover data product. It is produced by a deep learning model trained using over five billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses 6 bands of Sentinel-2 L2A surface reflectance data: visible blue, green, red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map for each year. The input Sentinel-2 L2A data was accessed via Microsoft's Planetary Computer and scaled using Microsoft Azure Batch.

    Citation: Karra, Kontgis, et al. "Global land use/land cover with Sentinel-2 and deep learning." IGARSS 2021-2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.

    Acknowledgements: Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.
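
    Since Zonal Statistics is recommended above for summarizing class composition, here is a hedged, plain-Python sketch of the same idea using rasterio and numpy; it assumes a locally exported GeoTIFF of one year of this raster (the file name is illustrative).

```python
# Hedged sketch of a zonal-statistics-style class summary over a LULC raster,
# assuming a locally exported GeoTIFF of one year (file name is illustrative).
import numpy as np
import rasterio

CLASS_NAMES = {
    1: "Water", 2: "Trees", 4: "Flooded vegetation", 5: "Crops",
    7: "Built Area", 8: "Bare ground", 9: "Snow/Ice",
    10: "Clouds", 11: "Rangeland",
}

with rasterio.open("lulc_2024_aoi.tif") as src:
    data = src.read(1)                            # single-band class raster
    pixel_area_m2 = abs(src.res[0] * src.res[1])  # ~100 m^2 at 10 m cells

values, counts = np.unique(data, return_counts=True)
total = counts.sum()
for value, count in zip(values, counts):
    name = CLASS_NAMES.get(int(value), f"class {value}")
    print(f"{name:20s} {count / total:6.2%}  (~{count * pixel_area_m2 / 1e6:.1f} km^2)")
```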

  11. Land Cover Classification (Landsat 8)

    • angola.africageoportal.com
    • cacgeoportal.com
    • +7 more
    Updated Sep 20, 2020
    Cite
    Esri (2020). Land Cover Classification (Landsat 8) [Dataset]. https://angola.africageoportal.com/content/e732ee81a9c14c238a14df554a8e3225
    Explore at:
    Dataset updated
    Sep 20, 2020
    Dataset authored and provided by
    Esri (http://esri.com/)
    Description

    Land cover describes the surface of the earth. Land cover maps are useful in urban planning, resource management, change detection, agriculture, and a variety of other applications in which information related to the earth's surface is required. Land cover classification is a complex exercise and is hard to capture using traditional means. Deep learning models are highly capable of learning these complex semantics and can produce superior results.

    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

    Fine-tuning the model: This model can be fine-tuned using the Train Deep Learning Model tool. Follow the guide to fine-tune this model.

    Input: Raster, mosaic dataset, or image service. (Preferred cell size is 30 meters.)

    Output: Classified raster with the same classes as in the National Land Cover Database (NLCD) 2016. Note: The classified raster contains 20 classes based on a modified Anderson Level II classification system as used by the National Land Cover Database.

    Applicable geographies: This model is expected to work well in the United States.

    Model architecture: This model uses the UNet model architecture implemented in ArcGIS API for Python.

    Accuracy metrics: This model has an overall accuracy of 77 percent. The table below summarizes the precision, recall, and F1-score of the model on the validation dataset, for both Collection 2 Level 2 (C2L2) and Collection 1 Level 1 (C1L1) imagery.

    Class | C2L2 Precision | C2L2 Recall | C2L2 F1 | C1L1 Precision | C1L1 Recall | C1L1 F1
    Open Water | 0.96 | 0.97 | 0.96 | 0.95 | 0.97 | 0.96
    Perennial Snow/Ice | 0.86 | 0.69 | 0.77 | 0.49 | 0.94 | 0.64
    Developed, Open Space | 0.51 | 0.38 | 0.44 | 0.43 | 0.38 | 0.40
    Developed, Low Intensity | 0.52 | 0.46 | 0.49 | 0.47 | 0.48 | 0.47
    Developed, Medium Intensity | 0.54 | 0.50 | 0.52 | 0.49 | 0.54 | 0.51
    Developed, High Intensity | 0.67 | 0.54 | 0.60 | 0.55 | 0.68 | 0.61
    Barren Land | 0.76 | 0.59 | 0.66 | 0.60 | 0.77 | 0.68
    Deciduous Forest | 0.74 | 0.81 | 0.78 | 0.78 | 0.76 | 0.77
    Evergreen Forest | 0.77 | 0.82 | 0.79 | 0.80 | 0.82 | 0.81
    Mixed Forest | 0.56 | 0.47 | 0.51 | 0.50 | 0.53 | 0.51
    Shrub/Scrub | 0.82 | 0.82 | 0.82 | 0.84 | 0.81 | 0.83
    Herbaceous | 0.78 | 0.74 | 0.76 | 0.79 | 0.77 | 0.78
    Hay/Pasture | 0.70 | 0.74 | 0.72 | 0.67 | 0.75 | 0.71
    Cultivated Crops | 0.87 | 0.91 | 0.89 | 0.91 | 0.90 | 0.90
    Woody Wetlands | 0.70 | 0.68 | 0.69 | 0.67 | 0.68 | 0.68
    Emergent Herbaceous Wetlands | 0.72 | 0.54 | 0.62 | 0.54 | 0.61 | 0.57

    Training data: This model has been trained on the National Land Cover Database (NLCD) 2016 with the same Landsat 8 scenes that were used to produce the database. Scene IDs for the imagery were available in the metadata of the dataset.

    Sample results: Here are a few results from the model.
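
    Because the model uses the UNet architecture from the ArcGIS API for Python, fine-tuning follows the arcgis.learn pattern. The sketch below is a hedged outline, with an assumed path to exported training chips.

```python
# Hedged sketch: fine-tuning a UNet-based land cover classifier with the
# ArcGIS API for Python (arcgis.learn). The data path is an assumption;
# chips would typically come from "Export Training Data For Deep Learning".
from arcgis.learn import prepare_data, UnetClassifier

# Load exported training chips (imagery plus classified label rasters).
data = prepare_data(r"C:\data\nlcd_training_chips", batch_size=8)

# Create a UNet segmentation model and train for a few epochs.
model = UnetClassifier(data)
model.fit(epochs=10, lr=1e-4)

# Inspect per-class accuracy on the validation split, then save a deep
# learning package usable by "Classify Pixels Using Deep Learning".
model.per_class_metrics()
model.save("nlcd_unet_finetuned")
```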

  12. Allegheny County Public Schools / Local Education Agency (LEAs) Locations

    • catalog.data.gov
    • s.cnmilf.com
    • +2 more
    Updated May 14, 2023
    + more versions
    Cite
    Allegheny County (2023). Allegheny County Public Schools / Local Education Agency (LEAs) Locations [Dataset]. https://catalog.data.gov/dataset/allegheny-county-public-schools-local-education-agency-leas-locations
    Explore at:
    Dataset updated
    May 14, 2023
    Dataset provided by
    Allegheny County
    Area covered
    Allegany County Public Schools
    Description

    These geocoded locations are based on the Allegheny County extract of Educational Names & Addresses (EdNA) via the Pennsylvania Department of Education website as of April 19, 2018. Several addresses were not able to be geocoded (e.g., PO Box addresses were not geocoded).

    If viewing this description on the Western Pennsylvania Regional Data Center's open data portal (http://www.wprdc.org), note that this dataset is harvested on a weekly basis from Allegheny County's GIS data portal (http://openac.alcogis.opendata.arcgis.com/). The full metadata record for this dataset can also be found on Allegheny County's GIS portal. You can access the metadata record and other resources on the GIS portal by clicking on the "Explore" button (and choosing the "Go to resource" option) to the right of the "ArcGIS Open Dataset" text below.

    Category: Education
    Organization: Allegheny County
    Department: Department of Human Services
    Temporal Coverage: as of April 19, 2018
    Data Notes: Coordinate System: GCS_North_American_1983
    Development Notes: none
    Other: none
    Related Document(s): Data Dictionary - none
    Frequency - Data Change: April 19, 2018 data
    Frequency - Publishing: one-time
    Data Steward Name: See http://www.edna.ed.state.pa.us/Screens/Extracts/wfExtractEntitiesAdmin.aspx for more information.
    Data Steward Email: RA-DDQDataCollection@pa.gov (Data Collection Team)

  13. Non Public Schools

    • data.boston.gov
    • cloudcity.ogopendata.com
    • +3 more
    Updated Dec 18, 2023
    + more versions
    Cite
    Boston Maps (2023). Non Public Schools [Dataset]. https://data.boston.gov/dataset/non-public-schools
    Explore at:
    Available download formats: arcgis geoservices rest api, csv, html, shp, geojson, kml
    Dataset updated
    Dec 18, 2023
    Dataset authored and provided by
    Boston Maps
    License

    ODC Public Domain Dedication and Licence (PDDL) v1.0: http://www.opendatacommons.org/licenses/pddl/1.0/
    License information was derived automatically

    Description

    This point data layer shows the locations of schools in Massachusetts. Schools appearing in this layer are those attended by students in pre-kindergarten through high school. Categories of schools include public, private, charter, collaborative programs, and approved special education. This data was originally developed by the Massachusetts Department of Environmental Protection's (DEP) GIS Program based on database information provided by the Massachusetts Department of Education (DOE). The update published on April 17th, 2009 was based on listings MassGIS obtained from the DOE as of February 9th, 2009. The layer is stored in ArcSDE and distributed as SCHOOLS_PT. Only schools located in Massachusetts are included in this layer. The DOE also provides a listing of out-of-state schools open to Massachusetts residents, particularly those with special learning requirements. Please see http://profiles.doe.mass.edu/outofstate.asp for details. Updated September 2018.

  14. California Land Ownership

    • catalog.data.gov
    • data.cnra.ca.gov
    • +8 more
    Updated Oct 23, 2025
    + more versions
    Cite
    CAL FIRE (2025). California Land Ownership [Dataset]. https://catalog.data.gov/dataset/california-land-ownership-b6394
    Explore at:
    Dataset updated
    Oct 23, 2025
    Dataset provided by
    CAL FIRE
    Area covered
    California
    Description

    This dataset was updated May 2025. This ownership dataset was generated primarily from CPAD data, which already tracks the majority of ownership information in California. CPAD is utilized without any snapping or clipping to FRA/SRA/LRA. CPAD has some important data gaps, so additional data sources are used to supplement the CPAD data. Currently this includes the most currently available data from BIA, DOD, and FWS. Additional sources may be added in subsequent versions. Decision rules were developed to identify priority layers in areas of overlap.

    Starting in 2022, the ownership dataset was compiled using a new methodology. Previous versions attempted to match federal ownership boundaries to the FRA footprint, and used a manual process for checking and tracking federal ownership changes within the FRA, with CPAD ownership information only being used for SRA and LRA lands. The manual portion of that process was proving difficult to maintain, and the new method (described below) was developed in order to decrease the manual workload and increase accountability by using an automated process through which any final ownership designation can be traced back to a specific dataset.

    The current process for compiling the data sources includes:

    - Clipping input datasets to the California boundary
    - Filtering the FWS data on the Primary Interest field to exclude lands that are managed by but not owned by FWS (e.g., leases, easements)
    - Supplementing the BIA Pacific Region Surface Trust lands data with the Western Region portion of the LAR dataset, which extends into California
    - Filtering the BIA data on the Trust Status field to exclude areas that represent mineral rights only
    - Filtering the CPAD data on the Ownership Level field to exclude areas that are privately owned (e.g., HOAs)

    In the case of overlap, sources were prioritized as follows: FWS > BIA > CPAD > DOD. As an exception to the above, DOD lands on FRA which overlapped with CPAD lands that were incorrectly coded as non-Federal were treated as an override, such that the DOD designation could win out over CPAD. A hedged sketch of this priority logic appears after this description.

    In addition to this ownership dataset, a supplemental _source dataset is available which designates the source that was used to determine the ownership in this dataset.

    Data Sources:

    - GreenInfo Network's California Protected Areas Database (CPAD2023a). https://www.calands.org/cpad/; https://www.calands.org/wp-content/uploads/2023/06/CPAD-2023a-Database-Manual.pdf
    - US Fish and Wildlife Service FWSInterest dataset (updated December 2023). https://gis-fws.opendata.arcgis.com/datasets/9c49bd03b8dc4b9188a8c84062792cff_0/explore
    - Department of Defense Military Bases dataset (updated September 2023). https://catalog.data.gov/dataset/military-bases
    - Bureau of Indian Affairs, Pacific Region, Surface Trust and Pacific Region Office (PRO) land boundaries data (2023) via John Mosley, John.Mosley@bia.gov
    - Bureau of Indian Affairs, Land Area Representations (LAR) and BIA Regions datasets (updated Oct 2019). https://biamaps.doi.gov/bogs/datadownload.html

    Data Gaps & Changes:

    Known gaps include several BOR, ACE and Navy lands which were not included in CPAD nor the DOD MIRTA dataset. Our hope for future versions is to refine the process by pulling in additional data sources to fill in some of those data gaps. Additionally, any feedback received about missing or inaccurate data can be taken back to the appropriate source data where appropriate, so fixes can occur in the source data, instead of just in this dataset.

    25_1: The CPAD input dataset was amended to merge large gaps in certain areas of the state known to be erroneous, such as Yosemite National Park, and to eliminate overlaps from the original input. The FWS input dataset was updated in February of 2025, and the DOD input dataset was updated in October of 2024. The BIA input dataset was the same as was used for the previous ownership version.

    24_1: Input datasets this year included numerous changes since the previous version, particularly the CPAD and DOD inputs. Of particular note was the re-addition of Camp Pendleton to the DOD input dataset, which is reflected in this version of the ownership dataset. We were unable to obtain an updated input for tribal data, so the previous input was used for this version.

    23_1: A few discrepancies were discovered between data changes that occurred in CPAD when compared with parcel data. These issues will be taken to CPAD for clarification for future updates, but ownership23_1 reflects the data as it was coded in CPAD at the time. In addition, there was a change in the DOD input data between last year and this year, with the removal of Camp Pendleton. An inquiry was sent for clarification on this change, but ownership23_1 reflects the data per the DOD input dataset.

    22_1: Represents an initial version of ownership with a new methodology which was developed under a short timeframe. A comparison with previous versions of ownership highlighted some data gaps in the current version, including several BOR, ACE and Navy lands which were not included in CPAD nor the DOD MIRTA dataset. Our hope for future versions is to refine the process by pulling in additional data sources to fill in some of those data gaps. In addition, any topological errors (like overlaps or gaps) that exist in the input datasets may carry over to the ownership dataset. Ideally, any feedback received about missing or inaccurate data can be taken back to the relevant source data where appropriate, so fixes can occur in the source data, instead of just in this dataset.
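
    The overlap rules above reduce to a small priority function. The following is a hedged illustration in plain Python, not CAL FIRE's production code; the flag for the miscoded-CPAD case is invented for the example.

```python
# Hedged illustration of the stated overlap rules (FWS > BIA > CPAD > DOD,
# with a DOD-over-CPAD override on FRA lands miscoded as non-Federal).
# This is not CAL FIRE's production code; names and flags are invented.
PRIORITY = ["FWS", "BIA", "CPAD", "DOD"]

def resolve_owner(sources_present, dod_override=False):
    """Pick the winning source for one overlap area.

    sources_present: set of source names covering the area, e.g. {"CPAD", "DOD"}.
    dod_override: True when DOD land on FRA overlaps CPAD land that was
                  incorrectly coded as non-Federal.
    """
    if dod_override and "DOD" in sources_present:
        return "DOD"
    for source in PRIORITY:
        if source in sources_present:
            return source
    return None  # no contributing source covers this area

# Example: plain CPAD/DOD overlap vs. the miscoded-CPAD override case.
print(resolve_owner({"CPAD", "DOD"}))                     # -> CPAD
print(resolve_owner({"CPAD", "DOD"}, dod_override=True))  # -> DOD
```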

  15. Public School Facility Location History

    • opendata.dc.gov
    • gimi9.com
    • +4 more
    Updated Aug 15, 2025
    Cite
    City of Washington, DC (2025). Public School Facility Location History [Dataset]. https://opendata.dc.gov/datasets/DCGIS::public-school-facility-location-history
    Explore at:
    Dataset updated
    Aug 15, 2025
    Dataset authored and provided by
    City of Washington, DC
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Pursuant to D.C. Official Code § 38-2803, the Mayor of the District of Columbia is required to prepare a 10-year Master Facilities Plan for public education facilities and additional annual supplements. As part of this, the Office of the Deputy Mayor for Education collects and updates information about public school facilities in active use and the public schools that occupy them. This dataset contains historic data on public school facilities from SY13-14 to SY25-26 and serves as a record of change of the public school ecosystem over this period. For each year contained in the dataset, users can explore the number of Local Education Agencies (LEA) and Schools as well as their locations and, for District of Columbia Public Schools (DCPS), where they were located during modernization projects.

  16. Sea Level Rise - ArcGIS Living Atlas - Indicators of the Planet - Dataset - waterdata

    • wbwaterdata.org
    Updated Jan 26, 2021
    Cite
    (2021). Sea Level Rise - ArcGIS Living Atlas - Indicators of the Planet - Dataset - waterdata [Dataset]. https://wbwaterdata.org/dataset/sea-level-rise-arcgis-living-atlas-indicators-of-the-planet
    Explore at:
    Dataset updated
    Jan 26, 2021
    Description

    Sea levels around the globe are rising as warming ocean temperatures cause the water to expand in volume, and as land-based ice melts and adds to the amount of water in the ocean. Rising sea levels not only make coastal living more dangerous due to storm flooding and erosion, but also cause significant habitat loss and impacts to ecosystems. Satellites work in conjunction with tide gauges to give us both a global and a local perspective of changes in sea level. We can see the overall trends in satellite-based maps, along with accurate hour-by-hour changes at the local level from gauges.

  17. Cultural Spaces Inventory - Learning

    • hamhanding-dcdev.opendata.arcgis.com
    • open.ottawa.ca
    • +3 more
    Updated Apr 14, 2023
    + more versions
    Cite
    City of Ottawa (2023). Cultural Spaces Inventory - Learning [Dataset]. https://hamhanding-dcdev.opendata.arcgis.com/datasets/ottawa::cultural-spaces-inventory-learning
    Explore at:
    Dataset updated
    Apr 14, 2023
    Dataset authored and provided by
    City of Ottawa
    License

    https://ottawa.ca/en/city-hall/get-know-your-city/open-data#open-data-licence-version-2-0

    Description

    Accuracy: "Culture" is a complicated concept. It can mean many different things. Therefore, there is no simple answer to the question of what constitutes a space for cultural activities or events. The project partners used a broad definition that includes spaces in several categories, including venues, studios, learning spaces, community spaces (including places of worship), food, sport, stores, heritage, nature (including parks), and public art. The project is ongoing and we are always open to suggestions on how to improve this dataset. The City will be maintaining this dataset going forward, and we may revisit the categorization and types of spaces we include in the future.

    When using these data, it is important to keep in mind the following:

    - Not all categories are necessarily relevant to all users for all purposes.
    - The dataset, while extensive (3,800+ spaces), may not be exhaustive. If you notice anything that is missing, please contact the Data Steward.
    - Change occurs frequently. The City will be developing a plan for the long-term maintenance of this dataset. The data will be updated frequently, but there can be outdated information. If you notice anything that is out-of-date, please contact the Data Steward.
    - The fields called "Size (Capacity/# seats)" and "Accessibility" have not yet been populated, as the research is still ongoing.

    Update Frequency: As needed

    Attributes:

    Fields:

    - Unique ID - A unique identifier for each cultural space in the dataset.
    - Category - See below for category descriptions.
    - Sub-Category - Another level of categorization.
    - Tags - Key words associated with a cultural space. Used for search.
    - Name - The current name of a cultural space. Alternate or historic names may be identified in "Alternate Names".
    - Name FR - Used if the space has different English and French names.
    - Latitude - Geographic coordinate.
    - Longitude - Geographic coordinate.
    - Address - Civic address in English.
    - Address FR - Civic address in French.
    - Location Notes - Additional notes in English to describe the location of the space if needed. For example, explaining specifically where within a large campus, park, complex, etc. a space is located.
    - Location Notes FR - Additional notes in French to describe the location of the space if needed.
    - City - Most spaces are in Ottawa. Some spaces in Gatineau and other surrounding communities have been included for context.
    - Province - Most spaces are in Ontario, though some are in Quebec as noted above.
    - Postal Code - The postal code associated with the mailing address of the space. In some cases, that may be different than the postal code that would be associated with its physical location.
    - Phone - All contact info was collected from public sources.
    - Email - All contact info was collected from public sources.
    - Website
    - Website_FR
    - Outdoor Component (Y/N) - Indicates whether at least part of the space is outdoors.
    - Active (Y/N) - Spaces that are no longer in use (e.g. a venue has closed) will be marked as "N" (inactive).
    - Seasonal Constraints - Indicates whether there are limits on what times of the year a space can be used.
    - Last Modified - The date that information about the space was last updated in this inventory.
    - Additional Notes 1 - The "Additional Notes" fields are used to note any other pertinent information about a space. These are open text fields (unstructured).
    - Additional Notes 1 (FR)
    - Additional Notes 2
    - Additional Notes 2 (FR)
    - Additional Notes 3
    - Additional Notes 3 (FR)
    - Alternate Names - Other names used to refer to a place, including historic names or names in languages other than English and French.
    - Size (Capacity/# seats) - Meant to give an indication of the size of the space. The way this is measured may differ depending on the type of space. For example, the number of seats is a good measure of the size/capacity of a theatre, but that would not be appropriate for other types of spaces. This field has not yet been populated, as the research is still ongoing.
    - Accessibility - Meant to give an indication of the accessibility of a space. This field has not yet been populated, as the research is still ongoing.
    - Apt613 Link - For spaces that do not have a website or another web presence, a link to articles published by Apt613 that mention the space is included.

    Categories:

    - Venue - Available for programming/cultural presentations.
    - Studio - Spaces for creation of cultural products.
    - Learning - Museums, libraries, archives, education, lessons, etc.
    - Community - Community centres, places of worship, fairgrounds, etc.
    - Food - Cafes, restaurants, bars, etc., particularly those that have a venue component to them, plus other restaurants, vineyards, breweries, roasteries, food retail spaces, etc. that the researchers felt were creative spaces or had a certain cultural significance. (Note: it could be argued that almost any food space has cultural significance. We may consider expanding this category in the future. Please feel free to share your thoughts with the Data Steward.)
    - Sport - Fitness, recreational, club, etc.
    - Store - Stores that have a specialty, cultural, or artistic component to them, such as record stores, music instrument stores, or art or photography supply stores.
    - Heritage - Historic sites, archives, etc.
    - Nature - Parks, beaches, etc.
    - Public Art - Murals, sculptures, etc., including artworks that are not in the City's collection.

    Contact: Andrew Cooper
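
    Given the field list above, a quick hedged pandas example of isolating active Learning spaces follows; it assumes a CSV export of the inventory, which is an assumption about the download format.

```python
# Hedged sketch: filtering the inventory to Learning spaces with pandas.
# Assumes a CSV export of the dataset; column names follow the field list
# above, but the exact export format is an assumption.
import pandas as pd

spaces = pd.read_csv("cultural_spaces_inventory.csv")

# Keep museums, libraries, archives, education, lessons, etc.
learning = spaces[spaces["Category"] == "Learning"]

# Active spaces only, using the Y/N flag described above.
active_learning = learning[learning["Active (Y/N)"] == "Y"]
print(active_learning[["Name", "Address", "City"]].head())
```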

  18. AReNA's DHS-GIS Database

    • dataverse.harvard.edu
    Updated Feb 23, 2021
    Cite
    International Food Policy Research Institute (IFPRI) (2021). AReNA’s DHS-GIS Database [Dataset]. http://doi.org/10.7910/DVN/OQIPRW
    Explore at:
    Croissant - a format for machine-learning datasets. Learn more at mlcommons.org/croissant.
    Dataset updated
    Feb 23, 2021
    Dataset provided by
    Harvard Dataverse
    Authors
    International Food Policy Research Institute (IFPRI)
    License

    https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1.1/customlicense?persistentId=doi:10.7910/DVN/OQIPRW

    Time period covered
    1980 - 2019
    Area covered
    Mali, Burundi, Myanmar, Bangladesh, Kenya, Lesotho, Nepal, Benin, Nigeria, Rwanda
    Dataset funded by
    The Bill & Melinda Gates Foundation
    Description

    Advancing Research on Nutrition and Agriculture (AReNA) is a 6-year, multi-country project in South Asia and sub-Saharan Africa funded by the Bill and Melinda Gates Foundation, implemented from 2015 through 2020. The objective of AReNA is to close important knowledge gaps on the links between nutrition and agriculture, with a particular focus on conducting policy-relevant research at scale and crowding in more research on this issue by creating datasets and analytical tools that can benefit the broader research community. Much of the research on agriculture and nutrition is hindered by a lack of data, and many of the datasets that do contain both agriculture and nutrition information are small in size and geographic scope. The AReNA team constructed a large multi-level, multi-country dataset combining nutrition and nutrition-relevant information at the individual and household level from the Demographic and Health Surveys (DHS) with a wide variety of geo-referenced data on agricultural production, agroecology, climate, demography, and infrastructure (GIS data). This dataset includes 60 countries, 184 DHS surveys, and 122,473 clusters. Over one thousand geospatial variables are linked with the DHS. The entire dataset is organized into 13 individual files: DHS_distance, DHS_livestock, DHS_main, DHS_malaria, DHS_NDVI, DHS_nightlight, DHS_pasture and climate (mean), DHS_rainfall, DHS_soil, DHS_SPAM, DHS_suit, DHS_temperature, and DHS_traveltime.
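
    Because the database ships as 13 files keyed to DHS clusters, a typical first step is joining them. The hedged sketch below assumes CSV exports and a shared cluster-identifier column named DHSID; both are illustrative guesses, not documented names.

```python
# Hedged sketch: assembling an analysis table from the 13-file layout.
# File names follow the description (DHS_main, DHS_rainfall, ...); the CSV
# format and the 'DHSID' cluster key are illustrative assumptions.
import pandas as pd

main = pd.read_csv("DHS_main.csv")          # household/individual records
rainfall = pd.read_csv("DHS_rainfall.csv")  # cluster-level rainfall variables
traveltime = pd.read_csv("DHS_traveltime.csv")

# Left-join the geospatial variables onto the main DHS records by cluster.
merged = (
    main
    .merge(rainfall, on="DHSID", how="left")
    .merge(traveltime, on="DHSID", how="left")
)
print(merged.shape)
```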

  19. Two residential districts datasets from Kielce, Poland for building semantic segmentation task

    • scidb.cn
    Updated Sep 29, 2022
    Cite
    Agnieszka Łysak (2022). Two residential districts datasets from Kielce, Poland for building semantic segmentation task [Dataset]. http://doi.org/10.57760/sciencedb.02955
    Explore at:
    Croissant - a format for machine-learning datasets. Learn more at mlcommons.org/croissant.
    Dataset updated
    Sep 29, 2022
    Dataset provided by
    Science Data Bank
    Authors
    Agnieszka Łysak
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Area covered
    Poland, Kielce
    Description

    Today, deep neural networks are widely used in many computer vision problems, including for geographic information systems (GIS) data. This type of data is commonly used for urban analyses and spatial planning. We used orthophotographic images of two residential districts from Kielce, Poland for research including automatic urban sprawl analysis with a Transformer-based neural network application.

    Orthophotomaps were obtained from the Kielce GIS portal. Then, the map was manually masked into building and building-surroundings classes. Finally, the orthophotomap and corresponding classification mask were simultaneously divided into small tiles. This approach is common in image data preprocessing for the learning phase of machine learning algorithms. The data contains two original orthophotomaps from the Wietrznia and Pod Telegrafem residential districts with corresponding masks, and also their tiled version, ready to be provided as training data for machine learning models.

    A Transformer-based neural network underwent a training process on the Wietrznia dataset, targeted at semantic segmentation of the tiles into building and surroundings classes. After that, inference of the model was used to test the model's generalization ability on the Pod Telegrafem dataset. The efficiency of the model was satisfying, so it can be used in automatic semantic building segmentation. The process of dividing the images can then be reversed and the complete classification mask retrieved. This mask can be used for building-area calculations and urban sprawl monitoring, if the research were repeated for GIS data from a wider time horizon.

    Since the dataset was collected from the Kielce GIS portal, as part of the Polish Main Office of Geodesy and Cartography data resource, it may be used only for non-profit and non-commercial purposes, in private or scientific applications, under the law "Ustawa z dnia 4 lutego 1994 r. o prawie autorskim i prawach pokrewnych (Dz.U. z 2006 r. nr 90 poz 631 z późn. zm.)". There are no other legal or ethical considerations in reuse potential.

    Data information is presented below.

    - wietrznia_2019.jpg - orthophotomap of Wietrznia district - used for model's training, as an explanatory image
    - wietrznia_2019.png - classification mask of Wietrznia district - used for model's training, as a target image
    - wietrznia_2019_validation.jpg - one image from Wietrznia district - used for model's validation during the training phase
    - pod_telegrafem_2019.jpg - orthophotomap of Pod Telegrafem district - used for model's evaluation after the training phase
    - wietrznia_2019 - folder with wietrznia_2019.jpg (image) and wietrznia_2019.png (annotation) images, divided into 810 tiles (512 x 512 pixels each); tiles with no information were manually removed, so the training data would contain only informative tiles; tiles were presented to the model during training (images and annotations for fitting the model to the data)
    - wietrznia_2019_vaidation - folder with wietrznia_2019_validation.jpg image divided into 16 tiles (256 x 256 pixels each); tiles were presented to the model during training (images for validating the model's efficiency); it was not part of the training data
    - pod_telegrafem_2019 - folder with pod_telegrafem.jpg image divided into 196 tiles (256 x 256 pixels each); tiles were presented to the model during inference (images for evaluating the model's robustness)

    The dataset was created as described below.

    Firstly, the orthophotomaps were collected from the Kielce Geoportal (https://gis.kielce.eu). The Kielce Geoportal offers a recent map from April 2019. It is an orthophotomap with a resolution of 5 x 5 pixels, constructed from a plane flight at 700 meters above ground height, taken with a camera for vertical photos. Downloading was done by WMS in the open-source QGIS software (https://www.qgis.org), as a 1:500 scale map, then converted to a 1200 dpi PNG image.

    Secondly, the map from the Wietrznia residential district was manually labelled, also in QGIS, at the same scope as the orthophotomap. Annotation was based on land cover map information, also obtained from the Kielce Geoportal. There are two classes - residential building and surrounding. The second map, from the Pod Telegrafem district, was not annotated, since it was used in the testing phase and imitates the situation where there is no annotation for the new data presented to the model.

    Next, the images were converted to RGB JPG images, and the annotation map was converted to an 8-bit GRAY PNG image.

    Finally, the Wietrznia data files were tiled to 512 x 512 pixel tiles, using the Python PIL library. Tiles with no information or a relatively small amount of information (only white background or mostly white background) were manually removed. So, from the 29113 x 15938 pixels orthophotomap, only 810 tiles with corresponding annotations were left, ready to train the machine learning model for the semantic segmentation task. The Pod Telegrafem orthophotomap was tiled with no manual removal, so from the 7168 x 7168 pixels orthophotomap, 197 tiles with 256 x 256 pixels resolution were created. There was also an image of one residential building, used for the model's validation during the training phase; it was not part of the training data, but was part of the Wietrznia residential area. It was a 2048 x 2048 pixel orthophotomap, tiled into 16 tiles of 256 x 256 pixels each.
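
    The tiling step described above is easy to reproduce. The sketch below is a hedged re-implementation with PIL for the 512 x 512 case; paths are illustrative, and the authors' manual removal of uninformative tiles is approximated by a simple whiteness threshold.

```python
# Hedged re-implementation of the tiling step with PIL; paths are
# illustrative, and the manual removal of uninformative (mostly white)
# tiles is approximated by a simple mean-brightness threshold.
import os
import numpy as np
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # orthophotomaps exceed PIL's default limit
TILE = 512

os.makedirs("tiles", exist_ok=True)
image = Image.open("wietrznia_2019.jpg").convert("RGB")
width, height = image.size

kept = 0
for top in range(0, height - TILE + 1, TILE):
    for left in range(0, width - TILE + 1, TILE):
        tile = image.crop((left, top, left + TILE, top + TILE))
        # Skip tiles that are mostly white background (little information).
        if np.array(tile).mean() > 250:
            continue
        tile.save(f"tiles/wietrznia_{top}_{left}.jpg")
        kept += 1

print(f"kept {kept} informative tiles")
```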

  20. 10m Annual Land Use Land Cover (9-class)

    • registry.opendata.aws
    • collections.sentinel-hub.com
    Updated Jul 6, 2023
    Cite
    Impact Observatory (2023). 10m Annual Land Use Land Cover (9-class) [Dataset]. https://registry.opendata.aws/io-lulc/
    Explore at:
    Dataset updated
    Jul 6, 2023
    Dataset provided by
    Impact Observatory (https://www.impactobservatory.com/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset, produced by Impact Observatory, Microsoft, and Esri, displays a global map of land use and land cover (LULC) derived from ESA Sentinel-2 imagery at 10 meter resolution for the years 2017-2023. Each map is a composite of LULC predictions for 9 classes throughout the year in order to generate a representative snapshot of each year. This dataset was generated by Impact Observatory, which used billions of human-labeled pixels (curated by the National Geographic Society) to train a deep learning model for land classification. Each global map was produced by applying this model to the Sentinel-2 annual scene collections from the Microsoft Planetary Computer. Each of the maps has an assessed average accuracy of over 75%. These maps have been improved from Impact Observatory's previous release and provide a relative reduction in the amount of anomalous change between classes, particularly between "Bare" and any of the vegetative classes "Trees," "Crops," "Flooded Vegetation," and "Rangeland". This updated time series of annual global maps is also re-aligned to match the ESA UTM tiling grid for Sentinel-2 imagery. Data can be accessed directly from the Registry of Open Data on AWS, from the STAC 1.0.0 endpoint, or from the IO Store for a specific Area of Interest (AOI).
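
    As one concrete access path, the hedged sketch below queries the collection through Microsoft's Planetary Computer STAC API with pystac-client. The endpoint URL is real; the collection id io-lulc-9-class and the data asset key are assumptions to verify against the catalog.

```python
# Hedged sketch: finding annual LULC items via a public STAC API using
# pystac-client. The Planetary Computer endpoint is real; the collection
# id "io-lulc-9-class" and the "data" asset key are assumptions.
import planetary_computer
from pystac_client import Client

catalog = Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1",
    modifier=planetary_computer.sign_inplace,  # signs asset URLs for download
)

# Search one year of coverage over an area of interest (lon/lat bbox).
search = catalog.search(
    collections=["io-lulc-9-class"],
    bbox=[20.3, 50.8, 20.8, 50.95],  # illustrative AOI
    datetime="2023-01-01/2023-12-31",
)

for item in search.items():
    print(item.id, item.assets["data"].href[:80])
```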
