100+ datasets found
  1. Map feature extraction challenge training and validation data

    • data.usgs.gov
    • catalog.data.gov
    Updated Jul 25, 2025
    Cite
    Margaret Goldman; Joshua Rosera; Graham Lederer; Garth Graham; Asitang Mishra; Alice Yepremyan (2025). Map feature extraction challenge training and validation data [Dataset]. http://doi.org/10.5066/P9FXSPT1
    Explore at:
    Dataset updated
    Jul 25, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Authors
    Margaret Goldman; Joshua Rosera; Graham Lederer; Garth Graham; Asitang Mishra; Alice Yepremyan
    License

    U.S. Government Works (https://www.usa.gov/government-works)
    License information was derived automatically

    Time period covered
    2022 - 2023
    Description

    Extracting useful and accurate information from scanned geologic and other earth science maps is a time-consuming and laborious process involving manual human effort. To address this limitation, the USGS partnered with the Defense Advanced Research Projects Agency (DARPA) to run the AI for Critical Mineral Assessment Competition, soliciting innovative solutions for automatically georeferencing and extracting features from maps. The competition opened for registration in August 2022 and concluded in December 2022. Training, validation, and evaluation data from the map feature extraction challenge are provided here, as well as competition details and a baseline solution. The data were derived from published sources and are provided to the public to support continued development of automated georeferencing and feature extraction tools. References for all maps are included with the data.

  2. Data from: Research on map emotional semantics using deep learning approach

    • tandf.figshare.com
    jpeg
    Updated Feb 9, 2024
    Cite
    Daping Xi; Xini Hu; Lin Yang; Nai Yang; Yanzhu Liu; Han Jiang (2024). Research on map emotional semantics using deep learning approach [Dataset]. http://doi.org/10.6084/m9.figshare.22134351.v1
    Explore at:
    Available download formats: jpeg
    Dataset updated
    Feb 9, 2024
    Dataset provided by
    Taylor & Francis
    Authors
    Daping Xi; Xini Hu; Lin Yang; Nai Yang; Yanzhu Liu; Han Jiang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The main purpose of research on map emotional semantics is to describe and express, through computer technology, the emotional responses people have when observing images. Map application scenarios are increasingly diversified, and map users' growing demand for emotional information brings new challenges for cartography. However, the lack of emotion evaluation in the traditional map-drawing process makes it difficult for the resulting maps to achieve emotional resonance with map users. The key to solving this problem is quantifying the emotional semantics of maps, which can help mapmakers better understand map emotions and improve user satisfaction. This paper quantifies map emotional semantics by applying transfer-learning methods and the efficient computational power of convolutional neural networks (CNNs) to establish the correspondence between visual features and emotions. The main contributions are as follows: (1) a Map Sentiment Dataset containing five discrete emotion categories; (2) three different CNNs (VGG16, VGG19, and InceptionV3) applied to the map sentiment classification task and evaluated on accuracy; (3) experiments over six parameter combinations to determine the best combination of learning rate and batch size; and (4) an analysis, based on the charts and visualization results, of the visual variables that affect the sentiment of a map. The experimental results reveal that the proposed method achieves good accuracy (around 88%) and that the emotional semantics of maps follow some general rules.
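The transfer-learning setup described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the classification head, input size, and hyperparameters are assumptions, and `weights=None` is used here only to keep the sketch offline (a real run would load ImageNet weights).

```python
# Hedged sketch: frozen VGG16 base + small 5-way head for map sentiment
# classification. All layer sizes and hyperparameters are illustrative.
import tensorflow as tf

def build_map_sentiment_model(num_classes=5, input_shape=(224, 224, 3)):
    # weights=None keeps this sketch offline; transfer learning would use
    # weights="imagenet" instead.
    base = tf.keras.applications.VGG16(
        weights=None, include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the convolutional base

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    # Learning rate and batch size were the paper's tuned parameters; the
    # value here is a placeholder.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```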

  3. Terrain Map Image Pairs

    • kaggle.com
    zip
    Updated Nov 19, 2017
    Cite
    Thomas Pappas (2017). Terrain Map Image Pairs [Dataset]. https://www.kaggle.com/tpapp157/terrainimagepairs
    Explore at:
    Available download formats: zip (294648270 bytes)
    Dataset updated
    Nov 19, 2017
    Authors
    Thomas Pappas
    License

    CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)

    Description

    Context

    I created this dataset to train a network to generate realistic looking terrain maps based on simple color region codings. You can see some samples of the result here: https://www.reddit.com/r/MachineLearning/comments/7dwj1q/p_fun_project_mspaint_to_terrain_map_with_gan/

    Content

    This is a dataset of 1360 image pairs. The ground truth image is a random 512x512 pixel crop of terrain from a global map of the Earth. The second image is a color quantized and mode filtered version of the base image to create a very simple terrain region mapping composed of five colors. The five colors correspond to terrain types as follows: blue - water, grey - mountains, green - forest/jungle/marshland, yellow - desert/grassland/glacier, brown - hills/badlands.

  4. Geospatial data for the Vegetation Mapping Inventory Project of Pictured...

    • catalog.data.gov
    Updated Nov 25, 2025
    Cite
    National Park Service (2025). Geospatial data for the Vegetation Mapping Inventory Project of Pictured Rocks National Lakeshore [Dataset]. https://catalog.data.gov/dataset/geospatial-data-for-the-vegetation-mapping-inventory-project-of-pictured-rocks-national-la
    Explore at:
    Dataset updated
    Nov 25, 2025
    Dataset provided by
    National Park Service (http://www.nps.gov/)
    Area covered
    Pictured Rocks
    Description

    The files linked to this reference are the geospatial data created as part of the completion of the baseline vegetation inventory project for the NPS park unit. The current format is an ArcGIS file geodatabase, but older formats may exist as shapefiles. We converted the photointerpreted data into a format usable in a geographic information system (GIS) by employing three fundamental processes: (1) orthorectify, (2) digitize, and (3) develop the geodatabase. All digital map automation was projected in Universal Transverse Mercator (UTM), Zone 16, using the North American Datum of 1983 (NAD83).

    Orthorectify: We orthorectified the interpreted overlays by using OrthoMapper, a softcopy photogrammetric software for GIS. One function of OrthoMapper is to create orthorectified imagery from scanned and unrectified imagery (Image Processing Software, Inc., 2002). The software features a method of visual orientation involving a point-and-click operation that uses existing orthorectified horizontal and vertical base maps. Of primary importance to us, OrthoMapper also has the capability to orthorectify the photointerpreted overlays of each photograph based on the reference information provided.

    Digitize: To produce a polygon vector layer for use in ArcGIS (Environmental Systems Research Institute [ESRI], Redlands, California), we converted each raster-based image mosaic of orthorectified overlays containing the photointerpreted data into a grid format by using ArcGIS. In ArcGIS, we used the ArcScan extension to trace the raster data and produce ESRI shapefiles. We digitally assigned map-attribute codes (both map-class codes and physiognomic modifier codes) to the polygons and checked the digital data against the photointerpreted overlays for line and attribute consistency. Ultimately, we merged the individual layers into a seamless layer.

    Geodatabase: At this stage, the map layer has only map-attribute codes assigned to each polygon. To assign meaningful information to each polygon (e.g., map-class names, physiognomic definitions, links to NVCS types), we produced a feature-class table, along with other supportive tables, and subsequently related them together via an ArcGIS geodatabase. This geodatabase also links the map to other feature-class layers produced from this project, including vegetation sample plots, accuracy assessment (AA) sites, aerial photo locations, and project boundary extent. A geodatabase provides access to a variety of interlocking data sets, is expandable, and equips resource managers and researchers with a powerful GIS tool.

  5. Motor Vehicle Use Map: Trails (Feature Layer)

    • catalog.data.gov
    • gimi9.com
    • +6more
    Updated Jul 11, 2025
    Cite
    U.S. Forest Service (2025). Motor Vehicle Use Map: Trails (Feature Layer) [Dataset]. https://catalog.data.gov/dataset/motor-vehicle-use-map-trails-feature-layer-b6fe4
    Explore at:
    Dataset updated
    Jul 11, 2025
    Dataset provided by
    U.S. Department of Agriculture Forest Service (http://fs.fed.us/)
    Description

    The feature class indicates the specific types of motorized vehicles allowed on the designated routes and their seasons of use. The feature class is designed to be consistent with the MVUM (Motor Vehicle Use Map). It is compiled from the GIS Data Dictionary data and Infra tabular data that the administrative units have prepared for the creation of their MVUMs. Only trails with a symbol value of 5-12, 16, or 17 are Forest Service System trails and contain data concerning their availability for motorized use. This data is published and refreshed on a unit-by-unit basis as needed. Each unit's data must be verified and proved consistent with the published MVUMs prior to publication in the EDW.

  6. OpenStreetMap - Natural Features Map

    • hub.arcgis.com
    • goa-state-gis-esriindia1.hub.arcgis.com
    • +1more
    Updated Mar 25, 2022
    Cite
    GIS Online (2022). OpenStreetMap - Natural Features Map [Dataset]. https://hub.arcgis.com/maps/79f5e865b1fd464184c372d96621e2dc
    Explore at:
    Dataset updated
    Mar 25, 2022
    Dataset authored and provided by
    GIS Online
    Area covered
    Description

    This web map shows natural features point and polygon layers from OSM (OpenStreetMap) in India. OSM is a collaborative, open project to create a freely available and editable map of the world. Geographic information about streets, rivers, borders, points of interest and areas is collected worldwide and stored in a freely accessible database. Everyone can participate and contribute to OSM; the geographic information available on OSM relies entirely on volunteers and contributors. The attributes are given below: Beach, Cave Entrance, Cliff, Glacier, Peak, Spring, Tree, Volcano. These map layers are offered by Esri India Content. The content team updates the map layers quarterly. If you have any questions or comments, please let us know via content@esri.in.

  7. Tiled vector data model for the geographical features of symbolized maps

    • plos.figshare.com
    txt
    Updated Jun 2, 2023
    Cite
    Lin Li; Wei Hu; Haihong Zhu; You Li; Hang Zhang (2023). Tiled vector data model for the geographical features of symbolized maps [Dataset]. http://doi.org/10.1371/journal.pone.0176387
    Explore at:
    Available download formats: txt
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Lin Li; Wei Hu; Haihong Zhu; You Li; Hang Zhang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Electronic maps (E-maps) provide people with convenience in real-world space. Although web map services can display maps on screens, a more important function is their ability to access geographical features. An E-map that is based on raster tiles is inferior to vector tiles in terms of interactive ability because vector maps provide a convenient and effective method to access and manipulate web map features. However, the critical issue regarding rendering tiled vector maps is that geographical features that are rendered in the form of map symbols via vector tiles may cause visual discontinuities, such as graphic conflicts and losses of data around the borders of tiles, which likely represent the main obstacles to exploring vector map tiles on the web. This paper proposes a tiled vector data model for geographical features in symbolized maps that considers the relationships among geographical features, symbol representations and map renderings. This model presents a method to tailor geographical features in terms of map symbols and ‘addition’ (join) operations on the following two levels: geographical features and map features. Thus, these maps can resolve the visual discontinuity problem based on the proposed model without weakening the interactivity of vector maps. The proposed model is validated by two map data sets, and the results demonstrate that the rendered (symbolized) web maps present smooth visual continuity.

  8. National-scale cropland mapping based on spectral-temporal features and...

    • plos.figshare.com
    txt
    Updated May 31, 2023
    Cite
    François Waldner; Matthew C. Hansen; Peter V. Potapov; Fabian Löw; Terence Newby; Stefanus Ferreira; Pierre Defourny (2023). National-scale cropland mapping based on spectral-temporal features and outdated land cover information [Dataset]. http://doi.org/10.1371/journal.pone.0181911
    Explore at:
    Available download formats: txt
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    François Waldner; Matthew C. Hansen; Peter V. Potapov; Fabian Löw; Terence Newby; Stefanus Ferreira; Pierre Defourny
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The lack of sufficient ground truth data has always constrained supervised learning, thereby hindering the generation of up-to-date satellite-derived thematic maps. This is all the more true for those applications requiring frequent updates over large areas such as cropland mapping. Therefore, we present a method enabling the automated production of spatially consistent cropland maps at the national scale, based on spectral-temporal features and outdated land cover information. Following an unsupervised approach, this method extracts reliable calibration pixels based on their labels in the outdated map and their spectral signatures. To ensure spatial consistency and coherence in the map, we first propose to generate seamless input images by normalizing the time series and deriving spectral-temporal features that target salient cropland characteristics. Second, we reduce the spatial variability of the class signatures by stratifying the country and by classifying each stratum independently. Finally, we remove speckle with a weighted majority filter accounting for per-pixel classification confidence. Capitalizing on a wall-to-wall validation data set, the method was tested in South Africa using a 16-year old land cover map and multi-sensor Landsat time series. The overall accuracy of the resulting cropland map reached 92%. A spatially explicit validation revealed large variations across the country and suggests that intensive grain-growing areas were better characterized than smallholder farming systems. Informative features in the classification process vary from one stratum to another but features targeting the minimum of vegetation as well as short-wave infrared features were consistently important throughout the country. Overall, the approach showed potential for routinely delivering consistent cropland maps over large areas as required for operational crop monitoring.
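The speckle-removal step described above (a weighted majority filter accounting for per-pixel classification confidence) can be sketched in NumPy. This is an illustrative reconstruction, not the authors' implementation: the window size and the exact weighting scheme are assumptions.

```python
# Hedged sketch: relabel each pixel to the class with the largest summed
# confidence in its window x window neighbourhood.
import numpy as np

def weighted_majority_filter(labels, confidence, window=3):
    h, w = labels.shape
    pad = window // 2
    # Edge-pad so border pixels see a full window.
    lab = np.pad(labels, pad, mode="edge")
    conf = np.pad(confidence, pad, mode="edge")
    out = np.empty_like(labels)
    classes = np.unique(labels)
    for i in range(h):
        for j in range(w):
            win_lab = lab[i:i + window, j:j + window]
            win_conf = conf[i:i + window, j:j + window]
            # Each neighbour votes for its class, weighted by confidence.
            scores = [win_conf[win_lab == c].sum() for c in classes]
            out[i, j] = classes[int(np.argmax(scores))]
    return out
```

An isolated pixel whose neighbours all agree on another class gets relabelled, which is the intended speckle-removal behaviour.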

  9. Maps of Vegetation Types and Physiographic Features, Imnavait Creek, Alaska...

    • data.nasa.gov
    Updated Apr 1, 2025
    Cite
    nasa.gov (2025). Maps of Vegetation Types and Physiographic Features, Imnavait Creek, Alaska - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/maps-of-vegetation-types-and-physiographic-features-imnavait-creek-alaska-b3509
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Area covered
    Alaska
    Description

    This dataset provides the spatial distribution of vegetation types, soil carbon, and physiographic features in the Imnavait Creek area, Alaska. Specific attributes include vegetation, percent water, glacial geology, soil carbon, a digital elevation model (DEM), surficial geology and surficial geomorphology. Data are also provided on the research grids for georeferencing. The map data are from a variety of sources and encompass the period 1970-06-01 to 2015-08-31.

  10. 🌎 Location Intelligence Data | From Google Map

    • kaggle.com
    zip
    Updated Apr 21, 2024
    Cite
    Azhar Saleem (2024). 🌎 Location Intelligence Data | From Google Map [Dataset]. https://www.kaggle.com/datasets/azharsaleem/location-intelligence-data-from-google-map
    Explore at:
    Available download formats: zip (1911275 bytes)
    Dataset updated
    Apr 21, 2024
    Authors
    Azhar Saleem
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    👨‍💻 Author: Azhar Saleem


    Dataset Overview

    Welcome to the Google Places Comprehensive Business Dataset! This dataset has been meticulously scraped from Google Maps and presents extensive information about businesses across several countries. Each entry in the dataset provides detailed insights into business operations, location specifics, customer interactions, and much more, making it an invaluable resource for data analysts and scientists looking to explore business trends, geographic data analysis, or consumer behaviour patterns.

    Key Features

    • Business Details: Includes unique identifiers, names, and contact information.
    • Geolocation Data: Precise latitude and longitude for pinpointing business locations on a map.
    • Operational Timings: Detailed opening and closing hours for each day of the week, allowing analysis of business activity patterns.
    • Customer Engagement: Data on review counts and ratings, offering insights into customer satisfaction and business popularity.
    • Additional Attributes: Links to business websites, time zone information, and country-specific details enrich the dataset for comprehensive analysis.

    Potential Use Cases

    This dataset is ideal for a variety of analytical projects, including: - Market Analysis: Understand business distribution and popularity across different regions. - Customer Sentiment Analysis: Explore relationships between customer ratings and business characteristics. - Temporal Trend Analysis: Analyze patterns of business activity throughout the week. - Geospatial Analysis: Integrate with mapping software to visualise business distribution or cluster businesses based on location.
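A minimal pandas sketch of the market-analysis use case above, assuming a CSV export with the documented `category`, `rating`, and `review_count` columns (the file name and the review-count threshold are hypothetical):

```python
# Hedged sketch: rank business categories by mean rating, ignoring sparsely
# reviewed businesses so the averages are meaningful.
import pandas as pd

def top_categories_by_rating(csv_path, min_reviews=20):
    df = pd.read_csv(csv_path)
    popular = df[df["review_count"] >= min_reviews]
    return (popular.groupby("category")["rating"]
                   .mean()
                   .sort_values(ascending=False))
```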

    Dataset Structure

    The dataset contains 46 columns, providing a thorough profile for each listed business. Key columns include:

    • business_id: A unique Google Places identifier for each business, ensuring distinct entries.
    • phone_number: The contact number associated with the business. It provides a direct means of communication.
    • name: The official name of the business as listed on Google Maps.
    • full_address: The complete postal address of the business, including locality and geographic details.
    • latitude: The geographic latitude coordinate of the business location, useful for mapping and spatial analysis.
    • longitude: The geographic longitude coordinate of the business location.
    • review_count: The total number of reviews the business has received on Google Maps.
    • rating: The average user rating out of 5 for the business, reflecting customer satisfaction.
    • timezone: The world timezone the business is located in, important for temporal analysis.
    • website: The official website URL of the business, providing further information and contact options.
    • category: The category or type of service the business provides, such as restaurant, museum, etc.
    • claim_status: Indicates whether the business listing has been claimed by the owner on Google Maps.
    • plus_code: A sho...
  11. Satellite images and road-reference data for AI-based road mapping in...

    • data.niaid.nih.gov
    • dataone.org
    • +1more
    zip
    Updated Apr 4, 2024
    Cite
    Sean Sloan; Raiyan Talkhani; Tao Huang; Jayden Engert; William Laurance (2024). Satellite images and road-reference data for AI-based road mapping in Equatorial Asia [Dataset]. http://doi.org/10.5061/dryad.bvq83bkg7
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 4, 2024
    Dataset provided by
    James Cook University
    Vancouver Island University
    Authors
    Sean Sloan; Raiyan Talkhani; Tao Huang; Jayden Engert; William Laurance
    License

    CC0 1.0 (https://spdx.org/licenses/CC0-1.0.html)

    Area covered
    Asia
    Description

    For the purposes of training AI-based models to identify (map) road features in rural/remote tropical regions on the basis of true-colour satellite imagery, and subsequently testing the accuracy of these AI-derived road maps, we produced a dataset of 8904 satellite image ‘tiles’ and their corresponding known road features across Equatorial Asia (Indonesia, Malaysia, Papua New Guinea).

    Methods

    1. INPUT 200 SATELLITE IMAGES

    The main dataset shared here was derived from a set of 200 input satellite images, also provided here. These 200 images are effectively ‘screenshots’ (i.e., reduced-resolution copies) of high-resolution true-colour satellite imagery (~0.5-1m pixel resolution) observed using the Elvis Elevation and Depth spatial data portal (https://elevation.fsdf.org.au/), which here is functionally equivalent to the more familiar Google Earth. Each of these original images was initially acquired at a resolution of 1920x886 pixels. Actual image resolution was coarser than the native high-resolution imagery. Visual inspection of these 200 images suggests a pixel resolution of ~5 meters, given the number of pixels required to span features of familiar scale, such as roads and roofs, as well as the ready discrimination of specific land uses, vegetation types, etc. These 200 images generally spanned either forest-agricultural mosaics or intact forest landscapes with limited human intervention. Sloan et al. (2023) present a map indicating the various areas of Equatorial Asia from which these images were sourced.
    IMAGE NAMING CONVENTION

    A common naming convention applies to satellite images’ file names: XX##.png where:

    XX – denotes the geographical region / major island of Equatorial Asia of the image, as follows: ‘bo’ (Borneo), ‘su’ (Sumatra), ‘sl’ (Sulawesi), ‘pn’ (Papua New Guinea), ‘jv’ (Java), ‘ng’ (New Guinea [i.e., Papua and West Papua provinces of Indonesia])

    ## – denotes the ith image for a given geographical region / major island amongst the original 200 images, e.g., bo1, bo2, bo3…

    2. INTERPRETING ROAD FEATURES IN THE IMAGES

    For each of the 200 input satellite images, road features were visually interpreted and manually digitized to create a reference image dataset with which to train, validate, and test AI road-mapping models, as detailed in Sloan et al. (2023). The reference dataset of road features was digitized using the ‘pen tool’ in Adobe Photoshop. The pen’s ‘width’ was held constant over varying scales of observation (i.e., image ‘zoom’) during digitization. Consequently, at relatively small scales at least, digitized road features likely incorporate vegetation immediately bordering roads. The resultant binary (Road / Not Road) reference images were saved as PNG images with the same image dimensions as the original 200 images.

    3. IMAGE TILES AND REFERENCE DATA FOR MODEL DEVELOPMENT

    The 200 satellite images and the corresponding 200 road-reference images were both subdivided (aka ‘sliced’) into thousands of smaller image ‘tiles’ of 256x256 pixels each. Subsequent to image subdivision, subdivided images were also rotated by 90, 180, or 270 degrees to create additional, complementary image tiles for model development. In total, 8904 image tiles resulted from image subdivision and rotation. These 8904 image tiles are the main data of interest disseminated here. Each image tile entails the true-colour satellite image (256x256 pixels) and a corresponding binary road reference image (Road / Not Road).
    Of these 8904 image tiles, Sloan et al. (2023) randomly selected 80% for model training (during which a model ‘learns’ to recognize road features in the input imagery), 10% for model validation (during which model parameters are iteratively refined), and 10% for final model testing (during which the final accuracy of the output road map is assessed). Here we present these data in two folders accordingly:

    ‘Training’ – contains 7124 image tiles used for model training in Sloan et al. (2023), i.e., 80% of the original pool of 8904 image tiles.

    ‘Testing’ – contains 1780 image tiles used for model validation and model testing in Sloan et al. (2023), i.e., 20% of the original pool of 8904 image tiles, being the combined set of image tiles for model validation and testing in Sloan et al. (2023).
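The subdivision-and-rotation scheme described above can be sketched with Pillow. This is an illustrative reconstruction: the exact crop offsets and which tiles were rotated are not specified in the dataset description, so this version rotates every crop.

```python
# Hedged sketch: slice an input image into non-overlapping 256x256 tiles and
# add 90/180/270-degree rotated copies as augmentation.
from PIL import Image

def make_tiles(img, tile=256, rotations=(90, 180, 270)):
    tiles = []
    w, h = img.size
    for top in range(0, h - tile + 1, tile):
        for left in range(0, w - tile + 1, tile):
            crop = img.crop((left, top, left + tile, top + tile))
            tiles.append(crop)
            for deg in rotations:
                tiles.append(crop.rotate(deg))
    return tiles
```

For a 1920x886 input this yields 7x3 = 21 base crops per image before rotation; a 512x512 input yields 4 crops and 16 tiles total.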

    IMAGE TILE NAMING CONVENTION

    A common naming convention applies to image tiles’ directories and file names, in both the ‘training’ and ‘testing’ folders: XX##_A_B_C_DrotDDD where

    XX – denotes the geographical region / major island of Equatorial Asia of the original input 1920x886 pixel image, as follows: ‘bo’ (Borneo), ‘su’ (Sumatra), ‘sl’ (Sulawesi), ‘pn’ (Papua New Guinea), ‘jv’ (Java), ‘ng’ (New Guinea [i.e., Papua and West Papua provinces of Indonesia])

    ## – denotes the ith image for a given geographical region / major island amongst the original 200 images, e.g., bo1, bo2, bo3…

    A, B, C and D – can all be ignored. These values, which are one of 0, 256, 512, 768, 1024, 1280, 1536, and 1792, are effectively ‘pixel coordinates’ in the corresponding original 1920x886-pixel input image. They were recorded within the names of image tiles’ sub-directories and file names merely to ensure that those names were unique.

    rot – implies an image rotation. Not all image tiles are rotated, so ‘rot’ will appear only occasionally.

    DDD – denotes the degree of image-tile rotation, e.g., 90, 180, 270. Not all image tiles are rotated, so ‘DDD’ will appear only occasionally.

    Note that the designator ‘XX##’ is directly equivalent to the filenames of the corresponding 1920x886-pixel input satellite images, detailed above. Therefore, each image tile can be ‘matched’ with its parent full-scale satellite image. For example, in the ‘training’ folder, the subdirectory ‘Bo12_0_0_256_256’ indicates that the image tile therein (also named ‘Bo12_0_0_256_256’) was sourced from the full-scale image ‘Bo12.png’.
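The naming convention above is regular enough to parse mechanically. The following parser is hypothetical (not part of the dataset), but follows the XX##_A_B_C_D[rotDDD] pattern as documented:

```python
# Hypothetical parser for tile names like 'Bo12_0_0_256_256' or
# 'su3_256_512_512_768rot180', recovering region, image number, rotation,
# and the parent full-scale image filename.
import re

TILE_RE = re.compile(
    r"^(?P<region>[A-Za-z]{2})(?P<img>\d+)"          # XX## designator
    r"_(?P<a>\d+)_(?P<b>\d+)_(?P<c>\d+)_(?P<d>\d+)"  # pixel coordinates
    r"(?:rot(?P<rot>\d+))?$")                        # optional rotation

def parse_tile_name(name):
    m = TILE_RE.match(name)
    if m is None:
        raise ValueError(f"not a tile name: {name!r}")
    return {"region": m["region"].lower(),
            "image": int(m["img"]),
            "rotation": int(m["rot"]) if m["rot"] else 0,
            "parent": f"{m['region']}{m['img']}.png"}
```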

  12. Land Cover Map (2023)

    • data.catchmentbasedapproach.org
    • data.castco.org
    • +1more
    Updated Jul 23, 2024
    Cite
    The Rivers Trust (2024). Land Cover Map (2023) [Dataset]. https://data.catchmentbasedapproach.org/maps/88d5846dfe344746906ce93af2b1e1b0
    Explore at:
    Dataset updated
    Jul 23, 2024
    Dataset authored and provided by
    The Rivers Trust
    Area covered
    Description

    This is a web map service (WMS) for the 10-metre Land Cover Map 2023. The map presents the land surface classified into 21 UKCEH land cover classes, based upon Biodiversity Action Plan broad habitats. UKCEH’s automated land cover algorithms classify 10 m pixels across the whole of the UK. Training data were automatically selected from stable land covers over the interval 2020 to 2022. A Random Forest classifier used these to classify four composite images representing per-season median surface reflectance. Seasonal images were integrated with context layers (e.g., height, aspect, slope, coastal proximity, urban proximity and so forth) to reduce confusion among classes with similar spectra. Land cover was validated by organising the 10 m pixel classification into a land parcel framework (the LCM2023 classified land parcels product). The classified land parcels were compared to known land cover, producing a confusion matrix to determine overall and per-class accuracy.
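The validation arithmetic mentioned above (overall and per-class accuracy from a confusion matrix) is standard and can be written in a few lines. The row/column orientation here (rows = reference, columns = predicted) is an assumption:

```python
# Sketch: overall accuracy = trace / total; per-class (producer's) accuracy
# = correctly classified pixels / reference pixels for that class.
import numpy as np

def accuracy_from_confusion(cm):
    cm = np.asarray(cm, dtype=float)
    overall = np.trace(cm) / cm.sum()
    per_class = np.diag(cm) / cm.sum(axis=1)
    return overall, per_class
```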

  13. Unpublished Digital Bedrock Geologic Map of Saint-Gaudens National Historic...

    • data.amerigeoss.org
    xml, zip
    Updated Aug 21, 2015
    Cite
    United States (2015). Unpublished Digital Bedrock Geologic Map of Saint-Gaudens National Historic Site and Vicinity, New Hampshire and Vermont (NPS, GRD, GRI, SAGA, SAGA digital map) adapted from a U.S. Geological Survey Scientific Investigation Maps by Walsh, Gregory J., et al. [Dataset]. https://data.amerigeoss.org/sl/dataset/unpublished-digital-bedrock-geologic-map-of-saint-gaudens-national-historic-site-and-vicinity-n
    Explore at:
    Available download formats: zip, xml
    Dataset updated
    Aug 21, 2015
    Dataset provided by
    United States
    Area covered
    Vermont
    Description

    The Unpublished Digital Bedrock Geologic Map of Saint-Gaudens National Historic Site and Vicinity, New Hampshire and Vermont is composed of GIS data layers and GIS tables in a 10.1 file geodatabase (saga_geology.gdb), a 10.1 ArcMap (.MXD) map document (saga_geology.mxd), individual 10.1 layer (.LYR) files for each GIS data layer, an ancillary map information (.PDF) document (saga_geology.pdf) which contains source map unit descriptions, as well as other source map text, figures and tables, metadata in FGDC text (.TXT) and FAQ (.HTML) formats, and a GIS readme file (saga_gis_readme.pdf). Please read the saga_gis_readme.pdf for information pertaining to the proper extraction of the file geodatabase and other map files. To request GIS data in ESRI 10.1 shapefile format contact Stephanie O Meara (stephanie.omeara@colostate.edu; see contact information below). The data is also available as a 2.2 KMZ/KML file for use in Google Earth; however, this format version of the map is limited in data layers presented and in access to GRI ancillary table information. Google Earth software is available for free at: http://www.google.com/earth/index.html. Users are encouraged to only use the Google Earth data for basic visualization, and to use the GIS data for any type of data analysis or investigation. The data were completed as a component of the Geologic Resources Inventory (GRI) program, a National Park Service (NPS) Inventory and Monitoring (I&M) Division funded program that is administered by the NPS Geologic Resources Division (GRD). Source geologic maps and data used to complete this GRI digital dataset were provided by the following: U.S. Geological Survey. Detailed information concerning the sources used and their contribution to the GRI product are listed in the Source Citation section(s) of this metadata record (saga_metadata_faq.html; available at http://nrdata.nps.gov/geology/gri_data/gis/saga/saga_metadata_faq.html).
    Users of this data are cautioned about the locational accuracy of features within this dataset. Based on the source map scale of 1:24,000 and United States National Map Accuracy Standards, features are within (horizontally) 12.2 meters or 40 feet of their actual location as presented by this dataset. Users of this data should thus not assume the location of features is exactly where they are portrayed in Google Earth, ArcGIS or other software used to display this dataset. All GIS and ancillary tables were produced as per the NPS GRI Geology-GIS Geodatabase Data Model v. 2.2. (available at: http://science.nature.nps.gov/im/inventory/geology/GeologyGISDataModel.cfm). The GIS data projection is NAD83, UTM Zone 18N; however, for the KML/KMZ format the data is projected upon export to WGS84 Geographic, the native coordinate system used by Google Earth. The data is within the area of interest of Saint-Gaudens National Historic Site.
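The 40 ft / 12.2 m figure follows directly from the National Map Accuracy Standards horizontal tolerance of 0.02 inch at map scale; a quick arithmetic check:

```python
# Horizontal tolerance of 0.02 inch at a 1:24,000 map scale,
# converted to ground distance (matches the 40 ft / 12.2 m stated above).
scale = 24_000
ground_inches = 0.02 * scale          # 480 inches on the ground
ground_feet = ground_inches / 12      # 40 feet
ground_meters = ground_feet * 0.3048  # ~12.2 meters
print(ground_feet, round(ground_meters, 1))  # 40.0 12.2
```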

  14. Base Map Update Years: 2020-2024

    • data.virginia.gov
    • catalog.data.gov
    Updated Jul 25, 2025
    + more versions
    Cite
    Loudoun County (2025). Base Map Update Years: 2020-2024 [Dataset]. https://data.virginia.gov/dataset/base-map-update-years-2020-2024
    Explore at:
    Available download formats: html, arcgis geoservices rest api
    Dataset updated
    Jul 25, 2025
    Dataset provided by
    Loudoun County GIS
    Authors
    Loudoun County
    Description

    Loudoun County, Office of Mapping and Geographic Information (OMAGI) has a seamless countywide base map geodatabase that can be used as a reference when mapping all other data. Base Map data layers include planimetric (buildings, roads, miscellaneous cultural features), environmental (hydrology, forest cover), and topographic (elevation contours and spot heights) features. Each base map feature has an update date field associated with it which shows the year when that particular feature was last updated.

    The countywide remapping project, conducted in two phases, was undertaken to produce the current data set. Phase I, using 2002 Virginia Base Mapping Program digital scanned imagery, and Phase II, using 2004 scanned aerial photography, comprise the initial re-map effort. The initial project was completed in 2005 and now undergoes annual updates. The first annual update, Phase III, was derived from 2005 imagery and completed in fall 2006. The second annual update, Phase IV, was derived from 2007 imagery and completed in spring 2007. Phase V of the base map updates was completed in late 2008. The most recent updates were derived from 2024 imagery and completed in summer 2025.

    Disclaimer Loudoun County is not liable for any use of or reliance upon this map or data, or any information contained herein. While reasonable efforts have been made to obtain accurate data, the County makes no warranty, expressed or implied, as to its accuracy, completeness, or fitness for use of any purpose.

  15. The lightweight methods based on DETR.

    • plos.figshare.com
    xls
    Updated Sep 24, 2025
    Cite
    Guoguang Hua; Fangfang Wu; Guangzhao Hao; Chenbo Xia; Li Li (2025). The lightweight methods based on DETR. [Dataset]. http://doi.org/10.1371/journal.pone.0332714.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    Sep 24, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Guoguang Hua; Fangfang Wu; Guangzhao Hao; Chenbo Xia; Li Li
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Small object detection is an essential but challenging task in computer vision. Transformer-based algorithms have demonstrated remarkable performance in computer vision tasks. Nevertheless, they suffer from inadequate feature extraction for small objects, and they are difficult to deploy on resource-constrained platforms due to their heavy computational burden. To tackle these problems, an efficient local-global fusion Transformer (ELFT) is proposed for small object detection, based on attention and a grouping strategy. Specifically, we first design an efficient local-global fusion attention (ELGFA) mechanism to extract sufficient location features and integrate detailed information from feature maps, thereby promoting accuracy. Besides, we present a grouped feature update module (GFUM) to reduce computational complexity by alternately updating high-level and low-level features within each group. Furthermore, a broadcast context module (CB) is introduced to obtain richer context information, further enhancing the ability to detect small objects. Extensive experiments conducted on three benchmarks, i.e., Remote Sensing Object Detection (RSOD), NWPU VHR-10 and PASCAL VOC2007, achieve 95.8%, 94.3% and 85.2% mean average precision (mAP), respectively. Compared to DINO, the number of parameters is reduced by 10.4%, and the floating point operations (FLOPs) are reduced by 22.7%. The experimental results demonstrate the efficacy of ELFT in small object detection tasks, while maintaining an attractive level of computational complexity.

  16. GAP-USGS 15 West Webmap

    • arc-gis-hub-home-arcgishub.hub.arcgis.com
    • hub.arcgis.com
    Updated Jul 1, 2015
    Cite
    Esri Conservation Program (2015). GAP-USGS 15 West Webmap [Dataset]. https://arc-gis-hub-home-arcgishub.hub.arcgis.com/maps/6add52a180354198a2d60285a603ccb2
    Explore at:
    Dataset updated
    Jul 1, 2015
    Dataset provided by
    Esri (http://esri.com/)
    Authors
    Esri Conservation Program
    Area covered
    Description

    This webmap features the USGS GAP application of the vegetation cartography design based on NVCS mapping being done at the Alliance level by the California Native Plant Society (CNPS), the California Dept of Fish and Game (CDFG), and the US National Park Service, combined with Ecological Systems Level mapping being done by USGS GAP, Landfire and Natureserve. Although the latter are using 3 different approaches to mapping, this project adopted a common cartography and a common master crossover in order to allow them to be used interchangeably as complements to the detailed NVCS Alliance & Macrogroup Mapping being done in Calif by the California Native Plant Society (CNPS) and Calif Dept of Fish & Wildlife (CDFW). A primary goal of this project was to develop ecological layers to use as overlays on top of high-resolution imagery, in order to help interpret and better understand the natural landscape. You can see the source national GAP rasters by clicking on either of the "USGS GAP Landcover Source RASTER" layers at the bottom of the contents list. Using polygons has several advantages: polygons are how most conservation plans and land decisions/management are done, so polygon-based outputs are more directly usable in management and planning. Unlike rasters, polygons permit webmaps with clickable links to provide additional information about that ecological community. At the analysis level, polygons allow vegetation/ecological systems depicted to be enriched with additional ecological attributes for each polygon from multiple overlay sources, be they raster or vector. In this map, the "Gap Mac base-mid scale" layers are enriched with links to USGS/USNVC macrogroup summary reports, and the "Gap Eco base scale" layers are enriched with links to the Natureserve Ecological Systems summary reports. Comparison with finer-scale ground ecological mapping is provided by the "Ecol Overlay" layers of Alliance and Macrogroup Mapping from CNPS/CDFW.
    The CNPS Vegetation Program has worked for over 15 years to provide standards and tools for identifying and representing vegetation, as an important feature of California's natural heritage and biodiversity. Many knowledgeable ecologists and botanists support the program as volunteers and paid staff. Through grants, contracts, and grass-roots efforts, CNPS collects field data and compiles information into reports, manuals, and maps on California's vegetation, ecology and rare plants in order to better protect and manage them. We provide these services to governmental, non-governmental and other organizations, and we collaborate on vegetation resource assessment projects around the state. CNPS is also the publisher of the authoritative Manual of California Vegetation; you can purchase a copy HERE. To support the work of the CNPS, please JOIN NOW and become a member! The CDFG Vegetation Classification and Mapping Program develops and maintains California's expression of the National Vegetation Classification System. We implement its use through assessment and mapping projects in high-priority conservation and management areas, through training programs, and through working continuously on best management practices for field assessment, classification of vegetation data, and fine-scale vegetation mapping.
    HOW THE OVERLAY LAYERS WERE CREATED: Nserve and GapLC Sources: Early shortcomings in the NVC standard led to Natureserve's development of a mid-scale mapping-friendly "Ecological Systems" standard roughly corresponding to the "Group" level of the NVC, which facilitated NVC-based mapping of entire continents. Current scientific work is leading to the incorporation of Ecological Systems into the NVC as group and macrogroup concepts are revised. Natureserve and Gap Ecological Systems layers differ slightly even though both were created from 30m landsat data and both follow the NVC-related Ecological Systems Classification curated by Natureserve.
    In either case, the vector overlay was created by first enforcing a .3ha minimum mapping unit, which required deleting any classes consisting of fewer than 4 contiguous landsat cells either side-side or cornerwise. This got around the statistical problem of numerous single-cell classes with types that seemed improbable given their matrix, and would have been inaccurate to use as an n=1 sample compared to the weak but usable n=4 sample. A primary goal in this elimination was to best preserve riparian and road features that might only be one pixel wide, hence the use of cornerwise contiguous groupings. Eliminated cell groups were absorbed into whatever neighboring class they shared the longest boundary with. The remaining raster groups were vectorized with light simplification to smooth out the stairstep patterns of raster data and hopefully improve the fidelity of the boundaries with the landscape. The resultant vectors show a range of fidelity with the landscape; where there is less apparent fidelity, it must be remembered that ecosystems are normally classified with a mixture of visible and non-visible characteristics including soil, elevation and slope. Boundaries can be assigned based on the difference between 10% shrub cover and 20% shrub cover. Often large landscape areas would create "godzilla" polygons of more than 50,000 vertices, which can affect performance. These were eliminated using SIMPLIFY POLYGONS to reduce vertex spacing from 30m down to 50-60m where possible. Where not possible, DICE was used, which bisects all large polygons with arbitrary internal divisions until no polygon has more than 50,000 vertices. To create midscale layers, ecological systems were dissolved into the macrogroups that they belonged to and resymbolized on macrogroup. This was another frequent source for godzillas as larger landscape units were delineated, so SIMPLIFY and DICE were then run again.
    Where the base ecol system tiles could only be served up by individual partition tile, macrogroups typically exhibited a 10-1 or 20-1 reduction in feature count, allowing them to be assembled into single integrated map services by region, i.e., NW, SW. CNPS / CDFW / National Park Service Sources: (see also base service definition page) Unlike the Landsat-based raster modelling of the Natureserve and Gap national ecological systems, the CNPS/CDFW/NPS data date back to the origin of the National Vegetation Classification effort to map the US national parks in the mid 1990's.
    These mapping efforts are a hybrid of photo-interpretation, satellite and corollary data to create draft ecological land units, which are then sampled by field crews and traditional vegetation plot surveys to quantify and analyze vegetation composition and distribution into the final vector boundaries of the formal NVC classes identified and classified. As such, these are much more accurate maps, but the tradeoff is they are only done on one field project area at a time, so there is not yet a national or even statewide coverage of these detailed maps.
    However, with almost two-thirds of California already mapped, that time is approaching. The challenge in creating standard map layers for this wide diversity of projects over the 2 decades since NVC began is the extensive evolution in the NVC standard itself as well as evolution in the field techniques and tools. To create a consistent set of map layers, a master crosswalk table was built using every different classification known at the time each map was created and then crosswalking each as best as could be done into a master list of the currently-accepted classifications. This field is called the "NVC_NAME" in each of these layers, and it contains a mixture of scientific names and common names at many levels of the classification from association to division, whatever the ecologists were able to determine at the time. For further precision, this field is split out into scientific name equivalents and common name equivalents.
    MAP LAYER NAMING: The data sublayers in this webmap are all based on the US National Vegetation Classification, a partnership of the USGS GAP program, US Forest Service, Ecological Society of America and Natureserve, with adoption and support from many federal & state agencies and nonprofit conservation groups. The USNVC grew out of the US National Park Service Vegetation Mapping Program, a mid-1990's effort led by The Nature Conservancy, Esri and the University of California. The classification standard is now an international standard, with associated ecological mapping occurring around the world. NVC is a hierarchical taxonomy of 8 levels, from top down: Class, Subclass, Formation, Division, Macrogroup, Group, Alliance, Association. The layers in this webmap represent 4 distinct programs: 1. The California Native Plant Society/Calif Dept of Fish & Wildlife Vegetation Classification and Mapping Program (Full Description of these layers is at the CNPS MS10 Service Registration Page and Cnps MS10B Service Registration Page. 2.
    USGS Gap Protected Areas Database, full description at the PADUS registration page. 3. USGS Gap Landcover, full description below. 4. Natureserve Ecological Systems, full description below.
    LAYER NAMING: All layer names follow this pattern: Source - Program - Level - Scale - Region. Source - Program = who created the data: Nserve = Natureserve, GapLC = USGS Gap Program Landcover Data, PADUS = USGS Gap Protected Areas of the USA program, Cnps/Cdfw = California Native Plant Society/Calif Dept of Fish & Wildlife, often followed by the project name such as: SFhill = Sierra Foothills, Marin Open Space, MMWD = Marin Municipal Water District etc. National Parks are included and may be named by their standard 4-letter code, i.e., YOSE = Yosemite, PORE = Point Reyes. Level: The level in the NVC Hierarchy which this layer is based on: Base = Alliances and Associations Mac =
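The minimum-mapping-unit step described above (dropping class regions of fewer than 4 cornerwise-contiguous cells) can be sketched on a toy raster. This is our illustration, not the project's actual tooling; it simply masks dropped cells to 0 rather than absorbing them into the neighbor with the longest shared boundary:

```python
# Toy version of the minimum-mapping-unit filter: keep a class region only
# if it spans at least 4 cells under 8-connectivity (cornerwise contiguity).
# Dropped cells are set to 0 here instead of being merged into a neighbor.
import numpy as np
from scipy import ndimage

classes = np.array([
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [3, 1, 1, 2],
    [3, 3, 1, 2],
])
eight = np.ones((3, 3), dtype=int)  # 8-connected structuring element
out = np.zeros_like(classes)
for value in np.unique(classes):
    labels, n = ndimage.label(classes == value, structure=eight)
    for region in range(1, n + 1):
        if (labels == region).sum() >= 4:   # meets the 4-cell minimum
            out[labels == region] = value
print(out)
```

Here the 3-cell patch of class 3 falls below the minimum and is removed, while the larger class 1 and class 2 regions survive.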

  17. Global Map App Market Research Report: By Application (Navigation,...

    • wiseguyreports.com
    Updated Aug 23, 2025
    Cite
    (2025). Global Map App Market Research Report: By Application (Navigation, Location-Based Services, Fleet Management, Geographic Information Systems), By End Use (Personal, Commercial, Government, Transportation and Logistics), By Platform (iOS, Android, Web-Based, Windows), By Functionality (Offline Maps, Real-Time Traffic Updates, Route Optimization, Augmented Reality) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2035 [Dataset]. https://www.wiseguyreports.com/reports/map-app-market
    Explore at:
    Dataset updated
    Aug 23, 2025
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Aug 25, 2025
    Area covered
    North America, Europe, Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2023
    REGIONS COVERED: North America, Europe, APAC, South America, MEA
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2024: 29.6 (USD Billion)
    MARKET SIZE 2025: 31.2 (USD Billion)
    MARKET SIZE 2035: 52.0 (USD Billion)
    SEGMENTS COVERED: Application, End Use, Platform, Functionality, Regional
    COUNTRIES COVERED: US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA
    KEY MARKET DYNAMICS: Technological advancements in mapping, Increasing smartphone penetration, Rising demand for navigation services, Growing popularity of location-based services, Integration with IoT and smart devices
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: Lyft, TomTom, Trimble, Microsoft, HERE Technologies, Google, Waze, Mapbox, MapQuest, Uber, Apple, OpenStreetMap, Bing Maps, ESRI, Foursquare
    MARKET FORECAST PERIOD: 2025 - 2035
    KEY MARKET OPPORTUNITIES: Location-based advertising solutions, Real-time traffic updates integration, Augmented reality navigation features, Enhanced user-generated content, Advanced analytics for businesses
    COMPOUND ANNUAL GROWTH RATE (CAGR): 5.2% (2025 - 2035)
  18. Testing Dashboard Map Base Feature Layer (PRODUCTION)

    • alaska-coronavirus-vaccine-outreach-alaska-dhss.hub.arcgis.com
    Updated May 19, 2021
    Cite
    Alaska Department of Health and Social Services (2021). Testing Dashboard Map Base Feature Layer (PRODUCTION) [Dataset]. https://alaska-coronavirus-vaccine-outreach-alaska-dhss.hub.arcgis.com/datasets/testing-dashboard-map-base-feature-layer-production
    Explore at:
    Dataset updated
    May 19, 2021
    Dataset authored and provided by
    Alaska Department of Health and Social Services
    Area covered
    Description

    Feature layer generated from running the Join Features solution

  19. NHD HUC8 Shapefile: Patuxent - 02060006

    • noaa.hub.arcgis.com
    Updated Mar 27, 2024
    + more versions
    Cite
    NOAA GeoPlatform (2024). NHD HUC8 Shapefile: Patuxent - 02060006 [Dataset]. https://noaa.hub.arcgis.com/maps/19b0a767615e49d4975fe71ee0bdcaa6
    Explore at:
    Dataset updated
    Mar 27, 2024
    Dataset provided by
    National Oceanic and Atmospheric Administration (http://www.noaa.gov/)
    Authors
    NOAA GeoPlatform
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Area covered
    Description

    Access National Hydrography Products
    The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that make up the nation's surface water drainage system. NHD data was originally developed at 1:100,000-scale and exists at that scale for the whole country. This high-resolution NHD, generally developed at 1:24,000/1:12,000 scale, adds detail to the original 1:100,000-scale NHD. (Data for Alaska, Puerto Rico and the Virgin Islands was developed at high-resolution, not 1:100,000 scale.) Local resolution NHD is being developed where partners and data exist. The NHD contains reach codes for networked features, flow direction, names, and centerline representations for areal water bodies. Reaches are also defined on waterbodies and the approximate shorelines of the Great Lakes, the Atlantic and Pacific Oceans and the Gulf of Mexico. The NHD also incorporates the National Spatial Data Infrastructure framework criteria established by the Federal Geographic Data Committee.
    The NHD is a national framework for assigning reach addresses to water-related entities, such as industrial discharges, drinking water supplies, fish habitat areas, and wild and scenic rivers. Reach addresses establish the locations of these entities relative to one another within the NHD surface water drainage network, much like addresses on streets. Once linked to the NHD by their reach addresses, the upstream/downstream relationships of these water-related entities--and any associated information about them--can be analyzed using software tools ranging from spreadsheets to geographic information systems (GIS). GIS can also be used to combine NHD-based network analysis with other data layers, such as soils, land use and population, to help understand and display their respective effects upon one another. Furthermore, because the NHD provides a nationally consistent framework for addressing and analysis, water-related information linked to reach addresses by one organization (national, state, local) can be shared with other organizations and easily integrated into many different types of applications to the benefit of all.
    Statements of attribute accuracy are based on accuracy statements made for U.S. Geological Survey Digital Line Graph (DLG) data, which is estimated to be 98.5 percent. One or more of the following methods were used to test attribute accuracy: manual comparison of the source with hardcopy plots; symbolized display of the DLG on an interactive computer graphic system; selected attributes that could not be visually verified on plots or on screen were interactively queried and verified on screen. In addition, software validated feature types and characteristics against a master set of types and characteristics, checked that combinations of types and characteristics were valid, and that types and characteristics were valid for the delineation of the feature. Feature types, characteristics, and other attributes conform to the Standards for National Hydrography Dataset (USGS, 1999) as of the date they were loaded into the database. All names were validated against a current extract from the Geographic Names Information System (GNIS). The entry and identifier for the names match those in the GNIS. The association of each name to reaches has been interactively checked; however, operator error could in some cases apply a name to a wrong reach.
    Points, nodes, lines, and areas conform to topological rules. Lines intersect only at nodes, and all nodes anchor the ends of lines. Lines do not overshoot or undershoot other lines where they are supposed to meet. There are no duplicate lines. Lines bound areas and lines identify the areas to the left and right of the lines. Gaps and overlaps among areas do not exist. All areas close.
    The completeness of the data reflects the content of the sources, which most often are the published USGS topographic quadrangle and/or the USDA Forest Service Primary Base Series (PBS) map. The USGS topographic quadrangle is usually supplemented by Digital Orthophoto Quadrangles (DOQs). Features found on the ground may have been eliminated or generalized on the source map because of scale and legibility constraints. In general, streams longer than one mile (approximately 1.6 kilometers) were collected. Most streams that flow from a lake were collected regardless of their length. Only definite channels were collected, so not all swamp/marsh features have stream/rivers delineated through them. Lake/ponds having an area greater than 6 acres were collected. Note, however, that these general rules were applied unevenly among maps during compilation. Reach codes are defined on all features of type stream/river, canal/ditch, artificial path, coastline, and connector. Waterbody reach codes are defined on all lake/pond and most reservoir features. Names were applied from the GNIS database. Detailed capture conditions are provided for every feature type in the Standards for National Hydrography Dataset available online through https://prd-wret.s3-us-west-2.amazonaws.com/assets/palladium/production/atoms/files/NHD%201999%20Draft%20Standards%20-%20Capture%20conditions.PDF.
    Statements of horizontal positional accuracy are based on accuracy statements made for U.S. Geological Survey topographic quadrangle maps. These maps were compiled to meet National Map Accuracy Standards. For horizontal accuracy, this standard is met if at least 90 percent of points tested are within 0.02 inch (at map scale) of the true position. Additional offsets to positions may have been introduced where feature density is high to improve the legibility of map symbols. In addition, the digitizing of maps is estimated to contain a horizontal positional error of less than or equal to 0.003 inch standard error (at map scale) in the two component directions relative to the source maps. Visual comparison between the map graphic (including digital scans of the graphic) and plots or digital displays of points, lines, and areas is used as control to assess the positional accuracy of digital data. Digital map elements along the adjoining edges of data sets are aligned if they are within a 0.02 inch tolerance (at map scale). Features with like dimensionality (for example, features that all are delineated with lines), with or without like characteristics, that are within the tolerance are aligned by moving the features equally to a common point. Features outside the tolerance are not moved; instead, a feature of type connector is added to join the features.
    Statements of vertical positional accuracy for elevation of water surfaces are based on accuracy statements made for U.S. Geological Survey topographic quadrangle maps. These maps were compiled to meet National Map Accuracy Standards. For vertical accuracy, this standard is met if at least 90 percent of well-defined points tested are within one-half contour interval of the correct value. Elevations of water surface printed on the published map meet this standard; the contour intervals of the maps vary. These elevations were transcribed into the digital data; the accuracy of this transcription was checked by visual comparison between the data and the map.
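The reach-address idea described above can be sketched as a simple graph walk: once entities are linked to reach codes, upstream/downstream questions reduce to traversals of the drainage network. The reach codes and connectivity below are invented examples, not actual NHD data:

```python
# Sketch of reach addressing: each reach points to the next reach downstream,
# so "what lies downstream of X?" is a walk along the mapping.
# The 14-digit reach codes and their connectivity are invented.
downstream = {
    "02060006000001": "02060006000003",
    "02060006000002": "02060006000003",
    "02060006000003": "02060006000004",
}

def reaches_downstream(reach: str) -> list[str]:
    """Return every reach encountered walking downstream from `reach`."""
    path = []
    while reach in downstream:
        reach = downstream[reach]
        path.append(reach)
    return path

print(reaches_downstream("02060006000001"))  # ['02060006000003', '02060006000004']
```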

  20. Map-Based Localization for ADAS Market Research Report 2033

    • researchintelo.com
    csv, pdf, pptx
    Updated Oct 1, 2025
    Cite
    Research Intelo (2025). Map-Based Localization for ADAS Market Research Report 2033 [Dataset]. https://researchintelo.com/report/map-based-localization-for-adas-market
    Explore at:
    Available download formats: csv, pptx, pdf
    Dataset updated
    Oct 1, 2025
    Dataset authored and provided by
    Research Intelo
    License

    https://researchintelo.com/privacy-and-policy

    Time period covered
    2024 - 2033
    Area covered
    Global
    Description

    Map-Based Localization for ADAS Market Outlook



    According to our latest research, the Global Map-Based Localization for ADAS market size was valued at $1.67 billion in 2024 and is projected to reach $8.94 billion by 2033, expanding at a robust CAGR of 20.5% during 2024–2033. One of the primary factors fueling this remarkable growth is the accelerating integration of advanced driver-assistance systems (ADAS) in both passenger and commercial vehicles, driven by increasing safety regulations and consumer demand for enhanced driving experiences. The convergence of high-precision mapping technologies with real-time sensor data is enabling automakers and technology providers to deliver more reliable, context-aware automation features, thereby propelling the adoption of map-based localization solutions worldwide.
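The growth figures quoted above are internally consistent; a quick check of the implied compound annual growth rate:

```python
# Sanity check of the figures above: $1.67B (2024) growing to $8.94B (2033)
# over a 9-year span implies the stated ~20.5% CAGR.
start, end, years = 1.67, 8.94, 2033 - 2024
cagr = (end / start) ** (1 / years) - 1
print(round(cagr * 100, 1))  # 20.5
```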



    Regional Outlook



    North America continues to dominate the Map-Based Localization for ADAS market, accounting for the largest share of global revenues in 2024. The region’s leadership stems from its mature automotive ecosystem, early adoption of autonomous vehicle technologies, and the presence of leading ADAS technology providers and OEMs. The United States, in particular, has witnessed widespread investments in R&D, government-led safety mandates, and strategic partnerships among automakers, mapping companies, and sensor manufacturers. With a market value exceeding $600 million in 2024, North America benefits from robust infrastructure, high consumer awareness regarding vehicle safety, and a favorable policy environment that encourages the deployment of next-generation ADAS features. As a result, the region is expected to maintain its leadership position throughout the forecast period, though its market share may gradually shift as other regions accelerate adoption.



    The Asia Pacific region is projected to be the fastest-growing market for map-based localization for ADAS, registering a CAGR exceeding 24% from 2024 to 2033. This rapid expansion is fueled by surging vehicle production, increasing urbanization, and government initiatives promoting road safety and smart mobility solutions in key countries such as China, Japan, and South Korea. Automotive OEMs in the region are aggressively integrating ADAS features into new vehicle models, while local technology firms and global players are investing heavily in mapping, sensor, and AI-driven localization technologies. The region’s vast population, rising disposable incomes, and growing awareness of traffic safety are further accelerating demand. Additionally, the expansion of 5G connectivity and the rollout of smart city projects are laying the groundwork for widespread adoption of advanced localization systems, positioning Asia Pacific as a pivotal growth engine for the global market.



    Emerging economies in Latin America, the Middle East, and Africa are experiencing a gradual but steady uptake of map-based localization for ADAS solutions. While these regions currently account for a smaller share of the global market, their long-term prospects are promising due to increasing vehicle penetration rates and ongoing efforts to modernize transportation infrastructure. However, adoption is tempered by challenges such as limited high-definition mapping coverage, variable regulatory frameworks, and affordability concerns among end-users. Localized demand is also influenced by unique road conditions and the need for solutions tailored to regional driving environments. As governments and private sector stakeholders collaborate to address infrastructure and policy gaps, these regions are expected to emerge as important contributors to market growth in the latter half of the forecast period.



    Report Scope





    Attributes      Details
    Report Title    Map-Based Localization for ADAS Market Research Report 2033
    By Component    Hardware, Software, Services
    By Technology   LiDAR, Camera, Radar, GPS, Sensor Fusion, Others
    By Applicati
