The National Hydrography Dataset Plus High Resolution (NHDPlus High Resolution) maps the lakes, ponds, streams, rivers, and other surface waters of the United States. Created by the US Geological Survey, NHDPlus High Resolution provides mean annual flow and velocity estimates for rivers and streams. Additional attributes provide connections between features, facilitating complex analyses. For more information on the NHDPlus High Resolution dataset, see the User's Guide for the National Hydrography Dataset Plus (NHDPlus) High Resolution.

Dataset Summary

Phenomenon Mapped: Surface waters and related features of the United States and associated territories
Geographic Extent: The contiguous United States, Hawaii, portions of Alaska, Puerto Rico, Guam, US Virgin Islands, Northern Mariana Islands, and American Samoa
Projection: Web Mercator Auxiliary Sphere
Visible Scale: Visible at all scales, but the layer draws best at scales larger than 1:1,000,000
Source: USGS
Update Frequency: Annual
Publication Date: July 2022

This layer was symbolized in the ArcGIS Map Viewer; while the features will draw in the Classic Map Viewer, the advanced symbology will not. Prior to publication, the network and non-network flowline feature classes were combined into a single flowline layer. Similarly, the Area and Waterbody feature classes were merged under a single schema. Attribute fields were added to the flowline and waterbody layers to simplify symbology and enhance the layer's pop-ups. Fields added include Pop-up Title, Pop-up Subtitle, Esri Symbology (waterbodies only), and Feature Code Description. All other attributes are from the original dataset. No-data values -9999 and -9998 were converted to Null values.

What can you do with this layer?

Feature layers work throughout the ArcGIS system. Generally your workflow with feature layers will begin in ArcGIS Online or ArcGIS Pro. Below are just a few of the things you can do with a feature service in Online and Pro.

ArcGIS Online
- Add this layer to a map in the map viewer. The layer, or a map containing it, can be used in an application.
- Change the layer's transparency and set its visibility range.
- Open the layer's attribute table and make selections. Selections made in the map or table are reflected in the other. Center on selection allows you to zoom to features selected in the map or table; show selected records allows you to view the selected records in the table.
- Apply filters. For example, you can set a filter to show larger streams and rivers using the mean annual flow attribute or the stream order attribute (a scripted example follows this entry).
- Change the layer's style and symbology.
- Add labels and set their properties.
- Customize the pop-up.
- Use as an input to the ArcGIS Online analysis tools. This layer works well as a reference layer with the trace downstream and watershed tools. The buffer tool can be used to draw protective boundaries around streams, and the extract data tool can be used to create copies of portions of the data.

ArcGIS Pro
- Add this layer to a 2D or 3D map.
- Use as an input to geoprocessing. For example, copy features allows you to select and then export portions of the data to a new feature class.
- Change the symbology and the attribute field used to symbolize the data.
- Open the table and make interactive selections with the map.
- Modify the pop-ups.
- Apply definition queries to create subsets of the layer.

This layer is part of the ArcGIS Living Atlas of the World, which provides an easy way to explore the landscape layers and many other beautiful and authoritative maps on hundreds of topics.

Questions?
Please leave a comment below if you have a question about this layer, and we will get back to you as soon as possible.
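For users who script these filters rather than set them in the viewer, here is a minimal sketch using the ArcGIS API for Python. The service URL and the stream order field name (StreamOrde) are placeholders; check this layer's attribute table for the actual values.

```python
# A minimal sketch of applying a stream-order filter with the ArcGIS API for
# Python. The item URL and field names are assumptions, not the real service.
from arcgis.gis import GIS
from arcgis.features import FeatureLayer

gis = GIS()  # anonymous connection to ArcGIS Online

# Hypothetical URL of the NHDPlus High Resolution flowline layer
layer = FeatureLayer("https://services.arcgis.com/.../FeatureServer/0")

# Keep only larger streams and rivers: stream order of 5 or higher
results = layer.query(
    where="StreamOrde >= 5",
    out_fields="GNIS_Name,StreamOrde",
    return_geometry=False,
)
for feature in results.features:
    print(feature.attributes)
```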
The National Hydrography Dataset Plus (NHDPlus) maps the lakes, ponds, streams, rivers, and other surface waters of the United States. Created by the US EPA Office of Water and the US Geological Survey, the NHDPlus provides mean annual and monthly flow estimates for rivers and streams. Additional attributes provide connections between features, facilitating complex analyses. For more information on the NHDPlus dataset, see the NHDPlus v2 User Guide.

Dataset Summary

Phenomenon Mapped: Surface waters and related features of the United States and associated territories, not including Alaska
Geographic Extent: The United States (not including Alaska), Puerto Rico, Guam, US Virgin Islands, Marshall Islands, Northern Mariana Islands, Palau, Federated States of Micronesia, and American Samoa
Projection: Web Mercator Auxiliary Sphere
Visible Scale: Visible at all scales, but the layer draws best at scales larger than 1:1,000,000
Source: EPA and USGS
Update Frequency: There is no new data since this 2019 version, so no updates are planned
Publication Date: March 13, 2019

Prior to publication, the NHDPlus network and non-network flowline feature classes were combined into a single flowline layer. Similarly, the NHDPlus Area and Waterbody feature classes were merged under a single schema. Attribute fields were added to the flowline and waterbody layers to simplify symbology and enhance the layer's pop-ups. Fields added include Pop-up Title, Pop-up Subtitle, On or Off Network (flowlines only), Esri Symbology (waterbodies only), and Feature Code Description. All other attributes are from the original NHDPlus dataset. No-data values -9999 and -9998 were converted to Null values for many of the flowline fields.

What can you do with this layer?

Feature layers work throughout the ArcGIS system. Generally your workflow with feature layers will begin in ArcGIS Online or ArcGIS Pro. Below are just a few of the things you can do with a feature service in Online and Pro.

ArcGIS Online
- Add this layer to a map in the map viewer. The layer is limited to scales of approximately 1:1,000,000 or larger, but a vector tile layer created from the same data can be used at smaller scales to produce a webmap that displays across the full range of scales. The layer, or a map containing it, can be used in an application.
- Change the layer's transparency and set its visibility range.
- Open the layer's attribute table and make selections. Selections made in the map or table are reflected in the other. Center on selection allows you to zoom to features selected in the map or table; show selected records allows you to view the selected records in the table.
- Apply filters. For example, you can set a filter to show larger streams and rivers using the mean annual flow attribute or the stream order attribute.
- Change the layer's style and symbology.
- Add labels and set their properties.
- Customize the pop-up.
- Use as an input to the ArcGIS Online analysis tools. This layer works well as a reference layer with the trace downstream and watershed tools. The buffer tool can be used to draw protective boundaries around streams, and the extract data tool can be used to create copies of portions of the data.

ArcGIS Pro
- Add this layer to a 2D or 3D map.
- Use as an input to geoprocessing. For example, copy features allows you to select and then export portions of the data to a new feature class (a scripted example follows this entry).
- Change the symbology and the attribute field used to symbolize the data.
- Open the table and make interactive selections with the map.
- Modify the pop-ups.
- Apply definition queries to create subsets of the layer.

This layer is part of the ArcGIS Living Atlas of the World, which provides an easy way to explore the landscape layers and many other beautiful and authoritative maps on hundreds of topics.

Questions?
Please leave a comment below if you have a question about this layer, and we will get back to you as soon as possible.
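The select-then-export geoprocessing workflow mentioned above can be scripted in ArcGIS Pro. Here is a minimal arcpy sketch; the paths and the mean annual flow field name (MAFLOWU) are assumptions to verify against the flowline layer's schema.

```python
# A minimal arcpy sketch of the select-then-export workflow. Paths and the
# flow threshold are illustrative placeholders.
import arcpy

flowlines = r"C:\data\NHDPlusV2.gdb\NHDFlowline"

# Make an in-memory layer containing only higher-flow rivers (cfs threshold assumed)
arcpy.management.MakeFeatureLayer(flowlines, "major_rivers", "MAFLOWU > 1000")

# Export the selection to a new feature class
arcpy.management.CopyFeatures("major_rivers", r"C:\data\outputs.gdb\major_rivers")
```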
DEEPEN stands for DE-risking Exploration of geothermal Plays in magmatic ENvironments. Part of the DEEPEN project involved developing and testing a methodology for a 3D play fairway analysis (PFA) for multiple play types (conventional hydrothermal, superhot EGS, and supercritical). This was tested using new and existing geoscientific exploration datasets at Newberry Volcano. This GDR submission includes images, data, and models related to the 3D favorability and uncertainty models and the 2D favorability and uncertainty maps.

The DEEPEN PFA Methodology is based on the method proposed by Poux et al. (2020), which uses the Leapfrog Geothermal software with the Edge extension to conduct PFA in 3D. This method uses all available data to build a 3D geodata model which can be broken down into smaller blocks and analyzed with advanced geostatistical methods. Each dataset is imported into a 3D model in Leapfrog and divided into smaller blocks. Conditional queries can then be used to assign each block an index value which conditionally ranks each block's favorability, from 0-5 with 5 being most favorable, for each model (e.g., lithologic, seismic, magnetic, structural). The values between 0-5 assigned to each block are referred to as index values. The final step of the process is to combine all the index models to create a favorability index. This involves multiplying each index model by a given weight and then summing the resulting values.

The DEEPEN PFA Methodology follows this approach, but is split up by the specific geologic components of each play type. These components are defined as follows for each magmatic play type:
1. Conventional hydrothermal plays in magmatic environments: Heat, fluid, and permeability
2. Superhot EGS plays: Heat, thermal insulation, and producibility (the ability to create and sustain fractures suitable for an EGS reservoir)
3. Supercritical plays: Heat, supercritical fluid, pressure seal, and producibility (the proper permeability and pressure conditions to allow production of supercritical fluid)

More information on these components and their development can be found in Kolker et al. (2022). For the purposes of subsurface imaging, it is easier to detect a permeable fluid-filled reservoir than it is to detect separate fluid and permeability components. Therefore, in this analysis, we combine fluid and permeability for conventional hydrothermal plays, and supercritical fluid and producibility for supercritical plays. More information on this process is described in the following sections. We also project the 3D favorability volumes onto 2D surfaces for simplified joint interpretation, and we incorporate an uncertainty component. Uncertainty was modeled using the best approach for the dataset in question, for the datasets where we had enough information to do so. Identifying which subsurface parameters are the least resolved can help qualify current PFA results and focus future efforts in data collection. Where possible, the resulting uncertainty models/indices were weighted using the same weights applied to the respective datasets, and summed, following the PFA methodology above, but for uncertainty.
There are three versions of the Leapfrog model and associated favorability models:
- v1.0: The first release in June 2023
- v2.1: The second release, with improvements made to the earthquake catalog (included additional identified events, removed duplicate events), to the temperature model (fixed a deep BHT), and to the index models (updated the seismicity-heat source index models for supercritical and EGS, and the resistivity-insulation index models for all three play types). Also uses the jet color map rather than the magma color map for improved interpretability.
- v2.1.1: Updated to include v2.0 uncertainty results (see below for uncertainty model versions)

There are two versions of the associated uncertainty models:
- v1.0: The first release in June 2023
- v2.0: The second release, with improvements made to the temperature and fault uncertainty models.

** Note that this submission is deprecated and that a newer submission, linked below and titled "DEEPEN Final 3D PFA Favorability Models and 2D Favorability Maps at Newberry Volcano", contains the final versions of these resources. **
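To make the weighted-sum step of the methodology concrete, here is a minimal numeric sketch. The grid shape, model names, and weights are illustrative assumptions, not the values used by DEEPEN.

```python
# A minimal sketch of the favorability computation: each index model assigns
# every block a 0-5 value, and the favorability index is their weighted sum.
import numpy as np

rng = np.random.default_rng(0)
shape = (50, 50, 20)  # hypothetical block grid (nx, ny, nz)
index_models = {
    "temperature": rng.integers(0, 6, shape).astype(float),
    "seismicity":  rng.integers(0, 6, shape).astype(float),
    "resistivity": rng.integers(0, 6, shape).astype(float),
}

# Assumed weights for one play type; the actual DEEPEN weights differ per play
weights = {"temperature": 0.5, "seismicity": 0.3, "resistivity": 0.2}

favorability = sum(weights[name] * model for name, model in index_models.items())
print(favorability.shape, favorability.min(), favorability.max())
```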
Important Note: This item is in mature support as of September 2023 and will be retired in December 2025. A new version of this item is available for your use. Esri recommends updating your maps and apps to use the new version.

The USGS Protected Areas Database of the United States (PAD-US) is the official inventory of public parks and other protected open space. The spatial data in PAD-US represents public lands held in trust by thousands of national, state, and regional/local governments, as well as non-profit conservation organizations. GAP 1 and 2 areas are primarily managed for biodiversity; GAP 3 areas are managed for multiple uses including conservation and extraction; GAP 4 areas have no known mandate for biodiversity protection. The layer provides a general overview of protection status, including management designations. PAD-US is published by the U.S. Geological Survey (USGS) Science Analytics and Synthesis (SAS), Gap Analysis Project (GAP). GAP produces data and tools that help meet critical national challenges such as biodiversity conservation, recreation, public health, climate change adaptation, and infrastructure investment. See the GAP webpage for more information about GAP and other GAP data, including species and land cover.

PAD-US classifies lands into four GAP Status classes:
- GAP Status 1 - Areas managed for biodiversity where natural disturbances are allowed to proceed
- GAP Status 2 - Areas managed for biodiversity where natural disturbance is suppressed
- GAP Status 3 - Areas protected from land cover conversion but subject to extractive uses such as logging and mining
- GAP Status 4 - Areas with no known mandate for protection

In the United States, areas that are protected from development and managed for biodiversity conservation include Wilderness Areas, National Parks, National Wildlife Refuges, and Wild & Scenic Rivers. Understanding the geographic distribution of these protected areas and their level of protection is an important part of landscape-scale planning.

Dataset Summary

Phenomenon Mapped: Areas protected from development and managed to maintain biodiversity
Coordinate System: Web Mercator Auxiliary Sphere
Extent: 50 United States plus Puerto Rico, the US Virgin Islands, the Northern Mariana Islands, and other Pacific Ocean islands
Visible Scale: 1:1,000,000 and larger
Source: USGS Science Analytics and Synthesis (SAS), Gap Analysis Project (GAP) PAD-US version 3.0
Publication Date: July 2022

Attributes included in this layer are:
- Category
- Owner Type
- Owner Name
- Local Owner
- Manager Type
- Manager Name
- Local Manager
- Designation Type
- Local Designation
- Unit Name
- Local Name
- Source
- Public Access
- GAP Status - Status 1, 2, or 3
- GAP Status Description
- International Union for Conservation of Nature (IUCN) Description - I: Strict Nature Reserve, II: National Park, III: Natural Monument or Feature, IV: Habitat/Species Management Area, V: Protected Landscape/Seascape, VI: Protected area with sustainable use of natural resources, Other conservation area, Unassigned
- Date of Establishment

The source data for this layer are available here.

What can you do with this Feature Layer?

Feature layers work throughout the ArcGIS system. Generally your workflow with feature layers will begin in ArcGIS Online or ArcGIS Pro. Below are just a few of the things you can do with a feature service in Online and Pro.

ArcGIS Online
- Add this layer to a map in the map viewer. The layer is limited to scales of approximately 1:1,000,000 or larger, but a vector tile layer created from the same data can be used at smaller scales to produce a webmap that displays across the full range of scales. The layer, or a map containing it, can be used in an application.
- Change the layer's transparency and set its visibility range.
- Open the layer's attribute table and make selections and apply filters. Selections made in the map or table are reflected in the other. Center on selection allows you to zoom to features selected in the map or table; show selected records allows you to view the selected records in the table.
- Change the layer's style and filter the data. For example, you could set a filter for Gap Status Code = 3 to create a map of only the GAP Status 3 areas.
- Add labels and set their properties.
- Customize the pop-up.

ArcGIS Pro
- Add this layer to a 2D or 3D map. The same scale limit as Online applies in Pro.
- Use as an input to geoprocessing. For example, copy features allows you to select and then export portions of the data to a new feature class. Note that many features in the PAD-US database overlap; for example, wilderness area designations overlap US Forest Service and other federal lands. Any analysis should take this into consideration (see the sketch after this entry). An imagery layer created from the same dataset can be used for geoprocessing analysis with larger extents and eliminates some of the complications arising from overlapping polygons.
- Change the symbology and the attribute field used to symbolize the data.
- Open the table and make interactive selections with the map.
- Modify the pop-ups.
- Apply definition queries to create subsets of the layer.

This layer is part of the Living Atlas of the World that provides an easy way to explore the landscape layers and many other beautiful and authoritative maps on hundreds of topics.
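Here is a minimal geopandas sketch of the GAP Status 3 filter and the overlap handling noted above. The file path and the GAP_Sts field name are assumptions to verify against the downloaded PAD-US schema.

```python
# A minimal sketch: filter to GAP Status 3 areas and dissolve before area
# statistics, since overlapping designations would otherwise double-count.
import geopandas as gpd

padus = gpd.read_file(r"C:\data\PADUS3_0.gdb", layer="PADUS3_0Combined")

# Filter to GAP Status 3: protected from conversion but open to extractive uses
gap3 = padus[padus["GAP_Sts"] == "3"]

# Because designations overlap, dissolve before computing area statistics
gap3_flat = gap3.dissolve()
print(gap3_flat.to_crs(epsg=5070).area.sum() / 1e6, "sq km (approx.)")
```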
Part of the DEEPEN (DE-risking Exploration of geothermal Plays in magmatic ENvironments) project involved developing and testing a methodology for a 3D play fairway analysis (PFA) for multiple play types (conventional hydrothermal, superhot EGS, and supercritical). This was tested using new and existing geoscientific exploration datasets at Newberry Volcano. This GDR submission includes images, data, and models related to the 3D favorability and uncertainty models and the 2D favorability and uncertainty maps.

The DEEPEN PFA Methodology, detailed in the journal article below, is based on the method proposed by Poux & O'Brien (2020), which uses the Leapfrog Geothermal software with the Edge extension to conduct PFA in 3D. This method uses all available data to build a 3D geodata model which can be broken down into smaller blocks and analyzed with advanced geostatistical methods. Each dataset is imported into a 3D model in Leapfrog and divided into smaller blocks. Conditional queries can then be used to assign each block an index value which conditionally ranks each block's favorability, from 0-5 with 5 being most favorable, for each model (e.g., lithologic, seismic, magnetic, structural). The values between 0-5 assigned to each block are referred to as index values. The final step of the process is to combine all the index models to create a favorability index. This involves multiplying each index model by a given weight and then summing the resulting values.

The DEEPEN PFA Methodology follows this approach, but is split up by the specific geologic components of each play type. These components are defined as follows for each magmatic play type:
1. Conventional hydrothermal plays in magmatic environments: Heat, fluid, and permeability
2. Superhot EGS plays: Heat, thermal insulation, and producibility (the ability to create and sustain fractures suitable for an EGS reservoir)
3. Supercritical plays: Heat, supercritical fluid, pressure seal, and producibility (the proper permeability and pressure conditions to allow production of supercritical fluid)

More information on these components and their development can be found in Kolker et al. (2022). For the purposes of subsurface imaging, it is easier to detect a permeable fluid-filled reservoir than it is to detect separate fluid and permeability components. Therefore, in this analysis, we combine fluid and permeability for conventional hydrothermal plays, and supercritical fluid and producibility for supercritical plays. We also project the 3D favorability volumes onto 2D surfaces for simplified joint interpretation, and we incorporate an uncertainty component. Uncertainty was modeled using the best approach for the dataset in question, for the datasets where we had enough information to do so. Identifying which subsurface parameters are the least resolved can help qualify current PFA results and focus future efforts in data collection. Where possible, the resulting uncertainty models/indices were weighted using the same weights applied to the respective datasets, and summed, following the PFA methodology above, but for uncertainty.
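As a rough illustration of the 2D projection step mentioned above, the sketch below collapses a 3D favorability volume onto a map surface. The array layout and the choice of a maximum along the depth axis are assumptions, not necessarily the projection used by DEEPEN.

```python
# A minimal sketch of projecting 3D favorability and uncertainty volumes onto
# 2D surfaces for joint interpretation. Values here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
favorability_3d = rng.random((50, 50, 20))  # (x, y, depth) blocks

# Collapse the depth axis to obtain a 2D favorability map
favorability_map = favorability_3d.max(axis=2)

# An uncertainty volume can be collapsed the same way (mean shown here)
uncertainty_map = rng.random((50, 50, 20)).mean(axis=2)
print(favorability_map.shape, uncertainty_map.shape)
```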
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
The USGS National Hydrography Dataset (NHD) downloadable data collection from The National Map (TNM) is a comprehensive set of digital spatial data that encodes information about naturally occurring and constructed bodies of surface water (lakes, ponds, and reservoirs), paths through which water flows (canals, ditches, streams, and rivers), and related entities such as point features (springs, wells, stream gages, and dams). The information encoded about these features includes classification and other characteristics, delineation, geographic name, position and related measures, a "reach code" through which other information can be related to the NHD, and the direction of water flow. The network of reach codes delineating water and transported material flow allows users to trace movement in upstream and downstream directions. In addition to this geographic information, the dataset contains metadata that supports the exchange of future updates and improvements to the data. The NHD supports many applications, such as making maps, geocoding observations, flow modeling, data maintenance, and stewardship. For additional information on NHD, go to https://www.usgs.gov/core-science-systems/ngp/national-hydrography.
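The upstream/downstream tracing enabled by the reach network can be illustrated with a small directed graph. The reach codes and connectivity below are invented for illustration; real traces would be built from the NHD flow tables.

```python
# A minimal sketch of reach-network tracing with networkx. Edges follow the
# direction of water flow (upstream -> downstream); reach codes are made up.
import networkx as nx

flow = nx.DiGraph()
flow.add_edges_from([
    ("18010101000101", "18010101000102"),
    ("18010101000103", "18010101000102"),
    ("18010101000102", "18010101000104"),
])

# Everything downstream of one reach
print(nx.descendants(flow, "18010101000101"))

# Everything upstream (contributing reaches) of another
print(nx.ancestors(flow, "18010101000104"))
```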
DWR was the steward for NHD and Watershed Boundary Dataset (WBD) in California. We worked with other organizations to edit and improve NHD and WBD, using the business rules for California. California's NHD improvements were sent to USGS for incorporation into the national database. The most up-to-date products are accessible from the USGS website. Please note that the California portion of the National Hydrography Dataset is appropriate for use at the 1:24,000 scale.
For additional derivative products and resources, including the major features in geopackage format, please go to this page: https://data.cnra.ca.gov/dataset/nhd-major-features Archives of previous statewide extracts of the NHD going back to 2018 may be found at https://data.cnra.ca.gov/dataset/nhd-archive.
In September 2022, USGS officially notified DWR that the NHD would become static as USGS resources will be devoted to the transition to the new 3D Hydrography Program (3DHP). 3DHP will consist of LiDAR-derived hydrography at a higher resolution than NHD. Upon completion, 3DHP data will be easier to maintain, based on a modern data model and architecture, and better meet the requirements of users that were documented in the Hydrography Requirements and Benefits Study (2016). The initial releases of 3DHP include NHD data cross-walked into the 3DHP data model. It will take several years for the 3DHP to be built out for California. Please refer to the resources on this page for more information.
The FINAL, STATIC version of the National Hydrography Dataset for California was published for download by USGS on December 27, 2023. This dataset can no longer be edited by the state stewards. The next generation of national hydrography data is the USGS 3D Hydrography Program (3DHP).
Questions about the California stewardship of these datasets may be directed to nhd_stewardship@water.ca.gov.
Wetlands are areas where water is present at or near the surface of the soil during at least part of the year. Wetlands provide habitat for many species of plants and animals that are adapted to living in wet habitats. Wetlands form characteristic soils, absorb pollutants and excess nutrients from aquatic systems, help buffer the effects of high flows, and recharge groundwater. Data on the distribution and type of wetlands play an important role in land use planning, and several federal and state laws require that wetlands be considered during the planning process.

The National Wetlands Inventory (NWI) was designed to assist land managers in wetland conservation efforts. The NWI is managed by the US Fish and Wildlife Service.

Dataset Summary

Phenomenon Mapped: Wetlands
Geographic Extent: 50 United States plus Puerto Rico, the US Virgin Islands, Guam, American Samoa, and the Northern Mariana Islands
Projection: Web Mercator Auxiliary Sphere
Visible Scale: This layer performs well between scales of 1:1,000,000 to 1:1,000. An imagery layer created from this dataset is also available, which you can use to quickly draw wetlands at smaller scales.
Source: U.S. Fish and Wildlife Service
Update Frequency: Annual
Publication Date: October 26, 2024

This layer was created from the October 26, 2024 version of the NWI. The features were converted from multi-part to single part using the Multipart To Singlepart tool. Features with more than 50,000 vertices were split with the Dice tool. The Repair Geometry tool was run on the features, using the OGC option. (See the sketch after this entry for these preparation steps.)

The layer is published with a related table that contains text fields created by Esri for use in the layer's pop-up. Fields in the table are:
- Popup Header - this field contains a text string that is used to create the header in the default pop-up
- System Text - this field contains a text string that is used to create the system description text in the default pop-up
- Class Text - this field contains a text string that is used to create the class description text in the default pop-up
- Modifier Text - this field contains a text string that is used to create the modifier description text in the default pop-up
- Species Text - this field contains a text string that is used to create the species description text in the default pop-up

Codes, names, and text fields were derived from the publication Classification of Wetlands and Deepwater Habitats of the United States.

What can you do with this layer?

Feature layers work throughout the ArcGIS system. Generally your workflow with feature layers will begin in ArcGIS Online or ArcGIS Pro. Below are just a few of the things you can do with a feature service in Online and Pro.

ArcGIS Online
- Add this layer to a map in the map viewer. The layer is limited to scales of approximately 1:1,000,000 or larger, but an imagery layer created from the same data can be used at smaller scales to produce a webmap that displays across the full scale range. The layer, or a map containing it, can be used in an application.
- Change the layer's transparency and set its visibility range.
- Open the layer's attribute table and make selections and apply filters. Selections made in the map or table are reflected in the other. Center on selection allows you to zoom to features selected in the map or table; show selected records allows you to view the selected records in the table.
- Change the layer's style and filter the data. For example, you could set a filter for System Name = 'Palustrine' to create a map of palustrine wetlands only.
- Add labels and set their properties.
- Customize the pop-up.

ArcGIS Pro
- Add this layer to a 2D or 3D map.
- Use as an input to geoprocessing. For example, copy features allows you to select and then export portions of the data to a new feature class.
- Change the symbology and the attribute field used to symbolize the data.
- Open the table and make interactive selections with the map.
- Modify the pop-ups.
- Apply definition queries to create subsets of the layer.

This layer is part of the Living Atlas of the World that provides an easy way to explore the landscape layers and many other beautiful and authoritative maps on hundreds of topics.

Questions?
Please leave a comment below if you have a question about this layer, and we will get back to you as soon as possible.
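Here is a minimal arcpy sketch of the preparation steps described above (multipart to singlepart, dicing features over 50,000 vertices, and OGC geometry repair). The geodatabase paths are placeholders.

```python
# A minimal sketch of the NWI feature preparation steps. The 50,000-vertex
# limit comes from the description above; paths are illustrative.
import arcpy

wetlands = r"C:\data\NWI.gdb\Wetlands"
singlepart = r"C:\data\NWI.gdb\Wetlands_singlepart"
diced = r"C:\data\NWI.gdb\Wetlands_diced"

# Convert multi-part features to single part
arcpy.management.MultipartToSinglepart(wetlands, singlepart)

# Split any feature with more than 50,000 vertices
arcpy.management.Dice(singlepart, diced, vertex_limit=50000)

# Repair geometry using the OGC validation method
arcpy.management.RepairGeometry(diced, validation_method="OGC")
```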
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This package contains data on five text analysis types (term extraction, contrast analysis, topic modeling, network mapping, contingency matrix), based on the survey data in which researchers selected research output related to the 17 Sustainable Development Goals (SDGs). This is used as input to improve the current SDG classification model from v4.0 to v5.0.
The Sustainable Development Goals are the 17 global challenges set by the United Nations. Within each of the goals, specific targets and indicators are defined to monitor the progress of reaching those goals by 2030. In an effort to capture how research is contributing to move the needle on those challenges, we earlier made an initial classification model that enables quickly identifying which research output is related to which SDG. (This Aurora SDG dashboard is the initial outcome as proof of practice.)
The initiative started from the Aurora Universities Network in 2017, in the working group "Societal Impact and Relevance of Research", to investigate and to make visible 1. what research is done that is relevant to topics or challenges that live in society (for the proof of practice this has been scoped down to the SDGs), and 2. what the effect or impact is of implementing those research outcomes on those societal challenges (this has also been scoped down to research output being cited in policy documents from national and local governments and NGOs).
Context of this dataset | classification model improvement workflow
The classification model we have used consists of 17 different search queries on the Scopus database.
SDG search queries version 4.0 (SQv4) have been created. Published here:
Search Queries for "Mapping Research Output to the Sustainable Development Goals (SDGs)" v4.0 by Aurora Universities Network (AUR) doi:10.5281/zenodo.3817443
A survey has been distributed to senior researchers to test the robustness of SQv4. Published here:
Survey data of "Mapping Research output to the Sustainable Development Goals SDGs" by Aurora Universities Network (AUR) doi:10.5281/zenodo.3798385
This text analysis has been made as one of the inputs to improve the classification model. Published here:
Text Analyses of Survey Data on "Mapping Research Output to the Sustainable Development Goals SDGs" by Aurora Universities Network (AUR) doi:10.5281/zenodo.3832090
Improved SDG search queries version 5.0 (SQv5) have been created. Published here:
Search Queries for "Mapping Research Output to the Sustainable Development Goals (SDGs)" v5.0 by Aurora Universities Network (AUR) doi:10.5281/zenodo.3817445
Methods used to do the text analysis
Term extraction: after text normalisation (stemming, etc.), we extracted the terms (bigrams and trigrams) that co-occurred the most per document, in the title, abstract, and keywords (see the sketch after this list).
Contrast analysis: the co-occurring terms in publications (title, abstract, keywords) of the papers that respondents have indicated relate to this SDG (y-axis: True) versus those that have been rejected (x-axis: False). In the top left you'll see term co-occurrences that clearly relate to this SDG. The bottom right contains terms that appear in papers that have been rejected for this SDG. The top-right terms appear frequently in both and cannot be used to discriminate between the two groups.
Network map: This diagram shows the cluster-network of terms co-occurring in the publications related to this SDG, selected by the respondents (accepted publications only).
Topic model: This diagram shows the topics, and the related terms that make up each topic. The number of topics is related to the number of targets of this SDG.
Contingency matrix: This diagram shows the top 10 co-occurring terms that correlate the most.
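To make the extraction and contrast steps concrete, here is a minimal scikit-learn sketch on a toy corpus. The original analyses were run in CorTexT, so this is only an illustration of the technique, not the actual pipeline.

```python
# A minimal sketch: extract bigrams/trigrams, then count term frequencies in
# accepted (True) vs rejected (False) papers, as in the contrast analysis.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "access to clean water and sanitation services",  # accepted for SDG 6
    "clean water quality monitoring in rural areas",  # accepted
    "stock market volatility and asset pricing",      # rejected
]
accepted = [True, True, False]

# Extract bigrams and trigrams, as in the term extraction step
vectorizer = CountVectorizer(ngram_range=(2, 3), stop_words="english")
counts = vectorizer.fit_transform(docs).toarray()
terms = vectorizer.get_feature_names_out()

# Contrast analysis: frequency per term in the True vs False groups
mask = np.array(accepted)
true_counts = counts[mask].sum(axis=0)
false_counts = counts[~mask].sum(axis=0)
for term, t, f in zip(terms, true_counts, false_counts):
    print(f"{term!r}: True={t}, False={f}")
```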
Software used to do the text analyses
CorTexT: The CorTexT Platform is the digital platform of the LISIS Unit and a project launched and sustained by IFRIS and INRAE. This platform aims at empowering open research and studies in the humanities on the dynamics of science, technology, innovation, and knowledge production.
Resource with interactive visualisations
Based on the text analysis data, we have created a website that puts all the SDG interactive diagrams together for you to scroll through: https://sites.google.com/vu.nl/sdg-survey-analysis-results/
Data set content
In the dataset root you'll find the following folders and files:
/sdg01-17/
This contains the text analysis for all the individual SDG surveys.
/methods/
This contains the step-by-step explanations of the text analysis methods using Cortext.
/images/
images of the results used in this README.md.
LICENSE.md
terms and conditions for reusing this data.
README.md
description of the dataset; each subfolder contains a README.md file to further describe the content of that sub-folder.
Inside each /sdg01-17/ folder you'll find the following:
/sdg01-17/sdg04-sdg-survey-selected-publications-combined.db
This contains the title, abstract, and keywords of the publications in the survey, including the accept or reject status and the number of respondents.
/sdg01-17/sdg04-sdg-survey-selected-publications-combined-accepted-accepted-custom-filtered.db
same as above, but only the accepted papers
/sdg01-17/extracted-terms-list-top1000.csv
the aggregated list of co-occurring terms (bigrams and trigrams) extracted per paper.
/sdg01-17/contrast-analysis/
This contains the data and visualisation of the terms appearing in papers that have been accepted (true) and rejected (false) to be relating to this SDG.
/sdg01-17/topic-modelling/
This contains the data and visualisation of the terms clustered in the same number of topics as there are 'targets' within that SDG.
/sdg01-17/network-mapping/
This contains the data and visualisation of the terms clustered by co-occurrence proximity of appearance in papers.
/sdg01-17/contingency-matrix/
This contains the data and visualisation of the top 10 co-occurring terms.
note: the .csv files are actually tab-separated.
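Because the .csv files are tab-separated, loading them requires an explicit separator. Here is a minimal pandas sketch, with the folder path assumed from the layout above.

```python
# A minimal sketch of loading an extracted-terms file; the sdg04 path is an
# example of one of the /sdg01-17/ folders described in this README.
import pandas as pd

terms = pd.read_csv(
    "sdg04/extracted-terms-list-top1000.csv",
    sep="\t",  # the .csv files are actually tab-separated
)
print(terms.head())
```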
Contribute and improve the SDG Search Queries
We welcome you to join the Github community and to fork, branch, improve and make a pull request to add your improvements to the new version of the SDG queries. https://github.com/Aurora-Network-Global/sdg-queries
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The French National Mapping Agency (Institut National de l'Information Géographique et Forestière - IGN) is responsible for producing and maintaining the spatial data sets for all of France. At the same time, it must satisfy the needs of different stakeholders who are responsible for decisions at multiple levels, from local to national. IGN produces many different maps, including detailed road networks and land cover/land use maps over time. The information contained in these maps is crucial for many of the decisions made about urban planning, resource management, and landscape restoration, as well as other environmental issues in France. Recently, IGN has started the process of creating high-resolution land use/land cover (LULC) maps, aimed at developing smart and accurate monitoring services of LULC over time. To help update and validate the French LULC database, citizens and interested stakeholders can contribute using the Paysages mobile and web applications. This approach presents an opportunity to evaluate the integration of citizens in the IGN process of updating and validating LULC data.
Dataset 1: Change detection validation 2019
This dataset contains web-based validations of changes detected by time series (2016 – 2019) analysis of Sentinel-2 satellite imagery. Validation was conducted using two high-resolution orthophotos, from 2016 and 2019 respectively, as reference data. Two tools were used: the Paysages web application and LACO-Wiki. Both tools used the same validation design (blind validation) and the same options. For each detected change, contributors are asked to validate whether there is a change and, if so, to choose a LU or LC class from a pre-defined list of classes.
The dataset has the following characteristics:
Associated files: 1- Change validation locations.png, 1-Change validation 2019 – Attributes.csv, 1-Change validation 2019.csv, 1-Change validation 2019.geoJSON
This dataset is licensed under a Creative Commons Attribution 4.0 International. It is attributed to the LandSense Citizen Observatory, IGN-France, and GeoVille.
Dataset 2: Land use classification 2019
The aim of this data collection campaign was to improve the LU classification of authoritative LULC data (OCS-GE 2016 ©IGN) for built-up area. Using the Paysages web platform, contributors are asked to choose a land use value among a list of pre-defined values for each location.
The dataset has the following characteristics:
Associated files: 2- LU classification points.png, 2-LU classification 2019 – Attributes.csv, 2-LU classification 2019.csv, 2-LU classification 2019.geoJSON
This dataset is licensed under a Creative Commons Attribution 4.0 International. It is attributed to the LandSense Citizen Observatory, IGN-France and the International Institute for Applied Systems Analysis.
Dataset 3: In-situ validation 2018
The aim of this data collection campaign was to collect in-situ (ground-based) information, using the Paysages mobile application, to update authoritative LULC data. Contributors visit pre-determined locations, take photographs of the point location and in the four cardinal directions away from the point, and answer a few questions related to the task. Two tasks were defined:
The dataset has the following characteristics:
Associated files: 3- Insitu locations.png, 3- Insitu validation 2018 – Attributes.csv, 3- Insitu validation 2018.csv, 3- Insitu validation 2018.geoJSON
This dataset is licensed under a Creative Commons Attribution 4.0 International. It is attributed to the LandSense Citizen Observatory, IGN-France.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no 689812.
THIS RESOURCE IS NO LONGER IN SERVICE, documented September 6, 2016. The Allen Institute Neurowiki is a joint project between Vulcan Inc. and the Allen Institute to build a Semantic Wiki mapping genetic instances. It is a finished prototype testing the import pipelines and display components for combining 5 major RDF datasets from 4 different sources. Current planning includes mapping complete datasets, curating a better ontology, and creating multiple ontology management for a user class.

Biological Linked Data Map:
* Open, public online access
* Data from multiple RDF data stores
* Complete import pipeline using the LDIF framework
* Outlines of each imported instance embedding inline wiki properties and providing views of imported properties from original RDF datasets
* Charting tools that 'pivot' SPARQL queries, providing several views of each query
* Navigation and composition tools for accessing and mining the data

Where did we get the data?
* KEGG: Kyoto Encyclopedia of Genes and Genomes: KEGG GENES is a collection of gene catalogs for all complete genomes generated from publicly available resources, mostly NCBI RefSeq
* Diseasome: The Diseasome website is a disease/disorder relationships explorer and a sample of an innovative map-oriented scientific work. Built by a team of researchers and engineers, it uses the Human Disease Network dataset.
* DrugBank: The DrugBank database is a unique bioinformatics and cheminformatics resource that combines detailed drug data with comprehensive drug target information.
* SIDER: SIDER contains information on marketed medicines and their recorded adverse drug reactions. The information is extracted from public documents and package inserts.

Every piece of content on every instance page is generated by Semantic Result Formatters interpreting SPARQL results.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
WARNING: This is a pre-release dataset and its field names and data structures are subject to change. It should be considered pre-release until the end of March 2025. The schema changed in February 2025 - please see below. We will post a roadmap of upcoming changes, but service URLs and schema are now stable. For deployment status of new services in February 2025, see https://gis.data.ca.gov/pages/city-and-county-boundary-data-status. Additional roadmap and status links are at the bottom of this metadata.

This dataset is continuously updated as the source data from CDTFA is updated, as often as many times a month. If you require unchanging point-in-time data, export a copy for your own use rather than using the service directly in your applications.

Purpose

County boundaries along with third-party identifiers used to join in external data. Boundaries are from the California Department of Tax and Fee Administration (CDTFA). These boundaries are the best available statewide data source in that CDTFA receives changes in incorporation and boundary lines from the Board of Equalization, who receives them from local jurisdictions for tax purposes. Boundary accuracy is not guaranteed, and though CDTFA works to align boundaries based on historical records and local changes, errors will exist. If you require a legal assessment of boundary location, contact a licensed surveyor.

This dataset joins in multiple attributes and identifiers from the US Census Bureau and Board on Geographic Names to facilitate adding additional third-party data sources. In addition, we attach attributes of our own to ease and reduce common processing needs and questions. Finally, coastal buffers are separated into separate polygons, leaving the land-based portions of jurisdictions and coastal buffers in adjacent polygons. This feature layer is for public use.

Related Layers

This dataset is part of a grouping of many datasets:
- Cities: Only the city boundaries and attributes, without any unincorporated areas
  - With Coastal Buffers
  - Without Coastal Buffers
- Counties: Full county boundaries and attributes, including all cities within as a single polygon
  - With Coastal Buffers (this dataset)
  - Without Coastal Buffers
- Cities and Full Counties: A merge of the other two layers, so polygons overlap within city boundaries. Some customers require this behavior, so we provide it as a separate service.
  - With Coastal Buffers
  - Without Coastal Buffers
- City and County Abbreviations
- Unincorporated Areas (Coming Soon)
- Census Designated Places
- Cartographic Coastline
  - Polygon
  - Line source (Coming Soon)

Working with Coastal Buffers

The dataset you are currently viewing includes the coastal buffers for cities and counties that have them in the source data from CDTFA. In the versions where they are included, they remain as a second polygon on cities or counties that have them, with all the same identifiers, and a value in the COASTAL field indicating if it's an ocean or a bay buffer. If you wish to have a single polygon per jurisdiction that includes the coastal buffers, you can run a Dissolve on the version that has the coastal buffers on all the fields except OFFSHORE and AREA_SQMI to get a version with the correct identifiers. (A sketch of this dissolve appears at the end of this entry.)

Point of Contact

California Department of Technology, Office of Digital Services, odsdataservices@state.ca.gov

Field and Abbreviation Definitions

- CDTFA_COUNTY: CDTFA county name. For counties, this will be the name of the polygon itself. For cities, it is the name of the county the city polygon is within.
- CDTFA_COPRI: County number followed by the 3-digit city primary number used in the Board of Equalization's 6-digit tax rate area numbering system. The boundary data originate with CDTFA's teams managing tax rate information, so this field is preserved and flows into this dataset.
- CENSUS_GEOID: Numeric geographic identifiers from the US Census Bureau
- CENSUS_PLACE_TYPE: City, County, or Town, stripped off the census name for identification purposes.
- GNIS_PLACE_NAME: Board on Geographic Names authorized nomenclature for area names published in the Geographic Name Information System
- GNIS_ID: The numeric identifier from the Board on Geographic Names that can be used to join these boundaries to other datasets utilizing this identifier.
- CDT_COUNTY_ABBR: Abbreviations of county names - originally derived from CalTrans Division of Local Assistance and now managed by CDT. Abbreviations are 3 characters.
- CDT_NAME_SHORT: The name of the jurisdiction (city or county) with the word "City" or "County" stripped off the end. Some changes may come to how we process this value to make it more consistent.
- AREA_SQMI: The area of the administrative unit (city or county) in square miles, calculated in EPSG 3310 California Teale Albers.
- OFFSHORE: Indicates if the polygon is a coastal buffer. Null for land polygons. Additional values include "ocean" and "bay".
- PRIMARY_DOMAIN: Currently empty/null for all records. Placeholder field for the official URL of the city or county.
- CENSUS_POPULATION: Currently null for all records. In the future, it will include the most recent US Census population estimate for the jurisdiction.
- GlobalID: While all of the layers we provide in this dataset include a GlobalID field with unique values, we do not recommend you make any use of it. The GlobalID field exists to support offline sync, but is not persistent, so data keyed to it will be orphaned at our next update. Use one of the other persistent identifiers, such as GNIS_ID or GEOID, instead.

Boundary Accuracy

County boundaries were originally derived from a 1:24,000 accuracy dataset, with improvements made in some places to boundary alignments based on research into historical records and boundary changes as CDTFA learns of them. City boundary data are derived from pre-GIS tax maps, digitized at BOE and CDTFA, with adjustments made directly in GIS for new annexations, detachments, and corrections. Boundary accuracy within the dataset varies. While CDTFA strives to correctly include or exclude parcels from jurisdictions for accurate tax assessment, this dataset does not guarantee that a parcel is placed in the correct jurisdiction. When a parcel is in the correct jurisdiction, this dataset cannot guarantee accurate placement of boundary lines within or between parcels or rights of way. This dataset also provides no information on parcel boundaries. For exact jurisdictional or parcel boundary locations, please consult the county assessor's office and a licensed surveyor.

CDTFA's data is used as the best available source because BOE and CDTFA receive information about changes in jurisdictions which otherwise would need to be collected independently by an agency or company to compile into usable map boundaries. CDTFA maintains the best available statewide boundary information.

CDTFA's source data notes the following about accuracy: City boundary changes and county boundary line adjustments filed with the Board of Equalization per Government Code 54900. This GIS layer contains the boundaries of the unincorporated county and incorporated cities within the state of California. The initial dataset was created in March of 2015 and was based on the State Board of Equalization tax rate area boundaries. As of April 1, 2024, the maintenance of this dataset is provided by the California Department of Tax and Fee Administration for the purpose of determining sales and use tax rates. The boundaries are continuously being revised to align with aerial imagery when areas of conflict are discovered between the original boundary provided by the California State Board of Equalization and the boundary made publicly available by local, state, and federal government. Some differences may occur between actual recorded boundaries and the boundaries used for sales and use tax purposes. The boundaries in this map are representations of taxing jurisdictions for the purpose of determining sales and use tax rates and should not be used to determine precise city or county boundary line locations.

Boundary Processing

These data make a structural change from the source data. While the full boundaries provided by CDTFA include coastal buffers of varying sizes, many users need boundaries to end at the shoreline of the ocean or a bay. As a result, after examining existing city and county boundary layers, these datasets provide a coastline cut generally along the ocean-facing coastline. For county boundaries in northern California, the cut runs near the Golden Gate Bridge, while for cities, we cut along the bay shoreline and into the edge of the Delta at the boundaries of Solano, Contra Costa, and Sacramento counties.

In the services linked above, the versions that include the coastal buffers contain them as a second (or third) polygon for the city or county, with the value in the COASTAL field set to whether it's a bay or ocean polygon. These can be processed back into a single polygon by dissolving on all the fields you wish to keep, since the attributes, other than the COASTAL field and geometry attributes (like areas), remain the same between the polygons for this purpose.

Slivers

In cases where a city or county's boundary ends near a coastline, our coastline data may cross back and forth many times while roughly paralleling the jurisdiction's boundary, resulting in many polygon slivers. We post-process the data to remove these slivers using a city/county boundary priority algorithm. That is, when the data run parallel to each other, we discard the coastline cut and keep the CDTFA-provided boundary, even if it extends into the ocean a small amount. This processing supports consistent boundaries for Fort Bragg, Point Arena, San Francisco, Pacifica, Half Moon Bay, and Capitola, in addition to others. More information on this algorithm will be provided soon.

Coastline Caveats

Some cities have buffers extending into water bodies that we do not cut at the shoreline. These include
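As noted under Working with Coastal Buffers, the buffers can be merged back into single jurisdiction polygons with a dissolve. Here is a minimal geopandas sketch on an exported copy, dissolving on every field except OFFSHORE and AREA_SQMI; the file name is a placeholder.

```python
# A minimal sketch of the dissolve described above: merge each jurisdiction's
# land polygon and coastal buffer into one polygon, keeping the persistent
# identifiers (e.g., GNIS_ID, CENSUS_GEOID).
import geopandas as gpd

counties = gpd.read_file("counties_with_coastal_buffers.geojson")  # exported copy

# Dissolve on every field except OFFSHORE and AREA_SQMI
keep_fields = [c for c in counties.columns
               if c not in ("OFFSHORE", "AREA_SQMI", "geometry")]
merged = counties.dissolve(by=keep_fields, as_index=False)
print(len(counties), "->", len(merged), "polygons")
```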
Overview
This dataset of medical misinformation was collected and is published by Kempelen Institute of Intelligent Technologies (KInIT). It consists of approx. 317k news articles and blog posts on medical topics published between January 1, 1998 and February 1, 2022 from a total of 207 reliable and unreliable sources. The dataset contains full-texts of the articles, their original source URL and other extracted metadata. If a source has a credibility score available (e.g., from Media Bias/Fact Check), it is also included in the form of annotation. Besides the articles, the dataset contains around 3.5k fact-checks and extracted verified medical claims with their unified veracity ratings published by fact-checking organisations such as Snopes or FullFact. Lastly and most importantly, the dataset contains 573 manually and more than 51k automatically labelled mappings between previously verified claims and the articles; mappings consist of two values: claim presence (i.e., whether a claim is contained in the given article) and article stance (i.e., whether the given article supports or rejects the claim or provides both sides of the argument).
The dataset is primarily intended to be used as a training and evaluation set for machine learning methods for claim presence detection and article stance classification, but it enables a range of other misinformation related tasks, such as misinformation characterisation or analyses of misinformation spreading.
Its novelty and our main contributions lie in (1) the focus on medical news articles and blog posts as opposed to social media posts or political discussions; (2) providing multiple modalities (besides full-texts of the articles, there are also images and videos), thus enabling research of multimodal approaches; (3) the mapping of the articles to the fact-checked claims (with manual as well as predicted labels); (4) providing source credibility labels for 95% of all articles and other potential sources of weak labels that can be mined from the articles' content and metadata.
The dataset is associated with the research paper "Monant Medical Misinformation Dataset: Mapping Articles to Fact-Checked Claims" accepted and presented at ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '22).
The accompanying Github repository provides a small static sample of the dataset and the dataset's descriptive analysis in a form of Jupyter notebooks.
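As a rough sketch of how the articles, fact-checked claims, and mappings fit together, the pandas snippet below joins them. The file names and column names are assumptions; the real schema is documented in the accompanying repository.

```python
# A minimal sketch (assumed schema) of joining articles, claims, and the
# article-claim mappings (claim presence + article stance) from the CSV dump.
import pandas as pd

articles = pd.read_csv("articles.csv")                 # full-texts + metadata
claims = pd.read_csv("fact_checked_claims.csv")        # claims + veracity
mappings = pd.read_csv("claim_article_mappings.csv")   # presence + stance

# Attach claim text and veracity to each labelled article-claim pair
pairs = (mappings
         .merge(articles, on="article_id")
         .merge(claims, on="claim_id", suffixes=("_article", "_claim")))

# Articles labelled as containing and supporting a false claim
flagged = pairs[(pairs["claim_presence"] == "present")
                & (pairs["article_stance"] == "supporting")
                & (pairs["veracity"] == "false")]
print(len(flagged))
```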
Options to access the dataset
There are two ways to get access to the dataset:
1. Static dump of the dataset available in the CSV format
2. Continuously updated dataset available via REST API
In order to obtain access to the dataset (either the full static dump or the REST API), please request access by following the instructions provided below.
References
If you use this dataset in any publication, project, tool, or in any other form, please cite the following papers:
@inproceedings{SrbaMonantPlatform,
  author = {Srba, Ivan and Moro, Robert and Simko, Jakub and Sevcech, Jakub and Chuda, Daniela and Navrat, Pavol and Bielikova, Maria},
  booktitle = {Proceedings of Workshop on Reducing Online Misinformation Exposure (ROME 2019)},
  pages = {1--7},
  title = {Monant: Universal and Extensible Platform for Monitoring, Detection and Mitigation of Antisocial Behavior},
  year = {2019}
}

@inproceedings{SrbaMonantMedicalDataset,
  author = {Srba, Ivan and Pecher, Branislav and Tomlein, Matus and Moro, Robert and Stefancova, Elena and Simko, Jakub and Bielikova, Maria},
  booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '22)},
  numpages = {11},
  title = {Monant Medical Misinformation Dataset: Mapping Articles to Fact-Checked Claims},
  year = {2022},
  doi = {10.1145/3477495.3531726},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3477495.3531726}
}
Dataset creation process
In order to create this dataset (and to continuously obtain new data), we used our research platform Monant. The Monant platform provides so-called data providers to extract news articles/blogs from news/blog sites as well as fact-checking articles from fact-checking sites. General parsers (for RSS feeds, WordPress sites, the Google Fact Check Tool, etc.) as well as custom crawlers and parsers (e.g., for the fact-checking site Snopes.com) were implemented. All data are stored in a unified format in a central data storage.
Ethical considerations
The dataset was collected and is published for research purposes only. We collected only publicly available content of news/blog articles. The dataset contains identities of authors of the articles if they were stated in the original source; we left this information, since the presence of an author's name can be a strong credibility indicator. However, we anonymised the identities of the authors of discussion posts included in the dataset.
The main identified ethical issue related to the presented dataset lies in the risk of mislabelling an article as supporting a false fact-checked claim and, to a lesser extent, in mislabelling an article as not containing a false claim or not supporting it when it actually does. To minimise these risks, we developed a labelling methodology and require agreement between at least two independent annotators to assign a claim presence or article stance label to an article. It is also worth noting that we do not label an article as a whole as false or true. Nevertheless, we provide partial article-claim pair veracities based on the combination of claim presence and article stance labels.
As to the veracity labels of the fact-checked claims and the credibility (reliability) labels of the articles' sources, we take these from the fact-checking sites and external listings such as Media Bias/Fact Check as they are and refer to their methodologies for more details on how they were established.
Lastly, the dataset also contains automatically predicted labels of claim presence and article stance, produced by our baselines described in the next section. These methods have their limitations and achieve only a certain accuracy, as reported in the associated paper. This should be taken into account when interpreting the predicted labels.
Reporting mistakes in the dataset
Considerable mistakes in the raw collected data or in the manual annotations should be reported by creating a new issue in the accompanying GitHub repository. Alternatively, general enquiries or requests can be sent to info [at] kinit.sk.
Dataset structure
Raw data
First, the dataset contains so-called raw data, i.e., data extracted by the web monitoring module of the Monant platform and stored in exactly the same form as they appear on the original websites. Raw data consist of articles from news sites and blogs (e.g., naturalnews.com), discussions attached to such articles, and fact-checking articles from fact-checking portals (e.g., snopes.com). In addition, the dataset contains feedback (numbers of likes, shares, and comments) provided by users on the social network Facebook, which is regularly extracted for all news/blog articles.
Raw data are contained in these CSV files (and corresponding REST API endpoints):
Note: Personal information about discussion posts' authors (name, website, gravatar) is anonymised.
Annotations
Second, the dataset contains so-called annotations. Entity annotations describe individual raw data entities (e.g., an article or a source). Relation annotations describe a relation between two such entities.
Each annotation is described by the following attributes:
At the same time, annotations are associated with a particular object identified by:
entity_type (in case of entity annotations) or source_entity_type and target_entity_type (in case of relation annotations). Possible values: sources, articles, fact-checking-articles.
entity_id (in case of entity annotations) or source_entity_id and target_entity_id (in case of relation annotations).
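As an illustration of how the relation annotations might be filtered, consider the following hedged pandas sketch. The CSV file name and the "annotation_type"/"value" column names are hypothetical; the entity-type and entity-id columns follow the schema described above.

import pandas as pd

rel = pd.read_csv("relation_annotations.csv")  # hypothetical file name

# Keep claim-presence mappings between articles and fact-checked claims.
claim_presence = rel[
    (rel["annotation_type"] == "claim-presence")           # hypothetical column/value
    & (rel["source_entity_type"] == "articles")
    & (rel["target_entity_type"] == "fact-checking-articles")
]
print(claim_presence[["source_entity_id", "target_entity_id", "value"]].head())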
ODC Public Domain Dedication and Licence (PDDL) v1.0: http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
A. SUMMARY This dataset includes COVID-19 tests by resident neighborhood and specimen collection date (the day the test was collected). Specifically, this dataset includes tests of San Francisco residents who listed a San Francisco home address at the time of testing. These resident addresses were then geo-located and mapped to neighborhoods. The resident address associated with each test is hand-entered and susceptible to errors, so neighborhood data should be interpreted as an approximation, not as a precise or comprehensive total.
In recent months, about 5% of tests are missing addresses and therefore cannot be included in any neighborhood totals. In earlier months, more tests were missing address data. Because of this high percentage of tests missing resident address data, the neighborhood testing data for March, April, and May should be interpreted with caution (see below).
Percentage of tests missing address information, by month in 2020:
Mar - 33.6%
Apr - 25.9%
May - 11.1%
Jun - 7.2%
Jul - 5.8%
Aug - 5.4%
Sep - 5.1%
Oct (Oct 1-12) - 5.1%
To protect the privacy of residents, the City does not disclose the number of tests in neighborhoods with resident populations of fewer than 1,000 people. These neighborhoods are omitted from the data (they include Golden Gate Park, John McLaren Park, and Lands End).
Tests for residents that listed a Skilled Nursing Facility as their home address are not included in this neighborhood-level testing data. Skilled Nursing Facilities have required and repeated testing of residents, which would change neighborhood trends and not reflect the broader neighborhood's testing data.
These data were de-duplicated by individual and date, so if a person was tested multiple times on different dates, all of those tests are included in this dataset (each on the day it was collected).
The total number of positive test results is not equal to the total number of COVID-19 cases in San Francisco. During case investigation, some test results are found to be for persons living outside of San Francisco, and some people in San Francisco are tested multiple times (which is common). To see the number of new confirmed cases by neighborhood, reference this map: https://sf.gov/data/covid-19-case-maps#new-cases-maps
B. HOW THE DATASET IS CREATED COVID-19 laboratory test data is based on electronic laboratory test reports. Deduplication, quality assurance measures and other data verification processes maximize accuracy of laboratory test information. All testing data is then geo-coded by resident address. Then data is aggregated by analysis neighborhood and specimen collection date.
Data are prepared by close of business Monday through Saturday for public display.
C. UPDATE PROCESS Updates automatically at 05:00 Pacific Time each day. Redundant runs are scheduled at 07:00 and 09:00 in case of pipeline failure.
D. HOW TO USE THIS DATASET San Francisco population estimates for geographic regions can be found in a view based on the San Francisco Population and Demographic Census dataset. These population estimates are from the 2016-2020 5-year American Community Survey (ACS).
Due to the high degree of variation in the time needed by different labs to complete tests, there is a delay in this reporting. On March 24, the Health Officer ordered all labs in the City to report complete COVID-19 testing information to the local and state health departments.
In order to track trends over time, a data user can analyze this data by "specimen_collection_date".
Calculating Percent Positivity: The positivity rate is the percentage of tests that return a positive result for COVID-19 (positive tests divided by the sum of positive and negative tests). Indeterminate results, which could not conclusively determine whether the COVID-19 virus was present, are not included in the calculation of percent positivity. Percent positivity indicates how widespread COVID-19 is in San Francisco, and it helps public health officials determine whether we are testing enough given the number of people who are testing positive. When there are fewer than 20 positive tests for a given neighborhood and time period, the positivity rate is not calculated for the public tracker because rates based on small test counts are less reliable.
Calculating Testing Rates: To calculate the testing rate per 10,000 residents, divide the total number of tests collected (positive, negative, and indeterminate results) for a neighborhood by the total number of residents who live in that neighborhood (included in the dataset), then multiply by 10,000. When there are fewer than 20 total tests for a given neighborhood and time period, the testing rate is not calculated for the public tracker because rates based on small test counts are less reliable.
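The two calculations above can be expressed directly in code. The following is a minimal sketch, not an official SFDPH implementation; it includes the under-20 suppression rule described above.

from typing import Optional

def percent_positivity(positive: int, negative: int) -> Optional[float]:
    # Positive tests divided by (positive + negative); indeterminate results
    # are excluded. Returns None when positives < 20, matching the tracker's
    # small-count suppression.
    if positive < 20:
        return None
    return 100.0 * positive / (positive + negative)

def testing_rate_per_10k(total_tests: int, population: int) -> Optional[float]:
    # Tests (positive + negative + indeterminate) per 10,000 residents.
    # Returns None when total tests < 20.
    if total_tests < 20:
        return None
    return 10_000.0 * total_tests / population

print(percent_positivity(positive=150, negative=2850))            # 5.0
print(testing_rate_per_10k(total_tests=3000, population=25000))   # 1200.0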
Read more about how this data is updated and validated daily: https://sf.gov/information/covid-19-data-questions
E. CHANGE LOG
This graffiti-centred change detection dataset was developed in the context of INDIGO, a research project focusing on the documentation, analysis and dissemination of graffiti along Vienna's Donaukanal. The dataset aims to support the development and assessment of change detection algorithms.
The dataset was collected from a test site approximately 50 meters in length along Vienna's Donaukanal on 11 days between 2022/10/21 and 2022/12/01. Various cameras with different settings were used, resulting in a total of 29 data collection sessions or "epochs" (see "EpochIDs.jpg" for details). Each epoch contains 17 images generated from 29 distinct 3D models with different textures. In total, the dataset comprises 6,902 unique image pairs, along with corresponding reference change maps. Additionally, exclusion masks are provided to ignore parts of the scene that might be irrelevant, such as the background.
To summarise, the dataset, labelled as "Data.zip," includes the following:
Image acquisition involved the use of two different camera setups. The first two datasets (ID 1 and 2; cf. "EpochIDs.jpg") were obtained using a Nikon Z 7II camera with a pixel count of 45.4 MP, paired with a Nikon NIKKOR Z 20 mm lens. For the remaining image datasets (ID 3-29), a triple GoPro setup was employed. This triple setup featured three GoPro cameras, comprising two GoPro HERO 10 cameras and one GoPro HERO 11, all securely mounted within a frame. This triple-camera setup was utilised on nine different days with varying camera settings, resulting in the acquisition of 27 image datasets in total (nine days with three datasets each).
The "Data.zip" file contains two subfolders:
A detailed dataset description (including detailed explanations of the data creation) is part of a journal paper currently in preparation. The paper will be linked here for further clarification as soon as it is available.
Due to the nature of the three image types, this dataset comes with two licenses:
Every synthetic image, change map and mask has this licensing information embedded as IPTC photo metadata. In addition, the images' IPTC metadata also provide a short image description, the image creator and the creator's identity (in the form of an ORCiD).
If there are any questions, problems or suggestions for the dataset or the description, please do not hesitate to contact the corresponding author, Benjamin Wild.
On the continental scale, climate is an important determinant of the distributions of plant taxa and ecoregions. To quantify and depict the relations between specific climate variables and these distributions, we placed modern climate and plant taxa distribution data on an approximately 25-kilometer (km) equal-area grid with 27,984 points that cover Canada and the continental United States (Thompson and others, 2015). The gridded climatic data include annual and monthly temperature and precipitation, as well as bioclimatic variables (growing degree days, mean temperatures of the coldest and warmest months, and a moisture index) based on 1961-1990 30-year mean values from the University of East Anglia (UK) Climatic Research Unit (CRU) CL 2.0 dataset (New and others, 2002), and absolute minimum and maximum temperatures for 1951-1980 interpolated from climate-station data (WeatherDisc Associates, 1989). As described below, these data were used to produce portions of the "Atlas of relations between climatic parameters and distributions of important trees and shrubs in North America" (hereafter referred to as "the Atlas"; Thompson and others, 1999a, 1999b, 2000, 2006, 2007, 2012a, 2015).
Evolution of the Atlas Over the 16 Years Between Volumes A & B and G: The Atlas evolved through time as technology improved and our knowledge expanded. The climate data employed in the first five Atlas volumes were replaced by more standard and better documented data in the last two volumes (Volumes F and G; Thompson and others, 2012a, 2015). Similarly, the plant distribution data used in Volumes A through D (Thompson and others, 1999a, 1999b, 2000, 2006) were improved for the latter volumes. However, the digitized ecoregion boundaries used in Volume E (Thompson and others, 2007) remain unchanged. Also, as we and others used the data in Atlas Volumes A through E, we came to realize that the plant distribution and climate data for areas south of the US-Mexico border were not of sufficient quality or resolution for our needs, and these data are not included in this data release.
The data in this data release are provided in comma-separated values (.csv) files. We also provide netCDF (.nc) files containing the climate and bioclimatic data, grouped taxa and species presence-absence data, and ecoregion assignment data for each grid point (but not the country, state, province, and county assignment data for each grid point, which are available in the .csv files). The netCDF files contain updated Albers conical equal-area projection details and more precise grid-point locations. When the original approximately 25-km equal-area grid was created (ca. 1990), it was designed to be registered with existing data sets, and only 3 decimal places were recorded for the grid-point latitude and longitude values (these original 3-decimal place latitude and longitude values are in the .csv files). In addition, the Albers conical equal-area projection used for the grid was modified to match projection irregularities of the U.S. Forest Service atlases (e.g., Little, 1971, 1976, 1977) from which plant taxa distribution data were digitized. For the netCDF files, we have updated the Albers conical equal-area projection parameters and recalculated the grid-point latitudes and longitudes to 6 decimal places.
The additional precision in the location data produces maximum differences between the 6-decimal place and the original 3-decimal place values of up to 0.00266 degrees longitude (approximately 143.8 m along the projection x-axis of the grid) and up to 0.00123 degrees latitude (approximately 84.2 m along the projection y-axis of the grid). The maximum straight-line distance between a 3-decimal place and 6-decimal place grid-point location is 144.2 m. Note that we have not regridded the elevation, climate, grouped taxa and species presence-absence data, or ecoregion data to the locations defined by the new 6-decimal place latitude and longitude data. For example, the climate data described in the Atlas publications were interpolated to the grid-point locations defined by the original 3-decimal place latitude and longitude values. Interpolating the data to the 6-decimal place latitude and longitude values would in many cases not result in changes to the reported values, and for other grid points the changes would be small and insignificant. Similarly, if the digitized Little (1971, 1976, 1977) taxa distribution maps were regridded using the 6-decimal place latitude and longitude values, the changes to the gridded distributions would be minor, with a small number of grid points along the edge of a taxon's digitized distribution potentially changing value from taxa "present" to taxa "absent" (or vice versa). These changes should be considered within the spatial margin of error for the taxa distributions, which are based on hand-drawn maps with the distributions evidently generalized, or represented by a small, filled circle, and these distributions were subsequently hand digitized. Users wanting to use data that exactly match the data in the Atlas volumes should use the 3-decimal place latitude and longitude data provided in the .csv files in this data release to represent the center point of each grid cell. Users for whom an offset of up to 144.2 m from the original grid-point location is acceptable (e.g., users investigating continental-scale questions) or who want to easily visualize the data may want to use the data associated with the 6-decimal place latitude and longitude values in the netCDF files.
The variable names in the netCDF files generally match those in the data release .csv files, except where the .csv file variable name contains a forward slash, colon, period, or comma (i.e., "/", ":", ".", or ","). In the netCDF file variable short names, the forward slashes are replaced with an underscore symbol (i.e., "_") and the colons, periods, and commas are deleted. In the netCDF file variable long names, the punctuation in the name matches that in the .csv file variable names. The "country", "state, province, or territory", and "county" data in the .csv files are not included in the netCDF files.
Data included in this release:
- Geographic scope. The gridded data cover an area that we labelled as "CANUSA", which includes Canada and the USA (excluding Hawaii, Puerto Rico, and other oceanic islands). Note that the maps displayed in the Atlas volumes are cropped at their northern edge and do not display the full northern extent of the data included in this data release.
- Elevation. The elevation data were regridded from the ETOPO5 data set (National Geophysical Data Center, 1993). There were 35 coastal grid points in our CANUSA study area grid for which the regridded elevations were below sea level; these grid points were assigned missing elevation values (i.e., elevation = 9999). The grid points with missing elevation values occur in five coastal areas: (1) near San Diego (California, USA; 1 grid point), (2) Vancouver Island (British Columbia, Canada) and the Olympic Peninsula (Washington, USA; 2 grid points), (3) Haida Gwaii (formerly the Queen Charlotte Islands, British Columbia, Canada) and southeast Alaska (USA; 9 grid points), (4) the Canadian Arctic Archipelago (22 grid points), and (5) Newfoundland (Canada; 1 grid point).
- Climate. The gridded climatic data provided here are based on the 1961-1990 30-year mean values from the University of East Anglia (UK) Climatic Research Unit (CRU) CL 2.0 dataset (New and others, 2002), and include annual and monthly temperature and precipitation. The CRU CL 2.0 data were interpolated onto the approximately 25-km grid using geographically-weighted regression, incorporating local lapse-rate estimation and correction. Additional bioclimatic variables (growing degree days on a 5 degrees Celsius base, mean temperatures of the coldest and warmest months, and a moisture index calculated as actual evapotranspiration divided by potential evapotranspiration) were calculated using the interpolated CRU CL 2.0 data. Also included are absolute minimum and maximum temperatures for 1951-1980 interpolated in a similar fashion from climate-station data (WeatherDisc Associates, 1989). These climate and bioclimate data were used in Atlas Volumes F and G (see Thompson and others, 2015, for a description of the methods used to create the gridded climate data). Note that for grid points with missing elevation values (i.e., elevation values equal to 9999), climate data were created using an elevation value of -120 meters. Users may want to exclude these climate data from their analyses (see the Usage Notes section in the data release readme file).
- Plant distributions. The gridded plant distribution data align with Atlas Volume G (Thompson and others, 2015). Plant distribution data on the grid include 690 species, as well as 67 groups of related species and genera, and are based on U.S. Forest Service atlases (e.g., Little, 1971, 1976, 1977), regional atlases (e.g., Benson and Darrow, 1981), and new maps based on information available from herbaria and other online and published sources (for a list of sources, see Tables 3 and 4 in Thompson and others, 2015). See the "Notes" column in Table 1 (https://pubs.usgs.gov/pp/p1650-g/table1.html) and Table 2 (https://pubs.usgs.gov/pp/p1650-g/table2.html) in Thompson and others (2015) for important details regarding the species and grouped taxa distributions.
- Ecoregions. The ecoregion gridded data are the same as in Atlas Volumes D and E (Thompson and others, 2006, 2007), and include three different systems, Bailey's ecoregions (Bailey, 1997, 1998), WWF's ecoregions (Ricketts and others, 1999), and Kuchler's potential natural vegetation regions (Kuchler, 1985), each based on a distinctive approach to categorizing ecoregions. For the Bailey and WWF ecoregions for North America and the Kuchler potential natural vegetation regions for the contiguous United States (i.e.,
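The .csv-to-netCDF short-name rule just described is mechanical and can be expressed as a small helper function. This is a sketch; the example variable name is hypothetical.

def netcdf_short_name(csv_name: str) -> str:
    # Forward slashes become underscores; colons, periods, and commas
    # are deleted, per the naming convention described above.
    name = csv_name.replace("/", "_")
    for ch in ":.,":
        name = name.replace(ch, "")
    return name

# Hypothetical .csv variable name, for illustration only.
print(netcdf_short_name("Abies.balsamea/Abies.spp"))  # -> "Abiesbalsamea_Abiesspp"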
Last Revised: February 2016
Map Information
This nowCOAST™ time-enabled map service provides maps depicting the latest global forecast guidance of water currents, water temperature, and salinity at forecast projections of 0, 12, 24, 36, 48, 60, 72, 84, and 96 hours from the NWS/NCEP Global Real-Time Ocean Forecast System (GRTOFS). The surface water current velocity maps display direction using white or black streaklets; the magnitude of the current is indicated by the length and width of the streaklet. The maps of the GRTOFS surface forecast guidance are updated on the nowCOAST™ map service once per day.
For more detailed information about layer update frequency and timing, please reference the nowCOAST™ Dataset Update Schedule.
Background Information
GRTOFS is based on the HYbrid Coordinate Ocean Model (HYCOM), an eddy-resolving, hybrid-coordinate numerical ocean prediction model. GRTOFS has global coverage, a horizontal resolution of 1/12 degree, and 32 hybrid vertical layers. It has one forecast cycle per day (at 0000 UTC) which generates forecast guidance out to 144 hours (6 days); however, nowCOAST™ only provides guidance out to 96 hours (4 days). The forecast cycle uses 3-hourly momentum and radiation fluxes along with precipitation predictions from the NCEP Global Forecast System (GFS). Each forecast cycle is preceded by a 48-hour nowcast cycle. The nowcast cycle uses daily initial 3-D fields from the NAVOCEANO operational HYCOM-based forecast system, which assimilates in situ profiles of temperature and salinity from a variety of sources as well as remotely sensed SST, SSH, and sea-ice concentrations. GRTOFS was developed by the NCEP/EMC Marine Modeling and Analysis Branch. GRTOFS is run once per day (0000 UTC forecast cycle) on the NOAA Weather and Climate Operational Supercomputer System (WCOSS) operated by NWS/NCEP Central Operations.
The maps are generated using a visualization technique developed by the Data Visualization Research Lab at The University of New Hampshire's Center for Coastal and Ocean Mapping (http://www.ccom.unh.edu/vislab/). The method combines two techniques. First, equally spaced streamlines are computed in the flow field using Jobard and Lefer's (1997) algorithm. Second, a series of "streaklets" is rendered head to tail along each streamline to show the direction of flow. Each streaklet varies along its length in size, color, and transparency, using a method developed by Fowler and Ware (1989) and later refined by Pete Mitchell and Colin Ware (Mitchell, 2007).
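As a rough illustration of the streaklet idea (a toy sketch, not nowCOAST's actual renderer), the following snippet traces a streamline through a simple synthetic flow field and draws head-to-tail streaklets whose width and opacity taper toward the tail so the direction of flow is readable.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

def velocity(p):
    # Simple rotational flow field, for illustration only.
    x, y = p
    return np.array([-y, x])

# Trace one streamline with forward Euler integration.
pts = [np.array([1.0, 0.0])]
for _ in range(400):
    pts.append(pts[-1] + 0.01 * velocity(pts[-1]))
pts = np.array(pts)

fig, ax = plt.subplots()
seg_len = 40  # points per streaklet
for start in range(0, len(pts) - seg_len, seg_len):
    chunk = pts[start:start + seg_len + 1]
    segs = np.stack([chunk[:-1], chunk[1:]], axis=1)
    # Taper width and opacity from tail to head so direction is readable.
    t = np.linspace(0.1, 1.0, len(segs))
    lc = LineCollection(segs, linewidths=3 * t, colors=[(0, 0, 0, a) for a in t])
    ax.add_collection(lc)
ax.set_aspect("equal")
ax.autoscale()
plt.show()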
Time Information
This map service is time-enabled, meaning that each individual layer contains time-varying data and can be utilized by clients capable of making map requests that include a time component.
In addition to ArcGIS Server REST access, time-enabled OGC WMS 1.3.0 access is also provided by this service.
This particular service can be queried with or without the use of a time component. If the time parameter is specified in a request, the data or imagery most relevant to the provided time value, if any, will be returned. If the time parameter is not specified in a request, the latest data or imagery valid for the present system time will be returned to the client. If the time parameter is not specified and no data or imagery is available for the present time, no data will be returned.
This service is configured with time coverage support, meaning that the service will always return the most relevant available data, if any, to the specified time value. For example, if the service contains data valid today at 12:00 and 12:10 UTC, but a map request specifies a time value of today at 12:07 UTC, the data valid at 12:10 UTC will be returned to the user. This behavior allows more flexibility for users, especially when displaying multiple time-enabled layers together despite slight differences in temporal resolution or update frequency.
When interacting with this time-enabled service, only a single instantaneous time value should be specified in each request. If instead a time range is specified in a request (i.e. separate start time and end time values are given), the data returned may be different than what was intended.
Care must be taken to ensure the time value specified in each request falls within the current time coverage of the service. Because this service is frequently updated as new data becomes available, the user must periodically determine the service's time extent. However, due to software limitations, the time extent of the service and map layers as advertised by ArcGIS Server does not always provide the most up-to-date start and end times of available data. Instead, users have three options for determining the latest time extent of the service:
1. Issue a returnUpdates=true request (ArcGIS REST protocol only) for an individual layer or for the service itself, which will return the current start and end times of available data in epoch time format (milliseconds since 00:00 UTC, January 1, 1970). To see an example, click on the "Return Updates" link at the bottom of the REST Service page under "Supported Operations". Refer to the ArcGIS REST API Map Service Documentation for more information.
2. Issue an Identify (ArcGIS REST) or GetFeatureInfo (WMS) request against the proper layer corresponding with the target dataset. For raster data, this would be the "Image Footprints with Time Attributes" layer in the same group as the target "Image" layer being displayed. For vector (point, line, or polygon) data, the target layer can be queried directly. In either case, the attributes returned for the matching raster(s) or vector feature(s) will include the following:
   validtime: Valid timestamp.
   starttime: Display start time.
   endtime: Display end time.
   reftime: Reference time (sometimes referred to as issuance time, cycle time, or initialization time).
   projmins: Number of minutes from reference time to valid time.
   desigreftime: Designated reference time; used as a common reference time for all items when individual reference times do not match.
   desigprojmins: Number of minutes from designated reference time to valid time.
3. Query the nowCOAST™ LayerInfo web service, which has been created to provide additional information about each data layer in a service, including a list of all available "time stops" (i.e., "valid times"), individual timestamps, or the valid time of a layer's latest available data (i.e., "Product Time"). For more information about the LayerInfo web service, including examples of various types of requests, refer to the nowCOAST™ LayerInfo Help Documentation.
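For instance, option 1 above can be scripted. The sketch below uses a hypothetical service URL (take the real nowCOAST endpoint from the service directory); returnUpdates=true and the time parameter on export are standard ArcGIS REST Map Service conventions.

import requests

# Hypothetical service URL, for illustration only.
SERVICE = "https://example.com/arcgis/rest/services/nowcoast_sample/MapServer"

# Current start/end of available data, in milliseconds since 1970-01-01 UTC.
updates = requests.get(SERVICE, params={"f": "json", "returnUpdates": "true"}).json()
start_ms, end_ms = updates["timeExtent"]

# Request a map image for a single instant (here, the latest available time).
img = requests.get(f"{SERVICE}/export", params={
    "f": "image",
    "bbox": "-90,20,-60,45",   # lon/lat extent, for illustration
    "bboxSR": "4326",
    "time": str(end_ms),       # single instantaneous time value
})
open("latest_guidance.png", "wb").write(img.content)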
References
Fowler, D. and C. Ware, 1989: Strokes for Representing Vector Field Maps. Proceedings: Graphics Interface '89, 249-253.
Jobard, B. and W. Lefer, 1997: Creating Evenly Spaced Streamlines of Arbitrary Density. Proceedings: Eurographics Workshop on Visualization in Scientific Computing, 43-55.
Mitchell, P.W., 2007: The Perceptual Optimization of 2D Flow Visualizations Using Human-in-the-Loop Local Hill Climbing. Master's thesis, Department of Computer Science, University of New Hampshire.
NWS, 2013: About Global RTOFS. NCEP/EMC/MMAB, College Park, MD (available at http://polar.ncep.noaa.gov/global/about/).
Chassignet, E.P., H.E. Hurlburt, E.J. Metzger, O.M. Smedstad, J. Cummings, G.R. Halliwell, R. Bleck, R. Baraille, A.J. Wallcraft, C. Lozano, H.L. Tolman, A. Srinivasan, S. Hankin, P. Cornillon, R. Weisberg, A. Barth, R. He, F. Werner, and J. Wilkin, 2009: U.S. GODAE: Global Ocean Prediction with the HYbrid Coordinate Ocean Model (HYCOM). Oceanography, 22(2), 64-75.
Mehra, A., I. Rivin, H. Tolman, T. Spindler, and B. Balasubramaniyan, 2011: A Real-Time Operational Global Ocean Forecast System. Poster, GODAE OceanView-GSOP-CLIVAR Workshop on Observing System Evaluation and Intercomparisons, Santa Cruz, CA.
Overview
The Office of the Geographer and Global Issues at the U.S. Department of State produces the Large Scale International Boundaries (LSIB) dataset. The current edition is version 11.4 (published 24 February 2025). The 11.4 release contains updated boundary lines and data refinements designed to extend the functionality of the dataset. These data and generalized derivatives are the only international boundary lines approved for U.S. Government use. The contents of this dataset reflect U.S. Government policy on international boundary alignment, political recognition, and dispute status. They do not necessarily reflect de facto limits of control.
National Geospatial Data Asset
This dataset is a National Geospatial Data Asset (NGDAID 194) managed by the Department of State. It is a part of the International Boundaries Theme created by the Federal Geographic Data Committee.
Dataset Source Details
Sources for these data include treaties, relevant maps, and data from boundary commissions, as well as national mapping agencies. Where available and applicable, the dataset incorporates information from courts, tribunals, and international arbitrations. The research and recovery process includes analysis of satellite imagery and elevation data. Due to the limitations of source materials and processing techniques, most lines are within 100 meters of their true position on the ground.
Cartographic Visualization
The LSIB is a geospatial dataset that, when used for cartographic purposes, requires additional styling. The LSIB download package contains example style files for commonly used software applications. The attribute table also contains embedded information to guide the cartographic representation. Additional discussion of these considerations can be found in the Use of Core Attributes in Cartographic Visualization section below. Additional cartographic information pertaining to the depiction and description of international boundaries or areas of special sovereignty can be found in Guidance Bulletins published by the Office of the Geographer and Global Issues: https://data.geodata.state.gov/guidance/index.html
Contact
Direct inquiries to internationalboundaries@state.gov. Direct download: https://data.geodata.state.gov/LSIB.zip
Attribute Structure
The dataset uses the following attributes, divided into two categories:
ATTRIBUTE NAME | ATTRIBUTE STATUS
CC1 | Core
CC1_GENC3 | Extension
CC1_WPID | Extension
COUNTRY1 | Core
CC2 | Core
CC2_GENC3 | Extension
CC2_WPID | Extension
COUNTRY2 | Core
RANK | Core
LABEL | Core
STATUS | Core
NOTES | Core
LSIB_ID | Extension
ANTECIDS | Extension
PREVIDS | Extension
PARENTID | Extension
PARENTSEG | Extension
These attributes have external data sources that update separately from the LSIB:
ATTRIBUTE NAME | EXTERNAL SOURCE
CC1 | GENC
CC1_GENC3 | GENC
CC1_WPID | World Polygons
COUNTRY1 | DoS Lists
CC2 | GENC
CC2_GENC3 | GENC
CC2_WPID | World Polygons
COUNTRY2 | DoS Lists
LSIB_ID | BASE
ANTECIDS | BASE
PREVIDS | BASE
PARENTID | BASE
PARENTSEG | BASE
The core attributes listed above describe the boundary lines contained within the LSIB dataset. Removal of core attributes from the dataset will change the meaning of the lines. An attribute status of "Extension" represents a field containing data interoperability information. Other attributes not listed above include "FID", "Shape_length" and "Shape." These are components of the shapefile format and do not form an intrinsic part of the LSIB.
Core Attributes
The eight core attributes listed above contain unique information which, when combined with the line geometry, comprise the LSIB dataset. These core attributes are further divided into Country Code and Name Fields and Descriptive Fields.
Country Code and Country Name Fields
The "CC1" and "CC2" fields are machine-readable fields that contain political entity codes. These are two-character codes derived from the Geopolitical Entities, Names, and Codes Standard (GENC), Edition 3 Update 18. The "CC1_GENC3" and "CC2_GENC3" fields contain the corresponding three-character GENC codes and are extension attributes discussed below. The codes "Q2" or "QX2" denote a line in the LSIB representing a boundary associated with areas not contained within the GENC standard.
The "COUNTRY1" and "COUNTRY2" fields contain the names of corresponding political entities. These fields contain names approved by the U.S. Board on Geographic Names (BGN) as incorporated in the "Independent States in the World" and "Dependencies and Areas of Special Sovereignty" lists maintained by the Department of State. To ensure maximum compatibility, names are presented without diacritics and certain names are rendered using common cartographic abbreviations. Names for lines associated with the code "Q2" are descriptive and not necessarily BGN-approved. Names rendered in all CAPITAL LETTERS denote independent states. Names rendered in normal text represent dependencies, areas of special sovereignty, or are otherwise presented for the convenience of the user.
Descriptive Fields
The following text fields are a part of the core attributes of the LSIB dataset and do not update from external sources. They provide additional information about each of the lines and are as follows:
ATTRIBUTE NAME | CONTAINS NULLS
RANK | No
STATUS | No
LABEL | Yes
NOTES | Yes
Neither the "RANK" nor "STATUS" fields contain null values; the "LABEL" and "NOTES" fields do. The "RANK" field is a numeric expression of the "STATUS" field. Combined with the line geometry, these fields encode the views of the United States Government on the political status of the boundary line.
RANK | STATUS
1 | International Boundary
2 | Other Line of International Separation
3 | Special Line
A value of "1" in the "RANK" field corresponds to an "International Boundary" value in the "STATUS" field. Values of "2" and "3" correspond to "Other Line of International Separation" and "Special Line," respectively. The "LABEL" field contains required text to describe the line segment on all finished cartographic products, including but not limited to print and interactive maps. The "NOTES" field contains an explanation of special circumstances modifying the lines. This information can pertain to the origins of the boundary lines, limitations regarding the purpose of the lines, or the original source of the line.
Use of Core Attributes in Cartographic Visualization
Several of the core attributes provide information required for the proper cartographic representation of the LSIB dataset. The cartographic usage of the LSIB requires a visual differentiation between the three categories of boundary lines. Specifically, this differentiation must be between: International Boundaries (Rank 1); Other Lines of International Separation (Rank 2); and Special Lines (Rank 3). Rank 1 lines must be the most visually prominent. Rank 2 lines must be less visually prominent than Rank 1 lines. Rank 3 lines must be shown in a manner visually subordinate to Ranks 1 and 2.
Where scale permits, Rank 2 and 3 lines must be labeled in accordance with the "LABEL" field. Data marked with a Rank 2 or 3 designation do not necessarily correspond to a disputed boundary. Please consult the style files in the download package for examples of this depiction. The requirement to incorporate the contents of the "LABEL" field on cartographic products is scale dependent. If a label is legible at the scale of a given static product, a proper use of this dataset would encourage the application of that label. Using the contents of the "COUNTRY1" and "COUNTRY2" fields in the generation of a line segment label is not required. The "STATUS" field contains the preferred description for the three LSIB line types when they are incorporated into a map legend but is otherwise not to be used for labeling. Use of the "CC1," "CC1_GENC3," "CC2," "CC2_GENC3," "RANK," or "NOTES" fields for cartographic labeling purposes is prohibited.
Extension Attributes
Certain elements of the attributes within the LSIB dataset extend data functionality to make the data more interoperable or to provide clearer linkages to other datasets. The fields "CC1_GENC3" and "CC2_GENC3" contain the three-character GENC codes corresponding to the "CC1" and "CC2" attributes. The code "QX2" is the three-character counterpart of the code "Q2," which denotes a line in the LSIB representing a boundary associated with a geographic area not contained within the GENC standard. To allow for linkage between individual lines in the LSIB and the World Polygons dataset, the "CC1_WPID" and "CC2_WPID" fields contain a version 4 Universally Unique Identifier (UUID), which provides a stable description of each geographic entity in a boundary pair relationship. Each UUID corresponds to a geographic entity listed in the World Polygons dataset. Five additional fields in the LSIB expand on the UUID concept and either describe features that have changed across space and time or indicate relationships between previous versions of the feature. The "LSIB_ID" attribute is a UUID value that defines a specific instance of a feature. Any change to the feature in a lineset requires a new "LSIB_ID." The "ANTECIDS," or antecedent ID, is a UUID that references line geometries from which a given line is descended in time. It is used when there is a feature that is entirely new, not when there is a new version of a previous feature. This is generally used to reference countries that have dissolved. The "PREVIDS," or previous ID, is a UUID field that contains old versions of a line. This is an additive field that houses all previous IDs. A new version of a feature is defined by any change to the
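As an illustration of the rank-based visual hierarchy described above, here is a minimal geopandas/matplotlib sketch. This is not the official styling: the file name is hypothetical, and the line weights are arbitrary choices that merely respect the Rank 1 > Rank 2 > Rank 3 prominence rule.

import geopandas as gpd
import matplotlib.pyplot as plt

lsib = gpd.read_file("LSIB.shp")  # hypothetical local copy of the download

style = {  # RANK -> (line width, line style)
    1: (1.6, "solid"),   # International Boundary: most prominent
    2: (1.0, "dashed"),  # Other Line of International Separation
    3: (0.6, "dotted"),  # Special Line: visually subordinate
}
fig, ax = plt.subplots(figsize=(12, 6))
for rank, (width, dash) in style.items():
    lsib[lsib["RANK"] == rank].plot(ax=ax, linewidth=width, linestyle=dash, color="black")
plt.show()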
Soil is the foundation of life on earth. More living things by weight live in the soil than upon it. It determines what crops we can grow, what structures we can build, and what forests can take root.
This layer contains the chemical soil variable nitrogen. Nitrogen is an essential nutrient for sustaining life on Earth: it is a core component of amino acids, the building blocks of proteins, and of nucleic acids, the building blocks of genetic material (RNA and DNA).
This layer is a general, medium-scale global predictive soil layer suitable for global mapping and decision support. In many places samples of soils do not exist, so this map represents a prediction of what is most likely in that location. The predictions are made in six depth ranges by soilgrids.org, funded by ISRIC, based in Wageningen, Netherlands. Each 250 m pixel contains a value predicted for that area by soilgrids.org from the best available data worldwide. Data for nitrogen are provided at six depth ranges from the surface to 2 meters below the surface. Each variable and depth range may be accessed in the layer's multidimensional properties.
Dataset Summary
Phenomenon Mapped: Total nitrogen (N) in g/kg
Cell Size: 250 meters
Pixel Type: 32-bit float, converted from online data that is 16-bit unsigned integer
Coordinate System: Web Mercator Auxiliary Sphere, projected via nearest neighbor from Goode's homolosine land (250 m)
Extent: World land area except Antarctica
Visible Scale: All scales are visible
Number of Columns and Rows: 160300, 100498
Source: Soilgrids.org
Publication Date: May 2020
Data from the soilgrids.org mean predictions for nitrogen were used to create this layer. You may access nitrogen values in one of six depth ranges. To select one, choose the depth variable in the multidimensional selector in your map client.
Mean depth (cm) | Actual depth range of data
-2.5 | 0-5 cm
-10 | 5-15 cm
-22.5 | 15-30 cm
-45 | 30-60 cm
-80 | 60-100 cm
-150 | 100-200 cm
What can you do with this layer?
This layer is suitable for both visualization and analysis across the ArcGIS system. It can be combined with your data and other layers from the ArcGIS Living Atlas of the World in ArcGIS Online and ArcGIS Pro to create powerful web maps that can be used alone or in a story map or other application.
Because this layer is part of the ArcGIS Living Atlas of the World, it is easy to add to your map. In ArcGIS Online, you can add this layer to a map by selecting Add, then Browse Living Atlas Layers. A window will open. Type "world soils soilgrids" in the search box and browse to the layer. Select the layer, then click Add to Map. In ArcGIS Pro, open a map and select Add Data from the Map tab. Select Data at the top of the drop-down menu. The Add Data dialog box will open; on the left side of the box, expand Portal if necessary, then select Living Atlas. Type "world soils soilgrids" in the search box, browse to the layer, then click OK.
In ArcGIS Pro you can use the built-in raster functions, or create your own, to produce custom extracts of the data. Imagery layers provide fast, powerful inputs to geoprocessing tools, models, or Python scripts in Pro. Online, you can filter the layer to show subsets of the data using the filter button and the layer's built-in raster functions. This layer is part of the Living Atlas of the World that provides an easy way to explore the landscape layers and many other beautiful and authoritative maps on hundreds of topics.
More information about soilgrids layers
Answers to many questions may be found at the soilgrids.org (ISRIC) frequently asked questions (FAQ) page about the data. To make this layer, Esri reprojected the expected value of the ISRIC soil grids from the source projection (Goode's homolosine land, WKID 54052) to the Web Mercator projection, using nearest neighbor resampling, to facilitate online mapping. The resolution in the Web Mercator projection is the same as in the original projection, 250 m, but keep in mind that the original dataset has been reprojected to make this Web Mercator version. This multidimensional soil collection serves the mean or expected value for each soil variable as calculated by soilgrids.org. For all other distributions of the soil variable, download the data directly from soilgrids.org. The data are available in VRT format and may be converted to other image formats within ArcGIS Pro.
Accessing this layer's companion uncertainty layer
Because data quality varies worldwide, the uncertainty of the predicted value varies worldwide. A companion uncertainty layer exists for this layer, which you can use to qualify the values you see in this map for analysis. Choose a variable and depth in the multidimensional settings of your map client to access the companion uncertainty layer.
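As a small example of analysis outside the ArcGIS clients, the following sketch assumes you have exported one depth slice of this layer (say, the 0-5 cm range) to a local GeoTIFF; the file name is hypothetical and values are total nitrogen in g/kg.

import rasterio

with rasterio.open("nitrogen_0_5cm.tif") as src:  # hypothetical export
    n = src.read(1, masked=True)  # masked array honoring the NoData value

print(f"mean N: {n.mean():.2f} g/kg, max N: {n.max():.2f} g/kg")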