46 datasets found
  1. Data from: Data and Results for GIS-Based Identification of Areas that have...

    • catalog.data.gov
    • data.usgs.gov
    • +1 more
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Data and Results for GIS-Based Identification of Areas that have Resource Potential for Lode Gold in Alaska [Dataset]. https://catalog.data.gov/dataset/data-and-results-for-gis-based-identification-of-areas-that-have-resource-potential-for-lo
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Alaska
    Description

    This data release contains the analytical results and evaluated source data files of geospatial analyses for identifying areas in Alaska that may be prospective for different types of lode gold deposits, including orogenic, reduced-intrusion-related, epithermal, and gold-bearing porphyry. The spatial analysis is based on queries of statewide source datasets of aeromagnetic surveys, the Alaska Geochemical Database (AGDB3), the Alaska Resource Data File (ARDF), and the Alaska Geologic Map (SIM3340) within areas defined by 12-digit HUCs (subwatersheds) from the National Watershed Boundary dataset. The packages of files available for download are:

    1. LodeGold_Results_gdb.zip - The analytical results in geodatabase polygon feature classes, which contain the scores for each source dataset layer query, the accumulative score, and a designation of high, medium, or low potential and high, medium, or low certainty for a deposit type within each HUC. The data are described by FGDC metadata. An mxd file and cartographic feature classes are provided for display of the results in ArcMap. An included README file describes the complete contents of the zip file.

    2. LodeGold_Results_shape.zip - Copies of the results from the geodatabase, also provided in shapefile and CSV formats. The included README file describes the complete contents of the zip file.

    3. LodeGold_SourceData_gdb.zip - The source datasets in geodatabase and GeoTIFF format. Data layers include aeromagnetic surveys, AGDB3, ARDF, lithology from SIM3340, and HUC subwatersheds. The data are described by FGDC metadata. An mxd file and cartographic feature classes are provided for display of the source data in ArcMap. Also included are the Python scripts used to perform the analyses; users may modify the scripts to design their own analyses. The included README files describe the complete contents of the zip file and explain the usage of the scripts.

    4. LodeGold_SourceData_shape.zip - Copies of the geodatabase source dataset derivatives from ARDF and lithology from SIM3340 created for this analysis, also provided in shapefile and CSV formats. The included README file describes the complete contents of the zip file.
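The CSV copies of the results pair each HUC with per-layer scores and a potential/certainty rating, so they can be filtered without ArcMap. A minimal standard-library sketch follows; the column names (HUC12, DEPOSIT_TYPE, POTENTIAL) and the rows are assumptions for illustration only — check the FGDC metadata and the included README for the real schema.

```python
import csv
import io

# Hypothetical excerpt shaped like the results CSV; actual column names
# and values may differ -- consult the dataset's README and metadata.
sample = """HUC12,DEPOSIT_TYPE,SCORE,POTENTIAL,CERTAINTY
190102030101,orogenic,14,High,Medium
190102030102,epithermal,6,Low,Low
190102030103,orogenic,11,High,High
"""

def high_potential_hucs(csv_text, deposit_type):
    """Return HUC12 codes rated High potential for the given deposit type."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["HUC12"] for row in reader
            if row["DEPOSIT_TYPE"] == deposit_type
            and row["POTENTIAL"] == "High"]

print(high_potential_hucs(sample, "orogenic"))  # ['190102030101', '190102030103']
```

The same filtering could be done in ArcMap with a definition query; the CSV route is handy for scripted, repeatable analyses.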

  2. Data from: HarDWR - Harmonized Water Rights Records

    • osti.gov
    Updated Apr 25, 2024
    + more versions
    Cite
    USDOE Office of Science (SC), Biological and Environmental Research (BER) (2024). HarDWR - Harmonized Water Rights Records [Dataset]. http://doi.org/10.57931/2341234
    Explore at:
    Dataset updated
    Apr 25, 2024
    Dataset provided by
    United States Department of Energy (http://energy.gov/)
    MultiSector Dynamics - Living, Intuitive, Value-adding, Environment
    Description

    For a detailed description of the database of which this record is only one part, please see the HarDWR meta-record. Here we present a new dataset of western U.S. water rights records. This dataset provides consistent unique identifiers for each spatial unit of water management across the domain, unique identifiers for each water right record, and a consistent categorization scheme that puts each water right record into one of 7 broad use categories. These data were instrumental in conducting a study of the multi-sector dynamics of intersectoral water allocation changes through water markets (Grogan et al., in review). Specifically, the data were formatted for use as input to a process-based hydrologic model, WBM, with a water rights module (Grogan et al., in review). While this specific study motivated the development of the database presented here, western U.S. water management is a rich area of study (e.g., Anderson and Woosly, 2005; Tidwell, 2014; Null and Prudencio, 2016; Carney et al., 2021), so releasing this database publicly with documentation and usage notes will enable other researchers to do further work on water management in the U.S. west. The raw downloaded data for each state are described in Lisk et al. (in review), as well as here.

    The dataset is a series of files organized into state sub-directories. The first two characters of each file name are the abbreviation of the state whose data the file contains; the remaining text describes the contents of the file. Each file type is described in detail below.

    XXFullHarmonizedRights.csv: The combined groundwater and surface water records for each state; essentially, the merging of XXGroundwaterHarmonizedRights.csv and XXSurfaceWaterHarmonizedRights.csv by state. The columns are:
    state - The name of the state the data comes from.
    FIPS - The two-digit numeric state ID code.
    waterRightID - The unique identifying ID of the water right, the same identifier its state uses.
    priorityDate - The priority date associated with the right.
    origWaterUse - The original stated water use(s) from the state.
    waterUse - The water use category under the unified use categories established here.
    source - Whether the right is for surface water or groundwater.
    basinNum - The alpha-numeric identifier of the WMA the record belongs to.
    CFS - The maximum flow of the allocation in cubic feet per second (ft3 s-1).
    Arizona is unique among the states in that its surface and groundwater resources are managed with two different sets of boundaries. So, for Arizona, the basinNum column is missing and is replaced by two columns:
    surBasinNum - The alpha-numeric identifier of the surface water WMA the record belongs to.
    grdBasinNum - The alpha-numeric identifier of the groundwater WMA the record belongs to.

    XXStatePOD.shp: A shapefile identifying the location of the Points of Diversion for the state's water rights. Note that not all water right records in XXFullHarmonizedRights.csv have coordinates, so some may be missing from this file.

    XXStatePOU.shp: A shapefile containing the area(s) in which each water right is claimed to be used. Currently, only Idaho and Washington provided valid data for this file.

    XXGroundwaterHarmonizedRights.csv: Only the harmonized groundwater rights collected from each state. See XXFullHarmonizedRights.csv for details on the format.

    XXSurfaceWaterHarmonizedRights.csv: Only the harmonized surface water rights collected from each state. See XXFullHarmonizedRights.csv for details on the format.

    Additionally, one file, stateWMALabels.csv, is not stored within a sub-directory. While we have referred to the spatial boundaries each state uses to manage its water resources as WMAs, this term is not shared across all states; this file lists the proper name for each boundary set, by state.

    For those who may be interested in exploring our code in more depth, we also make available an internal data file for convenience. The file is in .RData format and contains everything described above, as well as some minor additional objects used within the code calculating the cumulative curves. For completeness, the objects in the .RData file are:
    states: A character vector of the state names for which data were collected. The index of a state name is also the index at which that state's data can be found in the following list objects. For example, if California is the third element of this object, the data for California will also be in the third element of each accompanying list.
    rightsByState_ground: A list of data frames with the cleaned groundwater rights collected from each state. This object holds the data exported to create the XXGroundwaterHarmonizedRights.csv files.
    rightsByState_surface: A list of data frames with the cleaned surface water rights collected from each state. This object holds the data exported to create the XXSurfaceWaterHarmonizedRights.csv files.
    fullRightsRecs: A list of the combined groundwater and surface water records for each state. This object holds the data exported to create the XXFullHarmonizedRights.csv files.
    projProj: The spatial projection used for map creation at the beginning of the project; specifically, the World Geodetic System (WGS84) as a coordinate reference system (CRS) string in PROJ.4 format.
    wmaStateLabel: The name and/or abbreviation for what each state legally calls its WMAs.
    h2oUseByState: A list of spatial polygon data frames containing the area(s) in which each water right is claimed to be used. Note that not all water right records have a listed area of use in this object; currently, only Idaho and Washington provided valid data.
    h2oDivByState: A list of spatial points data frames identifying the location of the Point of Diversion for each state's water rights. Note that not all water right records have a listed Point of Diversion in this object.
    spatialWMAByState: A list of spatial polygon data frames containing the spatial WMA boundaries for each state. The only data contained within the table are identifiers for each polygon. It is worth reiterating that Arizona is the only state in which the surface and groundwater WMA boundaries are not the same.
    wmaIDByState: A list containing the unique ID values of the WMAs for each state.
    plottingDim: A character vector used to inform mapping functions for internal map making. Each state is classified as either "tall" or "wide", to maximize space on a typical 8x11 page.

    The code related to the creation of this dataset can be viewed within HarDWR GitHub Repository/dataHarmonization.
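As a small illustration of working with the harmonized CSVs, the following standard-library sketch totals allocated flow (CFS) by the unified use category. The column names come from the description above; the rows themselves are invented, and note that Arizona files would replace basinNum with surBasinNum/grdBasinNum.

```python
import csv
import io
from collections import defaultdict

# Toy rows shaped like XXFullHarmonizedRights.csv (columns from the dataset
# description); the values are illustrative, not real water right records.
sample = """state,FIPS,waterRightID,priorityDate,origWaterUse,waterUse,source,basinNum,CFS
Idaho,16,ID-001,1905-06-01,IRRIGATION,irrigation,surface water,63,12.5
Idaho,16,ID-002,1950-03-15,STOCKWATER,livestock,groundwater,63,0.4
Idaho,16,ID-003,1910-08-20,IRRIGATION,irrigation,surface water,65,7.0
"""

def total_cfs_by_use(csv_text):
    """Sum the maximum allocated flow (CFS) per harmonized use category."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["waterUse"]] += float(row["CFS"])
    return dict(totals)

print(total_cfs_by_use(sample))  # {'irrigation': 19.5, 'livestock': 0.4}
```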

  3. Data and Results for GIS-based Identification of Areas that have Resource...

    • datasets.ai
    • data.usgs.gov
    • +1 more
    Updated Aug 13, 2024
    + more versions
    Cite
    Department of the Interior (2024). Data and Results for GIS-based Identification of Areas that have Resource Potential for Sediment-hosted Pb-Zn Deposits in Alaska [Dataset]. https://datasets.ai/datasets/data-and-results-for-gis-based-identification-of-areas-thathave-resource-potential-for-sed
    Explore at:
    Available download formats
    Dataset updated
    Aug 13, 2024
    Dataset authored and provided by
    Department of the Interior
    Description

    This data release contains the analytical results and the evaluated source data files of a geospatial analysis for identifying areas in Alaska that may have potential for sediment-hosted Pb-Zn (lead-zinc) deposits. The spatial analysis is based on queries of the statewide source datasets Alaska Geochemical Database (AGDB3), Alaska Resource Data File (ARDF), and Alaska Geologic Map (SIM3340) within areas defined by 12-digit HUCs (subwatersheds) from the National Watershed Boundary dataset. The packages of files available for download are:

    1. SedPbZn_Results_gdb.zip - The results in geodatabase format. The analytical results for sediment-hosted Pb-Zn deposits are in a polygon feature class containing the points scored for each source data layer query, the accumulative score, and a designation of high, medium, or low potential and high, medium, or low certainty for each HUC. The data are described by FGDC metadata. An mxd file, layer file, and cartographic feature classes are provided for display of the results in ArcMap. Files sedPbZn_scoring_tables.pdf (a list of the scoring parameters for the analysis) and sedPbZn_Results_gdb_README.txt (a description of the files in this download package) are included.

    2. SedPbZn_Results_shape.zip - The results in shapefile format, containing the same polygon feature class of per-layer scores, accumulative score, and potential/certainty designations for each HUC. The results are also provided as a CSV file. The data are described by FGDC metadata. Files sedPbZn_scoring_tables.pdf and sedPbZn_Results_shape_README.txt are included.

    3. SedPbZn_SourceData_gdb.zip - The source data in geodatabase format. Data layers include AGDB3, ARDF, lithology from SIM3340, and HUC subwatersheds, with FGDC metadata. An mxd file and cartographic feature classes are provided for display of the source data in ArcMap. Also included are two Python scripts: 1) to score the ARDF records based on the presence of certain keywords, and 2) to evaluate the ARDF, AGDB3, and lithology layers for sediment-hosted Pb-Zn potential within subwatershed polygons. Users may modify the scripts to design their own analyses. Files sedPbZn_scoring_table.pdf and sedPbZn_sourcedata_gdb_README.txt are included.

    4. SedPbZn_SourceData_shape.zip - The source data in shapefile and CSV format. Data layers include ARDF, lithology from SIM3340, and HUC subwatersheds, with FGDC metadata. The ARDF keyword tables available in the geodatabase package are presented here as CSV files. All data files are described with FGDC metadata. Files sedPb_Zn_scoring_table.pdf and sedPbZn_sourcedata_shapefile_README.txt are included.

    5. Appendices 2, 3, and 4, which are cited by the larger work OFR2020-1147. Files are presented in XLSX and CSV formats.
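The scoring described above sums per-layer query scores into an accumulative score per HUC and bins it into a potential class. A hedged sketch of that structure follows; the layer names, weights, and thresholds here are assumptions for illustration — the real scoring parameters are in sedPbZn_scoring_tables.pdf.

```python
# Illustrative only: real layer scores and class thresholds come from
# sedPbZn_scoring_tables.pdf; the numbers below are invented.
def accumulate_scores(layer_scores, thresholds=(10, 5)):
    """Sum per-layer query scores for a HUC and bin the total into a
    high/medium/low potential rating, mimicking the analysis structure."""
    total = sum(layer_scores.values())
    high, medium = thresholds
    if total >= high:
        rating = "high"
    elif total >= medium:
        rating = "medium"
    else:
        rating = "low"
    return total, rating

huc_scores = {"ARDF": 4, "AGDB3": 5, "lithology": 3}  # hypothetical HUC
print(accumulate_scores(huc_scores))  # (12, 'high')
```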

  4. Data from: US County Boundaries

    • public.opendatasoft.com
    • smartregionidf.opendatasoft.com
    csv, excel, geojson +1
    Updated Jun 27, 2017
    Cite
    (2017). US County Boundaries [Dataset]. https://public.opendatasoft.com/explore/dataset/us-county-boundaries/
    Explore at:
    json, csv, excel, geojson. Available download formats
    Dataset updated
    Jun 27, 2017
    License

    https://en.wikipedia.org/wiki/Public_domain

    Area covered
    United States
    Description

    The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts; however, each TIGER/Line shapefile is designed to stand alone as an independent data set, or the shapefiles can be combined to cover the entire nation.

    The primary legal divisions of most states are termed counties. In Louisiana, these divisions are known as parishes. In Alaska, which has no counties, the equivalent entities are the organized boroughs, city and boroughs, municipalities, and, for the unorganized area, census areas. The latter are delineated cooperatively for statistical purposes by the State of Alaska and the Census Bureau. In four states (Maryland, Missouri, Nevada, and Virginia), there are one or more incorporated places that are independent of any county organization and thus constitute primary divisions of their states. These incorporated places are known as independent cities and are treated as equivalent entities for purposes of data presentation. The District of Columbia and Guam have no primary divisions, and each area is considered an equivalent entity for purposes of data presentation. The Census Bureau treats the following entities as equivalents of counties for purposes of data presentation: Municipios in Puerto Rico, Districts and Islands in American Samoa, Municipalities in the Commonwealth of the Northern Mariana Islands, and Islands in the U.S. Virgin Islands. The entire area of the United States, Puerto Rico, and the Island Areas is covered by counties or equivalent entities. The boundaries for counties and equivalent entities are as of January 1, 2017, primarily as reported through the Census Bureau's Boundary and Annexation Survey (BAS).
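Counties and equivalent entities in TIGER/Line products are keyed by a 5-digit GEOID: the 2-digit state FIPS code followed by the 3-digit county FIPS code. A minimal helper makes the zero-padding explicit:

```python
def county_geoid(statefp, countyfp):
    """Build the 5-digit county GEOID used across TIGER/Line products:
    2-digit state FIPS + 3-digit county FIPS, both zero-padded."""
    return f"{int(statefp):02d}{int(countyfp):03d}"

# Autauga County, Alabama: state FIPS 01, county FIPS 001
print(county_geoid(1, 1))    # '01001'
# Orleans Parish, Louisiana: state FIPS 22, county FIPS 071
print(county_geoid(22, 71))  # '22071'
```

This GEOID is the usual join key when linking county boundaries to tabular Census data.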

  5. U.S. Select Demographics by Census Block Groups

    • search.dataone.org
    • dataverse.harvard.edu
    • +1 more
    Updated Nov 8, 2023
    Cite
    Bryan, Michael (2023). U.S. Select Demographics by Census Block Groups [Dataset]. http://doi.org/10.7910/DVN/UZGNMM
    Explore at:
    Dataset updated
    Nov 8, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Bryan, Michael
    Area covered
    United States
    Description

    Overview: This dataset re-shares cartographic and demographic data from the U.S. Census Bureau to provide a convenient supplement to Open Environments Block Group publications. These results do not reflect any proprietary or predictive model; rather, they extract from Census Bureau results with some proportions and aggregation rules applied. For additional support or more detail, please see the Census Bureau citations below.

    Cartographics refer to shapefiles shared in the Census TIGER/Line publications. Block Group areas are updated annually, with major revisions accompanying the Decennial Census at the turn of each decade. These shapes are useful for visualizing estimates as a map and relating geographies through geo-operations like overlapping. This data is kept in a geodatabase file format and requires the geopandas package and its supporting fiona and GDAL software.

    Demographics are taken from popular variables in the American Community Survey (ACS), including age, race, income, education, and family structure. This data simply requires CSV reader software or Python's pandas package.

    Files: While the demographic data has many columns, the cartographic data has a single very large column called "geometry" storing the many-point boundaries of each shape. So, this process saves the data separately: the demographic columns in a CSV file named YYYYblockgroupdemographics.csv, and the cartographic column, 'geometry', in a file named YYYYblockgroupdemographics-geometry.pkl. The latter file needs an installation of geopandas, fiona, and GDAL.

    More details on the ACS variables selected and the derivation rules applied can be found in the commentary docstrings in the source code here: https://github.com/OpenEnvironments/blockgroupdemographics.
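Since the demographics and geometry are published as separate files, a typical first step is rejoining them on a shared block-group identifier. A standard-library sketch follows; the key column name ("GEOID") and the rows are assumptions for illustration, and in practice the join would be a pandas/geopandas merge on the unpickled GeoDataFrame.

```python
import csv
import io

# Toy stand-ins for the published CSV and the unpickled geometry column;
# "GEOID" as the key and the values shown are assumptions for illustration.
demographics_csv = """GEOID,total_pop,median_income
010010201001,730,45000
010010201002,1250,52000
"""
geometry = {
    "010010201001": "POLYGON((...))",
    "010010201002": "POLYGON((...))",
}

def join_on_geoid(csv_text, geom_by_geoid):
    """Attach each block group's geometry to its demographic row."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        row["geometry"] = geom_by_geoid.get(row["GEOID"])
        rows.append(row)
    return rows

joined = join_on_geoid(demographics_csv, geometry)
print(joined[0]["GEOID"], joined[0]["total_pop"])  # 010010201001 730
```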

  6. U.S. Census Blocks

    • hub.arcgis.com
    • colorado-river-portal.usgs.gov
    • +8 more
    Updated Jun 29, 2021
    Cite
    Esri U.S. Federal Datasets (2021). U.S. Census Blocks [Dataset]. https://hub.arcgis.com/datasets/d795eaa6ee7a40bdb2efeb2d001bf823
    Explore at:
    Dataset updated
    Jun 29, 2021
    Dataset provided by
    Esri (http://esri.com/)
    Authors
    Esri U.S. Federal Datasets
    License

    Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    U.S. Census Blocks: This feature layer, utilizing National Geospatial Data Asset (NGDA) data from the U.S. Census Bureau (USCB), displays Census Blocks in the United States. Per USCB, "Census blocks are statistical areas bounded by visible features such as roads, streams, and railroad tracks, and by nonvisible boundaries such as property lines, city, township, school district, county limits and short line-of-sight extensions of roads." They are also the smallest level of geography for which basic demographic data, such as total population by age, sex, and race, are available.

    Data currency: This cached Esri federal service is checked weekly for updates from its enterprise federal source (Census Blocks) and will support mapping, analysis, data exports, and OGC API - Features access.

    NGDAID: 69 (Series Information for 2020 Census Block State-based TIGER/Line Shapefiles, Current)

    OGC API Features link: (U.S. Census Blocks - OGC Features) copy this link to embed it in OGC-compliant viewers.

    For more information, please visit: What are census blocks. For feedback, please contact: Esri_US_Federal_Data@esri.com.

    NGDA Data Set: This data set is part of the NGDA Governmental Units, and Administrative and Statistical Boundaries Theme Community. Per the Federal Geographic Data Committee (FGDC), this theme is defined as the "boundaries that delineate geographic areas for uses such as governance and the general provision of services (e.g., states, American Indian reservations, counties, cities, towns, etc.), administration and/or for a specific purpose (e.g., congressional districts, school districts, fire districts, Alaska Native Regional Corporations, etc.), and/or provision of statistical data (census tracts, census blocks, metropolitan and micropolitan statistical areas, etc.). Boundaries for these various types of geographic areas are either defined through a documented legal description or through criteria and guidelines. Other boundaries may include international limits, those of federal land ownership, the extent of administrative regions for various federal agencies, as well as the jurisdictional offshore limits of U.S. sovereignty. Boundaries associated solely with natural resources and/or cultural entities are excluded from this theme and are included in the appropriate subject themes."

    For other NGDA content: Esri Federal Datasets
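The OGC API - Features endpoint can be queried programmatically. The sketch below only builds a `/collections/{id}/items` request URL using the standard `bbox` and `limit` query parameters from OGC API - Features Part 1; the base URL and collection id are placeholders, not the real endpoint for this layer — copy the actual link from the item page.

```python
from urllib.parse import urlencode

def ogc_items_url(base, collection, bbox=None, limit=10):
    """Build an OGC API - Features /items request URL. 'bbox' and 'limit'
    are standard query parameters; 'base' and 'collection' here are
    placeholders, not this layer's actual endpoint."""
    params = {"limit": limit}
    if bbox:
        params["bbox"] = ",".join(str(v) for v in bbox)
    return f"{base}/collections/{collection}/items?{urlencode(params)}"

url = ogc_items_url("https://example.com/ogc", "census-blocks",
                    bbox=(-105.3, 39.9, -105.1, 40.1), limit=5)
print(url)
```

The resulting URL can then be fetched with any HTTP client to retrieve a GeoJSON feature collection.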

  7. Location Identifiers, Metadata, and Map for Field Measurements at the...

    • osti.gov
    Updated Jan 1, 2025
    Cite
    Environmental System Science Data Infrastructure for a Virtual Ecosystem (ESS-DIVE) (United States) (2025). Location Identifiers, Metadata, and Map for Field Measurements at the East-Taylor Watershed Community Observatory, Colorado, USA (Version 3.2) [Dataset]. http://doi.org/10.15485/1660962
    Explore at:
    Dataset updated
    Jan 1, 2025
    Dataset provided by
    Department of Energy Biological and Environmental Research Program
    Office of Science (http://www.er.doe.gov/)
    Watershed Function SFA
    Environmental System Science Data Infrastructure for a Virtual Ecosystem (ESS-DIVE) (United States)
    Area covered
    Colorado, United States
    Description

    This dataset contains identifiers, metadata, and a map of the locations where field measurements have been conducted at the East-Taylor Watershed Community Observatory, located in the Upper Colorado River Basin, United States. This is version 3.2 of the dataset and replaces the prior version 3.1 (see below for details on changes between the versions).

    Dataset description: The East River-Taylor Watershed is the primary field site of the Watershed Function Scientific Focus Area (WFSFA) and the Rocky Mountain Biological Laboratory. Researchers from several institutions generate highly diverse hydrological, biogeochemical, climate, vegetation, geological, remote sensing, and model data at the East-Taylor Watershed in collaboration with the WFSFA. The purpose of this dataset is therefore to maintain an inventory of the field locations and instrumentation, provide information on the field activities in the East-Taylor Watershed, and coordinate data collected across different locations, researchers, and institutions. The dataset contains (1) a README file with information on the various files, (2) three csv files describing the metadata collected for each surface point location, plot, and region registered with the WFSFA, (3) csv files with metadata and contact information for each surface point location registered with the WFSFA, (4) a csv file with metadata and contact information for plots, (5) a csv file with metadata for geographic regions and sub-regions within the watershed, (6) a compiled xlsx file with all the data and metadata, which can be opened in Microsoft Excel, (7) a kml map of the locations plotted in the watershed, which can be opened in Google Earth, (8) a jpeg image of the kml map, which can be viewed in any photo viewer, and (9) a zipped file with the registration templates used by the SFA team to collect location metadata. The zipped template file contains two csv files with the blank templates (point and plot), two csv files with instructions for filling out the location templates, and one compiled xlsx file with the instructions and blank templates together. Additionally, the templates in the xlsx include drop-down validation for any controlled metadata fields. Persistent location identifiers (Location_ID) are determined by the WFSFA data management team and are used to track data and samples across locations.

    Dataset uses: This location metadata is used to update the Watershed SFA's publicly accessible Field Information Portal (an interactive field sampling metadata exploration tool; https://wfsfa-data.lbl.gov/watershed/), the kml map file included in this dataset, and other data management tools internal to the Watershed SFA team.

    Version information: The latest version of this dataset publication is version 3.2. This version contains 161 new point locations, 12 new plots, and 16 new geographic regions; overall, there are a total of 1272 point locations, 74 plots, and 52 geographic regions. Additionally, the kml map of locations and its image now include all boundaries contained within the East River watershed (USGS HUC-10) and the accompanying stream network. Refer to the methods for further details on the version history. This dataset will be updated on a periodic basis with new measurement location information. Researchers interested in having their East-Taylor Watershed measurement locations added to this list should reach out to the WFSFA data management team at wfsfa-data@googlegroups.com.

    Acknowledgments: Please cite this dataset if using any of the location metadata in other publications or derived products. If using the location metadata for the 2018 NEON hyperspectral campaign, additionally cite Chadwick et al. (2020), doi:10.15485/1618130. This work was supported by the Watershed Function Science Focus Area at Lawrence Berkeley National Laboratory, funded by the US Department of Energy, Office of Science, Biological and Environmental Research under Contract No. DE-AC02-05CH11231. Part of this work was performed at SLAC National Accelerator Laboratory, funded by the US Department of Energy, Office of Science, Biological and Environmental Research under Contract No. DE-AC02-76SF00515.
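The included kml map can be read with the Python standard library alone. A sketch using xml.etree follows; the placemark name and coordinates below are invented for illustration, not a real registered location.

```python
import xml.etree.ElementTree as ET

# KML elements (Placemark, name, Point, coordinates) follow the OGC KML 2.2
# schema; the sample document below is a made-up single-point example.
KML_NS = {"kml": "http://www.opengis.net/kml/2.2"}
sample_kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>SITE-A</name>
      <Point><coordinates>-106.9870,38.9226,2750</coordinates></Point>
    </Placemark>
  </Document>
</kml>"""

def placemark_points(kml_text):
    """Map each Placemark name to its (lon, lat) coordinates."""
    root = ET.fromstring(kml_text)
    points = {}
    for pm in root.iter("{http://www.opengis.net/kml/2.2}Placemark"):
        name = pm.find("kml:name", KML_NS).text
        coords = pm.find(".//kml:coordinates", KML_NS).text.strip()
        lon, lat, *alt = (float(v) for v in coords.split(","))
        points[name] = (lon, lat)
    return points

print(placemark_points(sample_kml))  # {'SITE-A': (-106.987, 38.9226)}
```

Note that KML orders coordinates longitude-first, with an optional altitude third.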

  8. TRAVEL: A Dataset with Toolchains for Test Generation and Regression Testing...

    • data.niaid.nih.gov
    Updated Jul 17, 2024
    Cite
    Alessio Gambi (2024). TRAVEL: A Dataset with Toolchains for Test Generation and Regression Testing of Self-driving Cars Software [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5911160
    Explore at:
    Dataset updated
    Jul 17, 2024
    Dataset provided by
    Annibale Panichella
    Alessio Gambi
    Sebastiano Panichella
    Vincenzo Riccio
    Pouria Derakhshanfar
    Christian Birchler
    License

    Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction

    This repository hosts the Testing Roads for Autonomous VEhicLes (TRAVEL) dataset. TRAVEL is an extensive collection of virtual roads that have been used for testing lane assist/keeping systems (i.e., driving agents), together with data from their execution in a state-of-the-art, physically accurate driving simulator called BeamNG.tech. Virtual roads consist of sequences of road points interpolated using cubic splines.

    Along with the data, this repository contains instructions on how to install the tooling necessary to generate new data (i.e., test cases) and analyze them in the context of test regression. We focus on test selection and test prioritization, given their importance for developing high-quality software following the DevOps paradigms.

    This dataset builds on top of our previous work in this area, including work on:

    test generation (e.g., AsFault, DeepJanus, and DeepHyperion) and the SBST CPS tool competition (SBST2021);

    test selection (SDC-Scissor and related tooling);

    test prioritization (automated test case prioritization work for SDCs).

    Dataset Overview

    The TRAVEL dataset is available under the data folder and is organized as a set of experiments folders. Each of these folders is generated by running the test-generator (see below) and contains the configuration used for generating the data (experiment_description.csv), various statistics on generated tests (generation_stats.csv) and found faults (oob_stats.csv). Additionally, the folders contain the raw test cases generated and executed during each experiment (test..json).

    The following sections describe what each of those files contains.

    Experiment Description

    The experiment_description.csv contains the settings used to generate the data, including:

    Time budget. The overall generation budget in hours. This budget includes both the time to generate and execute the tests as driving simulations.

    The size of the map. The size of the squared map defines the boundaries inside which the virtual roads develop in meters.

    The test subject. The driving agent that implements the lane-keeping system under test. The TRAVEL dataset contains data generated testing the BeamNG.AI and the end-to-end Dave2 systems.

    The test generator. The algorithm that generated the test cases. The TRAVEL dataset contains data obtained using various algorithms, ranging from naive and advanced random generators to complex evolutionary algorithms, for generating tests.

    The speed limit. The maximum speed at which the driving agent under test can travel.

    Out of Bound (OOB) tolerance. The test cases' oracle that defines the tolerable amount of the ego-car that can lie outside the lane boundaries. This parameter ranges between 0.0 and 1.0. In the former case, a test failure triggers as soon as any part of the ego-vehicle goes out of the lane boundary; in the latter case, a test failure triggers only if the entire body of the ego-car falls outside the lane.
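The OOB tolerance semantics above can be sketched as a tiny oracle function. The handling of the boundary case at exactly 1.0 is one plausible reading of the description, not necessarily the dataset's exact rule.

```python
# Sketch of the OOB-tolerance oracle described above: a test fails once the
# fraction of the ego-car lying outside the lane exceeds the tolerance.
# (tolerance 0.0: any part out fails; tolerance 1.0: only a fully
# out-of-lane car fails -- boundary handling is an assumption.)
def oob_failure(fraction_outside, oob_tolerance):
    if not 0.0 <= oob_tolerance <= 1.0:
        raise ValueError("OOB tolerance must be in [0.0, 1.0]")
    if oob_tolerance >= 1.0:
        return fraction_outside >= 1.0
    return fraction_outside > oob_tolerance

print(oob_failure(0.10, 0.0))   # True: some part of the car left the lane
print(oob_failure(0.50, 0.95))  # False: the car is not almost entirely outside
```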

    Experiment Statistics

    The generation_stats.csv contains statistics about the test generation, including:

    Total number of generated tests. The number of tests generated during an experiment. This number is broken down into the number of valid tests and invalid tests. Valid tests contain virtual roads that do not self-intersect and contain turns that are not too sharp.

    Test outcome. The test outcome contains the number of passed tests, failed tests, and tests in error. Passed and failed tests are defined by the OOB tolerance and an additional (implicit) oracle that checks whether the ego-car is moving or standing still. Tests that did not pass because of other errors (e.g., the simulator crashed) are reported in a separate category.

    The TRAVEL dataset also contains statistics about the failed tests, including the overall number of failed tests (total oob) and its breakdown into OOB that happened while driving left or right. Further statistics about the diversity (i.e., sparseness) of the failures are also reported.

    Test Cases and Executions

    Each test..json contains information about a test case and, if the test case is valid, the data observed during its execution as a driving simulation.

    The data about the test case definition include:

    The road points. The list of points in a 2D space that identifies the center of the virtual road, and their interpolation using cubic splines (interpolated_points).

    The test ID. The unique identifier of the test in the experiment.

    Validity flag and explanation. A flag that indicates whether the test is valid or not, and a brief message describing why the test is not considered valid (e.g., the road contains sharp turns or self-intersects).

    The test data are organized according to the following JSON Schema and can be interpreted as RoadTest objects provided by the tests_generation.py module.

    { "type": "object", "properties": { "id": { "type": "integer" }, "is_valid": { "type": "boolean" }, "validation_message": { "type": "string" }, "road_points": { §\label{line:road-points}§ "type": "array", "items": { "$ref": "schemas/pair" }, }, "interpolated_points": { §\label{line:interpolated-points}§ "type": "array", "items": { "$ref": "schemas/pair" }, }, "test_outcome": { "type": "string" }, §\label{line:test-outcome}§ "description": { "type": "string" }, "execution_data": { "type": "array", "items": { "$ref" : "schemas/simulationdata" } } }, "required": [ "id", "is_valid", "validation_message", "road_points", "interpolated_points" ] }

    Finally, the execution data contain a list of timestamped state information recorded by the driving simulation. State information is collected at constant frequency and includes absolute position, rotation, and velocity of the ego-car, its speed in Km/h, and control inputs from the driving agent (steering, throttle, and braking). Additionally, execution data contain OOB-related data, such as the lateral distance between the car and the lane center and the OOB percentage (i.e., how much the car is outside the lane).

    The simulation data adhere to the following (simplified) JSON Schema and can be interpreted as Python objects using the simulation_data.py module.

    { "$id": "schemas/simulationdata", "type": "object", "properties": { "timer" : { "type": "number" }, "pos" : { "type": "array", "items":{ "$ref" : "schemas/triple" } } "vel" : { "type": "array", "items":{ "$ref" : "schemas/triple" } } "vel_kmh" : { "type": "number" }, "steering" : { "type": "number" }, "brake" : { "type": "number" }, "throttle" : { "type": "number" }, "is_oob" : { "type": "number" }, "oob_percentage" : { "type": "number" } §\label{line:oob-percentage}§ }, "required": [ "timer", "pos", "vel", "vel_kmh", "steering", "brake", "throttle", "is_oob", "oob_percentage" ] }

    Dataset Content

    The TRAVEL dataset is an actively maintained initiative, so its content is subject to change. Currently, the dataset contains the data collected during the SBST CPS tool competition, and data collected in the context of our recent work on test selection (SDC-Scissor work and tool) and test prioritization (automated test cases prioritization work for SDCs).

    SBST CPS Tool Competition Data

    The data collected during the SBST CPS tool competition are stored inside data/competition.tar.gz. The file contains the test cases generated by Deeper, Frenetic, AdaFrenetic, and Swat, the open-source test generators submitted to the competition and executed against BeamNG.AI with an aggression factor of 0.7 (i.e., conservative driver).

        Name        | Map Size (m x m) | Max Speed (Km/h) | Budget (h)    | OOB Tolerance (%) | Test Subject
        DEFAULT     | 200 × 200        | 120              | 5 (real time) | 0.95              | BeamNG.AI - 0.7
        SBST        | 200 × 200        | 70               | 2 (real time) | 0.5               | BeamNG.AI - 0.7

    Specifically, the TRAVEL dataset contains 8 repetitions for each of the above configurations for each test generator, totaling 64 experiments.

    SDC Scissor

    With SDC-Scissor we collected data using the Frenetic test generator. The data is stored inside data/sdc-scissor.tar.gz. The following table summarizes the parameters used.

        Name        | Map Size (m x m) | Max Speed (Km/h) | Budget (h)     | OOB Tolerance (%) | Test Subject
        SDC-SCISSOR | 200 × 200        | 120              | 16 (real time) | 0.5               | BeamNG.AI - 1.5

    The dataset contains 9 experiments with the above configuration. To generate your own data with SDC-Scissor, follow the instructions in its repository.

    Dataset Statistics

    Here is an overview of the TRAVEL dataset: generated tests, executed tests, and faults found by all test generators, grouped by experiment configuration. Some 25,845 test cases were generated by running 4 test generators 8 times in 2 configurations using the SBST CPS Tool Competition code pipeline (SBST in the table). We ran the test generators for 5 hours, allowing the ego-car a generous speed limit (120 Km/h) and a high OOB tolerance (i.e., 0.95); we also ran the test generators with a smaller generation budget (i.e., 2 hours) and speed limit (i.e., 70 Km/h) while setting the OOB tolerance to a lower value (i.e., 0.85). We also collected some 5,971 additional tests with SDC-Scissor (SDC-Scissor in the table) by running it 9 times for 16 hours, using Frenetic as the test generator and a more realistic OOB tolerance (i.e., 0.50).

    Generating new Data

    Generating new data, i.e., test cases, can be done using the SBST CPS Tool Competition pipeline and the driving simulator BeamNG.tech.

    Extensive instructions on how to install both tools are provided in the SBST CPS Tool Competition pipeline documentation.

  9. Planning Application Polygons

    • geoportal-nottmcitycouncil.opendata.arcgis.com
    • nottingham-city-council-open-data-geoportal-nottmcitycouncil.hub.arcgis.com
    Updated Jul 22, 2020
    Cite
    nccgisteam (2020). Planning Application Polygons [Dataset]. https://geoportal-nottmcitycouncil.opendata.arcgis.com/items/e86eb8d8fe7043818e0672d4aa13b149
    Explore at:
    Dataset updated
    Jul 22, 2020
    Dataset authored and provided by
    nccgisteam
    Description

    Location of planning applications in Nottingham City with information about application dates, proposals, and decisions. This data contains planning applications from the previous 10 years to the present date. It shows the approximate extent of planning and other applications processed by the council and should be used alongside the Planning Applications Points dataset, as not every planning application is represented by a polygon. The dataset does not form part of the statutory register of applications, is not guaranteed to be complete, and application extents may differ from actual application site boundaries. The plotted areas do not relate to land ownership and should not be used in boundary disputes. For actual site boundaries please refer to the relevant application plans at www.nottinghamcity.gov.uk/planningapplications. If you're interested in a single planning application, you do not need to download this; instead, search on Nottingham City Council's public access planning website. The 'CaseURL' field in the ncc_PlanningAplications.csv file contains a link to Nottingham City Council's planning applications public access pages, where further information about each application (including a map) can be found. The URL field does not contain a value where: the application has been received but not yet validated; the application has been returned; the application has been withdrawn; the application was invalid on receipt; there was insufficient fee; or no application was required.

  10. Speed Limit Map (RWS)

    • ckan.mobidatalab.eu
    gml, kml, wfs, wms
    Updated Jul 3, 2023
    + more versions
    Cite
    NationaalGeoregisterNL (2023). Speed Limit Map (RWS) [Dataset]. https://ckan.mobidatalab.eu/dataset/maximum-snelhedenkaart-rws
    Explore at:
    kml, wfs, gml, wmsAvailable download formats
    Dataset updated
    Jul 3, 2023
    Dataset provided by
    NationaalGeoregisterNL
    Area covered
    Speed limit
    Description

    The road characteristics database (WKD) for speeds contains speed limits for all roads in the NWB. At the beginning of 2017, the WKD was filled for the whole of the Netherlands with data supplied by municipalities. Since then, new traffic decrees, published via the Knowledge and Exploitation Centre for Official Government Publications (KOOP), have been used to detect and process changes in speed limits. The NWB changes faster than speed limits are supplied by the road authorities or placed in KOOP, so algorithms are used to fill in the speed on short intermediate road sections where necessary. As a result, the speed limit is unknown for a few percent of roads.

    Since 2022, the characteristics Trees, Entrances, Built-up-area boundaries, Parking points, Parking spaces, Traffic centre, Traffic types, Road width, Road categorization, and Road narrowings have been added to the database as CSV files. These can be downloaded from https://downloads.rijkswaterstaatdata.nl/wkd/, where documentation about the WKD features can also be found.

    NB: in residential areas where a maximum speed of 30 km/h applies, or in home zones, this leads to major deviations from reality. Since 2017, the number of rural roads with a 60 km/h limit has also increased sharply.

    The possible speeds that can be entered are 5, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 130 km/h, and unknown. The speeds only apply to roads that are open to car traffic; on bicycle paths, footpaths, and other roads not open to car traffic, the speed is recorded as unknown. This also applies to ferry connections.

    The file provides variable speed limits with a start time and an end time. These apply in particular to motorways: outside the period between the indicated start and end times, an alternative speed applies. For example, 100 km/h may apply between 06:00 and 19:00, with a maximum speed of 120 km/h outside that time.

    The road characteristics database for speeds also contains the recommended speeds that apply to a particular road section or part thereof. More information and news about the NWB can be found at https://nationaalwegendossier.nl/
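    The time-windowed limits described above can be resolved with a small helper (a sketch only; the function name, defaults, and the inclusive-start/exclusive-end convention are assumptions, not taken from the WKD documentation):

```python
from datetime import time

def applicable_limit(now, day_limit, night_limit,
                     start=time(6, 0), end=time(19, 0)):
    """Return the speed limit in force at clock time `now`:
    `day_limit` between `start` and `end`, `night_limit` otherwise."""
    return day_limit if start <= now < end else night_limit

# The example from the description: 100 km/h between 06:00 and 19:00,
# 120 km/h outside that window.
morning = applicable_limit(time(8, 30), day_limit=100, night_limit=120)
```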

  11. PLACES: ZCTA Data (GIS Friendly Format), 2022 release

    • data.cdc.gov
    • healthdata.gov
    • +3more
    Updated Jun 15, 2023
    + more versions
    Cite
    Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Division of Population Health (2023). PLACES: ZCTA Data (GIS Friendly Format), 2022 release [Dataset]. https://data.cdc.gov/500-Cities-Places/PLACES-ZCTA-Data-GIS-Friendly-Format-2022-release/c76y-7pzg
    Explore at:
    application/rssxml, xml, csv, tsv, kml, application/rdfxml, application/geo+json, kmzAvailable download formats
    Dataset updated
    Jun 15, 2023
    Dataset provided by
    Centers for Disease Control and Preventionhttp://www.cdc.gov/
    Authors
    Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Division of Population Health
    License

    U.S. Government Workshttps://www.usa.gov/government-works
    License information was derived automatically

    Description

    This dataset contains model-based ZIP Code Tabulation Area (ZCTA) level estimates for the PLACES 2022 release in GIS-friendly format. PLACES covers the entire United States—50 states and the District of Columbia (DC)—at county, place, census tract, and ZIP Code Tabulation Area levels. It provides information uniformly on this large scale for local areas at 4 geographic levels. Estimates were provided by the Centers for Disease Control and Prevention (CDC), Division of Population Health, Epidemiology and Surveillance Branch. PLACES was funded by the Robert Wood Johnson Foundation in conjunction with the CDC Foundation. Data sources used to generate these model-based estimates include Behavioral Risk Factor Surveillance System (BRFSS) 2020 or 2019 data, Census Bureau 2010 population estimates, and American Community Survey (ACS) 2015–2019 estimates. The 2022 release uses 2020 BRFSS data for 25 measures and 2019 BRFSS data for 4 measures (high blood pressure, taking high blood pressure medication, high cholesterol, and cholesterol screening) that the survey collects data on every other year. These data can be joined with the census 2010 ZCTA boundary file in a GIS system to produce maps for 29 measures at the ZCTA level. An ArcGIS Online feature service is also available for users to make maps online or to add data to desktop GIS software. https://cdcarcgis.maps.arcgis.com/home/item.html?id=3b7221d4e47740cab9235b839fa55cd7
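    A minimal sketch of the ZCTA-level join described above, using only the standard library (the column name 'zcta', the measure column, and the sample values are illustrative, not taken from the release):

```python
import csv, io

def join_by_zcta(estimates_csv, boundary_csv):
    """Index both tables by their ZCTA column and merge rows that share
    the same ZCTA identifier."""
    est = {r["zcta"]: r for r in csv.DictReader(io.StringIO(estimates_csv))}
    for r in csv.DictReader(io.StringIO(boundary_csv)):
        est.setdefault(r["zcta"], {}).update(r)
    return est

# Invented two-column tables standing in for the PLACES estimates and a
# boundary attribute table.
estimates = "zcta,obesity_pct\n30309,24.1\n"
boundaries = "zcta,area_km2\n30309,5.2\n"
joined = join_by_zcta(estimates, boundaries)
```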

  12. ZIP+4. Complete dataset based on US postal data consisting of plus 35...

    • datarade.ai
    .json, .csv, .txt
    Updated Aug 9, 2022
    Cite
    Geojunxion (2022). ZIP+4. Complete dataset based on US postal data consisting of plus 35 millions of polygons​ [Dataset]. https://datarade.ai/data-products/zip-4-complete-dataset-based-on-us-postal-data-consisting-of-geojunxion
    Explore at:
    .json, .csv, .txtAvailable download formats
    Dataset updated
    Aug 9, 2022
    Dataset provided by
    GeoJunxionhttp://www.geojunxion.com/
    Authors
    Geojunxion
    Area covered
    United States
    Description

    GeoJunxion‘s ZIP+4 is a complete dataset based on US postal data consisting of more than 35 million polygons. The dataset is not just a table of point data that can be downloaded as a CSV or other text file, as with other suppliers. The data can be delivered as a shapefile through a single raw data delivery or through an API.

    The January 2021 USPS data source has significantly changed since the previous delivery. Some States have sizably lower ZIP+4 totals across all counties when compared with previous deliveries due to USPS parcelpoint cleanup, while other States have a significant increase in ZIP+4 totals across all counties due to cleanup and other rezoning. California and North Carolina in particular have several new ZIP5s, contributing to the increase in distinct ZIPs and ZIP+4s​.

    GeoJunxion‘s ZIP+4 data can be used as an additional layer on an existing map to run customer or other analyses, e.g. who is (and is not) a customer, or what the density of the customer base is in a certain ZIP+4.

    Thanks to the polygons, information can be put into visual context, which helps with complex overviews and management decisions. CRM data can be enriched with the ZIP+4 to provide more detailed customer information.

    Key specifications:

    • Topologized ZIP polygons
    • GeoJunxion ZIP+4 polygons follow USPS postal codes
    • ZIP+4 code polygons with ZIP5 attributes and state codes
    • Overlapping ZIP+4 boundaries for multiple ZIP+4 addresses in one area
    • Updated USPS source (January 2021)
    • Distinct ZIP5 codes: 34,731
    • Distinct ZIP+4 codes: 35,146,957

    The ZIP+4 polygons are delivered in Esri shapefile format. This format allows the storage of geometry and attribute information for each of the features.

    The four component files of the shapefile data are:

    • .shp – stores the feature geometry
    • .shx – stores an index of the feature geometry
    • .dbf – stores attribute information relating to individual features
    • .prj – stores projection information associated with the features

    ​Current release version 2021. Earlier versions from previous years available on request.

  13. Coronavirus (Covid-19) Data of United States (USA)

    • kaggle.com
    zip
    Updated Nov 5, 2020
    + more versions
    Cite
    Joel Hanson (2020). Coronavirus (Covid-19) Data of United States (USA) [Dataset]. https://www.kaggle.com/joelhanson/coronavirus-covid19-data-in-the-united-states
    Explore at:
    zip(7506633 bytes)Available download formats
    Dataset updated
    Nov 5, 2020
    Authors
    Joel Hanson
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Area covered
    United States
    Description

    Coronavirus (COVID-19) Data in the United States

    [ U.S. State-Level Data (Raw CSV) | U.S. County-Level Data (Raw CSV) ]

    The New York Times is releasing a series of data files with cumulative counts of coronavirus cases in the United States, at the state and county level, over time. We are compiling this time series data from state and local governments and health departments in an attempt to provide a complete record of the ongoing outbreak.

    Since late January, The Times has tracked cases of coronavirus in real-time as they were identified after testing. Because of the widespread shortage of testing, however, the data is necessarily limited in the picture it presents of the outbreak.

    We have used this data to power our maps and reporting tracking the outbreak, and it is now being made available to the public in response to requests from researchers, scientists, and government officials who would like access to the data to better understand the outbreak.

    The data begins with the first reported coronavirus case in Washington State on Jan. 21, 2020. We will publish regular updates to the data in this repository.

    United States Data

    Data on cumulative coronavirus cases and deaths can be found in two files for states and counties.

    Each row of data reports cumulative counts based on our best reporting up to the moment we publish an update. We do our best to revise earlier entries in the data when we receive new information.

    Both files contain FIPS codes, a standard geographic identifier, to make it easier for an analyst to combine this data with other data sets like a map file or population data.
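    Because each row carries cumulative counts, daily increments for one geography can be recovered by differencing consecutive dates (a sketch; the helper name is an assumption, and the sample values after Jan. 21 are invented for illustration):

```python
def daily_new_cases(rows):
    """Convert the cumulative `cases` series for one state or county
    into daily increments by differencing consecutive dates."""
    rows = sorted(rows, key=lambda r: r["date"])
    totals = [int(r["cases"]) for r in rows]
    return [totals[0]] + [b - a for a, b in zip(totals, totals[1:])]

# First row mirrors the documented Jan. 21, 2020 Washington record;
# the later rows are invented.
washington = [
    {"date": "2020-01-21", "cases": "1"},
    {"date": "2020-01-22", "cases": "1"},
    {"date": "2020-01-23", "cases": "3"},
]
new_cases = daily_new_cases(washington)
```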


    State-Level Data

    State-level data can be found in the states.csv file. (Raw CSV file here.)

    date,state,fips,cases,deaths
    2020-01-21,Washington,53,1,0
    ...
    

    County-Level Data

    County-level data can be found in the counties.csv file. (Raw CSV file here.)

    date,county,state,fips,cases,deaths
    2020-01-21,Snohomish,Washington,53061,1,0
    ...
    

    In some cases, the geographies where cases are reported do not map to standard county boundaries. See the list of geographic exceptions for more detail on these.

    Methodology and Definitions

    The data is the product of dozens of journalists working across several time zones to monitor news conferences, analyze data releases and seek clarification from public officials on how they categorize cases.

    It is also a response to a fragmented American public health system in which overwhelmed public servants at the state, county and territorial levels have sometimes struggled to report information accurately, consistently and speedily. On several occasions, officials have corrected information hours or days after first reporting it. At times, cases have disappeared from a local government database, or officials have moved a patient first identified in one state or county to another, often with no explanation. In those instances, which have become more common as the number of cases has grown, our team has made every effort to update the data to reflect the most current, accurate information while ensuring that every known case is counted.

    When the information is available, we count patients where they are being treated, not necessarily where they live.

    In most instances, the process of recording cases has been straightforward. But because of the patchwork of reporting methods for this data across more than 50 state and territorial governments and hundreds of local health departments, our journalists sometimes had to make difficult interpretations about how to count and record cases.

    For those reasons, our data will in some cases not exactly match the information reported by states and counties. Those differences include these cases: When the federal government arranged flights to the United States for Americans exposed to the coronavirus in China and Japan, our team recorded those cases in the states where the patients subsequently were treated, even though local health departments generally did not. When a resident of Florida died in Los Angeles, we recorded her death as having occurred in California rather than Florida, though officials in Florida counted her case in their records. And when officials in some states reported new cases without immediately identifying where the patients were being treated, we attempted to add information about their locations later, once it became available.

    • Confirmed Cases

    Confirmed cases are patients who test positive for the coronavirus. We consider a case confirmed when it is reported by a federal, state, territorial or local government agency.

    • Dates

    For each date, we show the cumulative number of confirmed cases and deaths as reported that day in that county or state. All cases and deaths are counted on the date they are first announced.

    • Counties

    In some instances, we report data from multiple counties or other non-county geographies as a single county. For instance, we report a single value for New York City, comprising the cases for New York, Kings, Queens, Bronx and Richmond Counties. In these instances, the FIPS code field will be empty. (We may assign FIPS codes to these geographies in the future.) See the list of geographic exceptions.

    Cities like St. Louis and Baltimore that are administered separately from an adjacent county of the same name are counted separately.

    • “Unknown” Counties

    Many state health departments choose to report cases separately when the patient’s county of residence is unknown or pending determination. In these instances, we record the county name as “Unknown.” As more information about these cases becomes available, the cumulative number of cases in “Unknown” counties may fluctuate.

    Sometimes, cases are first reported in one county and then moved to another county. As a result, the cumulative number of cases may change for a given county.

    Geographic Exceptions

    • New York City

    All cases for the five boroughs of New York City (New York, Kings, Queens, Bronx and Richmond counties) are assigned to a single area called New York City.

    • Kansas City, Mo.

    Four counties (Cass, Clay, Jackson, and Platte) overlap the municipality of Kansas City, Mo. The cases and deaths that we show for these four counties are only for the portions exclusive of Kansas City. Cases and deaths for Kansas City are reported as their own line.

    • Alameda, Calif.

    Counts for Alameda County include cases and deaths from Berkeley and the Grand Princess cruise ship.

    • Chicago

    All cases and deaths for Chicago are reported as part of Cook County.

    License and Attribution

    In general, we are making this data publicly available for broad, noncommercial public use including by medical and public health researchers, policymakers, analysts and local news media.

    If you use this data, you must attribute it to “The New York Times” in any publication. If you would like a more expanded description of the data, you could say “Data from The New York Times, based on reports from state and local health agencies.”

    If you use it in an online presentation, we would appreciate it if you would link to our U.S. tracking page at https://www.nytimes.com/interactive/2020/us/coronavirus-us-cases.html.

    If you use this data, please let us know at covid-data@nytimes.com and indicate if you would be willing to talk to a reporter about your research.

    See our LICENSE for the full terms of use for this data.

    This license is co-extensive with the Creative Commons Attribution-NonCommercial 4.0 International license, and licensees should refer to that license (CC BY-NC) if they have questions about the scope of the license.

    Contact Us

    If you have questions about the data or licensing conditions, please contact us at:

    covid-data@nytimes.com

    Contributors

    Mitch Smith, Karen Yourish, Sarah Almukhtar, Keith Collins, Danielle Ivory, and Amy Harmon have been leading our U.S. data collection efforts.

    Data has also been compiled by Jordan Allen, Jeff Arnold, Aliza Aufrichtig, Mike Baker, Robin Berjon, Matthew Bloch, Nicholas Bogel-Burroughs, Maddie Burakoff, Christopher Calabrese, Andrew Chavez, Robert Chiarito, Carmen Cincotti, Alastair Coote, Matt Craig, John Eligon, Tiff Fehr, Andrew Fischer, Matt Furber, Rich Harris, Lauryn Higgins, Jake Holland, Will Houp, Jon Huang, Danya Issawi, Jacob LaGesse, Hugh Mandeville, Patricia Mazzei, Allison McCann, Jesse McKinley, Miles McKinley, Sarah Mervosh, Andrea Michelson, Blacki Migliozzi, Steven Moity, Richard A. Oppel Jr., Jugal K. Patel, Nina Pavlich, Azi Paybarah, Sean Plambeck, Carrie Price, Scott Reinhard, Thomas Rivas, Michael Robles, Alison Saldanha, Alex Schwartz, Libby Seline, Shelly Seroussi, Rachel Shorey, Anjali Singhvi, Charlie Smart, Ben Smithgall, Steven Speicher, Michael Strickland, Albert Sun, Thu Trinh, Tracey Tully, Maura Turcotte, Miles Watkins, Jeremy White, Josh Williams, and Jin Wu.


  14. Third Generation Simulation Data (TGSIM) I-395 Trajectories

    • data.virginia.gov
    • catalog.data.gov
    csv, json, rdf, xsl
    Updated May 28, 2025
    Cite
    U.S Department of Transportation (2025). Third Generation Simulation Data (TGSIM) I-395 Trajectories [Dataset]. https://data.virginia.gov/dataset/third-generation-simulation-data-tgsim-i-395-trajectories
    Explore at:
    rdf, json, csv, xslAvailable download formats
    Dataset updated
    May 28, 2025
    Dataset provided by
    Federal Highway Administration
    Authors
    U.S Department of Transportation
    Area covered
    Interstate 395
    Description

    The main dataset is a 232 MB file of trajectory data (I395-final.csv) that contains position, speed, and acceleration data for non-automated passenger cars, trucks, buses, and automated vehicles on an expressway within an urban environment. Supporting files include an aerial reference image (I395_ref_image.png) and a list of polygon boundaries (I395_boundaries.csv) and associated images (I395_lane-1, I395_lane-2, …, I395_lane-6) stored in a folder titled “Annotation on Regions.zip” to map physical roadway segments to the numerical lane IDs referenced in the trajectory dataset. In the boundary file, columns “x1” to “x5” represent the horizontal pixel values in the reference image, with “x1” being the leftmost boundary line and “x5” being the rightmost boundary line, while the column "y" represents corresponding vertical pixel values. The origin point of the reference image is located at the top left corner. The dataset defines five lanes with five boundaries. Lane -6 corresponds to the area to the left of “x1”. Lane -5 corresponds to the area between “x1” and “x2”, and so forth to the rightmost lane, which is defined by the area to the right of “x5” (Lane -2). Lane -1 refers to vehicles that go onto the shoulder of the merging lane (Lane -2), which are manually separated by watching the videos.
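    The boundary-to-lane mapping described above can be sketched as follows. This is an interpretation, not code from the dataset: it assumes lane IDs advance by one per boundary crossed from left to right and clamps at Lane -2, since Lane -1 (the shoulder of Lane -2) was separated manually and cannot be derived from the x-coordinate alone:

```python
from bisect import bisect_right

def lane_id(x_px, boundaries):
    """Map a horizontal pixel value to a TGSIM lane ID: Lane -6 lies left
    of x1, the ID increases by one per boundary crossed, and values to the
    right of the last boundary are reported as Lane -2."""
    return min(-6 + bisect_right(boundaries, x_px), -2)

# Invented boundary pixel values standing in for x1..x5.
boundaries = [100.0, 200.0, 300.0, 400.0, 500.0]
```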

    This dataset was collected as part of the Third Generation Simulation Data (TGSIM): A Closer Look at the Impacts of Automated Driving Systems on Human Behavior project. During the project, six trajectory datasets capable of characterizing human-automated vehicle interactions under a diverse set of scenarios in highway and city environments were collected and processed. For more information, see the project report found here: https://rosap.ntl.bts.gov/view/dot/74647. This dataset, which was one of the six collected as part of the TGSIM project, contains data collected from six 4K cameras mounted on tripods, positioned on three overpasses along I-395 in Washington, D.C. The cameras captured distinct segments of the highway, and their combined overlapping and non-overlapping footage resulted in a continuous trajectory for the entire section covering 0.5 km. This section covers a major weaving/mandatory lane-changing between L'Enfant Plaza and 4th Street SW, with three lanes in the eastbound direction and a major on-ramp on the left side. In addition to the on-ramp, the section covers an off-ramp on the right side. The expressway includes one diverging lane at the beginning of the section on the right side and one merging lane in the middle of the section on the left side. For the purposes of data extraction, the shoulder of the merging lane is also considered a travel lane since some vehicles illegally use it as an extended on-ramp to pass other drivers (see I395_ref_image.png for details). The cameras captured continuous footage during the morning rush hour (8:30 AM-10:30 AM ET) on a sunny day. During this period, vehicles equipped with SAE Level 2 automation were deployed to travel through the designated section to capture the impact of SAE Level 2-equipped vehicles on adjacent vehicles and their behavior in congested areas, particularly in complex merging sections. These vehicles are indicated in the dataset.

    As part of this dataset, the following files were provided:

    • I395-final.csv contains the numerical data to be used for analysis, including vehicle-level trajectory data at every 0.1 second. Vehicle type, width, and length are provided with instantaneous location, speed, and acceleration data. All distance measurements (width, length, location) were converted from pixels to meters using the conversion factor 1 pixel = 0.3 meters.
    • I395_ref_image.png is the aerial reference image that defines the geographic region and the associated roadway segments.
    • I395_boundaries.csv contains the coordinates that define the roadway segments (n=X). The columns "x1" to "x5" represent the horizontal pi
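The stated pixel-to-meter conversion can be applied as follows (a minimal illustrative sketch; the function name is my own, not part of the dataset):

```python
PIXELS_TO_METERS = 0.3  # stated conversion: 1 pixel = 0.3 meters


def to_meters(value_px):
    """Convert a pixel measurement (width, length, or position) to meters."""
    return value_px * PIXELS_TO_METERS


# e.g. a vehicle measured as 15 pixels long is 4.5 m long
length_m = to_meters(15)
```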

  15. Geographic Information System of the European Commission (GISCO) - full database

    • sdi.eea.europa.eu
    www:url
    Updated Jun 30, 2020
    + more versions
    Cite
    European Environment Agency (2020). Geographic Information System of the European Commission (GISCO) - full database, Jun. 2020 [Dataset]. https://sdi.eea.europa.eu/catalogue/EEA_Reference_Catalogue/api/records/e3d45e69-0bd0-46ff-8f99-5d123ef36636
    Explore at:
    www:url
    Available download formats
    Dataset updated
    Jun 30, 2020
    Dataset provided by
    European Environment Agency
    License

    http://inspire.ec.europa.eu/metadata-codelist/LimitationsOnPublicAccess/INSPIRE_Directive_Article13_1e

    Time period covered
    Jan 1, 2016 - Dec 31, 2016
    Area covered
    Earth
    Description

    GISCO (Geographic Information System of the COmmission) is responsible for meeting the European Commission's geographical information needs at three levels: the European Union, its member countries, and its regions.

    In addition to creating statistical and other thematic maps, GISCO manages a database of geographical information, and provides related services to the Commission. Its database contains core geographical data covering the whole of Europe, such as administrative boundaries, and thematic geospatial information, such as population grid data. Some data are available for download by the general public and may be used for non-commercial purposes. For further details and information about any forthcoming new or updated datasets, see http://ec.europa.eu/eurostat/web/gisco/geodata.

    This metadata refers to the whole content of GISCO reference database extracted in June 2020, which contains both public datasets (also available for the general public through http://ec.europa.eu/eurostat/web/gisco/geodata) and datasets to be used only internally by the EEA (typically, but not only, GISCO datasets at 1:100k). The document GISCO-ConditionsOfUse.pdf provided with the dataset gives information on the copyrighted data sources, the mandatory acknowledgement clauses and re-dissemination rights. The license conditions for EuroGeographic datasets in GISCO are provided in a standalone document "LicenseConditions_EuroGeographics.pdf".

    The database is provided in GPKG files, with datasets at scales from 1:60M to 1:100K, with reference years spanning until 2021 (e.g. NUTS 2021). Attribute files are provided in CSV. The database manual, a file with the content of the database, a glossary, and a document with the naming conventions are also provided with the database.

    The main updates with respect to the previous version of the full database in the SDI (from Jul. 2018) are the addition of the following datasets:

    • Administrative boundaries at country level, 2020 (CNTR_2020)
    • Administrative boundaries at commune level, 2016 (COMM_2016)
    • Coastline boundaries, 2016 (COAS_2016)
    • Exclusive Economic Zones, 2016 (EEZ_2016)
    • Farm Accountancy Data Network based on NUTS 2016, 2018 (FADN_2018)
    • Local Administrative Units, 2018 (LAU_2018)
    • Nomenclature of Territorial Units for Statistics, 2021 (NUTS_2021)
    • Political regions (NB: defined by the Committee of the Regions), 2018 (POLREG_2018)
    • Pan-European Settlements, 2016 (STLL_2016) and 2018 (STLL_2018)
    • Transport Networks (NB: railway lines, railway stations, roads, road junctions, level crossings, ferry routes and custom points), 2019 (TRAN_2019)
    • Urban Audit Areas, 2018 (URAU_2018) and 2020 (URAU_2020)

    NOTE: This metadata file is only for internal EEA purposes and in no case replaces the official metadata provided by Eurostat. For specific GISCO datasets included in this version there are individual EEA metadata files in the SDI: NUTS_2021 and CNTR_2020. For other GISCO datasets in the SDI, it is recommended to use the version included in this dataset. The original metadata files from Eurostat for the different GISCO datasets are available via ECAS login through the Eurostat metadata portal on https://webgate.ec.europa.eu/inspire-sdi/srv/eng/catalog.search#/home. For the public products metadata can also be downloaded from https://ec.europa.eu/eurostat/web/gisco/geodata. For more information about the full database or any of its datasets, please contact the SDI Team (sdi@eea.europa.eu).

  16. The New York Times Coronavirus (Covid-19) Cases and Deaths in the United States

    • data.amerigeoss.org
    csv
    Updated Mar 30, 2023
    + more versions
    Cite
    UN Humanitarian Data Exchange (2023). The New York Times Coronavirus (Covid-19) Cases and Deaths in the United States [Dataset]. https://data.amerigeoss.org/sl/dataset/nyt-covid-19-data
    Explore at:
    csv
    Available download formats
    Dataset updated
    Mar 30, 2023
    Dataset provided by
    UN Humanitarian Data Exchange
    Area covered
    United States
    Description

    The New York Times is releasing a series of data files with cumulative counts of coronavirus cases in the United States, at the state and county level, over time. We are compiling this time series data from state and local governments and health departments in an attempt to provide a complete record of the ongoing outbreak.

    Since late January, The Times has tracked cases of coronavirus in real time as they were identified after testing. Because of the widespread shortage of testing, however, the data is necessarily limited in the picture it presents of the outbreak.

    We have used this data to power our maps and reporting tracking the outbreak, and it is now being made available to the public in response to requests from researchers, scientists and government officials who would like access to the data to better understand the outbreak.

    The data begins with the first reported coronavirus case in Washington State on Jan. 21, 2020. We will publish regular updates to the data in this repository.

    United States Data

    Data on cumulative coronavirus cases and deaths can be found in two files for states and counties.

    Each row of data reports cumulative counts based on our best reporting up to the moment we publish an update. We do our best to revise earlier entries in the data when we receive new information.

    Both files contain FIPS codes, a standard geographic identifier, to make it easier for an analyst to combine this data with other data sets like a map file or population data.

    State-Level Data

    State-level data can be found in the us-states.csv file.

    date,state,fips,cases,deaths
    2020-01-21,Washington,53,1,0
    ...
    

    County-Level Data

    County-level data can be found in the us-counties.csv file.

    date,county,state,fips,cases,deaths
    2020-01-21,Snohomish,Washington,53061,1,0
    ...
    

    In some cases, the geographies where cases are reported do not map to standard county boundaries. See the list of geographic exceptions for more detail on these.

    Github Repository

    This dataset contains COVID-19 data for the United States of America made available by The New York Times on github at https://github.com/nytimes/covid-19-data

  17. Galilee GW boundary and initial conditions

    • demo.dev.magda.io
    • researchdata.edu.au
    • +2more
    zip
    Updated Dec 4, 2022
    Cite
    Bioregional Assessment Program (2022). Galilee GW boundary and initial conditions [Dataset]. https://demo.dev.magda.io/dataset/ds-dga-35ae23a9-d141-4676-8e81-e5feea989c6e
    Explore at:
    zip
    Available download formats
    Dataset updated
    Dec 4, 2022
    Dataset provided by
    Bioregional Assessment Program
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract

    The dataset was derived by the Bioregional Assessment Programme. The source dataset is identified in the Lineage field in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement. The boundary and initial conditions of the GAL AEM, such as the Belyando River, the extent of the alluvium or Betts Creek bed formation and the mine footprints, are generated as part of dataset GAL_AEM_models as csv-files. This dataset uses the information in these csv-files for visualisation.

    Dataset History

    Dataset GAL_AEM models describes how the boundary and initial conditions are generated from their respective source datasets, mostly by coarsely digitising by hand in ArcGIS 10.2. This dataset takes the geographical information contained in these spreadsheets (i.e. the coordinates of polyline or polygon vertices) for visualisation in maps, contained in GAL262 (Peeters et al. 2016). These files include:

    • GAL_Alluvium_Line.csv: Polyline representing the center line of the alluvial deposits
    • GAL_BCB.csv: Polyline representing the eastern extent of the Betts Creek Bed formation
    • GAL_mine_footprints.csv: Polygons representing the mine footprints in the groundwater model
    • GAL_river.csv: Polyline representing the Belyando River in the groundwater model

    Dataset Citation

    Bioregional Assessment Programme (2016) Galilee GW boundary and initial conditions. Bioregional Assessment Derived Dataset. Viewed 10 December 2018, http://data.bioregionalassessments.gov.au/dataset/97400f76-ce97-4a54-a7ab-e41710628de2.

    Dataset Ancestors

    • Derived From Galilee Hydrological Response Variable (HRV) model
    • Derived From Galilee groundwater numerical modelling AEM models
    • Derived From Geoscience Australia GEODATA TOPO series - 1:1 Million to 1:10 Million scale
    • Derived From Surface Geology of Australia, 1:2 500 000 scale, 2012 edition
    • Derived From Galilee model HRV receptors gdb

  18. Florida COVID19 04292021 ByZip CSV

    • hub.arcgis.com
    Updated Apr 30, 2021
    Cite
    University of South Florida GIS (2021). Florida COVID19 04292021 ByZip CSV [Dataset]. https://hub.arcgis.com/datasets/f6277da85579473fa54bf730f5108235
    Explore at:
    Dataset updated
    Apr 30, 2021
    Dataset authored and provided by
    University of South Florida GIS
    Area covered
    Description

    Florida COVID-19 Cases by Zip Code exported from the Florida Department of Health GIS Layer on the date seen in the file name. Archived by the University of South Florida Libraries, Digital Heritage and Humanities Collections. Contact: LibraryGIS@usf.edu.

    Please Cite Our GIS HUB. If you are a researcher or other party utilizing our Florida COVID-19 HUB as a tool or accessing and utilizing the data provided herein, please provide an acknowledgement of such in any publication or re-publication. The following citation is suggested: University of South Florida Libraries, Digital Heritage and Humanities Collections. 2020. Florida COVID-19 Hub. Available at https://covid19-usflibrary.hub.arcgis.com/. https://doi.org/10.5038/USF-COVID-19-GIS

    Live FDOH Data Source: https://services1.arcgis.com/CY1LXxl9zlJeBuRZ/arcgis/rest/services/Florida_Cases_Zips_COVID19/FeatureServer

    For data 5/10/2020 or after: Archived data was exported directly from the live FDOH layer into the archive. For data prior to 5/10/2020: Data was exported by the University of South Florida - Digital Heritage and Humanities Collection using ArcGIS Pro Software. Data was then converted to shapefile and csv and uploaded into the ArcGIS Online archive.

    For data definitions please visit the following box folder: https://usf.box.com/s/vfjwbczkj73ucj19yvwz53at6v6w614h
    Data definition file names include the relative date they were published.

    The below information was taken from ancillary documents associated with the original layer from FDOH.

    Q. How is the zip code assigned to a person or case? Cases are counted in a zip code based on residential or mailing address, or by healthcare provider or lab address if other addresses are missing.

    Q. Why is the city data and the zip code data different? The zip code data is supplied to a healthcare worker, case manager, or lab technician by each individual during intake when a test is first recorded. When entering a zip code, the system we use automatically produces a list of cities within that zip code for the individual to further specify where they live. Sometimes the individual uses the postal city, which may be Miami, when in reality that person lives outside the City of Miami boundaries in the jurisdiction of Coral Gables. Many zip codes contain multiple city/town jurisdictions, and about 20% of zip codes overlap more than one county.

    Q: How is the Zip Code data calculated and/or shown?
    If a COUNTY has five or more cases (total):
    • In zip codes with fewer than 5 cases, the total number of cases is shown as "<5".
    • Zip codes with 0 cases in these counties are shown as "0" or "No cases."
    • All values of 5 or greater are shown as the actual number of cases in that zip code.
    If a COUNTY has fewer than five total cases across all of its zip codes, then ALL of the zip codes within that county show the total number of cases as "Suppressed."

    Q: My zip code says "SUPPRESSED" under cases. What does that mean? IF Suppressed: This county currently has fewer than five cases across all zip codes in the county. In an effort to protect the privacy of our COVID-19-positive residents, zip code data is only available in counties where five or more cases have been reported.

    Q: What about PO Box zip codes, or zip codes with letters, like 334MH? PO Box zip codes are not shown in the map. "Filler" zip codes with letters, like 334MH, are typically areas where no or very few people live, like the Florida Everglades, and are shown on the map like any other zip code.

    Key Data about Cases by Zip Code:
    • ZIP = The zip code
    • COUNTYNAME = The county for the zip code (multi-part counties have been split)
    • ZIPX = The unique county-zip identifier used to pair the data during updates
    • POName = The postal address name assigned to the zip code
    • place_labels = A list of the municipalities intersecting the zip code boundary
    • c_places = The list of cities cases self-reported as being residents of
    • Cases_1 = The number of cases in each zip code, with conditions*
    • LabelY = A calculated field for map display only.

    All questions regarding this dataset should be directed to the Florida Department of Health.
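The suppression rules described in the Q&A above can be sketched as a small function (an illustrative reconstruction of the stated rules, not FDOH's actual code):

```python
def display_value(zip_cases, county_total):
    """Return the displayed case count for one zip code.

    Implements the suppression rules as described (illustrative only):
    - county with fewer than 5 total cases: every zip shows "Suppressed"
    - zip with 0 cases: "0"
    - zip with 1-4 cases: "<5"
    - otherwise: the actual count
    """
    if county_total < 5:
        return "Suppressed"
    if zip_cases == 0:
        return "0"
    if zip_cases < 5:
        return "<5"
    return str(zip_cases)
```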

  19. Speed map

    • ckan.mobidatalab.eu
    Updated Oct 24, 2023
    Cite
    NDW en Rijkswaterstaat (2023). Speed map [Dataset]. https://ckan.mobidatalab.eu/dataset/speedmap
    Explore at:
    https://www.iana.org/assignments/media-types/application/gpx+xml, https://www.iana.org/assignments/media-types/application/octet-stream, https://www.iana.org/assignments/media-types/application/vnd.google-earth.kml+xml, https://www.iana.org/assignments/media-types/application/rdf+xml, https://www.iana.org/assignments/media-types/text/plain, https://www.iana.org/assignments/media-types/text/turtle, https://www.iana.org/assignments/media-types/application/xls, https://www.iana.org/assignments/media-types/application/json, https://www.iana.org/assignments/media-types/application/ld+json, https://www.iana.org/assignments/media-types/text/n3, https://www.iana.org/assignments/media-types/application/zip, https://www.iana.org/assignments/media-types/text/csv
    Available download formats
    Dataset updated
    Oct 24, 2023
    Dataset provided by
    NDW en Rijkswaterstaat
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This map contains speed limits for all roads in the National Road Database (NWB).


    Description from Rijkswaterstaat: "Since 2022, the features are Trees, Entrances, Bowl Boundaries, Parking Points , Parking spaces, Traffic center, Traffic types, Road width, Road categorization and Road narrowings added as a csv file to the database."

    "The possible speeds that can be entered are 5, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 130 km per hour, N/A and unknown. The speeds only apply to roads that are open to car traffic. On cycle paths, footpaths and other roads that are not open to car traffic, the speed is unknown. This also applies to the ferry connections. The file provides variable maximum speeds with a start time and an end time. These mainly apply to motorways. Outside the period with the indicated start time and end time, an alternative speed applies. So, for example, between 6:00 AM and 7:00 PM the speed limit is 100 km per hour and outside of that the maximum speed is 120 km per hour."
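The variable-speed scheme in the quoted example (100 km/h between 6:00 and 19:00, 120 km/h otherwise) can be sketched as a small helper. The function name and defaults are illustrative assumptions; the real data encodes start and end times per road segment:

```python
from datetime import time


def speed_limit(now, day_limit=100, night_limit=120,
                start=time(6, 0), end=time(19, 0)):
    """Return the limit in km/h for a segment with a variable limit.

    Between `start` and `end` (e.g. 6:00 AM-7:00 PM) the daytime limit
    applies; outside that window the alternative limit applies.
    """
    return day_limit if start <= now < end else night_limit


limit = speed_limit(time(8, 30))  # morning rush hour
```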

    Traffic decisions, via the Knowledge and Operation Center for Official Government Publications (KOOP), are used to detect and process changes in speed limits.


    Disclaimer:

    A number of roads are currently still listed as "unknown" even though no speed limit actually applies to them (pedestrian paths and cycle paths, for example).

    The map may contain inaccuracies. You can report errors via data@eindhoven.nl.


    Source:

    We keep track of speeds within a tool from the National Road Traffic Data Portal (NDW). You can view the map that the NDW offers via: https://weghouden.ndw.nu/weghouden/wegvakken/323165013/bedrijven/maximumspeed. You can also download the data in shapefile format via https://opendata.ndw.nu/.

    To unlock the speeds within our Eindhoven Open Data portal we use a service from Rijkswaterstaat: https://geo.rijkswaterstaat.nl/arcgis/rest/services/GDR/maximum_speeds_roads/FeatureServer/0

    You can obtain more information and different publication formats from the Rijkswaterstaat data source via: https://maps.rijkswaterstaat.nl/dataregister-publicatie/srv/eng/catalog.search#/metadata/d7df2888-0c0d-40f1-9b35-3c1a01234d01

  20. Mars Dust Activity Database

    • zenodo.org
    application/gzip, bin
    Updated Dec 28, 2022
    Cite
    Joseph Michael Battalio; Joseph Michael Battalio; Huiqun Wang; Mark Richardson; Anthony Toigo; Morgan Saidel; Huiqun Wang; Mark Richardson; Anthony Toigo; Morgan Saidel (2022). Mars Dust Activity Database [Dataset]. http://doi.org/10.5281/zenodo.7480334
    Explore at:
    application/gzip, bin
    Available download formats
    Dataset updated
    Dec 28, 2022
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Joseph Michael Battalio; Joseph Michael Battalio; Huiqun Wang; Mark Richardson; Anthony Toigo; Morgan Saidel; Huiqun Wang; Mark Richardson; Anthony Toigo; Morgan Saidel
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Version 1.1 of the Mars Dust Activity Database (MDAD, v1.1) includes storm boundaries for every instance from the Mars Color Imager (MARCI) era of Version 1.0 of the MDAD (Mars Years 28–32). The dataset is organized into five tar files, one for each Mars year. Two formats are provided in every tar file and provide equivalent data. A netcdf file (*.nc) contains a mask of the dust events on each sol of the Mars Year (soy) in an array called "MDAD," along with a string vector of the name of the sol from the Mars Daily Global Map (MDGM) database in a variable called "dayList," and an array of values in a variable called "timeList" of the time of the sol containing: start soy, end soy, mean soy, median soy, start areocentric longitude (Ls), end Ls, mean Ls, median Ls. The MDAD mask has dimensions time x longitude (3600 points) x latitude (1801 points). The length of the time dimension varies across years: MY 28 (340), MY 29 (551), MY 30 (616), MY 31 (623), MY 32 (358). The dayList and timeList variables have the same time dimension length as the MDAD masks. The mask variable is in uint8 (unsigned byte) format and may be converted directly to unsigned int upon reading. Values of (255) indicate missing pixels of the MDGM. A value of (0) indicates no dust instances. Values ranging 1–133 are the number portion of the member ID to connect the mask back to MDAD v1.0. Values ranging 134–250 are splitting members with the member ID of (mask value - 133)b.
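The mask encoding described above can be decoded per pixel as follows (an illustrative sketch of the stated value ranges, not code shipped with the dataset):

```python
def decode_mask(value):
    """Map a uint8 MDAD mask value to its meaning.

    255     -> missing MDGM pixel
    0       -> no dust instance
    1-133   -> member ID from MDAD v1.0
    134-250 -> splitting member, ID (value - 133) with suffix "b"
    """
    if value == 255:
        return "missing"
    if value == 0:
        return "no dust"
    if 1 <= value <= 133:
        return f"member {value:03d}"
    if 134 <= value <= 250:
        return f"member {value - 133:03d}b"
    raise ValueError(f"unexpected mask value: {value}")
```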

    Each Mars-year bundle also contains a folder of *.csv files, each of which corresponds to a single dust storm instance on a single sol. The csv files are named with the following convention: SUB_dayXX_MEM_ZZZb.csv, where “SUB” is the MARCI mission subphase and XX is the sol number ranging 01–34. The last seven characters are the member ID for the storm from MDAD v1.0, with ZZZ ranging 001–133 and "MEM" the subphase on which the member starts. For storms that have merged, only the member ID with the smallest ZZZ is included in the file name, and the character "b" is only included in the file name for splitting storms. Each *.csv file consists of a two column matrix. Column 1 is east longitude, and column 2 is latitude. Each longitude, latitude point is a pixel on the boundary of the storm instance. The boundaries have a resolution of 1/10°, so each row of the matrix maps to a unique pixel of a MDGM. The total set of longitude, latitude points is the outline of the dust instance, which traces the outside of the storm masks provided in the *.nc files.
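The SUB_dayXX_MEM_ZZZb.csv naming convention can be parsed with a regular expression (a hypothetical helper based on the convention described above; the assumed character classes for the subphase codes are my own guess):

```python
import re

# SUB_dayXX_MEM_ZZZb.csv: subphase, sol number 01-34, member subphase,
# member ID 001-133, and an optional "b" marking a splitting storm.
PATTERN = re.compile(
    r"^(?P<sub>[A-Z0-9]+)_day(?P<day>\d{2})_(?P<mem>[A-Z0-9]+)_"
    r"(?P<id>\d{3})(?P<split>b?)\.csv$"
)


def parse_name(name):
    """Return the file name's fields as a dict, or None if it doesn't match."""
    m = PATTERN.match(name)
    return m.groupdict() if m else None


info = parse_name("P01_day07_P01_042.csv")
```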

    An additional comma separated value file with the name SUB.list provides a copy of the "timeList" array included in the *.nc files. For each subphase of MDGMs, there is a single *.list, and each row is a single day with the following information in csv format: subphase_day, start MY, end MY, start sol-of-year (soy), end soy, mean soy, median soy, start areocentric longitude (Ls), end Ls, mean Ls, median Ls. Additionally, a folder entitled "list" contains an additional *.list comma separated value file for each day of the subphase, where each row describes a single swath in that sol's MDGM. It has the following format: Swath name, Earth time collected, MY, Mars month, soy, Ls, center west longitude.

Cite
U.S. Geological Survey (2024). Data and Results for GIS-Based Identification of Areas that have Resource Potential for Lode Gold in Alaska [Dataset]. https://catalog.data.gov/dataset/data-and-results-for-gis-based-identification-of-areas-that-have-resource-potential-for-lo

Data from: Data and Results for GIS-Based Identification of Areas that have Resource Potential for Lode Gold in Alaska

Related Article
Explore at:
Dataset updated
Jul 6, 2024
Dataset provided by
U.S. Geological Survey
Area covered
Alaska
Description

This data release contains the analytical results and evaluated source data files of geospatial analyses for identifying areas in Alaska that may be prospective for different types of lode gold deposits, including orogenic, reduced-intrusion-related, epithermal, and gold-bearing porphyry. The spatial analysis is based on queries of statewide source datasets of aeromagnetic surveys, Alaska Geochemical Database (AGDB3), Alaska Resource Data File (ARDF), and Alaska Geologic Map (SIM3340) within areas defined by 12-digit HUCs (subwatersheds) from the National Watershed Boundary dataset. The packages of files available for download are:

1. LodeGold_Results_gdb.zip - The analytical results in geodatabase polygon feature classes which contain the scores for each source dataset layer query, the cumulative score, and a designation for high, medium, or low potential and high, medium, or low certainty for a deposit type within the HUC. The data is described by FGDC metadata. An mxd file and cartographic feature classes are provided for display of the results in ArcMap. An included README file describes the complete contents of the zip file.
2. LodeGold_Results_shape.zip - Copies of the results from the geodatabase are also provided in shapefile and CSV formats. The included README file describes the complete contents of the zip file.
3. LodeGold_SourceData_gdb.zip - The source datasets in geodatabase and geotiff format. Data layers include aeromagnetic surveys, AGDB3, ARDF, lithology from SIM3340, and HUC subwatersheds. The data is described by FGDC metadata. An mxd file and cartographic feature classes are provided for display of the source data in ArcMap. Also included are the python scripts used to perform the analyses. Users may modify the scripts to design their own analyses. The included README files describe the complete contents of the zip file and explain the usage of the scripts.
4. LodeGold_SourceData_shape.zip - Copies of the geodatabase source dataset derivatives from ARDF and lithology from SIM3340 created for this analysis are also provided in shapefile and CSV formats. The included README file describes the complete contents of the zip file.
