28 datasets found
  1. Digital data for the Salinas Valley Geological Framework, California

    • catalog.data.gov
    • data.usgs.gov
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Digital data for the Salinas Valley Geological Framework, California [Dataset]. https://catalog.data.gov/dataset/digital-data-for-the-salinas-valley-geological-framework-california
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Salinas Valley, Salinas, California
    Description

    This digital dataset was created as part of a U.S. Geological Survey study, done in cooperation with the Monterey County Water Resource Agency, to conduct a hydrologic resource assessment and develop an integrated numerical hydrologic model of the hydrologic system of Salinas Valley, CA. As part of this larger study, the USGS developed this digital dataset of geologic data and three-dimensional hydrogeologic framework models, referred to here as the Salinas Valley Geological Framework (SVGF), that define the elevation, thickness, extent, and lithology-based texture variations of nine hydrogeologic units in Salinas Valley, CA.

    The digital dataset includes a geospatial database that contains two main elements as GIS feature datasets: (1) input data to the 3D framework and textural models, within a feature dataset called “ModelInput”; and (2) interpolated elevation, thicknesses, and textural variability of the hydrogeologic units stored as arrays of polygonal cells, within a feature dataset called “ModelGrids”. The model input data in this data release include stratigraphic and lithologic information from water, monitoring, and oil and gas wells, as well as data from selected published cross sections, point data derived from geologic maps and geophysical data, and data sampled from parts of previous framework models. Input surface and subsurface data have been reduced to points that define the elevation of the top of each hydrogeologic unit at x,y locations; these point data, stored in a GIS feature class named “ModelInputData”, serve as digital input to the framework models. The locations of wells used as sources of subsurface stratigraphic and lithologic information are stored within the GIS feature class “ModelInputData”, but are also provided as separate point feature classes in the geospatial database. Faults that offset hydrogeologic units are provided as a separate line feature class.
Borehole data are also released as a set of tables, each of which may be joined or related to well location through a unique well identifier present in each table. Tables are in Excel and ASCII comma-separated value (CSV) format and include separate but related tables for well location, stratigraphic information on the depths to top and base of the hydrogeologic units intercepted downhole, downhole lithologic information reported at 10-foot intervals, and information on how lithologic descriptors were classed as sediment texture.

Two types of geologic frameworks were constructed and released within a GIS feature dataset called “ModelGrids”: (1) a hydrostratigraphic framework in which the elevation, thickness, and spatial extent of the nine hydrogeologic units were defined based on interpolation of the input data, and (2) a textural model for each hydrogeologic unit based on interpolation of classed downhole lithologic data. Each framework is stored as an array of polygonal cells: essentially a “flattened”, two-dimensional representation of a digital 3D geologic framework. The elevation and thickness of the hydrogeologic units are contained within a single polygon feature class, SVGF_3DHFM, a mesh of polygons representing model cells with multiple attributes, including XY location and the elevation and thickness of each hydrogeologic unit. Textural information for each hydrogeologic unit is stored in a second array of polygonal cells called SVGF_TextureModel.

The spatial data are accompanied by non-spatial tables that describe the sources of geologic information, a glossary of terms, and a description of the nine hydrogeologic units modeled in this study. A data dictionary defines the structure of the dataset, defines all fields in all spatial data attribute tables and all columns in all nonspatial tables, and duplicates the Entity and Attribute information contained in the metadata file. Spatial data are also presented as shapefiles.

  2. 1.08 Crash Data Report (detail)

    • data-academy.tempe.gov
    • open.tempe.gov
    • +8more
    Updated Jun 11, 2018
    Cite
    City of Tempe (2018). 1.08 Crash Data Report (detail) [Dataset]. https://data-academy.tempe.gov/datasets/1-08-crash-data-report-detail
    Dataset updated
    Jun 11, 2018
    Dataset authored and provided by
    City of Tempe
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    Please note that 2024 data are incomplete and will be updated as additional records become available. Data are complete through 12/31/2023.

    Fatal and serious injury crashes are not “accidents” and are preventable. The City of Tempe is committed to reducing the number of fatal and serious injury crashes to zero. This data page provides details about the performance measure related to High Severity Traffic Crashes, as well as access to the data sets and any supplemental data. The Engineering and Transportation Department uses this data to improve safety in Tempe.

    This data includes vehicle/vehicle, vehicle/bicycle, and vehicle/pedestrian crashes in Tempe, along with the type of crash and location. This layer is used in the related Vision Zero story map, web maps, and operations dashboard.

    Time Zones: Please note that data is stored in Arizona time, which is UTC-07:00 (7 hours behind UTC) and does not adjust for daylight saving time (Arizona does not observe it). The data is intended to be viewed in Arizona time. Data downloaded as a CSV may appear in UTC time and, in some rare circumstances and locations, may display online in UTC or local time zones. As a reference check, the record with incident number 2579417 should appear as Jan. 10, 2012, 9:04 AM.

    This page provides data for the High Severity Traffic Crashes performance measure. The performance measure page is available at 1.08 High Severity Traffic Crashes.

    Additional Information
    Source: Arizona Department of Transportation (ADOT)
    Contact (author): Shelly Seyler
    Contact (author) E-Mail: Shelly_Seyler@tempe.gov
    Contact (maintainer): Julian Dresang
    Contact (maintainer) E-Mail: Julian_Dresang@tempe.gov
    Data Source Type: CSV files and Excel spreadsheets can be downloaded from the ADOT website
    Preparation Method: Data is sorted to remove license plate numbers and other sensitive information
    Publish Frequency: Semi-annually
    Publish Method: Manual
    Data Dictionary
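    The Arizona-time reference check described above can be reproduced with Python's standard zoneinfo module (America/Phoenix is UTC-07:00 year-round, so the conversion is fixed); a minimal sketch:

    ```python
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    # The reference record (incident 2579417) as exported in UTC; converting
    # to America/Phoenix (UTC-07:00, no daylight saving) recovers Arizona time.
    utc_time = datetime(2012, 1, 10, 16, 4, tzinfo=timezone.utc)
    local = utc_time.astimezone(ZoneInfo("America/Phoenix"))
    print(local.isoformat())  # 2012-01-10T09:04:00-07:00
    ```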

  3. salt storage record table

    • catalog.data.gov
    • anrgeodata.vermont.gov
    • +2more
    Updated Dec 13, 2024
    Cite
    Fluidstate Consulting (2024). salt storage record table [Dataset]. https://catalog.data.gov/dataset/salt-storage-record
    Dataset updated
    Dec 13, 2024
    Dataset provided by
    Fluidstate Consulting
    Description

    Mapping of deicing material storage facilities in the Lake Champlain Basin was conducted during the late fall and winter of 2022-23. 126 towns were initially selected for mapping (some divisions within the GIS towns data are unincorporated “gores”). Using the list of towns, town clerk contact information was obtained from the Vermont Secretary of State’s website, which maintains a database of contact information for each town.

    Each town was contacted to request information about their deicing material storage locations and methods. Email and telephone scripts were developed to briefly introduce the project and ask questions about the address of any deicing material storage locations in the town, the type of materials stored at each site, the duration of time each site has been used, whether materials on site are covered, and the type of surface the materials are stored on, if any. Data were entered into a geospatial database application (Fulcrum), gathered there, and exported as ArcGIS file geodatabases and Comma Separated Values (CSV) files for use in Microsoft Excel.

    Data were collected for 118 of the original 126 towns on the list (92%). Forty-three (43) towns reported that they are storing multiple material types at their facilities. Four (4) towns have multiple sites where they store material (Dorset, Pawlet, Morristown, and Castleton). Of these, three (3) store multiple materials at one or both of their sites (Pawlet, Morristown, and Castleton). Where towns have multiple materials or locations, the record information from the overall town identifier is linked to the material stored using a unique ‘one-to-many’ identifier. Locations of deicing material facilities, as shown in the database, were based on the addresses or location descriptions provided by town staff members and were verified only against the most recent aerial imagery (typically later than 2018 for all towns). Locations have not been field verified, nor have site conditions, infrastructure, or other information provided by town staff.

    Dataset instructions: The dataset for Deicing Material Storage Facilities contains two layers: the ‘parent’ records, titled ‘salt_storage’, and the ‘child’ records, titled ‘salt_storage_record’, with attributes for each salt storage site. This represents a ‘one-to-many’ data structure. To see the attributes for each salt storage site, the user needs to Relate the data; the relationship can be built in GIS software on the following fields:
    ‘salt_storage’: ‘fulcrum_id’
    ‘salt_storage_record’: ‘fulcrum_parent_id’
    This will create a one-to-many relationship between the geographic locations and the attributes for each salt storage site.
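    Outside GIS software, the same one-to-many Relate can be sketched with pandas on the exported CSV layers. The rows below are made up for illustration, but the join fields are the ones the Relate is built on:

    ```python
    import pandas as pd

    # Made-up rows illustrating the two layers; the join fields match the
    # described Relate: fulcrum_id (parent) -> fulcrum_parent_id (child).
    parents = pd.DataFrame({
        "fulcrum_id": ["a1", "b2"],
        "town": ["Pawlet", "Dorset"],
    })
    children = pd.DataFrame({
        "fulcrum_parent_id": ["a1", "a1", "b2"],
        "material": ["salt", "sand", "salt"],
    })

    # One-to-many join: each storage site keeps one row per stored material.
    joined = parents.merge(
        children, left_on="fulcrum_id", right_on="fulcrum_parent_id", how="left"
    )
    print(joined[["town", "material"]].values.tolist())
    # [['Pawlet', 'salt'], ['Pawlet', 'sand'], ['Dorset', 'salt']]
    ```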

  4. 1.08 High Severity Traffic Crashes (summary)

    • catalog.data.gov
    • performance.tempe.gov
    • +8more
    Updated Jan 17, 2025
    Cite
    City of Tempe (2025). 1.08 High Severity Traffic Crashes (summary) [Dataset]. https://catalog.data.gov/dataset/1-08-high-severity-traffic-crashes-summary-d652d
    Dataset updated
    Jan 17, 2025
    Dataset provided by
    City of Tempe
    Description

    Fatal and serious injury crashes are not “accidents” and are preventable. The City of Tempe is committed to reducing the number of fatal and serious injury crashes to zero. This data page provides details about the performance measure related to High Severity Traffic Crashes, as well as access to the data sets and any supplemental data. Click on the Showcases tab for visual representations of this data. The Engineering and Transportation Department uses this data to improve safety in Tempe.

    This page provides data for the High Severity Traffic Crashes performance measure: City of Tempe crash data summarized to show fatal and serious injury crashes by year. The performance measure dashboard is available at 1.08 High Severity Traffic Crashes.

    Additional Information
    Source: Arizona Department of Transportation (ADOT)
    Contact: Julian Dresang
    Contact E-Mail: Julian_Dresang@tempe.gov
    Data Source Type: CSV files and Excel spreadsheets can be downloaded from the ADOT website
    Preparation Method: Data is sorted to remove license plate numbers and other sensitive information
    Publish Frequency: Monthly
    Publish Method: Manual
    Data Dictionary

  5. TMS daily traffic counts CSV

    • hub.arcgis.com
    • opendata-nzta.opendata.arcgis.com
    Updated Aug 30, 2020
    Cite
    Waka Kotahi (2020). TMS daily traffic counts CSV [Dataset]. https://hub.arcgis.com/datasets/9cb86b342f2d4f228067a7437a7f7313
    Dataset updated
    Aug 30, 2020
    Dataset authored and provided by
    Waka Kotahi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    You can also access an API version of this dataset.

    TMS (traffic monitoring system) daily-updated traffic counts API

    Important note: due to the size of this dataset, you won't be able to open it fully in Excel. Use notepad / R / any software package which can open more than a million rows.
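    Streaming the file row by row sidesteps Excel's roughly 1,048,576-row ceiling; a minimal sketch with Python's csv module (the demo rows and column names are invented, not taken from the dataset):

    ```python
    import csv
    import io

    # Count data rows without loading the whole file into memory; this works
    # for files far beyond Excel's row limit. Column names here are made up.
    def count_rows(f):
        reader = csv.reader(f)
        next(reader)  # skip the header row
        return sum(1 for _ in reader)

    demo = io.StringIO("siteRef,count\n01N00001,523\n01N00002,881\n")
    print(count_rows(demo))  # 2
    ```

    For a real download, open the file with `open(path, newline="")` (hypothetical path) and pass the handle to the same function.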

    Data reuse caveats: as per license.

    Data quality statement: please read the accompanying user manual, explaining:

    • how this data is collected
    • identification of count stations
    • traffic monitoring technology
    • monitoring hierarchy and conventions
    • typical survey specification
    • data calculation
    • TMS operation

    Traffic monitoring for state highways: user manual [PDF 465 KB]

    The data is at daily granularity. However, the actual update frequency of the data depends on the contract the site falls within. For telemetry sites it's once a week, on a Wednesday. Some regional sites are fortnightly, and some monthly or quarterly. Some are only 4 weeks a year, with timing depending on contractors’ programme of work.

    Data quality caveats: you must use this data in conjunction with the user manual and the following caveats.

    • The road sensors used in data collection are subject to both technical errors and environmental interference.
    • Data is compiled from a variety of sources. Accuracy may vary and the data should only be used as a guide.
    • As not all road sections are monitored, a direct calculation of Vehicle Kilometres Travelled (VKT) for a region is not possible.
    • Data is sourced from Waka Kotahi New Zealand Transport Agency TMS data.
    • For sites that use dual loops, classification is by length. Vehicles with a length of less than 5.5 m are classed as light vehicles. Vehicles over 11 m long are classed as heavy vehicles. Vehicles between 5.5 m and 11 m are split 50:50 into light and heavy.
    • In September 2022, the National Telemetry contract was handed to a new contractor. During the handover process, due to some missing documents and aged technology, 40 of the 96 national telemetry traffic count sites went offline. The current contractor has continued to upload data from all active sites and has gradually worked to bring most offline sites back online. Please note and account for possible gaps in data from National Telemetry sites.
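    The dual-loop length rule in the caveats above can be written out directly (a simple illustration, not NZTA code):

    ```python
    # Dual-loop length classification as described in the caveats:
    # under 5.5 m -> light, over 11 m -> heavy, 5.5-11 m -> split 50:50.
    def classify_by_length(length_m):
        if length_m < 5.5:
            return {"light": 1.0, "heavy": 0.0}
        if length_m > 11.0:
            return {"light": 0.0, "heavy": 1.0}
        return {"light": 0.5, "heavy": 0.5}

    print(classify_by_length(4.2))  # {'light': 1.0, 'heavy': 0.0}
    print(classify_by_length(8.0))  # {'light': 0.5, 'heavy': 0.5}
    ```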
    

    The NZTA Vehicle Classification Relationships diagram below shows the length classification (typically dual loops) and axle classification (typically pneumatic tube counts), and how these map to the Monetised benefits and costs manual, table A37, page 254.

    Monetised benefits and costs manual [PDF 9 MB]

    For the full TMS classification schema see Appendix A of the traffic counting manual vehicle classification scheme (NZTA 2011), below.

    Traffic monitoring for state highways: user manual [PDF 465 KB]

    State highway traffic monitoring (map)

    State highway traffic monitoring sites

  6. RAPID input and output files corresponding to "RAPID Applied to the...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 24, 2020
    Cite
    Habets, Florence (2020). RAPID input and output files corresponding to "RAPID Applied to the SIM-France Model" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_30228
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    David, Cédric H.
    Maidment, David R.
    Yang, Zong-Liang
    Habets, Florence
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    France
    Description

    Corresponding peer-reviewed publication

    This dataset corresponds to all the RAPID input and output files that were used in the study reported in:

    David, Cédric H., Florence Habets, David R. Maidment and Zong-Liang Yang (2011), RAPID applied to the SIM-France model, Hydrological Processes, 25(22), 3412-3425. DOI: 10.1002/hyp.8070.

    When making use of any of the files in this dataset, please cite both the aforementioned article and the dataset herein.

    Time format

    The times reported in this description all follow the ISO 8601 format. For example 2000-01-01T16:00-06:00 represents 4:00 PM (16:00) on Jan 1st 2000 (2000-01-01), Central Standard Time (-06:00). Additionally, when time ranges with inner time steps are reported, the first time corresponds to the beginning of the first time step, and the second time corresponds to the end of the last time step. For example, the 3-hourly time range from 2000-01-01T03:00+00:00 to 2000-01-01T09:00+00:00 contains two 3-hourly time steps. The first one starts at 3:00 AM and finishes at 6:00AM on Jan 1st 2000, Universal Time; the second one starts at 6:00 AM and finishes at 9:00AM on Jan 1st 2000, Universal Time.
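    The timestamp and time-range conventions above can be checked with Python's standard library, using the same example values given in this description:

    ```python
    from datetime import datetime, timedelta

    # Parse the example timestamp: 4:00 PM Central Standard Time (-06:00).
    t = datetime.fromisoformat("2000-01-01T16:00-06:00")
    print(t.utcoffset() == timedelta(hours=-6))  # True

    # The 3-hourly range from 03:00 to 09:00 UTC contains two 3-hour steps.
    start = datetime.fromisoformat("2000-01-01T03:00+00:00")
    end = datetime.fromisoformat("2000-01-01T09:00+00:00")
    print((end - start) // timedelta(hours=3))  # 2
    ```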

    Data sources

    The following sources were used to produce files in this dataset:

    The hydrographic network of SIM-France, as published in Habets, F., A. Boone, J. L. Champeaux, P. Etchevers, L. Franchistéguy, E. Leblois, E. Ledoux, P. Le Moigne, E. Martin, S. Morel, J. Noilhan, P. Quintana Seguí, F. Rousset-Regimbeau, and P. Viennot (2008), The SAFRAN-ISBA-MODCOU hydrometeorological model applied over France, Journal of Geophysical Research: Atmospheres, 113(D6), DOI: 10.1029/2007JD008548.

    The observed flows are from Banque HYDRO, Service Central d'Hydrométéorologie et d'Appui à la Prévision des Inondations. Available at http://www.hydro.eaufrance.fr/index.php.

    Outputs from a simulation using SIM-France (Habets et al. 2008). The simulation was run by Florence Habets, and produced 3-hourly time steps from 1995-08-01T00:00+02:00 to 2005-07-31T21:00+02:00. Further details on the inputs and options used for this simulation are provided in David et al. (2011).

    Software

    The following software were used to produce files in this dataset:

    The Routing Application for Parallel computation of Discharge (RAPID, David et al. 2011, http://rapid-hub.org), Version 1.1.0. Further details on the inputs and options used for this series of simulations are provided below and in David et al. (2011).

    ESRI ArcGIS (http://www.arcgis.com).

    Microsoft Excel (https://products.office.com/en-us/excel).

    The GNU Compiler Collection (https://gcc.gnu.org) and the Intel compilers (https://software.intel.com/en-us/intel-compilers).

    Study domain

    The files in this dataset correspond to one study domain:

    The river network of SIM-France is made of 24264 river reaches. The temporal range corresponding to this domain is from 1995-08-01T00:00+02:00 to 2005-07-31T21:00+02:00.

    Description of files

    All files below were prepared by Cédric H. David, using the data sources and software mentioned above.

    rapid_connect_France.csv. This CSV file contains the river network connectivity information and is based on the unique IDs of the SIM-France river reaches (the IDs). For each river reach, this file specifies: the ID of the reach, the ID of the unique downstream reach, the number of upstream reaches with a maximum of four reaches, and the IDs of all upstream reaches. A value of zero is used in place of NoData. The river reaches are sorted in increasing value of ID. The values were computed based on the SIM-France FICVID file. This file was prepared using a Fortran program.
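    A reader for the connectivity layout just described might look like the sketch below; the three rows are invented for illustration, not taken from the actual file:

    ```python
    import csv
    import io

    # Invented rows following the described layout: reach ID, downstream ID,
    # number of upstream reaches (max 4), then four upstream IDs (0 = NoData).
    demo = io.StringIO(
        "10,30,0,0,0,0,0\n"
        "20,30,0,0,0,0,0\n"
        "30,0,2,10,20,0,0\n"
    )
    connect = {}
    for row in csv.reader(demo):
        reach_id, downstream, n_up = int(row[0]), int(row[1]), int(row[2])
        connect[reach_id] = {
            "downstream": downstream,
            "upstream": [int(x) for x in row[3:3 + n_up]],
        }
    print(connect[30]["upstream"])  # [10, 20]
    ```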

    m3_riv_France_1995_2005_ksat_201101_c_zvol_ext.nc. This netCDF file contains the 3-hourly accumulated inflows of water (in cubic meters) from surface and subsurface runoff into the upstream point of each river reach. The river reaches have the same IDs and are sorted similarly to rapid_connect_France.csv. The time range for this file is from 1995-08-01T00:00+02:00 to 2005-07-31T21:00+02:00. The values were computed using the outputs of SIM-France. This file was prepared using a Fortran program.

    kfac_modcou_1km_hour.csv. This CSV file contains a first guess of Muskingum k values (in seconds) for all river reaches. The river reaches have the same IDs and are sorted similarly to rapid_connect_France.csv. The values were computed based on the following information: ID, size of the side of the grid cell, Equation (5) in David et al. (2011), and using a wave celerity of 1 km/h. This file was prepared using a Fortran program.

    kfac_modcou_ttra_length.csv. This CSV file contains a second guess of Muskingum k values (in seconds) for all river reaches. The river reaches have the same IDs and are sorted similarly to rapid_connect_France.csv. The values were computed based on the following information: ID, size of the side of the grid cell, travel time, and Equation (9) in David et al. (2011).

    k_modcou_0.csv, k_modcou_1.csv, k_modcou_2.csv, k_modcou_3.csv, k_modcou_4.csv, k_modcou_a.csv, k_modcou_b.csv, and k_modcou_c.csv. Each of these CSV files contains Muskingum k values (in seconds) for all river reaches. The river reaches have the same IDs and are sorted similarly to rapid_connect_France.csv. The values were computed based on the following information: kfac_modcou_1km_hour.csv and Table (2) in David et al. (2011). These files were prepared using a Fortran program.
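    For context on how k and x parameters like these drive routing, here is a minimal single-reach Muskingum update, a textbook sketch with made-up numbers, not RAPID's matrix-based solver:

    ```python
    # Single-reach Muskingum routing step: O2 = C1*I2 + C2*I1 + C3*O1, with
    # coefficients derived from k (seconds), x (dimensionless), and dt (seconds).
    def muskingum_step(i1, i2, o1, k, x, dt):
        denom = 2 * k * (1 - x) + dt
        c1 = (dt - 2 * k * x) / denom
        c2 = (dt + 2 * k * x) / denom
        c3 = (2 * k * (1 - x) - dt) / denom
        return c1 * i2 + c2 * i1 + c3 * o1

    # Made-up steady flow: with constant inflow, outflow stays constant
    # because C1 + C2 + C3 = 1.
    q = muskingum_step(100.0, 100.0, 100.0, k=3 * 3600, x=0.1, dt=3 * 3600)
    print(round(q, 6))  # 100.0
    ```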

    x_modcou_0.csv, x_modcou_1.csv, x_modcou_2.csv, x_modcou_3.csv, x_modcou_4.csv, and x_modcou_a.csv. Each of these CSV files contains Muskingum x values (dimensionless) for all river reaches. The river reaches have the same IDs and are sorted similarly to rapid_connect_France.csv. The values were computed based on Table (2) in David et al. (2011). These files were prepared using a Fortran program.

    x_modcou_b.csv. This CSV file contains Muskingum x values

  7. 3.07 AZ Merit Data (summary)

    • datasets.ai
    • data.tempe.gov
    • +10more
    Updated Aug 6, 2024
    Cite
    City of Tempe (2024). 3.07 AZ Merit Data (summary) [Dataset]. https://datasets.ai/datasets/3-07-az-merit-data-summary-55307
    Dataset updated
    Aug 6, 2024
    Dataset authored and provided by
    City of Tempe
    Description

    This page provides data for the 3rd Grade Reading Level Proficiency performance measure.


    The dataset includes student performance results on the English/Language Arts section of the AzMERIT from Fall 2017 and Spring 2018. Data is representative of third-grade students in public elementary schools in Tempe, including schools from both the Tempe Elementary and Kyrene districts. Results are by school and provide the total number of students tested, the total percentage passing, and the percentage of students scoring at each of the four levels of proficiency.


    The performance measure dashboard is available at 3.07 3rd Grade Reading Level Proficiency.


    Additional Information

    Source: Arizona Department of Education
    Contact: Ann Lynn DiDomenico
    Contact E-Mail: Ann_DiDomenico@tempe.gov
    Data Source Type: Excel/ CSV
    Preparation Method: Filters on original dataset: within "Schools" Tab School District [select Tempe School District and Kyrene School District]; School Name [deselect Kyrene SD not in Tempe city limits]; Content Area [select English Language Arts]; Test Level [select Grade 3]; Subgroup/Ethnicity [select All Students] Remove irrelevant fields; Add Fiscal Year
    Publish Frequency: Annually as data becomes available
    Publish Method: Manual
    Data Dictionary

  8. Election District 2022 Street Inventory

    • data.virginia.gov
    • transportation-loudoungis.opendata.arcgis.com
    • +7more
    Updated Mar 29, 2023
    Cite
    Loudoun County (2023). Election District 2022 Street Inventory [Dataset]. https://data.virginia.gov/dataset/election-district-2022-street-inventory
    Available download formats: arcgis geoservices rest api, kml, geojson, zip, csv, html
    Dataset updated
    Mar 29, 2023
    Dataset provided by
    Loudoun County GIS
    Authors
    Loudoun County
    Description
    The table provides a list of all streets in the county and the election district(s) they are in. The dataset also includes the MULTIDISTRICT_STREET_FLAG attribute, which enables the user to identify which streets fall within more than one election district.

    Table created 3/29/23. Data was created using standard GIS processing tools to determine the geographic location of street centerlines in relation to election districts.

    As the table represents countywide data, it can take time to download. The table can be downloaded as a .csv file and opened in a text editor, spreadsheet, or database (such as Microsoft Excel or Access).

  9. GIS Dataset of Colour and Materials at Het Loo Palace in the Apartments of...

    • zenodo.org
    • data.niaid.nih.gov
    Updated Jul 12, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Sara Bernert; Sara Bernert (2024). GIS Dataset of Colour and Materials at Het Loo Palace in the Apartments of William III of Orange-Nassau (1713 Inventory) [Dataset]. http://doi.org/10.5281/zenodo.7990600
    Available download formats: csv, bin
    Dataset updated
    Jul 12, 2024
    Dataset provided by
    Zenodo
    Authors
    Sara Bernert; Sara Bernert
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract

    The presented datasets, which document the colour choices recorded in the 1713 inventory[1] for the two apartments of William III of Orange-Nassau (1652-1702) at Het Loo palace (Apeldoorn) for historical mapping, correspond to the article:

    Bernert, Sara. 'Interior Colour Practices in the Apartments of William III of Orange-Nassau at Het Loo Palace. A New Methodology for Digital Reconstruction through the Use of GIS'. In New Digital Approaches, edited by Krista De Jonge and Sanne Maekelberg, 71–88. PALATIUM, 2023.

    This methodology deals with the captivating world of interior colour practices in stately apartments within courtly contexts, using the power of digital humanities. The aim is to bridge historical sources, such as palace inventories, with their corresponding spaces through a Geographic Information System (GIS).

    Please see the PDF for further description.

    Dataset Contents

    The separate CSV sheets and the equivalent Excel workbook[2] include:

    • The datasets of colour carriers, their quantities and described colours by room as noted in the inventory:
      • Overall colour distribution across both apartments:
        • In short form for the macro layer of the palace for an over-regional comparison.[3]
        • And a detailed version of the microlayer of the palace.[4]
    • A separate dataset of the first apartment of William III as it was around 1686, when he still held the offices of Prince of Orange and Stadtholder of Holland, Zeeland, Utrecht, Guelders, and Overijssel.[5]
    • A separate dataset of the second apartment of William III in its state around 1694, by which time he had become King of England.[6]
    • Precise data on the polygons of the ground plan from 1695.[7]
    • Furthermore, the key to colour factoring by the space the colour carriers take up within the room, as described in the associated article pp.73-74.[8]

    _

    [1] The inventory was amongst other inventories of the Orange-Nassau dynasty published in the 1970s and is accessible online: Sophie Wilhelmina Albertine Drossaers and Theodoor Herman Lunsingh Scheurleer, eds., ‘Inventaris van de Inboedel van Het Huis Het Loo, Het Oude Loo En Het Huis Merwel 1713’, in Inventarissen van de Inboedels in de Verblijven van de Oranjes En Daarmee Gelijk Te Stellen Stukken 1567-1795, vol. 1, 3 vols, Rijks Geschiedkundige Publicatiën, GS 147 (Den Haag: Rijks Geschiedkundige Publicatiën, 1974), 647–94, https://resources.huygens.knaw.nl/retroboeken/inboedelsoranje/#source=1&page=686&accessor=toc&view=imagePane&size=877.

    [2] Cf. 2023_HetLoo_1713_WilliamIII_Data_Color_Material_Apartments©Bernert2023

    [3] Cf. 2023_HetLoo_1713_WilliamIII_ColourShort_Both_Apartments_©Bernert2023

    [4] Cf. 2023_HetLoo_1713_WilliamIII_Inventory_Long_Both_Apartments_©Bernert2023

    [5] Cf. 2023_HetLoo_1713_WilliamIII_First_Apartment_©Bernert2023

    [6] Cf. 2023_HetLoo_1713_WilliamIII_Second_Apartment_©Bernert2023

    [7] Cf. 2023_HetLoo_1695_GIS_PolygonData_Spaces_©Bernert2023. The map referred to: Anonymous, Het Loo Palace. Groundplan of the First Floor, around 1695, RL-7596, Het Loo Palace Collection.

    [8] Cf. 2023_GIS_Colour_Counting_Sheet_©Bernert2023 and Sara Bernert, ‘Interior Colour Practices in the Apartments of William III of Orange-Nassau at Het Loo Palace. A New Methodology for Digital Reconstruction through the Use of GIS’, in New Digital Approaches, ed. Krista De Jonge and Sanne Maekelberg, PALATIUM, 2023, 73–74.

  10. Ecosystem Extent Domains

    • catalogue.data.govt.nz
    • portal.zero.govt.nz
    • +1more
    Updated Mar 3, 2025
    Cite
    (2025). Ecosystem Extent Domains [Dataset]. https://catalogue.data.govt.nz/dataset/ecosystem-extent-domains
    Explore at:
    Dataset updated
    Mar 3, 2025
    Description

    At present, ArcGIS Online (and therefore Open Data) does not resolve domains for CSV and Excel file formats. This document provides domains for Ecosystem Current and Ecosystem Potential Extent. The domains in this document include: Ecosystem Code, Ecosystem Name, Ecosystem Group, Hydrosystem, Record Status, Validation State, and IUCN Threat Status.

  11. ‘1.08 High Severity Traffic Crashes (summary)’ analyzed by Analyst-2

    • analyst-2.ai
    Updated Feb 11, 2022
    Cite
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com) (2022). ‘1.08 High Severity Traffic Crashes (summary)’ analyzed by Analyst-2 [Dataset]. https://analyst-2.ai/analysis/data-gov-1-08-high-severity-traffic-crashes-summary-5ea5/fa231351/?iid=002-629&v=presentation
    Explore at:
    Dataset updated
    Feb 11, 2022
    Dataset authored and provided by
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Analysis of ‘1.08 High Severity Traffic Crashes (summary)’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://catalog.data.gov/dataset/cd29e892-b0a8-4b4e-8b5e-3f81a9df49a2 on 11 February 2022.

    --- Dataset description provided by original source is as follows ---

    Fatal and serious injury crashes are not “accidents” and are preventable. The City of Tempe is committed to reducing the number of fatal and serious injury crashes to zero. This data page provides details about the performance measure related to High Severity Traffic Crashes as well as access to the data sets and any supplemental data. Click on the Showcases tab for visual representations of this data. The Engineering and Transportation Department uses this data to improve safety in Tempe.


    This page provides data for the High Severity Traffic Crashes performance measure.


    City of Tempe crash data summarized to show fatal and serious injury crashes by year.


    The performance measure dashboard is available at 1.08 High Severity Traffic Crashes


    Additional Information


    Source: Arizona Department of Transportation (ADOT)

    Contact:  Julian Dresang

    Contact E-Mail:  Julian_Dresang@tempe.gov

    Data Source Type:  CSV files and Excel spreadsheets can be downloaded from ADOT website

    Preparation Method:  Data is sorted to remove license plate numbers and other sensitive information

    Publish Frequency:  Monthly

    Publish Method:  Manual

    Data Dictionary


    --- Original source retains full ownership of the source dataset ---

  12. GIS Final Project

    • data.cityofchicago.org
    Updated Mar 26, 2025
    Cite
    Chicago Police Department (2025). GIS Final Project [Dataset]. https://data.cityofchicago.org/Public-Safety/GIS-Final-Project/8n2i-4jmi
    Explore at:
    Available download formats: application/rdfxml, csv, tsv, xml, application/rssxml, kmz, application/geo+json, kml
    Dataset updated
    Mar 26, 2025
    Authors
    Chicago Police Department
    Description

    This dataset reflects reported incidents of crime (with the exception of murders where data exists for each victim) that occurred in the City of Chicago from 2001 to present, minus the most recent seven days. Data is extracted from the Chicago Police Department's CLEAR (Citizen Law Enforcement Analysis and Reporting) system. In order to protect the privacy of crime victims, addresses are shown at the block level only and specific locations are not identified. Should you have questions about this dataset, you may contact the Research & Development Division of the Chicago Police Department at 312.745.6071 or RandD@chicagopolice.org. Disclaimer: These crimes may be based upon preliminary information supplied to the Police Department by the reporting parties that have not been verified. The preliminary crime classifications may be changed at a later date based upon additional investigation and there is always the possibility of mechanical or human error. Therefore, the Chicago Police Department does not guarantee (either expressed or implied) the accuracy, completeness, timeliness, or correct sequencing of the information and the information should not be used for comparison purposes over time. The Chicago Police Department will not be responsible for any error or omission, or for the use of, or the results obtained from the use of this information. All data visualizations on maps should be considered approximate and attempts to derive specific addresses are strictly prohibited. The Chicago Police Department is not responsible for the content of any off-site pages that are referenced by or that reference this web page other than an official City of Chicago or Chicago Police Department web page. The user specifically acknowledges that the Chicago Police Department is not responsible for any defamatory, offensive, misleading, or illegal conduct of other users, links, or third parties and that the risk of injury from the foregoing rests entirely with the user. 
The unauthorized use of the words "Chicago Police Department," "Chicago Police," or any colorable imitation of these words or the unauthorized use of the Chicago Police Department logo is unlawful. This web page does not, in any way, authorize such use. Data is updated daily Tuesday through Sunday. The dataset contains more than 65,000 records/rows of data and cannot be viewed in full in Microsoft Excel. Therefore, when downloading the file, select CSV from the Export menu. Open the file in an ASCII text editor, such as Wordpad, to view and search. To access a list of Chicago Police Department - Illinois Uniform Crime Reporting (IUCR) codes, go to http://data.cityofchicago.org/Public-Safety/Chicago-Police-Department-Illinois-Uniform-Crime-R/c7ck-438e
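Since the full export exceeds what a spreadsheet can display, the file can be streamed row by row instead. A minimal sketch (the two-row sample stands in for the real download; "Primary Type" follows the published crime schema, but verify column names against the downloaded file):

```python
import csv
import io

# Stream a large crime CSV one row at a time instead of loading it whole.
# The in-memory sample below stands in for the downloaded export file.
sample = io.StringIO(
    "ID,Primary Type,Year\n"
    "1,THEFT,2020\n"
    "2,BATTERY,2021\n"
)

counts: dict[str, int] = {}
for row in csv.DictReader(sample):  # only one row is held in memory at a time
    counts[row["Primary Type"]] = counts.get(row["Primary Type"], 0) + 1

print(counts)  # → {'THEFT': 1, 'BATTERY': 1}
```

For the real file, replace the `StringIO` object with `open("crimes.csv", newline="")` (file name hypothetical).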

  13. Selkie GIS Techno-Economic Tool input datasets

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Nov 8, 2023
    Cite
    Margaret Cullinane; Margaret Cullinane (2023). Selkie GIS Techno-Economic Tool input datasets [Dataset]. http://doi.org/10.5281/zenodo.10083961
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 8, 2023
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Margaret Cullinane; Margaret Cullinane
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Aug 2023
    Description

    This data was prepared as input for the Selkie GIS-TE tool. This GIS tool aids site selection, logistics optimization and financial analysis of wave or tidal farms in the Irish and Welsh maritime areas. Read more here: https://www.selkie-project.eu/selkie-tools-gis-technoeconomic-model/

    This research was funded by the Science Foundation Ireland (SFI) through MaREI, the SFI Research Centre for Energy, Climate and the Marine and by the Sustainable Energy Authority of Ireland (SEAI). Support was also received from the European Union's European Regional Development Fund through the Ireland Wales Cooperation Programme as part of the Selkie project.

    ********************

    File Formats

    ********************

    Results are presented in three file formats:

    tif: can be imported into GIS software (such as ArcGIS)

    csv: human-readable text format, which can also be opened in Excel

    png: image files that can be viewed in standard desktop software and give a spatial view of results

    ******************

    Input Data

    ******************

    All calculations use open-source data from the Copernicus store and the open-source software Python. The Python xarray library is used to read the data.

    Hourly Data from 2000 to 2019

    - Wind -

    Copernicus ERA5 dataset

    17 by 27.5 km grid

    10m wind speed

    - Wave -

    Copernicus Atlantic-Iberian Biscay Irish Ocean Wave Reanalysis dataset

    3 by 5 km grid

    *********************

    Accessibility

    *********************

    The maximum limits for Hs and wind speed are applied when mapping the accessibility of a site. The Accessibility layer shows the percentage of time the Hs (Atlantic-Iberian Biscay Irish Ocean Wave Reanalysis) and wind speed (ERA5) are below these limits for the month.

    Input data is 20 years of hourly wave and wind data from 2000 to 2019, partitioned by month. At each timestep, the accessibility of the site was determined by checking if the Hs and wind speed were below their respective limits. The percentage accessibility is the number of hours within limits divided by the total number of hours for the month.
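The per-month calculation above can be sketched in a few lines. The series and limit values below are synthetic and purely illustrative; the tool itself reads ERA5 wind and wave-reanalysis Hs via xarray:

```python
import numpy as np

# Synthetic hourly series for one 30-day month (720 hours); the real inputs
# are ERA5 10 m wind speed and wave-reanalysis Hs read via xarray.
rng = np.random.default_rng(0)
hs = rng.uniform(0.5, 4.0, size=720)     # significant wave height Hs, m
wind = rng.uniform(2.0, 25.0, size=720)  # 10 m wind speed, m/s

HS_LIMIT = 2.0     # m, example operational limit
WIND_LIMIT = 15.0  # m/s, example operational limit

# Boolean accessibility at each timestep: both variables within limits.
accessible = (hs < HS_LIMIT) & (wind < WIND_LIMIT)

# Percentage accessibility: hours within limits / total hours in the month.
pct_accessibility = 100.0 * accessible.sum() / accessible.size
print(f"{pct_accessibility:.1f}% of hours accessible this month")
```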

    Environmental data is from the Copernicus data store (https://cds.climate.copernicus.eu/). Wave hourly data is from the 'Atlantic-Iberian Biscay Irish Ocean Wave Reanalysis' dataset. Wind hourly data is from the ERA5 dataset.

    ********************

    Availability

    ********************

    A device's availability to produce electricity depends on the device's reliability and the time to repair any failures. The repair time depends on weather windows and other logistical factors (for example, the availability of repair vessels and personnel). A 2013 study by O'Connor et al. determined the relationship between the accessibility and availability of a wave energy device. The resulting graph (see Fig. 1 of their paper) shows the correlation between accessibility at Hs of 2m and wind speed of 15.0m/s and availability. This graph is used to calculate the availability layer from the accessibility layer.

    The input value, accessibility, measures how accessible a site is for installation or operation and maintenance activities. It is the percentage time the environmental conditions, i.e. the Hs (Atlantic-Iberian Biscay Irish Ocean Wave Reanalysis) and wind speed (ERA5), are below operational limits.

    Input data is 20 years of hourly wave and wind data from 2000 to 2019, partitioned by month. At each timestep, the accessibility of the site was determined by checking if the Hs and wind speed were below their respective limits. The percentage accessibility is the number of hours within limits divided by the total number of hours for the month. Once the accessibility was known, the percentage availability was calculated using the O'Connor et al. graph of the relationship between the two. A mature technology reliability was assumed.

    **********************

    Weather Window

    **********************

    The weather window availability is the percentage of possible x-duration windows where weather conditions (Hs, wind speed) are below maximum limits for the given duration for the month.

    The resolution of the wave dataset (0.05° × 0.05°) is higher than that of the wind dataset (0.25° × 0.25°), so the nearest wind value is used for each wave data point. The weather window layer is at the resolution of the wave layer.

    The first step in calculating the weather window for a particular set of inputs (Hs, wind speed and duration) is to calculate the accessibility at each timestep. The accessibility is based on a simple boolean evaluation: are the wave and wind conditions within the required limits at the given timestep?

    Once the time series of accessibility is calculated, the next step is to look for periods of sustained favourable environmental conditions, i.e. the weather windows. Here all possible operating periods with a duration matching the required weather-window value are assessed to see if the weather conditions remain suitable for the entire period. The percentage availability of the weather window is calculated based on the percentage of x-duration windows with suitable weather conditions for their entire duration. The weather window availability can be considered as the probability of having the required weather window available at any given point in the month.
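The window search can be expressed directly on the boolean accessibility series. A minimal sketch with a toy eight-hour series (function name and data are illustrative, not the tool's own code):

```python
import numpy as np

def weather_window_availability(accessible: np.ndarray, duration: int) -> float:
    """Percentage of possible `duration`-hour windows in which conditions
    stay within limits for the entire window."""
    n = accessible.size - duration + 1  # number of candidate start hours
    if n <= 0:
        return 0.0
    # A window qualifies only if every hour inside it is accessible.
    ok = [accessible[i:i + duration].all() for i in range(n)]
    return 100.0 * sum(ok) / n

# Toy hourly series: True where Hs and wind speed are within their limits.
series = np.array([True, True, True, False, True, True, True, True])
print(weather_window_availability(series, 3))  # → 50.0
```

Of the six possible 3-hour windows, three contain the single inaccessible hour, so availability is 50%.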

    *****************************

    Extreme Wind and Wave

    *****************************

    The Extreme wave layers show the highest significant wave height expected to occur during the given return period.

    The Extreme wind layers show the highest wind speed expected to occur during the given return period.

    To predict extreme values, we use Extreme Value Analysis (EVA). EVA focuses on the extreme part of the data and seeks to determine a model that fits this reduced portion accurately. EVA consists of three main stages. The first stage is the selection of extreme values from a time series. The next step is to fit a model that best approximates the selected extremes by determining the shape parameters for a suitable probability distribution. The model then predicts extreme values for the selected return period. All calculations use the Python pyextremes library. Two methods are used: Block Maxima and Peaks over Threshold.

    The Block Maxima method selects the annual maxima and fits a GEVD probability distribution.

    The peaks_over_threshold method has two variable calculation parameters. The first is the percentile above which values must lie to be selected as extreme (0.9 or 0.998). The second input is the time difference between extreme values for them to be considered independent (3 days). A Generalised Pareto Distribution is fitted to the selected extremes and used to calculate the extreme value for the selected return period.
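The selection-plus-declustering step (a percentile threshold and a 3-day independence criterion) can be sketched as below. This is a simplified stand-in for pyextremes' peaks-over-threshold extraction, and the toy Hs series is synthetic:

```python
import numpy as np

def select_pot_extremes(values, times_h, percentile=90.0, min_gap_h=72):
    """Select peaks over a percentile threshold, merging peaks closer than
    `min_gap_h` hours (72 h = 3 days) into one event and keeping only the
    largest peak of each cluster. Simplified stand-in for pyextremes'
    peaks-over-threshold extraction."""
    threshold = np.percentile(values, percentile)
    cluster_idx = []
    for i in np.flatnonzero(values > threshold):
        if cluster_idx and times_h[i] - times_h[cluster_idx[-1]] < min_gap_h:
            # Within 3 days of the previous peak: same event, keep the larger.
            if values[i] > values[cluster_idx[-1]]:
                cluster_idx[-1] = i
        else:
            cluster_idx.append(i)
    return threshold, [(int(times_h[i]), float(values[i])) for i in cluster_idx]

# Toy hourly Hs series: two spikes 30 h apart (one event) plus one later spike.
times_h = np.arange(300)
values = np.full(300, 1.0)
values[10], values[40], values[200] = 5.0, 7.0, 6.0

threshold, events = select_pot_extremes(values, times_h)
print(events)  # → [(40, 7.0), (200, 6.0)]
```

A Generalised Pareto Distribution would then be fitted to the exceedances in `events` to extrapolate the value for the chosen return period.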

  14. London Atmospheric Emissions Inventory (LAEI) 2019

    • data.subak.org
    pdf, zip
    Updated Feb 15, 2023
    + more versions
    Cite
    Greater London Authority (2023). London Atmospheric Emissions Inventory (LAEI) 2019 [Dataset]. https://data.subak.org/dataset/london-atmospheric-emissions-inventory-laei-2019
    Explore at:
    Available download formats: zip, pdf
    Dataset updated
    Feb 15, 2023
    Dataset provided by
    Greater London Authority: http://www.london.gov.uk/
    Area covered
    London
    Description

    The LAEI 2019 is the latest version of the London Atmospheric Emissions Inventory and replaces previous versions of the inventory.

    Emissions estimates of key pollutants (NOx, PM10, PM2.5 and CO2) by source type are included for the base year 2019. Emissions for previous years 2013 and 2016 have also been revised, using the latest data sources (emission factors, activity data, ...) where available, and changes in methodology where relevant.

    Emissions projected forward to 2025 and 2030 will be available soon.

    The area covered by the LAEI includes Greater London (the 32 London boroughs and the City of London), as well as areas outside Greater London up to the M25 motorway.

    These emissions have been used to estimate ground level concentrations of key pollutants NOx, NO2, PM10 and PM2.5 across Greater London for year 2019, using an atmospheric dispersion model. Air pollutant concentration maps and associated datasets are available for download.

    Due to the size of the LAEI database, datasets are provided in several parts and provided as ZIP files.

    Documentation

    Supporting Information Key GIS geographies and road traffic flows and vehicle-kilometres for 2019 for each vehicle type. Data are provided in Excel and GIS formats.

    Grid Emissions Summary This dataset includes emissions of key pollutants NOx, PM10, PM2.5 and CO2, and a range of additional pollutants (SO2, CH4, VOC...) in tonnes/year for 2013, 2016 and 2019 for each source category at a 1km grid square resolution (further split to follow all London borough boundaries). It includes emission summary tables for London boroughs and London zones (Central / Inner / Outer London). Data are provided in Excel and GIS formats.

    • Update 26 07 2022 : Please note that emission totals have been slightly revised to correct emissions for Rail and Construction NRMM Exhaust (for CO2) and Industrial / Commercial Heat and Power sources (for all pollutants). Please refer to the Emissions - Data - Excel File below for further information.
    • Emissions - Data - Excel Files (.ZIP - 58.5 MB)
    • Emissions - Data - GIS Files (.ZIP - 66.4 MB)
    • Emissions - Summary Dashboards (.ZIP - 338 KB)

    Detailed Road Transport Road transport NOx, PM10, PM2.5 and CO2 emissions in 2019 by vehicle type. PM emissions include a split by exhaust, brake wear and tyre wear. This data is provided at link level for major roads. Data are provided in several GIS formats.

    Concentrations This dataset includes modelled 2019 ground level concentrations of annual mean NOx, NO2, PM10 and PM2.5 in µg/m3 (microgramme per cubic metre) at 20m grid resolution. For PM10, it also includes the number of daily means exceeding 50 µg/m3. Data are provided in CSV, GIS (ESRI) and PDF formats.
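The PM10 exceedance-day statistic is a simple derived count: daily means from hourly concentrations, then the number of days whose mean exceeds 50 µg/m³. The published layers ship this value pre-computed; the three days below are synthetic, for illustration only:

```python
import numpy as np

# Three synthetic days of hourly PM10 concentrations, µg/m³.
hourly = np.concatenate([
    np.full(24, 40.0),  # day 1: mean 40 µg/m³ (below the 50 µg/m³ threshold)
    np.full(24, 60.0),  # day 2: mean 60 µg/m³ (exceeds)
    np.full(24, 55.0),  # day 3: mean 55 µg/m³ (exceeds)
])

# Daily means (one row per day), then count the days above 50 µg/m³.
daily_means = hourly.reshape(-1, 24).mean(axis=1)
exceedance_days = int((daily_means > 50.0).sum())
print(exceedance_days)  # → 2
```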

    Population Exposure These datasets include estimations of the number of Londoners and number of schools, hospitals and care homes in London exposed to an annual average NO2 concentration above the Air Quality Strategy objective of 40µg/m3 and PM2.5 concentration above the interim WHO Guideline of 10µg/m3, based on the modelled 2019 ground level concentrations. A comparison with previous 2016 concentrations modelled for the LAEI 2016 inventory is also provided.

  15. Deep Direct-Use Feasibility Study Tuscarora Sandstone Geophysical Log Digitization

    • data.openei.org
    • gdr.openei.org
    • +2more
    archive, data +1
    Updated Dec 18, 2019
    + more versions
    Cite
    Jessica Moore; Jessica Moore (2019). Deep Direct-Use Feasibility Study Tuscarora Sandstone Geophysical Log Digitization [Dataset]. http://doi.org/10.15121/1593278
    Explore at:
    Available download formats: archive, data, text_document
    Dataset updated
    Dec 18, 2019
    Dataset provided by
    West Virginia University
    USDOE Office of Energy Efficiency and Renewable Energy (EERE), Multiple Programs (EE)
    Open Energy Data Initiative (OEDI)
    Authors
    Jessica Moore; Jessica Moore
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains well log files collected from wells penetrating the Tuscarora Sandstone, a structural geologic map of West Virginia, and salinity information based on brine geochemistry in West Virginia and Pennsylvania. A combination of proprietary and free software may be required to view some of the information provided. Software used for data analysis and figure creation includes ESRI ArcGIS. For GIS map files, you will have to change the directories of the files to match your computer. LAS files were digitized using IHS Petra software, but may be viewed in Microsoft Notepad or converted to .csv files in Microsoft Excel.

  16. Zoological Park BA CD76 BP 2019

    • data.europa.eu
    Updated Aug 10, 2021
    Cite
    Département de la Seine-Maritime (2021). Zoological Park BA CD76 BP 2019 [Dataset]. https://data.europa.eu/data/datasets/https-www-arcgis-com-home-item-html-id-b71b917bb4574d25b0fd7a130301e110-sublayer-0?locale=en
    Explore at:
    Available download formats: geojson, web page, arcgis geoservices rest api, csv, kml, zip
    Dataset updated
    Aug 10, 2021
    Dataset authored and provided by
    Département de la Seine-Maritime
    License

    Licence Ouverte / Open Licence: https://www.etalab.gouv.fr/licence-ouverte-open-licence

    Description

    Initial budget (budget primitif) 2019 of the zoological park of Clères of the Department of Seine-Maritime, voted on December 10, 2018. This is an ancillary budget of the Department (as opposed to the main budget; see the glossary in the metadata, which is also downloadable from this website). The initial budget sets out the estimated revenue and expenditure for the financial year to come. It follows the same format as the administrative account, allowing a comparison between forecasts and outturns. The administrative account of the zoological park can also be consulted on this site.

    Metadata

    Link to metadata

    Additional resources

    The State portal for local authorities offers links to the documentation of instruction M52 (including nomenclature) for the current year and previous years (see ‘Archives of the M52’). It also offers for download (in xls and pdf formats) comparison tables between the budgets of the different departments in 2019.

    The tax website provides access to data on the individual accounts of local authorities, including departments. Each departmental dataset offers aggregated presentations of individual accounts as well as information on tax rates, with comparisons to stratum averages.

    The financial and management data portal of the local public sector offers for download in csv, xls and json formats the consolidated accounts of the departments from 2012 to 2022

    The open data portal of the Ministry of Economy, Finance, and Recovery offers downloadable in csv, json and excel formats, different datasets relating to the accounts of the departments.

  17. Datasets for manuscript "COW2NUTRIENT: An environmental GIS-based decision support tool for the assessment of nutrient recovery systems in livestock facilities"

    • catalog.data.gov
    • gimi9.com
    • +1more
    Updated Nov 11, 2021
    Cite
    U.S. EPA Office of Research and Development (ORD) (2021). Datasets for manuscript "COW2NUTRIENT: An environmental GIS-based decision support tool for the assessment of nutrient recovery systems in livestock facilities." [Dataset]. https://catalog.data.gov/dataset/datasets-for-manuscript-cow2nutrient-an-environmental-gis-based-decision-support-tool-for-
    Explore at:
    Dataset updated
    Nov 11, 2021
    Dataset provided by
    United States Environmental Protection Agency: http://www.epa.gov/
    Description

    https://github.com/gruizmer/COW2NUTRIENT/tree/master/ToolPaper_DataFiles

    • These folders supply supporting datasets for the manuscript "COW2NUTRIENT: An environmental GIS-based decision support tool for the assessment of nutrient recovery systems in livestock facilities."
    • The datasets are recorded as comma-separated values (.csv) and Microsoft Excel® (.xlsx) files. Column data entries have names and units. Some data are about animal facility population and location, amount of nutrient-rich waste generated (kg/yr), amount of nutrient recovered (kg P/yr), installation, capital, and maintenance costs (USD), technologies and their ranking and frequency of being selected for each combination of normalization-aggregation methods, average chlorophyll-a concentration in water in the watershed (ug/L), and average phosphorus concentration in water in the watershed (ug/L).
    • The folder “Manuscript” has subfolders with datasets for creating manuscript Figures 4, 8, 9, and 10 as well as datasets for Tables 9 and 10.
    • The folder “Supplementary Material” holds subfolders with datasets for creating Supplementary Material Figures 1-5, 8, 9, 11, and 12.

    This dataset is associated with the following publication: Martin-Hernandez, E., M. Martin, and G.J. Ruiz-Mercado. A geospatial environmental and techno-economic framework for sustainable phosphorus management at livestock facilities. Resources, Conservation and Recycling. Elsevier Science BV, Amsterdam, NETHERLANDS, 175: 105843, (2021).

  18. Real-Time Sensor Network of Detroit Green Infrastructure: Datasets and Code

    • zenodo.org
    bin, zip
    Updated Sep 13, 2023
    + more versions
    Cite
    Brooke Mason; Brooke Mason; Jacquelyn Schmidt; Jacquelyn Schmidt; Branko Kerkez; Branko Kerkez (2023). Real-Time Sensor Network of Detroit Green Infrastructure: Datasets and Code [Dataset]. http://doi.org/10.5281/zenodo.7401897
    Explore at:
    Available download formats: bin, zip
    Dataset updated
    Sep 13, 2023
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Brooke Mason; Brooke Mason; Jacquelyn Schmidt; Jacquelyn Schmidt; Branko Kerkez; Branko Kerkez
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Detroit
    Description
    1. MonitoredRainGardens_PaperTable.xlsx: Excel spreadsheet where the "Garden" sheet contains the 14 monitored green infrastructure (GI) sites with their design and physiographic features and the "Field Log" sheet contains the installation and field maintenance trips.
    2. WaterWells.xlsx: an Excel file containing all of the water wells in the Detroit region used for interpolating groundwater levels.
    3. GI_GIS_Analysis.aparx: ArcGIS Pro project file which includes the 14 monitored GI sites and the GIS data for Detroit (percent imperviousness, elevation, slope, land use type, wells, interpolated groundwater levels, hydrologic soil group).
    4. Code.zip: Zip folder containing another folder titled "Code" which holds: (1) a folder titled "SensorData" containing 16 csv files with the raw pressure transducer data for the 16 monitored GI sites during the measurement period (including the two excluded sites); (2) a csv file titled "MonitoredRainGardens.csv" containing the 16 monitored green infrastructure (GI) sites with their design and physiographic features used in the correlation analysis; (3) a csv file titled "storm_constants.csv" which contains the computed decay constants for every storm in every GI during the measurement period; (4) a csv file titled "GLWA_RainGaugesforStudy.csv" that contains rainfall from 9 rain gauges during the measurement period; (5) a Jupyter notebook titled "estimate_decay_constants.ipynb" which provides the code for calculating the decay constants for the monitored GI; (6) a Jupyter notebook titled "storm_constants.ipynb" which provides the code for analyzing the decay constants including the correlation analysis and surface plots; and (7) a Jupyter notebook titled "modeled_response.ipynb" which provides the code for plotting the drawdown curves based on the decay constant.
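The notebooks' exact method is not described here, but as a hedged sketch of a decay-constant estimate: assuming an exponential recession h(t) = h0·e^(−kt) after a storm, k is recoverable as the negative slope of a linear fit on log-levels (all values below are synthetic):

```python
import numpy as np

# Assumption (not stated in the dataset description): water level in a
# garden recedes exponentially after a storm, h(t) = h0 * exp(-k * t).
# Synthetic readings stand in for the pressure-transducer data.
t = np.arange(0.0, 10.0)     # hours since storm peak
h = 50.0 * np.exp(-0.3 * t)  # water depth, cm (generated with k = 0.3 per hour)

# k is the negative slope of log(h) versus time.
slope, intercept = np.polyfit(t, np.log(h), 1)
k = -slope
print(round(k, 3))  # → 0.3
```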
  19. Adirondack New York vegetation data, 2000-2015 | gimi9.com

    • gimi9.com
    Updated Aug 12, 2017
    + more versions
    Cite
    (2017). Adirondack New York vegetation data, 2000-2015 | gimi9.com [Dataset]. https://www.gimi9.com/dataset/data-gov_adirondack-new-york-vegetation-data-2000-2015/
    Explore at:
    Dataset updated
    Aug 12, 2017
    Area covered
    New York, Adirondack Mountains, Adirondack
    Description

    This dataset contains field measurements of vegetation from the (1) Adirondack Sugar Maple Project (ASM), and (2) Buck Creek North and Buck Creek South Watersheds. The ASM data, collected in 2009 in 20 Adirondack watersheds (two or three 0.10 ha plots per watershed), comprise general plot characteristics, tree species identification and diameter at breast height (DBH) for all trees greater than 10 cm DBH, canopy position and health ratings, common and scientific names, and species identification and counts for saplings and seedlings. In Buck Creek North Tributary Watershed and Buck Creek South Tributary Watershed, near Inlet, New York, all trees greater than 5 cm DBH were identified in 15 circular plots (245 square meters) distributed within each watershed. Measurements were made in 2000, 2005, 2010 and 2015. In all plots, saplings less than 5 cm DBH and taller than 1 meter were also identified to species and counted in 2010 and 2015. Data are posted in EXCEL 2013 and comma-delimited CSV format. Site locations (latitude, longitude) are provided in a downloadable file and dynamic ArcGIS REST service at the USGS Sciencebase DOI link.

  20. Adirondack New York soil chemistry data, 1997-2014

    • gimi9.com
    • search.dataone.org
    • +1more
    Updated Aug 9, 2017
    + more versions
    Cite
    (2017). Adirondack New York soil chemistry data, 1997-2014 [Dataset]. https://gimi9.com/dataset/data-gov_adirondack-new-york-soil-chemistry-data-1997-2014
    Explore at:
    Dataset updated
    Aug 9, 2017
    Area covered
    New York, Adirondack Mountains, Adirondack
    Description

    This dataset contains measurements of chemical concentrations of forest soil samples and associated site measurements collected in the Adirondack Ecoregion of New York State. Data are presented in four groups (tabs) in a Microsoft EXCEL 2013 spreadsheet (and comma-delimited CSV files): (1) Adirondack Sugar Maple Project (ASM), (2) Buck Creek North Watershed, (3) Buck Creek South Watershed, and (4) Western Adirondack Stream Survey (WASS) soil sampling. The ASM data were all collected in 2009 and the WASS data were all collected in 2004. The Buck Creek North Tributary Watershed was sampled in 1997, and sampling was repeated at the same plot locations in 2009/10. The Buck Creek South Tributary Watershed was sampled in 1998, and sampling was repeated at the same plot locations in 2014. Site locations (latitude, longitude) are provided in a downloadable file and a dynamic ArcGIS REST service at the USGS ScienceBase DOI link.

Cite
U.S. Geological Survey (2024). Digital data for the Salinas Valley Geological Framework, California [Dataset]. https://catalog.data.gov/dataset/digital-data-for-the-salinas-valley-geological-framework-california

Digital data for the Salinas Valley Geological Framework, California

Explore at:
Dataset updated
Jul 6, 2024
Dataset provided by
United States Geological Survey (http://www.usgs.gov/)
Area covered
Salinas Valley, Salinas, California
Description

This digital dataset was created as part of a U.S. Geological Survey study, done in cooperation with the Monterey County Water Resource Agency, to conduct a hydrologic resource assessment and develop an integrated numerical hydrologic model of the hydrologic system of Salinas Valley, CA. As part of this larger study, the USGS developed this digital dataset of geologic data and three-dimensional hydrogeologic framework models, referred to here as the Salinas Valley Geological Framework (SVGF), that define the elevation, thickness, extent, and lithology-based texture variations of nine hydrogeologic units in Salinas Valley, CA. The digital dataset includes a geospatial database that contains two main elements as GIS feature datasets: (1) input data to the 3D framework and textural models, within a feature dataset called “ModelInput”; and (2) interpolated elevation, thicknesses, and textural variability of the hydrogeologic units stored as arrays of polygonal cells, within a feature dataset called “ModelGrids”. The model input data in this data release include stratigraphic and lithologic information from water, monitoring, and oil and gas wells, as well as data from selected published cross sections, point data derived from geologic maps and geophysical data, and data sampled from parts of previous framework models. Input surface and subsurface data have been reduced to points that define the elevation of the top of each hydrogeologic unit at x,y locations; these point data, stored in a GIS feature class named “ModelInputData”, serve as digital input to the framework models. The locations of wells used as sources of subsurface stratigraphic and lithologic information are stored within the GIS feature class “ModelInputData”, but are also provided as separate point feature classes in the geospatial database. Faults that offset hydrogeologic units are provided as a separate line feature class.
Borehole data are also released as a set of tables, each of which may be joined or related to well location through a unique well identifier present in each table. Tables are in Excel and ASCII comma-separated value (CSV) format and include separate but related tables for well location, stratigraphic information on the depths to the top and base of hydrogeologic units intercepted downhole, downhole lithologic information reported at 10-foot intervals, and information on how lithologic descriptors were classed as sediment texture. Two types of geologic frameworks were constructed and released within a GIS feature dataset called “ModelGrids”: (1) a hydrostratigraphic framework in which the elevation, thickness, and spatial extent of the nine hydrogeologic units were defined based on interpolation of the input data, and (2) a textural model for each hydrogeologic unit based on interpolation of classed downhole lithologic data. Each framework is stored as an array of polygonal cells: essentially a “flattened”, two-dimensional representation of a digital 3D geologic framework. The elevation and thickness of the hydrogeologic units are contained within a single polygon feature class, SVGF_3DHFM, a mesh of polygons representing model cells with multiple attributes, including XY location and the elevation and thickness of each hydrogeologic unit. Textural information for each hydrogeologic unit is stored in a second array of polygonal cells called SVGF_TextureModel. The spatial data are accompanied by non-spatial tables that describe the sources of geologic information, a glossary of terms, and a description of the nine hydrogeologic units modeled in this study. A data dictionary defines the structure of the dataset, defines all fields in all spatial data attribute tables and all columns in all non-spatial tables, and duplicates the Entity and Attribute information contained in the metadata file. Spatial data are also presented as shapefiles.
Downhole data from boreholes are released as a set of tables related by a unique well identifier; these tables are in Excel and ASCII comma-separated value (CSV) format.
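Since the borehole tables are related through a shared well identifier, they can be combined with an ordinary table join once loaded from CSV. The following is a minimal sketch: the table contents, column names, and the identifier name "WellID" are hypothetical placeholders, not the actual field names used in the data release.

```python
import pandas as pd

# Hypothetical stand-ins for two of the released borehole tables.
wells = pd.DataFrame({
    "WellID": ["W1", "W2"],
    "Longitude": [-121.65, -121.60],
    "Latitude": [36.68, 36.72],
})
lithology = pd.DataFrame({
    "WellID": ["W1", "W1", "W2"],
    "TopDepth_ft": [0, 10, 0],
    "Texture": ["coarse", "fine", "coarse"],
})

# Relate each downhole lithology record to its well location
# via the unique well identifier shared by both tables.
joined = lithology.merge(wells, on="WellID", how="left")
```

A left join keeps every lithology interval even if a well's location record were missing, which makes gaps in the location table easy to spot.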
