13 datasets found
  1. Data from: 3DHD CityScenes: High-Definition Maps in High-Density Point...

    • data.europa.eu
    • data.niaid.nih.gov
    • +1 more
    unknown
    Updated Sep 15, 2022
    Cite
    Zenodo (2022). 3DHD CityScenes: High-Definition Maps in High-Density Point Clouds [Dataset]. https://data.europa.eu/data/datasets/oai-zenodo-org-7085090?locale=de
    Explore at:
    unknown (147082). Available download formats
    Dataset updated
    Sep 15, 2022
    Dataset authored and provided by
    Zenodo (http://zenodo.org/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview

    3DHD CityScenes is the most comprehensive, large-scale high-definition (HD) map dataset to date, annotated in the three spatial dimensions of globally referenced, high-density LiDAR point clouds collected in urban domains. Our HD map covers 127 km of road sections of the inner city of Hamburg, Germany, including 467 km of individual lanes. In total, our map comprises 266,762 individual items. Our corresponding paper (published at ITSC 2022) is available here. Further, we have applied 3DHD CityScenes to map deviation detection here. Moreover, we release code to facilitate the application of our dataset and the reproducibility of our research. Specifically, our 3DHD_DevKit comprises:

    • Python tools to read, generate, and visualize the dataset,
    • the 3DHDNet deep learning pipeline (training, inference, evaluation) for map deviation detection and 3D object detection.

    The DevKit is available here: https://github.com/volkswagen/3DHD_devkit. The dataset and DevKit have been created by Christopher Plachetka as project lead during his PhD period at Volkswagen Group, Germany.

    When using our dataset, you are welcome to cite:

    @INPROCEEDINGS{9921866,
      author={Plachetka, Christopher and Sertolli, Benjamin and Fricke, Jenny and Klingner, Marvin and Fingscheidt, Tim},
      booktitle={2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)},
      title={3DHD CityScenes: High-Definition Maps in High-Density Point Clouds},
      year={2022},
      pages={627-634}}

    Acknowledgements

    We thank the following interns for their exceptional contributions to our work:

    • Benjamin Sertolli: major contributions to our DevKit during his master thesis
    • Niels Maier: measurement campaign for data collection and data preparation

    The European large-scale project Hi-Drive (www.Hi-Drive.eu) supports the publication of 3DHD CityScenes and encourages the general publication of information and databases facilitating the development of automated driving technologies.
    The Dataset

    After downloading, the 3DHD_CityScenes folder provides five subdirectories, which are explained briefly in the following.

    1. Dataset

    This directory contains the training, validation, and test set definitions (train.json, val.json, test.json) used in our publications. The respective files contain samples that define a geolocation and the orientation of the ego vehicle in global coordinates on the map. During dataset generation (done by our DevKit), samples are used to take crops from the larger point cloud. Also, map elements in reach of a sample are collected. Both modalities can then be used, e.g., as input to a neural network such as our 3DHDNet. To read any JSON-encoded data provided by 3DHD CityScenes in Python, you can use the following code snippet as an example.

    import json

    json_path = r"E:\3DHD_CityScenes\Dataset\train.json"
    with open(json_path) as jf:
        data = json.load(jf)
    print(data)

    2. HD_Map

    Map items are stored as lists of items in JSON format. In particular, we provide: traffic signs, traffic lights, pole-like objects, construction site locations, construction site obstacles (point-like such as cones, and line-like such as fences), line-shaped markings (solid, dashed, etc.), polygon-shaped markings (arrows, stop lines, symbols, etc.), lanes (ordinary and temporary), and relations between elements (only for construction sites, e.g., sign-to-lane associations).

    3. HD_Map_MetaData

    Our high-density point cloud used as the basis for annotating the HD map is split into 648 tiles. This directory contains the geolocation of each tile as a polygon on the map. You can view the respective tile definitions using QGIS. Alternatively, we also provide the respective polygons as lists of UTM coordinates in JSON. Files with the endings .dbf, .prj, .qpj, .shp, and .shx belong to the tile definition as a "shape file" (commonly used in geodesy) that can be viewed using QGIS. The JSON file contains the same information provided in a different format used in our Python API.

    4. HD_PointCloud_Tiles

    The high-density point cloud tiles are provided in global UTM32N coordinates and are encoded in a proprietary binary format. The first 4 bytes (integer) encode the number of points contained in that file. Subsequently, all point cloud values are provided as arrays: first all x-values, then all y-values, and so on. Specifically, the arrays are encoded as follows.

    x-coordinates: 4-byte integer
    y-coordinates: 4-byte integer
    z-coordinates: 4-byte integer
    intensity of reflected beams: 2-byte unsigned integer
    ground classification flag: 1-byte unsigned integer

    After reading, the respective values have to be unnormalized. As an example, you can use the following code snippet to read the point cloud data (note the binary read mode). For visualization, you can use the pptk package, for instance.

    import numpy as np
    import pptk

    file_path = r"E:\3DHD_CityScenes\HD_PointCloud_Tiles\HH_001.bin"
    pc_dict = {}
    key_list = ['x', 'y', 'z', 'intensity', 'is_ground']
    type_list = ['<i4', '<i4', '<i4', '<u2', 'u1']
    with open(file_path, "rb") as fid:
        num_points = np.fromfile(fid, dtype='<i4', count=1)[0]
        for key, dtype in zip(key_list, type_list):
            pc_dict[key] = np.fromfile(fid, dtype=dtype, count=num_points)
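The same byte layout can also be decoded from an in-memory buffer, which is convenient for testing. The following is a minimal sketch assuming only the layout described above; decode_tile is an illustrative helper, not part of the DevKit, and the unnormalization step is omitted because its factors are not given here.

```python
import numpy as np

def decode_tile(raw: bytes) -> dict:
    """Decode a tile buffer laid out as: a 4-byte point count, then one
    contiguous array per attribute (x, y, z as int32; intensity as uint16;
    ground classification flag as uint8)."""
    num_points = int(np.frombuffer(raw, dtype="<i4", count=1)[0])
    keys = ["x", "y", "z", "intensity", "is_ground"]
    dtypes = ["<i4", "<i4", "<i4", "<u2", "u1"]
    offset = 4  # skip the point-count header
    pc = {}
    for key, dt in zip(keys, dtypes):
        arr = np.frombuffer(raw, dtype=dt, count=num_points, offset=offset)
        pc[key] = arr
        offset += arr.nbytes  # advance past this attribute's array
    return pc
```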

  2. US National Cardiovascular Disease

    • kaggle.com
    zip
    Updated Jan 23, 2023
    Cite
    The Devastator (2023). US National Cardiovascular Disease [Dataset]. https://www.kaggle.com/datasets/thedevastator/us-national-cardiovascular-disease
    Explore at:
    zip (292792 bytes). Available download formats
    Dataset updated
    Jan 23, 2023
    Authors
    The Devastator
    Description

    US National Cardiovascular Disease

    Indicators and Risk Factors from 2001-Present

    By US Open Data Portal, data.gov [source]


    How to use the dataset

    • Accessing the data: To access this dataset, you can visit Data.cdc.gov, where it is publicly available, or download it directly from Kaggle at https://www.kaggle.com/cdc/us-national-cardiovascular-disease.
    • Exploring the data: There are 20 columns/variables in this dataset, including Year, LocationAbbr, LocationDesc, DataSource, PriorityArea1 through PriorityArea4, Category, Topic, Indicator, Data_Value_Type, Data_Value_Unit, Data_Value_Alt, Data_Value_Footnote_Symbol, Break_Out_Category, GeoLocation, etc. (see the Columns section for the full list). You can explore one variable or several simultaneously to gain insight into CVDs in America, such as their rates across different locations over the years, or the prevalence of certain risk factors among different age groups and genders.
    • Uses of this dataset: This dataset can serve researchers interested in improving our understanding of CVDs in America, e.g., by assessing disease burden and monitoring trends over time across different population subgroups; health authorities publicizing vital health-related knowledge via data dissemination tactics such as outreach programs; and policy makers informing community-level interventions based on insights extracted from it. For example, someone may compare smoking prevalence between males and females within one state or countrywide, or extend that comparison into a time-series analysis of smoking prevalence trends across both genders nationally from 2001 to the present day.

    Research Ideas

    • Creating a real-time cardiovascular disease surveillance system that can send updates and alert citizens about risks in their locale.
    • Generating targeted public health campaigns for different demographic groups by drawing insights from the dataset to reach those most at risk of CVDs.
    • Developing an app or software interface to allow users to visualize data trends around CVD prevalence and risk factors between different locations, age groups and ethnicities quickly, easily and accurately.

    Acknowledgements

    If you use this dataset in your research, please credit the original authors. Data Source

    License

    Unknown License - Please check the dataset description for more information.

    Columns

    File: csv-1.csv

    | Column name                | Description                                                     |
    |:---------------------------|:----------------------------------------------------------------|
    | Year                       | Year of the survey. (Integer)                                   |
    | LocationAbbr               | Abbreviation of the location. (String)                          |
    | LocationDesc               | Description of the location. (String)                           |
    | DataSource                 | Source of the data. (String)                                    |
    | PriorityArea1              | Priority area 1. (String)                                       |
    | PriorityArea2              | Priority area 2. (String)                                       |
    | PriorityArea3              | Priority area 3. (String)                                       |
    | PriorityArea4              | Priority area 4. (String)                                       |
    | Category                   | Category of the data value type. (String)                       |
    | Topic                      | Topic related to the indicator of the data value unit. (String) |
    | Indicator                  | Indicator of the data value unit. (String)                      |
    | Data_Value_Type            | Type of data value. (String)                                    |
    | Data_Value_Unit            | Unit of the data value. (String)                                |
    | Data_Value_Alt             | Alternative value of the data value. (Float)                    |
    | Data_Value_Footnote_Symbol | Footnote symbol of the data value. (String)                     |
    | Break_Out_Category         | Break out category of the data value. (String)                  |
    | GeoLocation                | Geographic location associated with the survey d...             |
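The exploration described above (e.g., a time-series comparison of a risk factor across locations) can be sketched with pandas. Column names come from the table; the "smoking" topic keyword and the local file name are illustrative assumptions.

```python
import pandas as pd

def topic_trend(df: pd.DataFrame, topic: str) -> pd.Series:
    """Mean Data_Value_Alt per Year and LocationAbbr for rows whose
    Topic mentions the given keyword (e.g., a risk factor)."""
    subset = df[df["Topic"].str.contains(topic, case=False, na=False)]
    return subset.groupby(["Year", "LocationAbbr"])["Data_Value_Alt"].mean()

# Usage sketch (csv-1.csv is the file named in the Columns section):
# df = pd.read_csv("csv-1.csv")
# print(topic_trend(df, "smoking"))
```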

  3. Map plotting co-ordinates of repository outputs for Brunel...

    • figshare.com
    xml
    Updated Aug 13, 2018
    Cite
    David Walters (2018). Map plotting co-ordinates of repository outputs for Brunel University(2014-2017) [Dataset]. http://doi.org/10.6084/m9.figshare.5947855.v4
    Explore at:
    xml. Available download formats
    Dataset updated
    Aug 13, 2018
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    David Walters
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Visualisation of the dataset. This set shows geolocation data when made available by participating services to the CORE service. It shows OA publications that are affiliated with Brunel University in host locations around the world.

  4. LANDMAP: Satellite Image and Elevation Maps of the United Kingdom

    • access.earthdata.nasa.gov
    • cmr.earthdata.nasa.gov
    Updated Apr 21, 2017
    Cite
    (2017). LANDMAP: Satellite Image and Elevation Maps of the United Kingdom [Dataset]. https://access.earthdata.nasa.gov/collections/C1214611010-SCIOPS
    Explore at:
    Dataset updated
    Apr 21, 2017
    Time period covered
    Jan 1, 1970 - Present
    Area covered
    Description

    [From The Landmap Project: Introduction, "http://www.landmap.ac.uk/background/intro.html"]

     A joint project to provide orthorectified satellite image mosaics of Landsat,
     SPOT and ERS radar data and a high resolution Digital Elevation Model for the
     whole of the UK. These data will be in a form which can easily be merged with
     other data, such as road networks, so that any user can quickly produce a
     precise map of their area of interest.
    
     Predominantly aimed at the UK academic and educational sectors, these data and
     software are held online at the Manchester University super computer facility
     where users can either process the data remotely or download it to their local
     network.
    
     Please follow the links to the left for more information about the project or
     how to obtain data or access to the radar processing system at MIMAS. Please
     also refer to the MIMAS spatial-side website,
     "http://www.mimas.ac.uk/spatial/", for related remote sensing materials.
    
  5. ECOSTRESS Gridded Top of Atmosphere Calibrated Radiance Instantaneous L1C...

    • data.nasa.gov
    Updated Apr 1, 2025
    + more versions
    Cite
    nasa.gov (2025). ECOSTRESS Gridded Top of Atmosphere Calibrated Radiance Instantaneous L1C Global 70 m V002 - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/ecostress-gridded-top-of-atmosphere-calibrated-radiance-instantaneous-l1c-global-70-m-v002-12f31
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    The ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) mission measures the temperature of plants to better understand how much water plants need and how they respond to stress. ECOSTRESS is attached to the International Space Station (ISS) and collects data globally between 52° N and 52° S latitudes. A map of the acquisition coverage can be found on the ECOSTRESS website.

    The ECOSTRESS Gridded Top of Atmosphere Calibrated Radiance Instantaneous Level 1C Global 70 m (ECO_L1CG_RAD) Version 2 data product provides at-sensor calibrated radiance values retrieved for five thermal infrared (TIR) bands operating between 8 and 12.5 µm. This product is a gridded version of the ECO_L1B_RAD Version 2 data product that has been resampled by nearest neighbor, projected to a globally snapped 0.0006° grid, and repackaged as the ECO_L1CG_RAD data product. The ECO_L1CG_RAD Version 2 data product contains 12 layers distributed in an HDF5 format file containing radiance values for the five TIR bands, associated data quality indicators, and cloud and water masks.

    Known Issues

    • Data acquisition gap: ECOSTRESS was launched on June 29, 2018, and moved to autonomous science operations on August 20, 2018, following a successful in-orbit checkout period. On September 29, 2018, ECOSTRESS experienced an anomaly with its primary mass storage unit (MSU); ECOSTRESS has a primary and a secondary MSU (A and B). On December 5, 2018, the instrument was switched to the secondary MSU and science operations resumed. On March 14, 2019, the secondary MSU experienced a similar anomaly, temporarily halting science acquisitions. On May 15, 2019, a new data acquisition approach was implemented and science acquisitions resumed. To optimize the new acquisition approach, only TIR bands 2, 4, and 5 are downloaded. The data products are generated as before, except that the bands not downloaded contain fill values (L1 radiance and L2 emissivity). This approach was in place from May 15, 2019, through April 28, 2023.
    • Data acquisition gap: From February 8 to February 16, 2020, an ECOSTRESS instrument issue resulted in a data anomaly that created striping in band 4 (10.5 micron). These data products have been reprocessed and are available for download. No ECOSTRESS data were acquired on February 17, 2020, due to the instrument being in SAFEHOLD. Data acquired following the anomaly have not been affected.
    • Missing scan data/striping features: During testing, an instrument artifact was encountered in ECOSTRESS bands 1 and 5, resulting in missing values. A machine learning algorithm has been applied to interpolate missing values. For more information on the missing scan filling techniques and outcomes, see Section 3.3.2 of the ECO_L1B_RAD User Guide.
    • Scan overlap: An overlap between ECOSTRESS scans results in a clear line overlap and repeating data. Additional information is available in Section 3.2 of the ECO_L1B_RAD User Guide.
    • Scan flipping: Improvements to the visualization of the data to compensate for instrument orientation are discussed in Section 3.4 of the ECO_L1B_RAD User Guide.
    • Data acquisition: ECOSTRESS has now successfully returned to 5-band mode after being in 3-band mode since 2019. This feature was enabled following a Data Processing Unit firmware update (version 4.1) to the payload on April 28, 2023. To better balance contiguous science data scene variables, 3-band collection is currently being interleaved with 5-band acquisitions over the orbital day/night periods.
    • Solar array obstruction: Some ECOSTRESS scenes may be affected by solar array obstructions from the ISS, potentially impacting data quality of obstructed pixels. The 'FieldOfViewObstruction' metadata field is included in all Version 2 products to indicate possible obstructions:
      • Before October 24, 2024 (orbits prior to 35724): the field is present but was not populated and does not reliably identify affected scenes.
      • On or after October 24, 2024 (starting with orbit 35724): the field is populated and generally accurate, except for late December 2024, when a temporary processing error may have caused false positives.
      • A list of scenes confirmed to be affected by obstructions is available and is recommended for verifying historical data (before October 24, 2024) and scenes from late December 2024.

    The ISS native pointing information is coarse relative to ECOSTRESS pixels, so ECOSTRESS geolocation is improved through image matching with a basemap. Metadata in the L1B_GEO file records the success of this geolocation improvement, using the categorizations "best", "good", "suspect", and "poor". We recommend that users use only "best" and "good" scenes for evaluations where geolocation is important (e.g., comparison to field sites). For some scenes, this metadata is not reflected in the higher-level products (e.g., land surface temperature, evapotranspiration, etc.). While this metadata is always available in the geolocation product, to save users an additional download we have produced a summary text file that includes the geolocation quality flags for all scenes from launch to present. At a later date, all higher-level products will reflect the geolocation quality flag correctly (the field name is GeolocationAccuracyQA).
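The recommendation to keep only "best" and "good" scenes can be applied with a small filter. The GeolocationAccuracyQA field name and its categories come from the description above; the per-scene record layout (a list of dicts with a scene_id key) is a hypothetical stand-in for however you load the summary file.

```python
def usable_scenes(records):
    """Keep scene IDs whose geolocation quality flag is 'best' or 'good'."""
    return [r["scene_id"] for r in records
            if r.get("GeolocationAccuracyQA") in ("best", "good")]
```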

  6. COVID-19 clinics

    • data.nsw.gov.au
    • researchdata.edu.au
    • +1 more
    csv, json
    Updated Feb 7, 2024
    Cite
    NSW Ministry of Health (2024). COVID-19 clinics [Dataset]. https://data.nsw.gov.au/data/dataset/covid-19-clinics
    Explore at:
    json (158728), csv (38163). Available download formats
    Dataset updated
    Feb 7, 2024
    Dataset authored and provided by
    NSW Ministry of Health
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The data is for COVID-19 clinics.

    From 20 October 2023, COVID-19 datasets will no longer be updated. Detailed information is available in the fortnightly NSW Respiratory Surveillance Report: https://www.health.nsw.gov.au/Infectious/covid-19/Pages/reports.aspx.
    Latest national COVID-19 spread, vaccination and treatment metrics are available on the Australian Government Health website: https://www.health.gov.au/topics/covid-19/reporting?language=und

    This dataset provides data on COVID-19 testing and assessment clinics by geolocation, address, contact details, services provided and opening hours.

    This data is subject to change as clinic locations are changed.

    The Government has obligations under the Privacy and Personal Information Protection Act 1998 and the Health Records and Information Privacy Act 2002 in relation to the collection, use and disclosure of the personal information, including the health information, of individuals. Information about NSW Privacy laws is available here: https://data.nsw.gov.au/understand-key-data-legislation.

    The information published about COVID-19 clinics does not include any information to directly identify individuals, such as their name, date of birth or address.

    Other governments and private sector bodies also have legal obligations in relation to the protection of personal, including health, information. The Government does not authorise any reproduction or visualisation of the data on this website which includes any representation or suggestion in relation to the personal or health information of any individual. The Government does not endorse or control any third party websites including products and services offered by, from or through those websites or their content.

    For any further enquiries, please contact us at datansw@customerservice.nsw.gov.au.

  7. Tweets during Nintendo E3 2018 Conference

    • kaggle.com
    zip
    Updated Jun 14, 2018
    Cite
    Xavier (2018). Tweets during Nintendo E3 2018 Conference [Dataset]. https://www.kaggle.com/xvivancos/tweets-during-nintendo-e3-2018-conference
    Explore at:
    zip (62890418 bytes). Available download formats
    Dataset updated
    Jun 14, 2018
    Authors
    Xavier
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Context

    Data set containing Tweets captured during the Nintendo E3 2018 Conference.

    Content

    All Twitter APIs that return Tweets provide that data encoded using JavaScript Object Notation (JSON). JSON is based on key-value pairs, with named attributes and associated values. The JSON file includes the following objects and attributes:

    • Tweet - Tweets are the basic atomic building block of all things Twitter. The Tweet object has a long list of ‘root-level’ attributes, including fundamental attributes such as id, created_at, and text. Tweet child objects include user, entities, and extended_entities. Tweets that are geo-tagged will have a place child object.

      • User - Contains public Twitter account metadata and describes the author of the Tweet, with attributes such as name, description, followers_count, friends_count, etc.

      • Entities - Provide metadata and additional contextual information about content posted on Twitter. The entities section provides arrays of common things included in Tweets: hashtags, user mentions, links, stock tickers (symbols), Twitter polls, and attached media.

      • Extended Entities - All Tweets with attached photos, videos and animated GIFs will include an extended_entities JSON object.

      • Places - Tweets can be associated with a location, generating a Tweet that has been ‘geo-tagged.’

    More information here.
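The attribute hierarchy above can be flattened in a few lines. The field names follow the classic Twitter v1.1 Tweet JSON described here; tweet_summary is an illustrative helper, not part of any Twitter library.

```python
def tweet_summary(tweet: dict) -> dict:
    """Extract a few root-level and child-object attributes from a Tweet."""
    entities = tweet.get("entities", {})
    return {
        "id": tweet.get("id"),
        "text": tweet.get("text"),
        "user": tweet.get("user", {}).get("name"),
        "hashtags": [h["text"] for h in entities.get("hashtags", [])],
        # place is present only for geo-tagged Tweets, so it may be None
        "place": (tweet.get("place") or {}).get("full_name"),
    }
```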

    Acknowledgements

    I used the filterStream() function to open a connection to Twitter's Streaming API, using the keywords #NintendoE3 and #NintendoDirect. The capture started on Tuesday, June 12th 04:00 am UTC and finished on Tuesday, June 12th 05:00 am UTC.

    Inspiration

    • Time analysis
    • Try text mining!
    • Cross-language differences in Twitter
    • Use this data to produce a sentiment analysis
    • Twitter geolocation
    • Network analysis: graph theory, metrics and properties of the network, community detection, network visualization, etc.
  8. Tweets during Real Madrid vs Liverpool

    • kaggle.com
    zip
    Updated May 26, 2018
    Cite
    Xavier (2018). Tweets during Real Madrid vs Liverpool [Dataset]. https://www.kaggle.com/xvivancos/tweets-during-r-madrid-vs-liverpool-ucl-2018
    Explore at:
    zip (224380519 bytes). Available download formats
    Dataset updated
    May 26, 2018
    Authors
    Xavier
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Context

    Data set containing Tweets captured during the 2018 UEFA Champions League Final between Real Madrid and Liverpool.

    Content

    All Twitter APIs that return Tweets provide that data encoded using JavaScript Object Notation (JSON). JSON is based on key-value pairs, with named attributes and associated values. The JSON file includes the following objects and attributes:

    • Tweet - Tweets are the basic atomic building block of all things Twitter. The Tweet object has a long list of ‘root-level’ attributes, including fundamental attributes such as id, created_at, and text. Tweet child objects include user, entities, and extended_entities. Tweets that are geo-tagged will have a place child object.

      • User - Contains public Twitter account metadata and describes the author of the Tweet, with attributes such as name, description, followers_count, friends_count, etc.

      • Entities - Provide metadata and additional contextual information about content posted on Twitter. The entities section provides arrays of common things included in Tweets: hashtags, user mentions, links, stock tickers (symbols), Twitter polls, and attached media.

      • Extended Entities - All Tweets with attached photos, videos and animated GIFs will include an extended_entities JSON object.

      • Places - Tweets can be associated with a location, generating a Tweet that has been ‘geo-tagged.’

    More information here.

    Acknowledgements

    I used the filterStream() function to open a connection to Twitter's Streaming API, using the keyword #UCLFinal. The capture started on Saturday, May 26th 6:45 pm UTC (beginning of the match) and finished on Saturday, May 26th 8:45 pm UTC.

    Inspiration

    • Time analysis
    • Try text mining!
    • Cross-language differences in Twitter
    • Use this data to produce a sentiment analysis
    • Twitter geolocation
    • Network analysis: graph theory, metrics and properties of the network, community detection, network visualization, etc.
  9. Global Seismic Traveltime Database

    • access.earthdata.nasa.gov
    • jamstec.go.jp
    • +1 more
    Updated Dec 16, 2019
    Cite
    (2019). Global Seismic Traveltime Database [Dataset]. http://doi.org/10.17596/0000107
    Explore at:
    Dataset updated
    Dec 16, 2019
    Time period covered
    Jul 20, 1996 - Present
    Area covered
    Earth
    Description

    We have constructed a database of seismic traveltimes, which was used to improve our seismic tomography model. We measured absolute arrival times of several seismic phases such as P, PcP, and S by manual picking, differential traveltimes between PP and P, and differential traveltimes of P-waves between two stations using a waveform cross-correlation method. We use both broadband and short-period waveform data collected from all over the world, including ocean bottom seismometers. At present, we have measured about 80,000 traveltimes of various types.

  10. Durham County Food Inspections

    • kaggle.com
    zip
    Updated Jan 23, 2023
    Cite
    The Devastator (2023). Durham County Food Inspections [Dataset]. https://www.kaggle.com/datasets/thedevastator/durham-county-food-inspections
    Explore at:
    zip (33150589 bytes). Available download formats
    Dataset updated
    Jan 23, 2023
    Authors
    The Devastator
    License

    Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
    License information was derived automatically

    Area covered
    Durham County
    Description

    Durham County Food Inspections

    Scores, Violations, and Environmental Health Specialist Observations

    By City and County of Durham Data [source]

    About this dataset

    This dataset contains information about food health inspections in Durham County, North Carolina. The data is gathered from multiple sources and will help you gain insight into how food safety is handled and monitored within the state. This dataset includes the score given to the establishment, comments made by inspectors, violations noted during inspection, the number of repeat violations found during an inspection, the amount of time taken for an inspection, and more. With this data at your disposal, you can make informed decisions on improving food safety standards in your local area.


    How to use the dataset

    This dataset contains information about food health inspections in Durham County, North Carolina. The data includes inspection scores, comments and violations from multiple establishments. This dataset can be used to gain insights into the sanitation standards across food service establishments in Durham County.

    To get started with this dataset, first take a look at the columns listed above and become familiar with the field names. Once you have familiarized yourself with the data fields, you can begin exploring by filtering the data according to certain criteria (such as county or establishment type). You can also use visualization software such as Tableau to create charts and visualizations of your findings. Additionally, you can run calculations on individual fields or build queries that compare different metrics across multiple fields.

    Finally, after exploring the data and analyzing it further to answer more specific questions, you can share your insights publicly or even publish research papers based on your acquired knowledge gained by working with this dataset!

    Research Ideas

    • Using geolocation data to identify local health inspection trends in the region.
    • Analyzing the correlation between inspection scores, violations found, and comments made by inspectors to determine what types of establishments make for a successful inspection.
    • Predicting the type of establishment based on certain criteria such as seating capacity, water temp sum, travel time, etc., and comparing those predictions with the actual type of establishment for accuracy checks.

    Acknowledgements

    If you use this dataset in your research, please credit the original authors. Data Source

    License

    License: Open Database License (ODbL) v1.0

    You are free to:
    • Share - copy and redistribute the material in any medium or format.
    • Adapt - remix, transform, and build upon the material for any purpose, even commercially.

    Under the following terms:
    • Give appropriate credit - provide a link to the license, and indicate if changes were made.
    • ShareAlike - you must distribute your contributions under the same license as the original.
    • Keep intact - keep all notices that refer to this license, including copyright notices.
    • No additional restrictions - you may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.

    Columns

    File: food-health-inspections_3.csv

    | Column name         | Description                                                 |
    |:--------------------|:------------------------------------------------------------|
    | score_sum           | The total score received for an inspection. (Integer)       |
    | end_inspection_ampm | The time of day the inspection ended (AM or PM). (String)   |
    | comments            | Comments made by the inspector during the inspection. (String) |
    | seats               | The number of seats in the establishment. (Integer)         |
    | com_num             | The unique identifier for the establishment. (Integer)      |
    | premise_name        | The name of the establishment. (String)                     |
    | delete_mark         | A flag indicating if the record has been deleted. (Boolean) ... |
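A usage sketch over the columns above: averaging inspection scores per establishment while skipping deleted records. The file name comes from the Columns section; treating delete_mark as boolean-like is an assumption based on its description.

```python
import pandas as pd

def mean_score_by_premise(df: pd.DataFrame) -> pd.Series:
    """Average inspection score per establishment, ignoring deleted records."""
    active = df[~df["delete_mark"].astype(bool)]
    return active.groupby("premise_name")["score_sum"].mean()

# Usage sketch:
# df = pd.read_csv("food-health-inspections_3.csv")
# print(mean_score_by_premise(df).sort_values().head())
```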

  11. Satellite Navigation data over the Southern Hemisphere

    • access.earthdata.nasa.gov
    • cmr.earthdata.nasa.gov
    Updated Apr 21, 2017
    (2017). Satellite Navigation data over the Southern Hemisphere [Dataset]. https://access.earthdata.nasa.gov/collections/C1214605524-SCIOPS
    Explore at:
    Dataset updated
    Apr 21, 2017
    Time period covered
    Feb 2, 2003 - Present
    Area covered
    Southern Hemisphere,
    Description

    The AMRC has been archiving southern hemisphere satellite navigation data in its FTP archive since 2003. Products are still being generated in real time on the AMRC website.

  12. Data from: Final GPS Earth Rotation Parameters

    • access.earthdata.nasa.gov
    • cmr.earthdata.nasa.gov
    Updated Apr 25, 2017
    (2017). Final GPS Earth Rotation Parameters [Dataset]. https://access.earthdata.nasa.gov/collections/C1214585979-SCIOPS
    Explore at:
    Dataset updated
    Apr 25, 2017
    Time period covered
    Jan 1, 1994 - Apr 22, 2006
    Area covered
    Earth
    Description

    Final GPS Earth Rotation Parameters (homogeneously reprocessed with the Bernese GPS Software by TU Munich and TU Dresden).

  13. Global Positioning System Ground Control Points Acquired 1995 for the Forest Ecosystem Dynamics Project Spatial Data Archive

    • access.earthdata.nasa.gov
    • cmr.earthdata.nasa.gov
    Updated Apr 21, 2017
    (2017). Global Positioning System Ground Control Points Acquired 1995 for the Forest Ecosystem Dynamics Project Spatial Data Archive [Dataset]. https://access.earthdata.nasa.gov/collections/C1214603716-SCIOPS
    Explore at:
    Dataset updated
    Apr 21, 2017
    Time period covered
    Jan 1, 1995 - Jan 30, 1995
    Area covered
    Description

    Forest Ecosystem Dynamics (FED) Project Spatial Data Archive: Global Positioning System Ground Control Points and Field Site Locations from 1995

    The Biospheric Sciences Branch (formerly Earth Resources Branch) within the Laboratory for Terrestrial Physics at NASA's Goddard Space Flight Center and associated University investigators are involved in a research program entitled Forest Ecosystem Dynamics (FED) which is fundamentally concerned with vegetation change of forest ecosystems at local to regional spatial scales (100 to 10,000 meters) and temporal scales ranging from monthly to decadal periods (10 to 100 years). The nature and extent of the impacts of these changes, as well as the feedbacks to global climate, may be addressed through modeling the interactions of the vegetation, soil, and energy components of the boreal ecosystem.

    The Howland Forest research site lies within the Northern Experimental Forest of International Paper. The natural stands in this boreal-northern hardwood transitional forest consist of spruce-hemlock-fir, aspen-birch, and hemlock-hardwood mixtures. The topography of the region varies from flat to gently rolling, with a maximum elevation change of less than 68 m within 10 km. Due to the region's glacial history, soil drainage classes within a small area may vary widely, from well drained to poorly drained. Consequently, an elaborate patchwork of forest communities has developed, supporting exceptional local species diversity.

    This data set is in ARC/INFO export format and contains Global Positioning Systems (GPS) ground control points in and around the International Paper Experimental Forest, Howland ME.

    A Trimble roving receiver, leveled on top of the cab of a pick-up truck, was used to collect position information at selected sites (road intersections) across the FED project study area. The field-collected data were differentially corrected using base files measured by a Trimble Community Base Station, which is run by the Forestry Department at the University of Maine, Orono (UMO). The base station was surveyed by the Surveying Engineering Department at UMO using classical geodetic methods. Trimble software was used to produce coordinates in Universal Transverse Mercator (UTM) WGS84, and coordinates were adjusted based on field notes. All points were collected during January 1995 and differentially corrected.



Data from: 3DHD CityScenes: High-Definition Maps in High-Density Point Clouds


Description

The Dataset

After downloading, the 3DHD_CityScenes folder provides five subdirectories, which are explained briefly in the following.

1. Dataset

This directory contains the training, validation, and test set definitions (train.json, val.json, test.json) used in our publications. Respective files contain samples that define a geolocation and the orientation of the ego vehicle in global coordinates on the map. During dataset generation (done by our DevKit), samples are used to take crops from the larger point cloud. Also, map elements in reach of a sample are collected. Both modalities can then be used, e.g., as input to a neural network such as our 3DHDNet. To read any JSON-encoded data provided by 3DHD CityScenes in Python, you can use the following code snippet as an example.

```python
import json

json_path = r"E:\3DHD_CityScenes\Dataset\train.json"
with open(json_path) as jf:
    data = json.load(jf)
print(data)
```

2. HD_Map

Map items are stored as lists of items in JSON format. In particular, we provide: traffic signs, traffic lights, pole-like objects, construction site locations, construction site obstacles (point-like such as cones, and line-like such as fences), line-shaped markings (solid, dashed, etc.), polygon-shaped markings (arrows, stop lines, symbols, etc.), lanes (ordinary and temporary), and relations between elements (only for construction sites, e.g., sign-to-lane association).

3. HD_Map_MetaData

Our high-density point cloud used as the basis for annotating the HD map is split into 648 tiles. This directory contains the geolocation of each tile as a polygon on the map. You can view the respective tile definitions using QGIS. Alternatively, we also provide the polygons as lists of UTM coordinates in JSON. Files with the endings .dbf, .prj, .qpj, .shp, and .shx belong to the tile definition as a "shape file" (commonly used in geodesy) that can be viewed using QGIS. The JSON file contains the same information in a different format used in our Python API.

4. HD_PointCloud_Tiles

The high-density point cloud tiles are provided in global UTM32N coordinates and are encoded in a proprietary binary format. The first 4 bytes (integer) encode the number of points contained in that file. Subsequently, all point cloud values are provided as arrays: first all x-values, then all y-values, and so on. Specifically, the arrays are encoded as follows.

- x-coordinates: 4-byte integer
- y-coordinates: 4-byte integer
- z-coordinates: 4-byte integer
- intensity of reflected beams: 2-byte unsigned integer
- ground classification flag: 1-byte unsigned integer

After reading, the respective values have to be unnormalized. As an example, you can use the following code snippet to read the point cloud data. For visualization, you can use the pptk package, for instance.

```python
import numpy as np
import pptk

file_path = r"E:\3DHD_CityScenes\HD_PointCloud_Tiles\HH_001.bin"
pc_dict = {}
key_list = ['x', 'y', 'z', 'intensity', 'is_ground']
type_list = ['<i4', '<i4', '<i4', '<u2', 'u1']

# Open in binary mode; the original snippet was truncated at this point, so
# the remainder of the reader is reconstructed from the format description above.
with open(file_path, "rb") as fid:
    num_points = np.fromfile(fid, dtype='<i4', count=1)[0]
    for key, dtype in zip(key_list, type_list):
        pc_dict[key] = np.fromfile(fid, dtype=dtype, count=num_points)

# Visualize the (still normalized) coordinates, e.g., with pptk:
# pptk.viewer(np.stack([pc_dict['x'], pc_dict['y'], pc_dict['z']], axis=1))
```
