11 datasets found
  1. Autonomous Vehicle Survey of Bicyclists and Pedestrians in Pittsburgh

    • data.wprdc.org
    • datasets.ai
    • +1more
    csv, html
    Updated Jun 9, 2024
    + more versions
    Cite
    BikePGH (2024). Autonomous Vehicle Survey of Bicyclists and Pedestrians in Pittsburgh [Dataset]. https://data.wprdc.org/dataset/autonomous-vehicle-survey-of-bicyclists-and-pedestrians
    Available download formats: html, csv, csv(4235), csv(143937)
    Dataset updated
    Jun 9, 2024
    Dataset provided by
    Bike Pittsburgh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Pittsburgh
    Description

    In Pittsburgh, Autonomous Vehicle (AV) companies have been testing autonomous vehicles since September 2016. However, the technology is new, and there have been some high-profile incidents that we believe warrant a larger conversation. So in early 2017, we set out to design a survey to see how both BikePGH donor-members and Pittsburgh residents at large feel about sharing the road with AVs as bicyclists and/or pedestrians. Our survey asked participants how they feel about being a fellow road user with AVs, whether walking or biking. We also wanted to collect stories about people's experiences interacting with this nascent technology. We are unaware of any other public surveys about people's feelings toward or understanding of this new technology. We hope that our results will add to the body of data and help the public and politicians understand the complexity of the possible futures that different economic models of AV technology can bring to our cities and town centers.

    We conducted our 2017 survey in two parts. First, we launched the survey exclusively to donor-members, yielding 321 responses (out of 2,900) via email. Once we closed the survey, we launched it again, but allowed the general public to take it. Through promoting it on our website, social media channels, and a few news articles, we yielded 798 responses (mostly from people in the Pittsburgh region), for a combined total of 1,119 responses.

    Regarding the 2019 survey: In total, 795 people responded. BikePGH solicited responses from their blog, website, and email list. There were also a few local news articles about the survey. While many questions were kept similar to the 2017 survey, BikePGH wanted to dig a bit deeper into regulations as well as demographics this time around.

    The 2019 follow-up survey also aims to see how the landscape has changed and how, specifically, Pittsburghers on bike and on foot feel about sharing the road with AVs, so that we're all better prepared to deal with this new reality and can help make sure that it is introduced as safely as humanly possible.

  2. nuScenes Dataset

    • paperswithcode.com
    • opendatalab.com
    Updated Apr 21, 2021
    Cite
    Holger Caesar; Varun Bankiti; Alex H. Lang; Sourabh Vora; Venice Erin Liong; Qiang Xu; Anush Krishnan; Yu Pan; Giancarlo Baldan; Oscar Beijbom (2021). nuScenes Dataset [Dataset]. https://paperswithcode.com/dataset/nuscenes
    Dataset updated
    Apr 21, 2021
    Authors
    Holger Caesar; Varun Bankiti; Alex H. Lang; Sourabh Vora; Venice Erin Liong; Qiang Xu; Anush Krishnan; Yu Pan; Giancarlo Baldan; Oscar Beijbom
    Description

    The nuScenes dataset is a large-scale autonomous driving dataset. The dataset has 3D bounding boxes for 1000 scenes collected in Boston and Singapore. Each scene is 20 seconds long and annotated at 2 Hz, resulting in a total of 28130 samples for training, 6019 samples for validation and 6008 samples for testing. The dataset has the full autonomous vehicle sensor suite: a 32-beam LiDAR, 6 cameras and radars with complete 360° coverage. The 3D object detection challenge evaluates performance on 10 classes: cars, trucks, buses, trailers, construction vehicles, pedestrians, motorcycles, bicycles, traffic cones and barriers.
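    As a quick sanity check on the figures above (pure arithmetic, no nuScenes devkit or data required), the quoted train/val/test split sizes are consistent with 1000 scenes annotated at 2 Hz for roughly 20 s each:

```python
# Sanity-check the nuScenes sample counts quoted above:
# 1000 scenes, each ~20 s long, annotated at 2 Hz.
scenes = 1000
seconds_per_scene = 20
annotation_hz = 2

expected = scenes * seconds_per_scene * annotation_hz  # 40000 annotated keyframes
actual = 28130 + 6019 + 6008                           # train + val + test

print(expected, actual)  # scenes run slightly over 20 s, hence the small surplus
assert abs(actual - expected) / expected < 0.01
```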

  3. DAWN Dataset

    • paperswithcode.com
    • opendatalab.com
    • +1more
    Updated Nov 4, 2024
    + more versions
    Cite
    Mourad A. Kenk; Mahmoud Hassaballah (2024). DAWN Dataset [Dataset]. https://paperswithcode.com/dataset/dawn
    Dataset updated
    Nov 4, 2024
    Authors
    Mourad A. Kenk; Mahmoud Hassaballah
    Description

    DAWN emphasizes a diverse traffic environment (urban, highway and freeway) as well as a rich variety of traffic flow. The DAWN dataset comprises a collection of 1000 images from real traffic environments, divided into four sets of weather conditions: fog, snow, rain and sandstorms. The dataset is annotated with object bounding boxes for autonomous driving and video surveillance scenarios. These data help in interpreting the effects of adverse weather conditions on the performance of vehicle detection systems.

  4. DataSheet10_Deviant Behavior of Pedestrians: A Risk Gamble or Just Against...

    • figshare.com
    txt
    Updated Jun 17, 2023
    + more versions
    Cite
    Hatice Şahin; Sebastian Hemesath; Susanne Boll (2023). DataSheet10_Deviant Behavior of Pedestrians: A Risk Gamble or Just Against Automated Vehicles? How About Social Control?.CSV [Dataset]. http://doi.org/10.3389/frobt.2022.885319.s002
    Available download formats: txt
    Dataset updated
    Jun 17, 2023
    Dataset provided by
    Frontiers
    Authors
    Hatice Şahin; Sebastian Hemesath; Susanne Boll
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Recent evidence suggests that the assumed conflict-avoidant programming of autonomous vehicles will incentivize pedestrians to bully them. However, this frequent argument disregards the embedded nature of social interaction. Rule violations are socially sanctioned by different forms of social control, which could moderate the rational incentive to abuse risk-avoidant vehicles. Drawing on a gamified virtual reality (VR) experiment (n = 36) of urban traffic scenarios, we tested how vehicle type, different forms of social control, and monetary benefit of rule violations affect pedestrians' decision to jaywalk. In a second step, we also tested whether differences in those effects exist when controlling for the risk of crashes in conventional vehicles. We find that individuals do indeed jaywalk more frequently when faced with an automated vehicle (AV), and this effect largely depends on the associated risk and not the vehicles' automated nature. We further show that social control, especially in the form of formal traffic rules and norm enforcement, can reduce jaywalking behavior for any vehicle. Our study sheds light on the interaction dynamics between humans and AVs and how this is influenced by different forms of social control. It also contributes to the small gamification literature in human–computer interaction.

  5. Aerial Surveying Routing dataset

    • brunel.figshare.com
    • figshare.com
    Updated Sep 30, 2020
    Cite
    Ivars Dzalbs; Tatiana Kalganova; Tony Grichnik (2020). Aerial Surveying Routing dataset [Dataset]. http://doi.org/10.6084/m9.figshare.12770177.v3
    Dataset updated
    Sep 30, 2020
    Dataset provided by
    figshare
    Authors
    Ivars Dzalbs; Tatiana Kalganova; Tony Grichnik
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset for the aerial surveying routing problem. It consists of 11 base stations, 10 aircraft types and 12 tasks that need to be visited. Each aircraft type has a specified cruise speed, range and cost per hour.

    • The TaskToTask, BaseToTask and BaseToBase tables provide the distances in miles between two nodes in the graph.
    • The AircraftsAvailable table states how many aircraft of each type are available at each base.
    • Because of aircraft size or type, not all base stations can safely support all aircraft types, so additional per-base aircraft-type constraints are applied (AircraftBaseConstraints table).
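    Given the per-type cruise speed, range and hourly cost described above, the cost of flying one leg is a simple derived quantity. A minimal sketch, with the caveat that the field names (`cruise_speed_mph`, etc.) are illustrative and not the dataset's actual column headers:

```python
from dataclasses import dataclass

@dataclass
class AircraftType:
    # Illustrative field names; the dataset specifies cruise speed,
    # range and cost per hour for each of the 10 aircraft types.
    cruise_speed_mph: float
    range_miles: float
    cost_per_hour: float

def leg_cost(aircraft: AircraftType, distance_miles: float) -> float:
    """Cost of flying one leg; raises if the leg exceeds the aircraft's range."""
    if distance_miles > aircraft.range_miles:
        raise ValueError("leg exceeds aircraft range")
    hours = distance_miles / aircraft.cruise_speed_mph
    return hours * aircraft.cost_per_hour

# Hypothetical example: a 150 mph aircraft at $400/h on a 300-mile leg
a = AircraftType(cruise_speed_mph=150, range_miles=800, cost_per_hour=400)
print(leg_cost(a, 300))  # 2 h of flight -> 800.0
```

    A routing solver would evaluate this cost for every feasible (aircraft, leg) pair drawn from the distance tables, subject to the per-base availability and constraint tables.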

  6. Dreams4Cars Experimental data from Autonomous Test Vehicle

    • data.niaid.nih.gov
    • zenodo.org
    Updated May 12, 2020
    Cite
    Yüksel, Mehmed (2020). Dreams4Cars Experimental data from Autonomous Test Vehicle [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3582952
    Dataset updated
    May 12, 2020
    Dataset provided by
    Da Lio, Mauro
    Berghöfer, Elmar
    Yüksel, Mehmed
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Horizon 2020 project Dreams4Cars (www.dreams4cars.eu) has developed dream-like (offline) learning methods to be used for the development of autonomous driving and, more generally, as mechanisms to increase the cognitive abilities and autonomy of robots. The purpose of dream-like learning in Dreams4Cars is to deal with (possibly rare) dangerous events by synthesizing correct behaviour and control without needing to experience the events, and more efficiently than via straightforward trial and error. That is, to discover potential threats before they actually happen and prepare appropriate action strategies in advance.

    During the three-year development process the project collected and processed a wealth of experimental data from autonomous test vehicles. Parts of these data, along with advice on how to use them, are made available to the public.

    The datasets, and how they can be accessed, are described in the attached report (project deliverable D5.5, Section 2); the datasets themselves are provided in the ZIP file.

    Purpose of the Dataset

    The data provided here have the purpose of demonstrating the learning of forward models (the first building block of mental imagery and dreams). There are two sets of data: one for the lateral dynamics and another for the longitudinal dynamics. Each dataset has its own example of training the corresponding forward model. The following paper provides additional theoretical background: M. Da Lio, D. Bortoluzzi, and G. P. Rosati Papini, "Modelling longitudinal vehicle dynamics with neural networks", Vehicle System Dynamics, pp. 1–19, Jul. 2019, doi: 10.1080/00423114.2019.1638947
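    As a toy illustration of the forward-model idea (not the project's actual model or data), one can fit a one-step predictor of longitudinal speed from the current speed and a throttle command. Here a linear model fitted by least squares stands in for the neural networks used in the cited paper, on synthetic dynamics with made-up coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic longitudinal dynamics (illustrative, not Dreams4Cars data):
# v[t+1] = v[t] + dt * (k_u * u[t] - k_d * v[t])
dt, k_u, k_d = 0.1, 3.0, 0.05
v = np.zeros(500)
u = rng.uniform(0, 1, size=500)  # throttle commands
for t in range(499):
    v[t + 1] = v[t] + dt * (k_u * u[t] - k_d * v[t])

# Fit a linear one-step forward model: v[t+1] ~ a*v[t] + b*u[t]
X = np.column_stack([v[:-1], u[:-1]])
coef, *_ = np.linalg.lstsq(X, v[1:], rcond=None)
a, b = coef
print(a, b)  # recovers 1 - dt*k_d = 0.995 and dt*k_u = 0.3
```

    With the fitted model, future states can be rolled out "in imagination" without driving the real vehicle, which is the mechanism the project exploits for offline learning.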

    Contacts:

    Mauro Da Lio, University of Trento, mauro.dalio@unitn.it

    Elmar Berghoefer, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Elmar.Berghoefer@dfki.de

    Mehmed Yueksel, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Mehmed.Yueksel@dfki.de

  7. ru22-20140825T1505

    • data.ioos.us
    • gliders.ioos.us
    • +2more
    erddap +2
    Updated Mar 14, 2025
    + more versions
    Cite
    Glider DAC (2025). ru22-20140825T1505 [Dataset]. https://data.ioos.us/dataset/ru22-20140825t1505
    Available download formats: erddap, erddap-tabledap, opendap
    Dataset updated
    Mar 14, 2025
    Dataset authored and provided by
    Glider DAC
    Description

    The project is a comprehensive observational and analytical program to examine the dynamics and source waters of the relaxation flows in a coastal upwelling system on the central California coast. Autonomous vehicles, high-frequency radars, moorings, and drifters will be used to acquire pressure, density, and velocity data relevant to the relaxation flows. The data will be used to determine spatial scales of the flows, cross-shore density structure, cross-shore and alongshore velocity fields, pressure gradients, and the region of contact with the sea floor. Aspects of the research include: 1) evaluating the roles of barotropic and baroclinic pressure gradient forcing, 2) identifying regions where ageostrophic flows dominate the cross-shore and alongshore momentum balances, 3) determining source waters for the relaxation flows, and 4) examining the inner shelf circulation response to wind relaxations over an extensive coastal region (the northern part of the Southern California Bight) by analyzing extensive regional data sets collected over many years.

  8. Supplementary data for the paper 'Predicting perceived risk of traffic...

    • 4tu.edu.hpc.n-helix.com
    • data.4tu.nl
    • +1more
    zip
    Updated Jan 27, 2023
    Cite
    Joost de Winter; Jim Hoogmoed; J.C.J. (Jork) Stapel; Dimitra Dodou; Pavlo Bazilinskyy (2023). Supplementary data for the paper 'Predicting perceived risk of traffic scenes using computer vision' [Dataset]. http://doi.org/10.4121/21952685.v1
    Available download formats: zip
    Dataset updated
    Jan 27, 2023
    Dataset provided by
    4TU.ResearchData
    Authors
    Joost de Winter; Jim Hoogmoed; J.C.J. (Jork) Stapel; Dimitra Dodou; Pavlo Bazilinskyy
    License

    Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0): https://creativecommons.org/licenses/by-nc-sa/3.0/
    License information was derived automatically

    Description

    Perceived risk, or subjective risk, is an important concept in the field of traffic psychology and automated driving. In this paper, we investigate whether perceived risk in images of traffic scenes can be predicted from computer vision features that may also be used by automated vehicles (AVs). We conducted an international crowdsourcing study with 1378 participants, who rated the perceived risk of 100 randomly selected dashcam images on German roads. The population-level perceived risk was found to be statistically reliable, with a split-half reliability of 0.98. We used linear regression analysis to predict (r = 0.62) perceived risk from two features obtained with the YOLOv4 computer vision algorithm: the number of people in the scene and the mean size of the bounding boxes surrounding other road users. When the ego-vehicle’s speed was added as a predictor variable, the prediction strength increased to r = 0.75. Interestingly, the sign of the speed prediction was negative, indicating that a higher vehicle speed was associated with a lower perceived risk. This finding aligns with the principle of self-explaining roads. Our results suggest that computer-vision features and vehicle speed contribute to an accurate prediction of population subjective risk, outperforming the ratings provided by individual participants (mean r = 0.41). These findings may have implications for AV development and the modeling of psychological constructs in traffic psychology.
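    The prediction described above is an ordinary least-squares fit on a handful of scene features. A minimal sketch of the same setup on synthetic data; the feature names, coefficients and sample size are illustrative stand-ins, not the study's actual values:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100  # one row per rated image (illustrative)

# Stand-ins for the paper's predictors:
people = rng.poisson(3, size=n)             # number of people detected (YOLO-style)
bbox_size = rng.uniform(0.01, 0.3, size=n)  # mean bounding-box size of road users
speed = rng.uniform(0, 30, size=n)          # ego-vehicle speed

# Synthetic "perceived risk" with a negative speed coefficient,
# mirroring the paper's counterintuitive finding.
risk = 0.4 * people + 5.0 * bbox_size - 0.05 * speed + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), people, bbox_size, speed])
coef, *_ = np.linalg.lstsq(X, risk, rcond=None)
r = np.corrcoef(X @ coef, risk)[0, 1]
print(coef[3] < 0, round(r, 2))  # speed coefficient recovered as negative; r is high
```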

  9. DataSheet11_Deviant Behavior of Pedestrians: A Risk Gamble or Just Against...

    • frontiersin.figshare.com
    txt
    Updated Jun 14, 2023
    Cite
    Hatice Şahin; Sebastian Hemesath; Susanne Boll (2023). DataSheet11_Deviant Behavior of Pedestrians: A Risk Gamble or Just Against Automated Vehicles? How About Social Control?.CSV [Dataset]. http://doi.org/10.3389/frobt.2022.885319.s003
    Available download formats: txt
    Dataset updated
    Jun 14, 2023
    Dataset provided by
    Frontiers
    Authors
    Hatice Şahin; Sebastian Hemesath; Susanne Boll
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Recent evidence suggests that the assumed conflict-avoidant programming of autonomous vehicles will incentivize pedestrians to bully them. However, this frequent argument disregards the embedded nature of social interaction. Rule violations are socially sanctioned by different forms of social control, which could moderate the rational incentive to abuse risk-avoidant vehicles. Drawing on a gamified virtual reality (VR) experiment (n = 36) of urban traffic scenarios, we tested how vehicle type, different forms of social control, and monetary benefit of rule violations affect pedestrians' decision to jaywalk. In a second step, we also tested whether differences in those effects exist when controlling for the risk of crashes in conventional vehicles. We find that individuals do indeed jaywalk more frequently when faced with an automated vehicle (AV), and this effect largely depends on the associated risk and not the vehicles' automated nature. We further show that social control, especially in the form of formal traffic rules and norm enforcement, can reduce jaywalking behavior for any vehicle. Our study sheds light on the interaction dynamics between humans and AVs and how this is influenced by different forms of social control. It also contributes to the small gamification literature in human–computer interaction.

  10. Table_1_Looking at the Road When Driving Around Bends: Influence of Vehicle...

    • frontiersin.figshare.com
    • figshare.com
    docx
    Updated May 31, 2023
    Cite
    Damien Schnebelen; Otto Lappi; Callum Mole; Jami Pekkanen; Franck Mars (2023). Table_1_Looking at the Road When Driving Around Bends: Influence of Vehicle Automation and Speed.DOCX [Dataset]. http://doi.org/10.3389/fpsyg.2019.01699.s001
    Available download formats: docx
    Dataset updated
    May 31, 2023
    Dataset provided by
    Frontiers
    Authors
    Damien Schnebelen; Otto Lappi; Callum Mole; Jami Pekkanen; Franck Mars
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    When negotiating bends, car drivers perform gaze polling: their gaze shifts between guiding fixations (GFs; gaze directed 1–2 s ahead) and look-ahead fixations (LAFs; longer time headway). How might this behavior change in autonomous vehicles, where the need for constant active visual guidance is removed? In this driving-simulator study, we analyzed this gaze behavior both when the driver was in charge of steering and when steering was delegated to automation, separately for the bend approach (straight line) and the entry of the bend (turn), and at various speeds. The analysis of gaze distributions relative to bend sections and driving conditions indicates that visual anticipation (through LAFs) is most prominent before entering the bend. Passive driving increased the proportion of LAFs, with a concomitant decrease in GFs, and increased the gaze-polling frequency. Gaze-polling frequency also increased at higher speeds, in particular during the bend approach when steering was not performed. LAFs encompassed a wide range of eccentricities. To account for this heterogeneity, two sub-categories serving distinct information requirements are proposed: mid-eccentricity LAFs could be more useful for anticipatory planning of steering actions, and far-eccentricity LAFs for monitoring potential hazards. The results support the idea that gaze and steering coordination may be strongly impacted in autonomous vehicles.

  11. A Semantically Annotated 15-Class Ground Truth Dataset for Substation...

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated May 5, 2023
    + more versions
    Cite
    Andreas Gomes; Andreas Gomes (2023). A Semantically Annotated 15-Class Ground Truth Dataset for Substation Equipment [Dataset]. http://doi.org/10.5281/zenodo.7884270
    Available download formats: zip
    Dataset updated
    May 5, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Andreas Gomes; Andreas Gomes
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains 1660 images of electric substations with 50705 annotated objects. The images were obtained from different sources: cameras mounted on Autonomous Guided Vehicles (AGVs), fixed-location cameras, and handheld cameras operated by humans. A total of 15 object classes were identified in this dataset; the number of instances of each class is given in the following table:

    Object classes and how many times they appear in the dataset.

    Class                              Instances
    Open blade disconnect                    310
    Closed blade disconnect switch          5243
    Open tandem disconnect switch           1599
    Closed tandem disconnect switch          966
    Breaker                                  980
    Fuse disconnect switch                   355
    Glass disc insulator                    3185
    Porcelain pin insulator                26499
    Muffle                                  1354
    Lightning arrester                      1976
    Recloser                                2331
    Power transformer                        768
    Current transformer                     2136
    Potential transformer                    654
    Tripolar disconnect switch              2349

    All images in this dataset were collected from a single electrical distribution substation in Brazil over a period of two years. The images were captured at various times of the day and under different weather and seasonal conditions, ensuring a diverse range of lighting conditions for the depicted objects. A team of experts in Electrical Engineering curated all the images to ensure that the angles and distances depicted in the images are suitable for automating inspections in an electrical substation.

    The file structure of this dataset contains the following directories and files:

    • images: This directory contains the 1660 electrical substation images in JPEG format.

    • labels_json: This directory contains JSON files annotated in the VOC-style polygonal format. Each file shares the same filename as its respective image in the images directory.
    • 15_masks: This directory contains PNG segmentation masks for all 15 classes, including the porcelain pin insulator class. Each file shares the same name as its corresponding image in the images directory.
    • 14_masks: This directory contains PNG segmentation masks for all classes except the porcelain pin insulator. Each file shares the same name as its corresponding image in the images directory.
    • porcelain_masks: This directory contains PNG segmentation masks for the porcelain pin insulator class. Each file shares the same name as its corresponding image in the images directory.
    • classes.txt: This text file lists the 15 classes plus the background class used in LabelMe.
    • json2png.py: This Python script can be used to generate segmentation masks using the VOC-style polygonal JSON annotations.
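    The json2png.py script itself is not reproduced here, but the conversion it performs (VOC-style polygon annotations to PNG segmentation masks) can be sketched with Pillow. The annotation layout below is a hypothetical simplification of the dataset's actual JSON schema:

```python
from PIL import Image, ImageDraw

def polygons_to_mask(width, height, annotations, class_ids):
    """Rasterize polygon annotations into a single-channel segmentation mask.

    `annotations` is a list of (class_name, [(x, y), ...]) pairs, a simplified
    stand-in for the dataset's VOC-style polygonal JSON; `class_ids` maps each
    class name to the pixel value used in the mask (0 = background).
    """
    mask = Image.new("L", (width, height), 0)
    draw = ImageDraw.Draw(mask)
    for class_name, points in annotations:
        draw.polygon(points, fill=class_ids[class_name])
    return mask

# Hypothetical example: a single rectangular "Breaker" annotation in a 100x100 image
ids = {"Breaker": 5}
m = polygons_to_mask(100, 100,
                     [("Breaker", [(10, 10), (40, 10), (40, 40), (10, 40)])],
                     ids)
print(m.getpixel((20, 20)), m.getpixel((80, 80)))  # class value inside, 0 outside
```

    Saving such a mask with `m.save("mask.png")` yields per-image PNG masks like those in the 15_masks, 14_masks and porcelain_masks directories.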

    The dataset aims to support the development of computer vision techniques and deep learning algorithms for automating the inspection of electrical substations. It is expected to be useful for researchers, practitioners, and engineers interested in developing and testing object detection and segmentation models for automating inspection and maintenance activities in electrical substations.

    The authors would like to thank UTFPR for the support and infrastructure made available for the development of this research and COPEL-DIS for the support through project PD-2866-0528/2020—Development of a Methodology for Automatic Analysis of Thermal Images. We also would like to express our deepest appreciation to the team of annotators who worked diligently to produce the semantic labels for our dataset. Their hard work, dedication and attention to detail were critical to the success of this project.

