Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In Pittsburgh, Autonomous Vehicle (AV) companies have been testing autonomous vehicles since September 2016. However, the technology is new, and there have been some high-profile incidents that we believe warrant a larger conversation. So in early 2017, we set out to design a survey to see how both BikePGH donor-members and Pittsburgh residents at large feel about sharing the road with AVs as bicyclists and/or pedestrians. Our survey asked participants how they feel about being a fellow road user with AVs, whether walking or biking. We also wanted to collect stories about people’s experiences interacting with this nascent technology. We are unaware of any other public surveys about people’s feelings toward, or understanding of, this new technology. We hope that our results will add to the body of data and help the public and politicians understand the complexity of possible futures that different economic models of AV technology can bring to our cities and town centers.
We conducted our 2017 survey in two parts. First, we launched the survey exclusively to donor-members via email, yielding 321 responses (out of 2,900 members). Once we closed that survey, we launched it again, this time open to the general public. By promoting it on our website, social media channels, and through a few news articles, we received 798 responses (mostly from people in the Pittsburgh region), for a combined total of 1,119 responses.
Regarding the 2019 survey: In total, 795 people responded. BikePGH solicited responses from their blog, website, and email list. There were also a few local news articles about the survey. While many questions were kept similar to the 2017 survey, BikePGH wanted to dig a bit deeper into regulations as well as demographics this time around.
The 2019 follow-up survey also aims to see how the landscape has changed and, specifically, how Pittsburghers on bike and on foot feel about sharing the road with AVs, so that we’re all better prepared to deal with this new reality and can help make sure that it is introduced as safely as humanly possible.
The nuScenes dataset is a large-scale autonomous driving dataset. The dataset has 3D bounding boxes for 1000 scenes collected in Boston and Singapore. Each scene is 20 seconds long and annotated at 2 Hz. This results in a total of 28,130 samples for training, 6,019 samples for validation, and 6,008 samples for testing. The dataset has the full autonomous vehicle sensor suite: a 32-beam LiDAR, 6 cameras, and 5 radars with complete 360° coverage. The 3D object detection challenge evaluates performance on 10 classes: cars, trucks, buses, trailers, construction vehicles, pedestrians, motorcycles, bicycles, traffic cones, and barriers.
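For readers who want the headline numbers in one place, the figures above can be tallied directly. This is a small sketch: the split sizes and class list come from the description, while the identifier names are our own.

```python
# Arithmetic check of the nuScenes figures quoted above (variable names are
# hypothetical; the numbers are taken from the dataset description).
SCENES = 1000           # 20-second scenes from Boston and Singapore
SECONDS_PER_SCENE = 20
ANNOTATION_HZ = 2

splits = {"train": 28130, "val": 6019, "test": 6008}
total_samples = sum(splits.values())

# Nominal keyframe count: 1000 scenes x 20 s x 2 Hz = 40,000.
nominal = SCENES * SECONDS_PER_SCENE * ANNOTATION_HZ
print(total_samples, nominal)  # 40157 40000 -- slightly above nominal, since
# the exact number of annotated keyframes varies a little per scene.

detection_classes = [
    "car", "truck", "bus", "trailer", "construction_vehicle",
    "pedestrian", "motorcycle", "bicycle", "traffic_cone", "barrier",
]
assert len(detection_classes) == 10
```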
DAWN emphasizes a diverse traffic environment (urban, highway, and freeway) as well as a rich variety of traffic flow. The DAWN dataset comprises a collection of 1000 images from real-traffic environments, divided into four sets of weather conditions: fog, snow, rain, and sandstorms. The dataset is annotated with object bounding boxes for autonomous driving and video surveillance scenarios. This data helps in interpreting the effects of adverse weather conditions on the performance of vehicle detection systems.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Recent evidence suggests that the assumed conflict-avoidant programming of autonomous vehicles will incentivize pedestrians to bully them. However, this frequent argument disregards the embedded nature of social interaction. Rule violations are socially sanctioned by different forms of social control, which could moderate the rational incentive to abuse risk-avoidant vehicles. Drawing on a gamified virtual reality (VR) experiment (n = 36) of urban traffic scenarios, we tested how vehicle type, different forms of social control, and the monetary benefit of rule violations affect pedestrians’ decision to jaywalk. In a second step, we also tested whether differences in those effects exist when controlling for the risk of crashes with conventional vehicles. We find that individuals do indeed jaywalk more frequently when faced with an automated vehicle (AV), and that this effect largely depends on the associated risk rather than the vehicle's automated nature. We further show that social control, especially in the form of formal traffic rules and norm enforcement, can reduce jaywalking behavior for any vehicle. Our study sheds light on the interaction dynamics between humans and AVs and how these are influenced by different forms of social control. It also contributes to the small literature on gamification in human–computer interaction.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset for an aerial surveying routing problem. It consists of 11 base stations, 10 types of aircraft, and 12 tasks that need to be visited. Each aircraft type has its cruise speed, range, and cost per hour specified. The TaskToTask, BaseToTask, and BaseToBase tables provide the distances in miles between two nodes in the graph. The AircraftsAvailable table provides information on how many aircraft of a given type are available at each base. Due to aircraft size or type, not all base stations can support all aircraft types safely, so additional aircraft-type constraints are applied for each base station in the AircraftBaseConstraints table.
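One plausible in-memory representation of these tables is sketched below. All identifier names and sample values here are hypothetical; the dataset itself ships the data as separate tables, and a real solver would load them rather than hard-code them.

```python
# Minimal sketch of the routing-problem data structures described above.
# Names (Aircraft, dist_base_to_task, ...) and values are illustrative only.
from dataclasses import dataclass

@dataclass
class Aircraft:
    type_id: int
    cruise_speed_mph: float
    range_miles: float
    cost_per_hour: float

# BaseToTask-style table: distance in miles between a base and a task.
dist_base_to_task = {("B1", "T1"): 120.0, ("B1", "T2"): 340.0}

# AircraftsAvailable-style table: (base, aircraft type) -> count.
aircraft_available = {("B1", 1): 2, ("B2", 1): 0}

# AircraftBaseConstraints-style table: which types each base can support.
base_supports = {"B1": {1, 2}, "B2": {2}}

def can_serve(base: str, task: str, ac: Aircraft) -> bool:
    """Check that the base supports this type, one aircraft is available,
    and a round trip to the task fits within the aircraft's range."""
    if ac.type_id not in base_supports.get(base, set()):
        return False
    if aircraft_available.get((base, ac.type_id), 0) < 1:
        return False
    return 2 * dist_base_to_task[(base, task)] <= ac.range_miles

ac = Aircraft(type_id=1, cruise_speed_mph=150, range_miles=500, cost_per_hour=900)
print(can_serve("B1", "T1", ac))  # True: 240-mile round trip fits a 500-mile range
print(can_serve("B1", "T2", ac))  # False: 680-mile round trip exceeds the range
```

A feasibility predicate like `can_serve` is the natural building block for any routing formulation over this data, whether solved by heuristics or by an exact optimizer.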
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Horizon 2020 project Dreams4Cars (www.dreams4cars.eu) has developed dream-like (offline) learning methods to be used for the development of autonomous driving and, more generally, as mechanisms to increase the cognitive abilities and autonomy of robots. The purpose of dream-like learning in Dreams4Cars is to deal with (possibly rare) dangerous events by synthesizing correct behaviour and control without needing to experience the events, and more efficiently than via straightforward trial and error. That is, to discover potential threats before they actually happen and to prepare appropriate action strategies in advance.
During the three-year development process, the project collected and processed a wealth of experimental data from autonomous test vehicles. Parts of these data, along with advice on how to use them, are made available to the public.
The datasets and how they can be accessed are described in the attached report (project deliverable D5.5, Section 2); the datasets themselves are provided in the ZIP file.
Purpose of the Dataset
The data provided here are intended to demonstrate the learning of forward models (the first building block of mental imagery and dreams). There are two sets of data: one for the lateral dynamics and another for the longitudinal dynamics. Each dataset comes with its own example of training the corresponding forward model. The following paper provides additional theoretical background: M. Da Lio, D. Bortoluzzi, and G. P. Rosati Papini, «Modelling longitudinal vehicle dynamics with neural networks», Vehicle System Dynamics, pp. 1–19, Jul. 2019, doi: 10.1080/00423114.2019.1638947
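The essence of a longitudinal forward model is predicting the next speed from the current speed and a drive/brake command; the learned model can then be iterated offline ("dreamed") without the real vehicle. The sketch below illustrates the idea on a toy point-mass plant with a simple least-squares fit standing in for the project's neural network; all names and dynamics are ours, not the Dreams4Cars data or architecture (see the deliverable and the cited paper for those).

```python
# Illustrative forward-model learning for longitudinal dynamics, using
# synthetic data from a toy plant. Not the actual Dreams4Cars pipeline.
import numpy as np

rng = np.random.default_rng(0)
DT = 0.1  # time step, s

def toy_plant(v, u):
    """Toy longitudinal dynamics: drive force minus quadratic drag."""
    return v + DT * (4.0 * u - 0.02 * v * v)

# Collect (state, command) -> next-state training pairs.
v = rng.uniform(0, 30, size=5000)   # speed, m/s
u = rng.uniform(-1, 1, size=5000)   # normalized drive/brake command
v_next = toy_plant(v, u)

# Fit a simple polynomial forward model by least squares (a stand-in for
# the neural network used in the project).
X = np.column_stack([v, u, v * v, np.ones_like(v)])
w, *_ = np.linalg.lstsq(X, v_next, rcond=None)

# Evaluate the learned model's one-step prediction error.
v_pred = X @ w
rmse = float(np.sqrt(np.mean((v_pred - v_next) ** 2)))
print(rmse)  # essentially zero here, because the toy plant lies in the model class
```

With a real vehicle, the plant is unknown and nonlinear, which is why the paper uses neural networks; the training loop, however, has the same shape: log (state, command, next state) triples, then regress.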
Contacts:
Mauro Da Lio, University of Trento, mauro.dalio@unitn.it
Elmar Berghoefer, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Elmar.Berghoefer@dfki.de
Mehmed Yueksel, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Mehmed.Yueksel@dfki.de
The project is a comprehensive observational and analytical program to examine the dynamics and source waters of the relaxation flows in a coastal upwelling system on the central California coast. Autonomous vehicles, high-frequency radars, moorings, and drifters will be used to acquire pressure, density, and velocity data relevant to the relaxation flows. The data will be used to determine spatial scales of the flows, cross-shore density structure, cross-shore and alongshore velocity fields, pressure gradients, and the region of contact with the sea floor. Aspects of the research include: 1) evaluating the roles of barotropic and baroclinic pressure gradient forcing, 2) identifying regions where ageostrophic flows dominate the cross-shore and alongshore momentum balances, 3) determining source waters for the relaxation flows, and 4) examining the inner-shelf circulation response to wind relaxations over an extensive coastal region (the northern part of the Southern California Bight) by analyzing extensive regional data sets collected over many years.
Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0) https://creativecommons.org/licenses/by-nc-sa/3.0/
License information was derived automatically
Perceived risk, or subjective risk, is an important concept in the field of traffic psychology and automated driving. In this paper, we investigate whether perceived risk in images of traffic scenes can be predicted from computer vision features that may also be used by automated vehicles (AVs). We conducted an international crowdsourcing study with 1378 participants, who rated the perceived risk of 100 randomly selected dashcam images on German roads. The population-level perceived risk was found to be statistically reliable, with a split-half reliability of 0.98. We used linear regression analysis to predict (r = 0.62) perceived risk from two features obtained with the YOLOv4 computer vision algorithm: the number of people in the scene and the mean size of the bounding boxes surrounding other road users. When the ego-vehicle’s speed was added as a predictor variable, the prediction strength increased to r = 0.75. Interestingly, the sign of the speed prediction was negative, indicating that a higher vehicle speed was associated with a lower perceived risk. This finding aligns with the principle of self-explaining roads. Our results suggest that computer-vision features and vehicle speed contribute to an accurate prediction of population subjective risk, outperforming the ratings provided by individual participants (mean r = 0.41). These findings may have implications for AV development and the modeling of psychological constructs in traffic psychology.
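The model described above is ordinary linear regression of mean perceived risk on a handful of scene features. The sketch below shows that structure on synthetic data; in the study, the predictors came from YOLOv4 detections (person count, mean bounding-box size) plus the ego-vehicle's speed, so the coefficients and data here are purely illustrative.

```python
# Sketch of the paper's style of model: linear regression of perceived risk
# on scene features. Data are synthetic; only the model structure is real.
import numpy as np

rng = np.random.default_rng(1)
n = 100  # one row per dashcam image

n_people = rng.poisson(3, size=n).astype(float)
mean_bbox = rng.uniform(0.01, 0.2, size=n)   # mean bbox size (fraction of frame)
speed = rng.uniform(0, 130, size=n)          # ego-vehicle speed, km/h

# Synthetic ground truth with a NEGATIVE speed coefficient, mirroring the
# paper's finding that higher speeds co-occurred with LOWER perceived risk.
risk = 0.3 * n_people + 8.0 * mean_bbox - 0.02 * speed + rng.normal(0, 0.5, n)

X = np.column_stack([n_people, mean_bbox, speed, np.ones(n)])
w, *_ = np.linalg.lstsq(X, risk, rcond=None)

# Correlation between fitted and observed risk (the paper reports r = 0.75
# for its real data with these three predictors).
r = float(np.corrcoef(X @ w, risk)[0, 1])
print(w[2] < 0, r > 0)  # recovered speed coefficient is negative; fit is positive
```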
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
When negotiating bends, car drivers perform gaze polling: their gaze shifts between guiding fixations (GFs; gaze directed 1–2 s ahead) and look-ahead fixations (LAFs; longer time headway). How might this behavior change in autonomous vehicles, where the need for constant active visual guidance is removed? In this driving simulator study, we analyzed this gaze behavior both when the driver was in charge of steering and when steering was delegated to automation, separately for the bend approach (straight line) and the entry of the bend (turn), and at various speeds. The analysis of gaze distributions relative to bend sections and driving conditions indicates that visual anticipation (through LAFs) is most prominent before entering the bend. Passive driving increased the proportion of LAFs, with a concomitant decrease in GFs, and increased the gaze polling frequency. Gaze polling frequency also increased at higher speeds, in particular during the bend approach when steering was not performed. LAFs encompassed a wide range of eccentricities. To account for this heterogeneity, two sub-categories serving distinct information requirements are proposed: mid-eccentricity LAFs could be more useful for anticipatory planning of steering actions, and far-eccentricity LAFs for monitoring potential hazards. The results support the idea that gaze and steering coordination may be strongly impacted in autonomous vehicles.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains 1660 images of electric substations with 50705 annotated objects. The images were obtained from different sources, including cameras mounted on Autonomous Guided Vehicles (AGVs), fixed-location cameras, and a variety of handheld cameras operated by humans. A total of 15 classes of objects were identified in this dataset, and the number of instances for each class is provided in the following table:
| Class | Instances |
|---|---|
| Open blade disconnect | 310 |
| Closed blade disconnect switch | 5243 |
| Open tandem disconnect switch | 1599 |
| Closed tandem disconnect switch | 966 |
| Breaker | 980 |
| Fuse disconnect switch | 355 |
| Glass disc insulator | 3185 |
| Porcelain pin insulator | 26499 |
| Muffle | 1354 |
| Lightning arrester | 1976 |
| Recloser | 2331 |
| Power transformer | 768 |
| Current transformer | 2136 |
| Potential transformer | 654 |
| Tripolar disconnect switch | 2349 |
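As a quick sanity check, the per-class counts above do add up to the stated total of 50705 annotated objects:

```python
# Per-class instance counts, copied from the table above.
instances = {
    "Open blade disconnect": 310,
    "Closed blade disconnect switch": 5243,
    "Open tandem disconnect switch": 1599,
    "Closed tandem disconnect switch": 966,
    "Breaker": 980,
    "Fuse disconnect switch": 355,
    "Glass disc insulator": 3185,
    "Porcelain pin insulator": 26499,
    "Muffle": 1354,
    "Lightning arrester": 1976,
    "Recloser": 2331,
    "Power transformer": 768,
    "Current transformer": 2136,
    "Potential transformer": 654,
    "Tripolar disconnect switch": 2349,
}
assert len(instances) == 15          # 15 object classes
print(sum(instances.values()))       # 50705
```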
All images in this dataset were collected from a single electrical distribution substation in Brazil over a period of two years. The images were captured at various times of the day and under different weather and seasonal conditions, ensuring a diverse range of lighting conditions for the depicted objects. A team of experts in Electrical Engineering curated all the images to ensure that the angles and distances depicted in the images are suitable for automating inspections in an electrical substation.
The file structure of this dataset contains the following directories and files:
images: This directory contains 1660 electrical substation images in JPEG format.
The dataset aims to support the development of computer vision techniques and deep learning algorithms for automating the inspection process of electrical substations. The dataset is expected to be useful for researchers, practitioners, and engineers interested in developing and testing object detection and segmentation models for automating inspection and maintenance activities in electrical substations.
The authors would like to thank UTFPR for the support and infrastructure made available for the development of this research, and COPEL-DIS for the support through project PD-2866-0528/2020, Development of a Methodology for Automatic Analysis of Thermal Images. We would also like to express our deepest appreciation to the team of annotators who worked diligently to produce the semantic labels for our dataset. Their hard work, dedication, and attention to detail were critical to the success of this project.