This site collates and visualizes critical indicators within the automotive vehicles and parts markets to enable firms to develop export strategies and identify target markets. These data include trade flows (exports and imports) of New Passenger Vehicles and Light Trucks, Medium- and Heavy-Duty Trucks, Used Vehicles, and Automotive Parts.
https://www.promarketreports.com/privacy-policy
The passenger vehicle telematics market offers an array of innovative and comprehensive telematics products, catering to the evolving needs of the automotive industry. These products encompass:
Hardware Devices: Advanced hardware components such as GPS receivers, sensors, communication modules, and cameras enable real-time data collection and transmission.
Software Platforms: These platforms provide powerful data analytics tools, mapping capabilities, visualization dashboards, and application programming interfaces (APIs), empowering users to gain actionable insights from telematics data.
Data Services: This segment involves the secure and reliable collection, processing, and transmission of telematics data, ensuring its availability and accessibility for various applications, including third-party integrations and value-added services.
Recent developments include: In March 2021, Ford introduced "Ford Liive," a connected uptime solution aimed at enhancing productivity for Ford commercial vehicle operators. The technology helps increase vehicle uptime, minimize breakdowns, and lower maintenance needs. In January 2021, Qualcomm Technologies Inc. teamed up with General Motors to provide cutting-edge technological solutions for the automaker's forthcoming vehicles. The cooperation involved implementing a comprehensive range of automotive solutions, integrating General Motors' advanced driver assistance systems and next-generation digital cockpit and telematics systems. Notable trends are: Increasing automotive sales are driving market growth.
Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
License information was derived automatically
This dataset was created by Ali Amr
Released under Apache 2.0
Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
License information was derived automatically
This dataset was created by Fenil Vadher
Released under Apache 2.0
This is a breakdown of every collision in NYC by location and injury. This data is collected because the NYC Council passed Local Law #11 in 2011. The data is compiled manually every month and reviewed by the TrafficStat Unit before being posted on the NYPD website. Each record represents a collision in NYC by city, borough, precinct, and cross street. This data can be used by the public to see how dangerous or safe intersections are in NYC. The information is presented in PDF and Excel formats, allowing the casual user to simply view the information in the easy-to-read PDF format or use the Excel files for a more in-depth analysis.
This visualization, based on a recent survey, depicts the percentage of drivers in selected countries worldwide willing to share personal data with manufacturers or dealers. The willingness to share this information appears to be particularly high in China, where 72 percent of respondents stated they were willing to share personal data if this meant that they would receive proactive maintenance alerts.
This list contains information on the status of current medallion vehicles authorized to operate in New York City. This list is accurate to the date and time represented in the Last Date Updated and Last Time Updated fields. For inquiries about the contents of this dataset, please email licensinginquiries@tlc.nyc.gov.
For historical data up to and including 2016, please refer to https://data.cityofnewyork.us/Transportation/Historical-Medallion-Vehicles-Authorized/pvkv-25ck/
Data on operations to remove derelict vehicles from city streets. Gives the disposition of derelict vehicle complaints reported to DSNY through 311.
https://data.cityofnewyork.us/City-Government/Derelict-Vehicle-Dispositions-Tow/vr8p-8shw
https://data.cityofnewyork.us/City-Government/Derelict-Vehicle-Dispositions-Vehicles/bjuu-44hx
https://data.cityofnewyork.us/City-Government/Derelict-Vehicles-Dispositions-Rentals/v6j6-k9uc
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)
License information was derived automatically
Visualization of connected vehicle trajectory data along a work zone on Indiana interstate I-69, northbound, over a 15-mile section from mile marker 245 to mile marker 260, using connected vehicle records from Thursday, May 11, 2023.
Use the Global New Vehicle Registrations dataset to monitor and analyze global new vehicle registration counts month by month.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
In the source data folder, we provide the source data for the figures in the manuscript. In the Supplementary Data folder, we provide an explanatory document about the dataset, along with some segments of the SIND dataset. In addition, we have provided a MATLAB version of the AD4CHE visualization program and a Python version of the SIND visualization program. Usage instructions for the visualization programs are attached in the respective folders. Recorded scenarios include the input and output data of the field test. Original traffic law includes the original traffic laws and the subdivided version.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0) (https://creativecommons.org/licenses/by-sa/4.0/)
License information was derived automatically
Will you take vehicle insurance?
An insurance policy is an arrangement by which a company undertakes to provide a guarantee of compensation for specified loss, damage, illness, or death in return for the payment of a specified premium. A premium is a sum of money that the customer needs to pay regularly to an insurance company for this guarantee.
For example, you may pay a premium of Rs. 5000 each year for a health insurance cover of Rs. 200,000, so that if, God forbid, you fall ill and need to be hospitalised that year, the insurance provider will bear the cost of hospitalisation up to Rs. 200,000. Now, if you are wondering how the company can bear such high hospitalisation costs when it charges a premium of only Rs. 5000, that is where the concept of probabilities comes into the picture. For example, like you, there may be 100 customers paying a premium of Rs. 5000 every year, but only a few of them (say 2-3) would get hospitalised that year, not everyone. This way, everyone shares the risk of everyone else.
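To make the pooling arithmetic concrete, here is a back-of-the-envelope calculation using the illustrative figures above; the claim count is a hypothetical assumption for intuition only, not actuarial data.

import statistics  # not strictly needed; kept minimal

# Back-of-the-envelope risk pooling with the illustrative figures above
# (hypothetical numbers, for intuition only).
num_customers = 100        # customers paying premiums each year
annual_premium = 5_000     # Rs. per customer per year
cover = 200_000            # Rs. sum assured per claim
expected_claims = 2        # say, 2 hospitalisations in a typical year

premium_pool = num_customers * annual_premium   # Rs. 500,000 collected
expected_payout = expected_claims * cover       # Rs. 400,000 paid out
print("Pool:", premium_pool, "Payout:", expected_payout,
      "Margin:", premium_pool - expected_payout)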
Just like medical insurance, there is vehicle insurance, where every year the customer needs to pay a premium of a certain amount to the insurance provider company so that, in case of an unfortunate accident involving the vehicle, the insurance provider company will provide compensation (called ‘sum assured’) to the customer.
Building a model to predict whether a customer would be interested in Vehicle Insurance is extremely helpful for the company because it can then accordingly plan its communication strategy to reach out to those customers and optimise its business model and revenue.
Now, in order to predict whether the customer would be interested in vehicle insurance, you have information about demographics (gender, age, region code type), vehicles (vehicle age, damage), policy (premium, sourcing channel), etc.
id: Unique ID for the customer
Gender: Gender of the customer
Age: Age of the customer
Driving License: 0 : Customer does not have a DL, 1 : Customer already has a DL
RegionCode: Unique code for the region of the customer
PreviouslyInsured: 1 : Customer already has vehicle insurance, 0 : Customer doesn't have vehicle insurance
VehicleAge: Age of the vehicle
VehicleDamage: 1 : Customer got his/her vehicle damaged in the past, 0 : Customer didn't get his/her vehicle damaged in the past
AnnualPremium: The amount the customer needs to pay as premium in the year
PolicySalesChannel: Anonymised code for the channel of outreach to the customer, i.e. different agents, over mail, over phone, in person, etc.
Vintage: Number of days the customer has been associated with the company
Response: 1 : Customer is interested, 0 : Customer is not interested
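As a starting point for the prediction task, the following is a minimal sketch using pandas and scikit-learn. It assumes the data is available as train.csv with the fields from the data dictionary above; the file name and the exact column names used in the code (e.g. Vehicle_Age, Vehicle_Damage) are assumptions and may need adjusting to match the actual download.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("train.csv")  # assumed file name

# Encode the categorical columns; the remaining features are numeric.
for col in ["Gender", "Vehicle_Age", "Vehicle_Damage"]:
    df[col] = df[col].astype("category").cat.codes

X = df.drop(columns=["id", "Response"])
y = df["Response"]

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = GradientBoostingClassifier().fit(X_train, y_train)
print("Validation ROC-AUC:",
      roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1]))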
Original Data Source (Analytics Vidhya): https://datahack.analyticsvidhya.com/contest/janatahack-cross-sell-prediction/#About
https://www.promarketreports.com/privacy-policy
Vehicle Telematics Devices: Telematics devices collect and transmit vehicle data, providing key insights for analytics.
Analytics Platforms: Analytics platforms analyze and visualize vehicle data, enabling organizations to make informed decisions.
Connected Vehicle Infrastructure: Connected vehicle infrastructure facilitates data sharing and communication between vehicles and infrastructure.
Mobile Applications: Mobile applications provide convenient access to vehicle analytics data for fleet managers and drivers.
Recent developments include: October 2022: BMW partnered with Amazon Web Services (AWS) to develop software to collect and analyze data generated by connected vehicles. The data collection would expedite the development of features to enhance software life cycle management. May 2022: Red Hat, Inc., one of the global providers of open-source solutions, and General Motors announced a partnership to accelerate the development of software-defined cars at the edge. The businesses plan to build an innovation ecosystem around the Red Hat In-Vehicle Operating System, which provides a functional-safety-certified Linux operating system basis for the continued evolution of GM's Ultifi software platform. Key drivers for this market are: increasing demand for enhanced vehicle safety features; government regulations mandating the adoption of ADAS technology; and growing consumer awareness and acceptance of advanced driver-assistance systems. Potential restraints include: the high cost of ADAS technology; lack of standardization and interoperability among different ADAS systems; and concerns regarding data privacy and cybersecurity. Notable trends are: Predictive maintenance is boosting market growth.
This dataset summarizes characteristics of 11 land use efficiency visualization tools that address vehicle miles traveled, gentrification, and equity. Summary characteristics include the tools' purpose, year of data or publication, data sources, methods used, units of analysis, and an evaluation of the tool and its ease of use. Links to tools and documentation are included.
This link contains data and visualizations of the total number of vehicles owned by households, broken down by household size.
About the Data: The dataset includes recall information related to specific NHTSA campaigns. Users can filter based on characteristics like manufacturer and component. The dataset can also be filtered by recall type: tires, vehicles, car seats, and equipment. The earliest campaign data is from 1966. The dataset displays the completion rate from the latest Recall Quarterly Report or Annual Report data from Year 2015 Quarter 1 (2015-1) onward. Data Reporting Requirement: Manufacturers who determine that a product or piece of original equipment either contains a safety defect or is not in compliance with Federal safety standards are required to notify NHTSA within 5 business days. NHTSA requires that manufacturers file a Defect and Noncompliance Report in compliance with Federal Regulation 49 (the National Traffic and Motor Safety Act) Part 573, which identifies the requirements for safety recalls. This information is stored in the NHTSA database referenced above. Notes: The default visualization depicted here represents only the top 12 manufacturers for the current calendar year. Please use the Filters for specific data requests. For a complete historical perspective, please visit: https://www.nhtsa.gov/sites/nhtsa.gov/files/2023-03/2022-Recalls-Annual-Report_030223-tag.pdf.
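For readers who export filtered recall data for offline analysis, a hedged pandas sketch of that workflow follows; the file name nhtsa_recalls.csv and the column names used ("Recall Type", "Manufacturer", "Component") are assumptions, not the official schema.

import pandas as pd

# Assumed CSV export of the recall dataset described above.
recalls = pd.read_csv("nhtsa_recalls.csv")

# Vehicle recalls from a single manufacturer, counted by affected component.
subset = recalls[(recalls["Recall Type"] == "Vehicle")
                 & (recalls["Manufacturer"].str.contains("Ford", case=False, na=False))]
print(subset["Component"].value_counts().head(10))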
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Available online at www.nrel.gov/fleetdna, the Fleet DNA clearinghouse of commercial fleet vehicle operating data helps manufacturers and developers optimize vehicle designs and helps fleet managers choose advanced technologies for their fleets. Sponsored by the U.S. Department of Energy, this online tool provides data summaries and visualizations similar to real-world genetics for medium- and heavy-duty commercial fleet vehicles operating in a variety of vocations. If you use Fleet DNA data in a publication, please notify NREL at fleetdna@nrel.gov and include a citation consistent with the following format: “Fleet DNA Project Data.” ([YEAR]). National Renewable Energy Laboratory. Accessed [DATE]: www.nrel.gov/fleetdna.
U.S. Government Works (https://www.usa.gov/government-works)
License information was derived automatically
With the increase in quantity and complexity of launches at the Wallops Flight Facility (WFF) there is an ever-growing need for a more capable real-time visualization system for the WFF Range Control Center (RCC). This system should have the ability to depict the vehicle using actual CAD vehicle models, display vehicle attitude and stage separation events, and utilize robust network protocol suitable for real-time safety applications. This project will use existing WFF hardware systems and leverage past experiences and lessons learned to produce a Visualization in Real-Time Experiment (VIRTEx) application that will use a cutting edge message protocol for lab demonstration and use during real-time operations.
The objective of this project will be to migrate some of the outputs from the WFF Mission Planning Lab (MPL) into a real-time visualization system. The MPL is responsible for generating pre-flight RF margin link analysis, mission simulation & visualization, and other products for WFF missions. This real-time visualization system would depict in 3D graphics the position and orientation of the launch vehicle(s) or suborbital carrier (UAV, sounding rocket). VIRTEx would be expanded to use a more flexible publish/subscribe architecture, and the system will leverage recently developed advanced telemetry and data handling systems within the Range network.
Another main objective will be updating VIRTEx to support a sounding rocket mission which is scheduled to launch from NASA Wallops Flight Facility (WFF) in the summer of 2014.
This project will also be used to demonstrate successful attitude data conversion from a WFF telemetry system. Updates are being completed on this telemetry system to convert various NASA Sounding Rocket attitude control system (ACS) data formats. Because multiple ACS systems output different data formats, libraries and algorithms were added to the telemetry system to convert this data into a standard yaw, pitch, and roll dataset for Range Safety. VIRTEx will be able to easily show this data and compare it to the pre-flight attitude predictions.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Overview
3DHD CityScenes is the most comprehensive, large-scale high-definition (HD) map dataset to date, annotated in the three spatial dimensions of globally referenced, high-density LiDAR point clouds collected in urban domains. Our HD map covers 127 km of road sections in the inner city of Hamburg, Germany, including 467 km of individual lanes. In total, our map comprises 266,762 individual items.
Our corresponding paper (published at ITSC 2022) is available here. Further, we have applied 3DHD CityScenes to map deviation detection here.
Moreover, we release code to facilitate the application of our dataset and the reproducibility of our research. Specifically, our 3DHD_DevKit comprises:
Python tools to read, generate, and visualize the dataset,
3DHDNet deep learning pipeline (training, inference, evaluation) for map deviation detection and 3D object detection.
The DevKit is available here:
https://github.com/volkswagen/3DHD_devkit.
The dataset and DevKit have been created by Christopher Plachetka as project lead during his PhD period at Volkswagen Group, Germany.
When using our dataset, you are welcome to cite:
@INPROCEEDINGS{9921866,
  author={Plachetka, Christopher and Sertolli, Benjamin and Fricke, Jenny and Klingner, Marvin and Fingscheidt, Tim},
  booktitle={2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)},
  title={3DHD CityScenes: High-Definition Maps in High-Density Point Clouds},
  year={2022},
  pages={627-634}}
Acknowledgements
We thank the following interns for their exceptional contributions to our work.
Benjamin Sertolli: Major contributions to our DevKit during his master thesis
Niels Maier: Measurement campaign for data collection and data preparation
The European large-scale project Hi-Drive (www.Hi-Drive.eu) supports the publication of 3DHD CityScenes and encourages the general publication of information and databases facilitating the development of automated driving technologies.
The Dataset
After downloading, the 3DHD_CityScenes folder provides five subdirectories, which are explained briefly in the following.
The Dataset directory contains the training, validation, and test set definitions (train.json, val.json, test.json) used in our publications. The respective files contain samples that define a geolocation and the orientation of the ego vehicle in global coordinates on the map.
During dataset generation (done by our DevKit), samples are used to take crops from the larger point cloud. Also, map elements in reach of a sample are collected. Both modalities can then be used, e.g., as input to a neural network such as our 3DHDNet.
To read any JSON-encoded data provided by 3DHD CityScenes in Python, you can use the following code snippet as an example.
import json

json_path = r"E:\3DHD_CityScenes\Dataset\train.json"
with open(json_path) as jf:
    data = json.load(jf)
print(data)
Map items are stored as lists of items in JSON format. In particular, we provide:
traffic signs,
traffic lights,
pole-like objects,
construction site locations,
construction site obstacles (point-like such as cones, and line-like such as fences),
line-shaped markings (solid, dashed, etc.),
polygon-shaped markings (arrows, stop lines, symbols, etc.),
lanes (ordinary and temporary),
relations between elements (only for construction sites, e.g., sign to lane association).
Our high-density point cloud used as the basis for annotating the HD map is split into 648 tiles. This directory contains the geolocation of each tile as a polygon on the map. You can view the respective tile definitions using QGIS. Alternatively, we also provide the respective polygons as lists of UTM coordinates in JSON.
Files with the ending .dbf, .prj, .qpj, .shp, and .shx belong to the tile definition as “shape file” (commonly used in geodesy) that can be viewed using QGIS. The JSON file contains the same information provided in a different format used in our Python API.
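As a rough illustration, the snippet below opens the tile definitions with geopandas (for the shapefile) and the standard library (for the JSON variant). The directory and file names are assumptions; adjust the paths to the actual files in the download.

import json
import geopandas as gpd  # third-party package for reading shapefiles

# Assumed paths; point them at the actual tile definition files.
shp_path = r"E:\3DHD_CityScenes\Tile_Definitions\tiles.shp"
tiles = gpd.read_file(shp_path)    # one polygon (UTM32N) per tile
print(tiles.head())

json_path = r"E:\3DHD_CityScenes\Tile_Definitions\tiles.json"
with open(json_path) as jf:
    tile_polygons = json.load(jf)  # lists of UTM coordinates per tile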
The high-density point cloud tiles are provided in global UTM32N coordinates and are encoded in a proprietary binary format. The first 4 bytes (integer) encode the number of points contained in that file. Subsequently, all point cloud values are provided as arrays. First all x-values, then all y-values, and so on. Specifically, the arrays are encoded as follows.
x-coordinates: 4-byte integer
y-coordinates: 4-byte integer
z-coordinates: 4-byte integer
intensity of reflected beams: 2-byte unsigned integer
ground classification flag: 1-byte unsigned integer
After reading, the respective values have to be unnormalized. As an example, you can use the following code snippet to read the point cloud data. For visualization, you can use the pptk package, for instance.
import numpy as np
import pptk

file_path = r"E:\3DHD_CityScenes\HD_PointCloud_Tiles\HH_001.bin"
pc_dict = {}
key_list = ['x', 'y', 'z', 'intensity', 'is_ground']
type_list = ['<i4', '<i4', '<i4', '<u2', 'u1']

# Binary mode ("rb") is required for np.fromfile to read the raw bytes.
with open(file_path, "rb") as fid:
    # First 4 bytes: number of points in this tile
    num_points = np.fromfile(fid, count=1, dtype='<u4')[0]
    # print(num_points)

    # Init
    for k, dtype in zip(key_list, type_list):
        pc_dict[k] = np.zeros([num_points], dtype=dtype)

    # Read all arrays (x, y, z, intensity, is_ground), stored consecutively
    for k, t in zip(key_list, type_list):
        pc_dict[k] = np.fromfile(fid, count=num_points, dtype=t)

# Unnorm: scale by 1/1000 and shift to global UTM32N coordinates
pc_dict['x'] = (pc_dict['x'] / 1000) + 500000
pc_dict['y'] = (pc_dict['y'] / 1000) + 5000000
pc_dict['z'] = (pc_dict['z'] / 1000)
pc_dict['intensity'] = pc_dict['intensity'] / 2**16
pc_dict['is_ground'] = pc_dict['is_ground'].astype(np.bool_)

print(pc_dict)

# Visualize with pptk (x/y are centered for numerical stability in the viewer)
x_utm = pc_dict['x'] - np.mean(pc_dict['x'])
y_utm = pc_dict['y'] - np.mean(pc_dict['y'])
z_utm = pc_dict['z']
xyz = np.column_stack((x_utm, y_utm, z_utm))
viewer = pptk.viewer(xyz)
viewer.attributes(pc_dict['intensity'])
viewer.set(point_size=0.03)
We provide 15 real-world trajectories recorded during a measurement campaign covering the whole HD map. Trajectory samples are provided at approximately 30 Hz and are encoded in JSON.
These trajectories were used to provide the samples in train.json, val.json, and test.json with realistic geolocations and orientations of the ego vehicle.
OP1 – OP5 cover the majority of the map with 5 trajectories.
RH1 – RH10 cover the majority of the map with 10 trajectories.
Note that OP5 is split into three separate parts, a-c. RH9 is split into two parts, a-b. Moreover, OP4 mostly equals OP1 (thus, we speak of 14 trajectories in our paper). For completeness, however, we provide all recorded trajectories here.
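A hedged sketch for inspecting one trajectory file is shown below; the file name and the internal structure of the file are assumptions, not the documented schema.

import json

# Assumed path and file name; adjust to an actual trajectory in the download.
trajectory_path = r"E:\3DHD_CityScenes\Trajectories\OP1.json"
with open(trajectory_path) as jf:
    trajectory = json.load(jf)

# Inspect what a trajectory file contains (samples at ~30 Hz with the
# geolocation and orientation of the ego vehicle).
print(type(trajectory))
if isinstance(trajectory, list):
    print(len(trajectory), "samples")
    print(trajectory[0])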
https://cdla.io/sharing-1-0/
"Road Accidents Dataset":
Description: This comprehensive dataset provides detailed information on road accidents reported over multiple years. The dataset encompasses various attributes related to accident status, vehicle and casualty references, demographics, and severity of casualties. It includes essential factors such as pedestrian details, casualty types, road maintenance worker involvement, and the Index of Multiple Deprivation (IMD) decile for casualties' home areas.
Columns:
1. Status: The status of the accident (e.g., reported, under investigation).
2. Accident_Index: A unique identifier for each reported accident.
3. Accident_Year: The year in which the accident occurred.
4. Accident_Reference: A reference number associated with the accident.
5. Vehicle_Reference: A reference number for the involved vehicle in the accident.
6. Casualty_Reference: A reference number for the casualty involved in the accident.
7. Casualty_Class: Indicates the class of the casualty (e.g., driver, passenger, pedestrian).
8. Sex_of_Casualty: The gender of the casualty (male or female).
9. Age_of_Casualty: The age of the casualty.
10. Age_Band_of_Casualty: Age group to which the casualty belongs (e.g., 0-5, 6-10, 11-15).
11. Casualty_Severity: The severity of the casualty's injuries (e.g., fatal, serious, slight).
12. Pedestrian_Location: The location of the pedestrian at the time of the accident.
13. Pedestrian_Movement: The movement of the pedestrian during the accident.
14. Car_Passenger: Indicates whether the casualty was a car passenger at the time of the accident (yes or no).
15. Bus_or_Coach_Passenger: Indicates whether the casualty was a bus or coach passenger (yes or no).
16. Pedestrian_Road_Maintenance_Worker: Indicates whether the casualty was a road maintenance worker (yes or no).
17. Casualty_Type: The type of casualty (e.g., driver/rider, passenger, pedestrian).
18. Casualty_Home_Area_Type: The type of area in which the casualty resides (e.g., urban, rural).
19. Casualty_IMD_Decile: The IMD decile of the area where the casualty resides (a measure of deprivation).
20. LSOA_of_Casualty: The Lower Layer Super Output Area (LSOA) associated with the casualty's location.
This dataset provides valuable insights for analyzing road accidents, identifying trends, and implementing safety measures to reduce casualties and enhance road safety. Researchers, policymakers, and analysts can leverage this dataset for evidence-based decision-making and improving overall road transportation systems.
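For a quick exploratory pass over the columns listed above, a minimal pandas sketch is given below; the file name road_accidents.csv is an assumption, and the column names follow the data dictionary.

import pandas as pd

accidents = pd.read_csv("road_accidents.csv")  # assumed file name

# Casualty severity counts by casualty class, per accident year.
severity_by_class = (accidents
                     .groupby(["Accident_Year", "Casualty_Class", "Casualty_Severity"])
                     .size()
                     .unstack(fill_value=0))
print(severity_by_class.head())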