Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
This repository contains the dataset used in the paper Vision-R1: Evolving Human-Free Alignment in Large Vision-Language Models via Vision-Guided Reinforcement Learning. Code: https://github.com/jefferyZhan/Griffon
A. SUMMARY
This dataset contains the underlying data for the Vision Zero Benchmarking website. Vision Zero is the collaborative, citywide effort to end traffic fatalities in San Francisco. The goal of this benchmarking effort is to provide context to San Francisco's work and progress on key Vision Zero metrics alongside its peers. The Controller's Office City Performance team collaborated with the San Francisco Municipal Transportation Agency, the San Francisco Department of Public Health, the San Francisco Police Department, and other stakeholders on this project.

B. HOW THE DATASET IS CREATED
The Vision Zero Benchmarking website has seven major metrics. The City Performance team collected the data for each metric separately, cleaned it, and visualized it on the website. This dataset has all seven metrics and some additional underlying data. The majority of the data is available through public sources, but a few data points came from the peer cities themselves.

C. UPDATE PROCESS
This dataset is for historical purposes only and will not be updated. To explore more recent data, visit the source website for the relevant metrics.

D. HOW TO USE THIS DATASET
This dataset contains all of the Vision Zero Benchmarking metrics. Filter for the metric of interest, then explore the data. Where applicable, datasets already include a total. For example, under the Fatalities metric, the "Total Fatalities" category shows the total fatalities in that city. Review any calculations to avoid double-counting against this total.

E. RELATED DATASETS
N/A
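As a rough illustration of the guidance above, the sketch below filters one metric and excludes the pre-computed total row before aggregating. It assumes a CSV export with hypothetical column names (metric, category, city, value); the actual field names in the published dataset may differ.

```python
import pandas as pd

# Hypothetical file and column names; adjust to the actual export.
df = pd.read_csv("vision_zero_benchmarking.csv")

# Keep only the metric of interest.
fatalities = df[df["metric"] == "Fatalities"]

# Drop the pre-computed "Total Fatalities" category so the groupby
# does not double-count it against the per-category rows.
by_city = (
    fatalities[fatalities["category"] != "Total Fatalities"]
    .groupby("city")["value"]
    .sum()
)
print(by_city)
```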
Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
License information was derived automatically
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
131 global import shipment records of Vision, with prices, volumes, and current buyer-supplier relationships, based on an actual global export trade database.
The Fatality Analysis Reporting System (FARS) dataset is as of July 1, 2017, and is part of the U.S. Department of Transportation (USDOT)/Bureau of Transportation Statistics (BTS) National Transportation Atlas Database (NTAD). One of the primary objectives of the National Highway Traffic Safety Administration (NHTSA) is to reduce the staggering human toll and property damage that motor vehicle traffic crashes impose on our society. FARS is a census of fatal motor vehicle crashes, with a set of data files documenting all qualifying fatalities that occurred within the 50 states, the District of Columbia, and Puerto Rico since 1975. To qualify as a FARS case, the crash had to involve a motor vehicle traveling on a trafficway customarily open to the public, and must have resulted in the death of a motorist or a non-motorist within 30 days of the crash.

This data file contains information about crash characteristics and environmental conditions at the time of the crash. There is one record per crash. Please note: 207 records in this database were geocoded to a latitude and longitude of 0,0 due to a lack of location information or errors in the reported locations.

FARS data are made available to the public in Statistical Analysis System (SAS) data files as well as Database Files (DBF). Over the years, changes have been made to the type of data collected and the way the data are presented in the SAS data files. Some data elements have been dropped and new ones added, coding of individual data elements has changed, and new SAS data files have been created. Coding changes and the years for which individual data items are available are shown in the "Data Element Definitions and Codes" section of this document. The FARS Coding and Editing Manual contains a detailed description of each SAS data element, including coding instructions and attribute definitions. The Coding Manual is published for each year of data collection. Years 2001 to current are available at: http://www-nrd.nhtsa.dot.gov/Cats/listpublications.aspx?Id=J&ShowBy=DocType

Note: In this manual the word vehicle means in-transport motor vehicle unless otherwise noted.
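A minimal sketch of how the 0,0 placeholder geocodes mentioned above might be filtered out before any spatial analysis, assuming a CSV export with hypothetical column names latitude and longitude (the actual FARS field names vary by year and file format):

```python
import pandas as pd

# Hypothetical file and column names; the published FARS files use
# year-specific field names, so adjust accordingly.
crashes = pd.read_csv("fars_accident_2017.csv")

# 207 records are geocoded to (0, 0) as a placeholder for missing or
# erroneous locations; drop them before mapping the crashes.
located = crashes[(crashes["latitude"] != 0) | (crashes["longitude"] != 0)]

print(f"Kept {len(located)} of {len(crashes)} crash records with usable coordinates.")
```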
Tufts Face Database is the most comprehensive, large-scale face dataset of its kind (over 10,000 images; 74 females and 38 males; from more than 15 countries; ages ranging from 4 to 70 years old), containing 7 image modalities: visible, near-infrared, thermal, computerized sketch, LYTRO, recorded video, and 3D images. This webpage/dataset contains the Tufts Face Database three-dimensional (3D) images. The other datasets are made available to the user through separate links.
Cross-modality face recognition is an emerging topic due to the widespread use of different sensors in day-to-day applications. The development of face recognition systems relies heavily on existing databases for evaluation and for obtaining training examples for data-hungry machine learning algorithms. However, there is currently no publicly available face database that includes more than two modalities for the same subject. In this work, we introduce the Tufts Face Database, which includes images acquired in various modalities: photograph images, thermal images, near-infrared images, a recorded video, a computerized facial sketch, and 3D images of each volunteer's face. An Institutional Research Board protocol was obtained, and images were collected from students, staff, faculty, and their family members at Tufts University.
This database will be available to researchers worldwide in order to benchmark facial recognition algorithms for sketch, thermal, NIR, 3D, and heterogeneous face recognition.
Tufts Face Database Thermal Cropped (TD_IR_Cropped) Emotion only
Tufts Face Database Night Vision (NIR) (TD_NIR) (Check Note)
Note: Please use http instead of https. The link appears broken when https is used.
Each participant was seated in front of a blue background in close proximity to the camera. The cameras were mounted on tripods and the height of each camera was adjusted manually to correspond to the image center. The distance to the participant was strictly controlled during the acquisition process. A constant lighting condition was maintained using diffused lights.
TD_CS: Computerized facial sketches were generated using software FACES 4.0 [1], one of the most widely used software packages by law enforcement agencies, the FBI, and the US Military. The software allows researchers to choose a set of candidate facial components from the database based on their observation or memory.
TD_3D: The images were captured using a quad camera (an array of 4 cameras). Each individual was asked to look at a fixed view-point while the cameras were moved to 9 equidistant positions forming an approximate semi-circle around the individual. The 3D models were reconstructed using open-source structure-from-motion algorithms.
TD_IR_E (E stands for expression/emotion): The images were captured using a FLIR Vue Pro camera. Each participant was asked to pose with (1) a neutral expression, (2) a smile, (3) eyes closed, (4) exaggerated shocked expression, (5) sunglasses.
TD_IR_A (A stands for around): The images were captured using a FLIR Vue Pro camera. Each participant was asked to look at a fixed view-point while the cameras were moved to 9 equidistant positions forming an approximate semi-circle around the participant.
TD_RGB_E: The images were captured using a NIKON D3100 camera. Each participant was asked to pose with (1) a neutral expression, (2) a smile, (3) eyes closed, (4) exaggerated shocked expression, (5) sunglasses.
TD_RGB_A: The images were captured using a quad camera (an array of 4 visible field cameras). Each participant was asked to look at a fixed view-point while the cameras were moved to 9 equidistant positions forming an approximate semi-circle around the participant.
TD_NIR_A: The images were captured using a quad camera (an array of 4 night vision cameras). The l...
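As an illustration of how the modality subsets above might be paired per participant for cross-modality experiments, here is a minimal sketch. The directory layout and file naming (one folder per modality, files prefixed with a participant ID) are illustrative assumptions, not the official distribution structure.

```python
from pathlib import Path
from collections import defaultdict

# Hypothetical layout: <root>/<modality>/<participant_id>_<pose>.jpg
ROOT = Path("tufts_face_database")
MODALITIES = ["TD_RGB_E", "TD_IR_E", "TD_NIR_A", "TD_3D", "TD_CS"]

# Group image paths by participant across modalities.
by_participant = defaultdict(dict)
for modality in MODALITIES:
    for path in sorted((ROOT / modality).glob("*")):
        participant_id = path.stem.split("_")[0]
        by_participant[participant_id].setdefault(modality, []).append(path)

# Keep only participants that appear in every modality of interest.
complete = {
    pid: mods for pid, mods in by_participant.items()
    if all(m in mods for m in MODALITIES)
}
print(f"{len(complete)} participants with all {len(MODALITIES)} modalities")
```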
Mission statement, vision, and core values of the OIG. Includes links to the OIG FY 2015 Action Plan; the OIG Status Report on NAPA Recommendations, 2012; and the OIG Organizational Assessment, National Academy of Public Administration (NAPA), 2009.
The Los Angeles Department of Transportation developed a transportation and health database that includes all collisions in the most recently available five-year period, as well as key environmental variables. These data, currently available on the City's GeoHub, will be continually updated as new information becomes available. The purpose of developing it was (1) to help the City identify a list of prioritized locations along the High Injury Network (HIN) for the development of safety projects and (2) to develop "countermeasure pairing," the process of identifying the physical design and engineering countermeasures that would most effectively address each "collision profile," a group of collisions with similar contributing factors. The Vision Zero Los Angeles initiative used hierarchical clustering to develop these LA-specific collision profiles, intersection profile counts, and collision intersection scores. The data found here are a result of that work and will be used in the development of our Action Plan. Please reference the Vision Zero GIS Data Dictionary.pdf for key field names and descriptions.
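A rough sketch of the kind of hierarchical clustering step described above, using scipy on a matrix of one-hot-encoded collision contributing factors. The input file, feature columns, and cut height are illustrative assumptions, not the initiative's actual methodology.

```python
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical input: one row per collision, with categorical
# contributing-factor columns (names are illustrative).
collisions = pd.read_csv("la_collisions.csv")
features = pd.get_dummies(
    collisions[["primary_factor", "lighting", "road_condition", "mode"]]
)

# Agglomerative clustering on the binary factor matrix (Ward linkage),
# then cut the dendrogram into a fixed number of "collision profiles".
Z = linkage(features.values.astype(float), method="ward")
collisions["profile"] = fcluster(Z, t=12, criterion="maxclust")

print(collisions["profile"].value_counts().sort_index())
```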
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Description: This dataset contains the eye prescription details of 1000 patients, including their names, ages, and optical prescription parameters for both the right and left eyes. The dataset is designed for optometry research, machine learning applications in ophthalmology, and statistical analysis of vision-related data.
Dataset Features:
- Name: The full name of the patient.
- Age: The age of the patient (ranging from young adults to elderly individuals).
- SPH (Spherical): Measures the lens power needed to correct nearsightedness (-) or farsightedness (+).
- CYL (Cylindrical): Measures the degree of astigmatism, if present.
- Axis: The orientation of astigmatism correction in degrees (0-180°).
- Right Eye (SPH, CYL, Axis): Prescription details for the right eye.
- Left Eye (SPH, CYL, Axis): Prescription details for the left eye.
This dataset can be used to analyze trends in vision impairments, develop predictive models for vision correction, and study the distribution of refractive errors in a population.
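A minimal sketch of loading the table and summarizing refractive errors as described above. The file name and exact column headers (e.g. Right_SPH, Right_CYL) are assumptions; adjust them to the actual dataset schema.

```python
import pandas as pd

# Hypothetical file and column names for illustration.
df = pd.read_csv("eye_prescriptions.csv")

# Classify each right eye as myopic (SPH < 0), hyperopic (SPH > 0), or emmetropic.
df["right_eye_type"] = pd.cut(
    df["Right_SPH"],
    bins=[-30, -0.25, 0.25, 30],
    labels=["myopic", "emmetropic", "hyperopic"],
)

# Distribution of refractive errors and mean astigmatism by age group.
print(df["right_eye_type"].value_counts())
print(df.groupby(pd.cut(df["Age"], bins=[0, 18, 40, 65, 120]))["Right_CYL"].mean())
```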
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Utility values according to glaucoma subtype, better eye cup-to-disc ratio, FDT score, and vision loss.
CE Vision Europe is a merchant-attributable transaction dataset tracking credit, debit, direct debit, and direct transfer consumer spend in Austria, France, Germany, Italy, Spain, and the UK. Track market share, customer insights, retail shopping patterns by demographic and geography, and market dynamics.
Vision Llc Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.
Public Domain Mark 1.0: https://creativecommons.org/publicdomain/mark/1.0/
License information was derived automatically
A workshop was jointly convened by the Pacific Islands Forum Secretariat (PIFS) and SPREP in March 2012 in Fiji to provide a vision for more effective and streamlined reporting in the Pacific region.
Explore detailed Roush import data of Vision Wheels Inc in the USA: product details, price, quantity, origin countries, and US ports.
This dataset provides information about the number of properties, residents, and average property values for Spiral Vision Road cross streets in Elizabethton, TN.
https://www.icpsr.umich.edu/web/ICPSR/studies/27862/terms
The RAND Center for Population Health and Health Disparities (CPHHD) Data Core Series is composed of a wide selection of analytical measures, encompassing a variety of domains, all derived from a number of disparate data sources. The CPHHD Data Core's central focus is on geographic measures for census tracts, counties, and Metropolitan Statistical Areas (MSAs) from two distinct geo-reference points, 1990 and 2000. The current study, Disability, contains cross-sectional data from the year 2000. Based on the Decennial Census Special Table Series published by the Administration on Aging, this study contains a large number of disability measures categorized by age (55+), type of disability (sensory, learning, employment, and self-care), and poverty status.
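A small sketch of how the tract-level disability measures might be subset once the study files are exported to CSV. The file name and column names (geography level, disability type, poverty status) are illustrative assumptions, not the ICPSR variable names.

```python
import pandas as pd

# Hypothetical export; the actual ICPSR files use study-specific variable names.
measures = pd.read_csv("cphhd_disability_2000.csv")

# Keep tract-level rows for sensory disability among the 55+ population
# living below the poverty line.
subset = measures[
    (measures["geo_level"] == "tract")
    & (measures["disability_type"] == "sensory")
    & (measures["poverty_status"] == "below")
]
print(subset.head())
```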
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This folder contains the neuromorphic vision dataset named 'CIFAR10-DVS', obtained by displaying the moving images of the CIFAR-10 dataset (http://www.cs.toronto.edu/~kriz/cifar.html) on an LCD monitor. The dataset is used for event-driven scene classification and pattern recognition. These recordings can be displayed with the jAER software (http://sourceforge.net/p/jaer/wiki/Home) using the DVS128 filter.
The files "dat2mat.m" and "mat2dat.m" in http://www2.imse-cnm.csic.es/caviar/MNIST_DVS/ can be used to transfer lists of events between the jAER format (.dat or .aedat) and MATLAB.
Please cite it if you intend to use this dataset. Li H, Liu H, Ji X, Li G and Shi L (2017) CIFAR10-DVS: An Event-Stream Dataset for Object Classification. Front. Neurosci. 11:309. doi: 10.3389/fnins.2017.00309
The high-sensitivity DVS used in the recording is reported in: P. Lichtsteiner, C. Posch, and T. Delbruck, "A 128×128 120 dB 15 μs latency asynchronous temporal contrast vision sensor," IEEE J. Solid-State Circuits, vol. 43, no. 2, pp. 566–576, Feb. 2008.
A single 128x128 pixel DVS sensor was placed in front of a 24" LCD monitor. Images of CIFAR-10 were upscaled to 512x512 through bicubic interpolation and displayed on the LCD monitor with circulating smooth movement. A total of 10,000 event-stream recordings in 10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck), with 1,000 recordings per class, were obtained.
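As a rough illustration of how such event streams are typically consumed, the sketch below accumulates a list of (timestamp, x, y, polarity) events into a 128x128 frame with numpy. It assumes the events have already been decoded from the .aedat recordings (for example, via the dat2mat.m script mentioned above and exported to a plain array); the decoding itself is not shown, and the column layout is an assumption.

```python
import numpy as np

def events_to_frame(events, height=128, width=128):
    """Accumulate polarity events into a signed 2D histogram.

    `events` is an (N, 4) array of (timestamp, x, y, polarity) rows,
    with polarity in {0, 1}; this layout is assumed for illustration.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    pol = np.where(events[:, 3] > 0, 1, -1)
    np.add.at(frame, (y, x), pol)  # ON events add, OFF events subtract
    return frame

# Example with synthetic events, just to show the call shape.
rng = np.random.default_rng(0)
fake = np.column_stack([
    np.sort(rng.integers(0, 1_000_000, 500)),  # timestamps (microseconds)
    rng.integers(0, 128, 500),                 # x
    rng.integers(0, 128, 500),                 # y
    rng.integers(0, 2, 500),                   # polarity
])
print(events_to_frame(fake).sum())
```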
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Panel presentation on February 17, 2023, in San Juan, Puerto Rico, at the 14th CECIA-IAUPR Biennial Symposium on Potable Water Issues in Puerto Rico: Science, Technology and Regulation.
Presenters: Dr. Christina Norton, University of Washington; Christopher Lenhardt, RENCI, University of North Carolina at Chapel Hill; Dr. Elaine Faustman, University of Washington; Jill Falman, University of Washington.
This dataset will be moving! The City is working on a new Open Data Portal for GIS data. This dataset will soon be available at https://data-seattlecitygis.opendata.arcgis.com/. We apologize for any inconvenience, but this new platform will allow us to regularly update our data and provide better tools for our spatial data. https://gisrevprxy.seattle.gov/arcgis/rest/services/SDOT_EXT/DSG_datasharing/MapServer/68
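For readers who want to pull the layer programmatically while it still lives at the ArcGIS REST endpoint above, a minimal sketch using the standard MapServer layer query interface follows. Whether the layer allows anonymous queries, and which fields it exposes, are assumptions to verify against the service.

```python
import requests

LAYER_URL = (
    "https://gisrevprxy.seattle.gov/arcgis/rest/services/"
    "SDOT_EXT/DSG_datasharing/MapServer/68"
)

# Standard ArcGIS REST layer query: all features, all fields, JSON out.
params = {"where": "1=1", "outFields": "*", "f": "json"}
resp = requests.get(f"{LAYER_URL}/query", params=params, timeout=30)
resp.raise_for_status()

features = resp.json().get("features", [])
print(f"Fetched {len(features)} features from layer 68")
```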
This layer shows six different types of disability. This is shown by tract, county, and state boundaries. This service is updated annually to contain the most currently released American Community Survey (ACS) 5-year data, and contains estimates and margins of error. There are also additional calculated attributes related to this topic, which can be mapped or used within analysis. This layer is symbolized to show the percent of population with a disability. To see the full list of attributes available in this service, go to the "Data" tab and choose "Fields" at the top right.

Current Vintage: 2019-2023
ACS Table(s): B18101, B18102, B18103, B18104, B18105, B18106, B18107, C18108 (Not all lines of these ACS tables are available in this feature layer.)
Data downloaded from: Census Bureau's API for American Community Survey
Date of API call: December 12, 2024
National Figures: data.census.gov

The United States Census Bureau's American Community Survey (ACS): About the Survey; Geography & ACS; Technical Documentation; News & Updates.

This ready-to-use layer can be used within ArcGIS Pro, ArcGIS Online, its configurable apps, dashboards, Story Maps, custom apps, and mobile apps. Data can also be exported for offline workflows. For more information about ACS layers, visit the FAQ. Please cite the Census and ACS when using this data.

Data Note from the Census:
Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is represented through the use of a margin of error. The value shown here is the 90 percent margin of error. The margin of error can be interpreted as providing a 90 percent probability that the interval defined by the estimate minus the margin of error and the estimate plus the margin of error (the lower and upper confidence bounds) contains the true value. In addition to sampling variability, the ACS estimates are subject to nonsampling error (for a discussion of nonsampling variability, see Accuracy of the Data). The effect of nonsampling error is not represented in these tables.

Data Processing Notes:
This layer is updated automatically when the most current vintage of ACS data is released each year, usually in December. The layer always contains the latest available ACS 5-year estimates. It is updated annually within days of the Census Bureau's release schedule. Click here to learn more about ACS data releases.

Boundaries come from the US Census TIGER geodatabases, specifically the National Sub-State Geography Database (named tlgdb_(year)_a_us_substategeo.gdb). Boundaries are updated at the same time as the data updates (annually), and the boundary vintage appropriately matches the data vintage as specified by the Census. These are Census boundaries with water and/or coastlines erased for cartographic and mapping purposes. For census tracts, the water cutouts are derived from a subset of the 2020 Areal Hydrography boundaries offered by TIGER. Water bodies and rivers which are 50 million square meters or larger (mid to large sized water bodies) are erased from the tract-level boundaries, as well as additional important features. For state and county boundaries, the water and coastlines are derived from the coastlines of the 2023 500k TIGER Cartographic Boundary Shapefiles. These are erased to more accurately portray the coastlines and Great Lakes.

The original AWATER and ALAND fields are still available as attributes within the data table (units are square meters). The States layer contains 52 records: all US states, Washington D.C., and Puerto Rico. Census tracts with no population that occur in areas of water, such as oceans, are removed from this data service (census tracts beginning with 99). Percentages and derived counts, and associated margins of error, are calculated values (identifiable by the "_calc_" stub in the field name) and abide by the specifications defined by the American Community Survey. Field alias names were created based on the Table Shells file available from the American Community Survey Summary File Documentation page.

Negative values (e.g., -4444...) have been set to null, with the exception of -5555..., which has been set to zero. These negative values exist in the raw API data to indicate the following situations:
- The margin of error column indicates that either no sample observations or too few sample observations were available to compute a standard error and thus the margin of error. A statistical test is not appropriate.
- Either no sample observations or too few sample observations were available to compute an estimate, or a ratio of medians cannot be calculated because one or both of the median estimates falls in the lowest interval or upper interval of an open-ended distribution.
- The median falls in the lowest interval of an open-ended distribution, or in the upper interval of an open-ended distribution. A statistical test is not appropriate.
- The estimate is controlled. A statistical test for sampling variability is not appropriate.
- The data for this geographic area cannot be displayed because the number of sample cases is too small.
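As an illustration of the sentinel handling described in the processing notes above, the sketch below reproduces the same rule on a hypothetical downloaded attribute table: values matching the -5555... code become zero and other negative sentinel codes become null. The file name and column-prefix selection are assumptions.

```python
import numpy as np
import pandas as pd

# Hypothetical export of the layer's attribute table; column prefixes are assumed.
acs = pd.read_csv("acs_disability_tracts.csv")

value_cols = [c for c in acs.columns if c.startswith(("B18", "C18"))]
for col in value_cols:
    acs[col] = pd.to_numeric(acs[col], errors="coerce")
    # -5555... codes mark controlled estimates; the layer sets these to zero.
    controlled = acs[col].astype(str).str.startswith("-5555")
    acs.loc[controlled, col] = 0
    # All other negative sentinel codes (-4444..., -6666..., ...) become null.
    acs.loc[acs[col] < 0, col] = np.nan
```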