100+ datasets found
  1. Vision-R1-Data

    • huggingface.co
    Updated Jun 3, 2025
    Cite
    YufeiZhan (2025). Vision-R1-Data [Dataset]. https://huggingface.co/datasets/JefferyZhan/Vision-R1-Data
    Dataset updated
    Jun 3, 2025
    Authors
    YufeiZhan
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    This repository contains the dataset used in the paper Vision-R1: Evolving Human-Free Alignment in Large Vision-Language Models via Vision-Guided Reinforcement Learning. Code: https://github.com/jefferyZhan/Griffon

  2. Vision Zero Benchmarking

    • catalog.data.gov
    • data.sfgov.org
    • +2more
    Updated Mar 29, 2025
    Cite
    data.sfgov.org (2025). Vision Zero Benchmarking [Dataset]. https://catalog.data.gov/dataset/vision-zero-benchmarking
    Dataset updated
    Mar 29, 2025
    Dataset provided by
    data.sfgov.org
    Description

    A. SUMMARY
    This dataset contains the underlying data for the Vision Zero Benchmarking website. Vision Zero is the collaborative, citywide effort to end traffic fatalities in San Francisco. The goal of this benchmarking effort is to provide context to San Francisco's work and progress on key Vision Zero metrics alongside its peers. The Controller's Office City Performance team collaborated with the San Francisco Municipal Transportation Agency, the San Francisco Department of Public Health, the San Francisco Police Department, and other stakeholders on this project.

    B. HOW THE DATASET IS CREATED
    The Vision Zero Benchmarking website has seven major metrics. The City Performance team collected the data for each metric separately, cleaned it, and visualized it on the website. This dataset has all seven metrics and some additional underlying data. The majority of the data is available through public sources, but a few data points came from the peer cities themselves.

    C. UPDATE PROCESS
    This dataset is for historical purposes only and will not be updated. To explore more recent data, visit the source website for the relevant metrics.

    D. HOW TO USE THIS DATASET
    This dataset contains all of the Vision Zero Benchmarking metrics. Filter for the metric of interest, then explore the data. Where applicable, datasets already include a total. For example, under the Fatalities metric, the "Total Fatalities" category shows the total fatalities in that city. Review any calculations against this total to avoid double-counting.

    E. RELATED DATASETS
    N/A
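The filtering step in section D can be sketched as follows. The column names and category labels here are hypothetical stand-ins for the actual export, which should be checked against the dataset itself:

```python
# Hypothetical rows mimicking the benchmarking export; real column names
# and category labels may differ.
rows = [
    {"metric": "Fatalities", "category": "Total Fatalities", "city": "San Francisco", "value": 30},
    {"metric": "Fatalities", "category": "Pedestrian",       "city": "San Francisco", "value": 14},
    {"metric": "Fatalities", "category": "Bicyclist",        "city": "San Francisco", "value": 3},
    {"metric": "Injuries",   "category": "Total Injuries",   "city": "San Francisco", "value": 500},
]

# Filter to one metric, then drop the pre-computed "Total ..." row so a
# sum over the remaining categories does not double-count it.
fatalities = [r for r in rows if r["metric"] == "Fatalities"]
by_category = [r for r in fatalities if not r["category"].startswith("Total")]
print(sum(r["value"] for r in by_category))  # prints 17
```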

  3. Vision Service Plan (VSP) – Vision and Eye Health Surveillance

    • data.cdc.gov
    • data.virginia.gov
    • +3more
    Updated Mar 7, 2025
    + more versions
    Cite
    Centers for Disease Control and Prevention (2025). Vision Service Plan (VSP) – Vision and Eye Health Surveillance [Dataset]. https://data.cdc.gov/widgets/4r3g-hv9c
    Available download formats: kmz, xml, csv, application/geo+json, kml, xlsx
    Dataset updated
    Mar 7, 2025
    Dataset authored and provided by
    Centers for Disease Control and Prevention (http://www.cdc.gov/)
    License

    Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
    License information was derived automatically

    Description
    This dataset is a de-identified summary table of vision and eye health data indicators from VSP, stratified by all available combinations of age group, race/ethnicity, sex, and state. VSP claims for VEHSS provide a convenience sample of vision insurance members representing approximately 1 in 4 of the U.S. population. VSP uses a web-based claims submission system to collect and process claims. The denominator of the rates represents persons with VSP benefits as reported by employers, and is subject to some uncertainty. VSP data for VEHSS include Service Utilization and Eye Health Condition indicators. Because certain ophthalmic conditions and procedures are covered by health insurance rather than by managed vision insurance, claims-based eye disease prevalence may be expected to undercount true prevalence. Person-level claims and person counts are not publicly available. Reported rates were suppressed for de-identification to ensure protection of patient privacy. Detailed information on VEHSS VSP analyses can be found on the VEHSS VSP webpage (link). Information on VSP data can be found on the VSP website (www.vsp.com). The VEHSS VSP dataset was last updated in June 2018.
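The suppression the description mentions is, in general, a small-cell rule: rates for strata with too few members are withheld. A minimal sketch of that idea follows; the actual VEHSS suppression rules and threshold are not stated here, so the cutoff of 30 is purely illustrative:

```python
# Hypothetical small-cell suppression: any stratum whose member count falls
# below a cutoff gets its rate withheld. 30 is an illustrative value only.
THRESHOLD = 30

def suppress(cases: int, members: int, threshold: int = THRESHOLD):
    """Return a rate per 1,000 members, or None when the cell is too small."""
    if members < threshold:
        return None          # suppressed for de-identification
    return round(1000 * cases / members, 1)

print(suppress(12, 4800))    # 2.5
print(suppress(2, 25))       # None  (cell suppressed)
```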
  4. Global import data of Vision

    • volza.com
    csv
    Updated Oct 31, 2025
    + more versions
    Cite
    Volza FZ LLC (2025). Global import data of Vision [Dataset]. https://www.volza.com/imports-united-states/united-states-import-data-of-vision-from-germany
    Available download formats: csv
    Dataset updated
    Oct 31, 2025
    Dataset authored and provided by
    Volza FZ LLC
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Count of importers; Sum of import value; Count of import shipments (time period: 2014-01-01 to 2021-09-30)
    Description

    131 global import shipment records of Vision, with prices, volume, and current buyer-supplier relationships, based on the actual global export trade database.

  5. Vision

    • data.wu.ac.at
    csv, json, xls
    Updated Jul 27, 2017
    + more versions
    Cite
    U.S. Department of Transportation (2017). Vision [Dataset]. https://data.wu.ac.at/schema/public_opendatasoft_com/dmlzaW9u
    Available download formats: csv, xls, json
    Dataset updated
    Jul 27, 2017
    Dataset provided by
    U.S. Department of Transportation
    Description

    The Fatality Analysis Reporting System (FARS) dataset is as of July 1, 2017, and is part of the U.S. Department of Transportation (USDOT)/Bureau of Transportation Statistics's (BTS's) National Transportation Atlas Database (NTAD). One of the primary objectives of the National Highway Traffic Safety Administration (NHTSA) is to reduce the staggering human toll and property damage that motor vehicle traffic crashes impose on our society. FARS is a census of fatal motor vehicle crashes with a set of data files documenting all qualifying fatalities that occurred within the 50 States, the District of Columbia, and Puerto Rico since 1975. To qualify as a FARS case, the crash had to involve a motor vehicle traveling on a trafficway customarily open to the public, and must have resulted in the death of a motorist or a non-motorist within 30 days of the crash.

    This data file contains information about crash characteristics and environmental conditions at the time of the crash. There is one record per crash. Please note: 207 records in this database were geocoded to a latitude and longitude of 0,0 due to lack of location information or errors in the reported locations.

    FARS data are made available to the public in Statistical Analysis System (SAS) data files as well as Database Files (DBF). Over the years, changes have been made to the type of data collected and the way the data are presented in the SAS data files. Some data elements have been dropped and new ones added, the coding of individual data elements has changed, and new SAS data files have been created. Coding changes and the years for which individual data items are available are shown in the "Data Element Definitions and Codes" section of this document. The FARS Coding and Editing Manual contains a detailed description of each SAS data element, including coding instructions and attribute definitions. The Coding Manual is published for each year of data collection. Years 2001 to current are available at: http://www-nrd.nhtsa.dot.gov/Cats/listpublications.aspx?Id=J&ShowBy=DocType

    Note: In this manual the word vehicle means in-transport motor vehicle unless otherwise noted.
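Before mapping, the 207 records geocoded to 0,0 should be dropped. A minimal sketch, with hypothetical field names (the actual FARS files use their own column naming):

```python
# Illustrative cleanup for records geocoded to latitude/longitude 0,0.
# Field names are hypothetical stand-ins for the real FARS columns.
crashes = [
    {"st_case": 10001, "lat": 37.77, "lon": -122.42},
    {"st_case": 10002, "lat": 0.0,   "lon": 0.0},     # missing location
    {"st_case": 10003, "lat": 40.71, "lon": -74.01},
]

located = [c for c in crashes if not (c["lat"] == 0.0 and c["lon"] == 0.0)]
print(len(located))  # 2
```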

  6. Tufts Face Database

    • kaggle.com
    zip
    Updated May 9, 2019
    Cite
    Panetta's Vision and Sensing System Lab (2019). Tufts Face Database [Dataset]. https://www.kaggle.com/datasets/kpvisionlab/tufts-face-database/suggestions
    Available download formats: zip (930 bytes)
    Dataset updated
    May 9, 2019
    Authors
    Panetta's Vision and Sensing System Lab
    Description

    Tufts-Face-Database

    Multi-modal face images (112 participants, >100,000 images in total)

    7 image modalities: visible, near-infrared, thermal, computerized sketch, video, LYTRO and 3D images

    Context

    Tufts Face Database is the most comprehensive, large-scale (over 10,000 images, 74 females + 38 males, from more than 15 countries, with an age range of 4 to 70 years old) face dataset that contains 7 image modalities: visible, near-infrared, thermal, computerized sketch, LYTRO, recorded video, and 3D images. This webpage/dataset contains the Tufts Face Database three-dimensional (3D) images. The other datasets are made available through separate links by the user.

    Cross-modality face recognition is an emerging topic due to the wide-spread usage of different sensors in day-to-day life applications. The development of face recognition systems relies greatly on existing databases for evaluation and obtaining training examples for data-hungry machine learning algorithms. However, currently, there is no publicly available face database that includes more than two modalities for the same subject. In this work, we introduce the Tufts Face Database that includes images acquired in various modalities: photograph images, thermal images, near infrared images, a recorded video, a computerized facial sketch, and 3D images of each volunteer’s face. An Institutional Research Board protocol was obtained, and images were collected from students, staff, faculty, and their family members at Tufts University.

    This database will be available to researchers worldwide in order to benchmark facial recognition algorithms for sketch, thermal, NIR, and 3D face recognition, as well as heterogeneous face recognition.

    Links to modalities of the Tufts Face Database

    1. Tufts Face Database Computerized Sketches (TD_CS)

    2. Tufts Face Database Thermal (TD_IR) Around+Emotion

    3. Tufts Face Database Thermal Cropped (TD_IR_Cropped) Emotion only

    4. Tufts Face Database Three Dimensional (3D) (TD_3D)

    5. Tufts Face Database Lytro (TD_LYT) (Check Note)

    6. Tufts Face Database 2D RGB Around (TD_RGB_A) (Check Note)

    7. Tufts Face Database 2D RGB Emotion (TD_RGB_E) (Check Note)

    8. Tufts Face Database Night Vision (NIR) (TD_NIR) (Check Note)

    9. Tufts Face Database Video (TD_VIDEO) (Check Note)

    10. Tufts Face Thermal2RGB Dataset

    Note: Please use http instead of https. The link appears broken when https is used.

    Image Acquisition

    Each participant was seated in front of a blue background in close proximity to the camera. The cameras were mounted on tripods and the height of each camera was adjusted manually to correspond to the image center. The distance to the participant was strictly controlled during the acquisition process. A constant lighting condition was maintained using diffused lights.

    TD_CS: Computerized facial sketches were generated using software FACES 4.0 [1], one of the most widely used software packages by law enforcement agencies, the FBI, and the US Military. The software allows researchers to choose a set of candidate facial components from the database based on their observation or memory.

    TD_3D: The images were captured using a quad camera (an array of 4 cameras). Each individual was asked to look at a fixed view-point while the cameras were moved to 9 equidistant positions forming an approximate semi-circle around the individual. The 3D models were reconstructed using open-source structure-from-motion algorithms.

    TD_IR_E (E stands for expression/emotion): The images were captured using a FLIR Vue Pro camera. Each participant was asked to pose with (1) a neutral expression, (2) a smile, (3) eyes closed, (4) exaggerated shocked expression, (5) sunglasses.

    TD_IR_A (A stands for around): The images were captured using a FLIR Vue Pro camera. Each participant was asked to look at a fixed view-point while the cameras were moved to 9 equidistant positions forming an approximate semi-circle around the participant.

    TD_RGB_E: The images were captured using a NIKON D3100 camera. Each participant was asked to pose with (1) a neutral expression, (2) a smile, (3) eyes closed, (4) exaggerated shocked expression, (5) sunglasses.

    TD_RGB_A: The images were captured using a quad camera (an array of 4 visible field cameras). Each participant was asked to look at a fixed view-point while the cameras were moved to 9 equidistant positions forming an approximate semi-circle around the participant.

    TD_NIR_A: The images were captured using a quad camera (an array of 4 night vision cameras). The l...

  7. Mission Statement, Vision and Core Values

    • catalog.data.gov
    • s.cnmilf.com
    • +1more
    Updated Nov 12, 2020
    Cite
    Office of Inspector General (2020). Mission Statement, Vision and Core Values [Dataset]. https://catalog.data.gov/dataset/mission-statement-vision-and-core-values
    Dataset updated
    Nov 12, 2020
    Dataset provided by
    Office of Inspector General
    Description

    Mission statement, Vision and Core Values of the OIG. Includes links to the OIG FY 2015 Action Plan, OIG Status Report on NAPA Recommendations, 2012 and the OIG Organizational Assessment, National Academy of Public Administration (NAPA), 2009

  8. CollisionProfiles

    • visionzero.geohub.lacity.org
    • geohub.lacity.org
    • +1more
    Updated Aug 26, 2016
    + more versions
    Cite
    Los Angeles Department of Transportation (2016). CollisionProfiles [Dataset]. https://visionzero.geohub.lacity.org/datasets/ladot::collisionprofiles
    Dataset updated
    Aug 26, 2016
    Dataset authored and provided by
    Los Angeles Department of Transportation
    Area covered
    Description

    The Los Angeles Department of Transportation developed a transportation and health database that includes all collisions in the most recently available five-year period, as well as key environmental variables. These data, currently available on the City's GeoHub, will be continually updated as new information becomes available. The purpose for developing it was (1) to help the City identify a list of prioritized locations along the High Injury Network (HIN) for the development of safety projects and (2) to develop "countermeasure pairing," the process of identifying the physical design and engineering countermeasures that would most effectively address each "collision profile," a group of collisions with similar contributing factors. The Vision Zero Los Angeles initiative used hierarchical clustering to develop these LA-specific collision profiles, intersection profile counts, and collision intersection scores. The data found here are a result of that work and will be used in the development of our Action Plan. Please reference the Vision Zero GIS Data Dictionary.pdf for key field names and descriptions.
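Hierarchical clustering of collisions by contributing-factor vectors can be sketched as below. This is a toy single-linkage implementation on made-up indicator features; the feature encoding, linkage, and distance are illustrative only and not LADOT's actual methodology:

```python
# Toy agglomerative (hierarchical) clustering: merge the two closest
# clusters until the requested number of "collision profiles" remains.
def single_linkage(points, n_clusters):
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):  # single linkage: closest pair across two clusters
        return min(
            sum((points[i][k] - points[j][k]) ** 2 for k in range(len(points[i])))
            for i in a for j in b
        )

    while len(clusters) > n_clusters:
        x, y = min(
            ((p, q) for p in range(len(clusters)) for q in range(p + 1, len(clusters))),
            key=lambda pq: dist(clusters[pq[0]], clusters[pq[1]]),
        )
        clusters[x] += clusters.pop(y)
    return clusters

# Each collision as hypothetical (speeding, left_turn, dark_lighting) indicators.
collisions = [(1, 0, 1), (1, 0, 1), (1, 0, 0), (0, 1, 0), (0, 1, 1)]
profiles = single_linkage(collisions, n_clusters=2)
print(sorted(sorted(c) for c in profiles))  # [[0, 1, 2], [3, 4]]
```

Collisions 0-2 (speeding-related) and 3-4 (left-turn-related) land in separate profiles, which is the kind of grouping countermeasure pairing then acts on.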

  9. Patient Eye Prescription Dataset

    • kaggle.com
    zip
    Updated Feb 13, 2025
    Cite
    Mohamed.badr (2025). Patient Eye Prescription Dataset [Dataset]. https://www.kaggle.com/datasets/mohamedbadr222/eye-prescription-data-set/data
    Available download formats: zip (20147 bytes)
    Dataset updated
    Feb 13, 2025
    Authors
    Mohamed.badr
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    This dataset contains the eye prescription details of 1,000 patients, including their names, ages, and optical prescription parameters for both the right and left eyes. The dataset is designed for optometry research, machine learning applications in ophthalmology, and statistical analysis of vision-related data.

    Dataset Features:
    Name: The full name of the patient.
    Age: The age of the patient (ranging from young adults to elderly individuals).
    SPH (Spherical): Measures the lens power needed to correct nearsightedness (-) or farsightedness (+).
    CYL (Cylindrical): Measures the degree of astigmatism, if present.
    Axis: The orientation of astigmatism correction in degrees (0-180°).
    Right Eye (SPH, CYL, Axis): Prescription details for the right eye.
    Left Eye (SPH, CYL, Axis): Prescription details for the left eye.

    This dataset can be used to analyze trends in vision impairments, develop predictive models for vision correction, and study the distribution of refractive errors in a population.
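The listed fields support simple analyses like the one sketched here: classifying refractive error from the sign of SPH and counting eyes with astigmatism (non-zero CYL). The record layout below is a hypothetical stand-in; the real column names in the Kaggle CSV may differ:

```python
# Hypothetical patient records with the documented fields per eye.
patients = [
    {"name": "A", "age": 34, "right": {"SPH": -1.25, "CYL": -0.50, "Axis": 90},
                             "left":  {"SPH": -1.00, "CYL":  0.00, "Axis": 0}},
    {"name": "B", "age": 61, "right": {"SPH":  2.00, "CYL": -0.75, "Axis": 180},
                             "left":  {"SPH":  1.75, "CYL": -0.25, "Axis": 170}},
]

def classify(sph):
    """SPH < 0 corrects nearsightedness, SPH > 0 farsightedness."""
    if sph < 0: return "myopia"
    if sph > 0: return "hyperopia"
    return "none"

# Share of eyes with any astigmatism correction (non-zero CYL).
eyes = [p[side] for p in patients for side in ("right", "left")]
astig = sum(1 for e in eyes if e["CYL"] != 0) / len(eyes)
print(classify(patients[0]["right"]["SPH"]), astig)  # myopia 0.75
```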

  10. Utility values according to glaucoma subtype, better eye cup-to-disc ratio,...

    • plos.figshare.com
    xls
    Updated Jun 6, 2023
    Cite
    Seulggie Choi; Jin A. Choi; Jin Woo Kwon; Sang Min Park; Donghyun Jee (2023). Utility values according to glaucoma subtype, better eye cup-to-disc ratio, FDT score, and vision loss. [Dataset]. http://doi.org/10.1371/journal.pone.0197581.t003
    Available download formats: xls
    Dataset updated
    Jun 6, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Seulggie Choi; Jin A. Choi; Jin Woo Kwon; Sang Min Park; Donghyun Jee
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Utility values according to glaucoma subtype, better eye cup-to-disc ratio, FDT score, and vision loss.

  11. Vision Europe Retail & In-Store Sales Data | Austria, France, Germany,...

    • dataproducts.consumeredge.com
    + more versions
    Cite
    Consumer Edge, Vision Europe Retail & In-Store Sales Data | Austria, France, Germany, Italy, Spain, UK | 6.7M Accounts, 5K Merchants, 600 Companies [Dataset]. https://dataproducts.consumeredge.com/products/consumer-edge-vision-eur-aggregated-consumer-transaction-da-consumer-edge
    Dataset authored and provided by
    Consumer Edge
    Area covered
    Austria, France, United Kingdom, Italy, Spain, Germany
    Description

    CE Vision Europe is a merchant-attributable transaction data set tracking credit, debit, direct debit, and direct transfer consumer spend in Austria, France, Germany, Italy, Spain, and the UK. Track market share, customer insights, retail shopping patterns by demographics and geography, and market dynamics.

  12. Vision Llc Export Import Data | Eximpedia

    • eximpedia.app
    Updated Sep 13, 2025
    + more versions
    Cite
    (2025). Vision Llc Export Import Data | Eximpedia [Dataset]. https://www.eximpedia.app/companies/vision-llc/97361290
    Dataset updated
    Sep 13, 2025
    Description

    Vision Llc Export Import Data. Follow the Eximpedia platform for HS code, importer-exporter records, and customs shipment details.

  13. Vision for effective and streamlined reporting in the Pacific

    • kiribati-data.sprep.org
    • cookislands-data.sprep.org
    • +13more
    pdf
    Updated Feb 20, 2025
    Cite
    Secretariat of the Pacific Regional Environment Programme (2025). Vision for effective and streamlined reporting in the Pacific [Dataset]. https://kiribati-data.sprep.org/dataset/vision-effective-and-streamlined-reporting-pacific
    Available download formats: pdf (17713361)
    Dataset updated
    Feb 20, 2025
    Dataset provided by
    Pacific Regional Environment Programme (https://www.sprep.org/)
    License

    Public Domain Mark 1.0: https://creativecommons.org/publicdomain/mark/1.0/
    License information was derived automatically

    Area covered
    Pacific Region
    Description

    A workshop was jointly convened by the Pacific Islands Forum Secretariat (PIFS) and SPREP in March 2012 in Fiji to provide a vision for more effective and streamlined reporting in the Pacific region.

  14. Roush Import Data | Vision Wheels Inc

    • seair.co.in
    Updated Mar 24, 2024
    Cite
    Seair Exim Solutions (2024). Roush Import Data | Vision Wheels Inc [Dataset]. https://www.seair.co.in/us-import/product-roush/i-vision-wheels-inc.aspx
    Available download formats: .text/.csv/.xml/.xls/.bin
    Dataset updated
    Mar 24, 2024
    Dataset authored and provided by
    Seair Exim Solutions
    Description

    Explore detailed Roush import data of Vision Wheels Inc in the USA: product details, price, quantity, origin countries, and US ports.

  15. Spiral Vision Road Cross Street Data in Elizabethton, TN

    • ownerly.com
    Updated Apr 1, 2025
    Cite
    Ownerly (2025). Spiral Vision Road Cross Street Data in Elizabethton, TN [Dataset]. https://www.ownerly.com/tn/elizabethton/spiral-vision-rd-home-details?sort_by=market_total_value&sort=desc
    Dataset updated
    Apr 1, 2025
    Dataset authored and provided by
    Ownerly
    Area covered
    Elizabethton, Spiral Vision Road, Tennessee
    Description

    This dataset provides information about the number of properties, residents, and average property values for Spiral Vision Road cross streets in Elizabethton, TN.

  16. RAND Center for Population Health and Health Disparities (CPHHD) Data Core...

    • icpsr.umich.edu
    ascii, delimited, sas +2
    Updated May 13, 2011
    + more versions
    Cite
    Escarce, Jose J.; Lurie, Nicole; Jewell, Adria (2011). RAND Center for Population Health and Health Disparities (CPHHD) Data Core Series: Disability, 2000 [United States] [Dataset]. http://doi.org/10.3886/ICPSR27862.v1
    Available download formats: sas, spss, stata, delimited, ascii
    Dataset updated
    May 13, 2011
    Dataset provided by
    Inter-university Consortium for Political and Social Research (https://www.icpsr.umich.edu/web/pages/)
    Authors
    Escarce, Jose J.; Lurie, Nicole; Jewell, Adria
    License

    https://www.icpsr.umich.edu/web/ICPSR/studies/27862/terms

    Time period covered
    2000
    Area covered
    Alabama, Georgia, Oklahoma, Illinois, Ohio, Massachusetts, District of Columbia, Wisconsin, Virginia, New York
    Description

    The RAND Center for Population Health and Health Disparities (CPHHD) Data Core Series is composed of a wide selection of analytical measures, encompassing a variety of domains, all derived from a number of disparate data sources. The CPHHD Data Core's central focus is on geographic measures for census tracts, counties, and Metropolitan Statistical Areas (MSAs) from two distinct geo-reference points, 1990 and 2000. The current study, Disability, contains cross-sectional data from the year 2000. Based on the Decennial Census Special Table Series published by the Administration on Aging, this study contains a large number of disability measures categorized by age (55+), type of disability (sensory, learning, employment, and self-care), and poverty status.

  17. CIFAR10-DVS

    • search.datacite.org
    • figshare.com
    Updated May 22, 2017
    Cite
    Hongmin Li (2017). CIFAR10-DVS [Dataset]. http://doi.org/10.6084/m9.figshare.4724671
    Dataset updated
    May 22, 2017
    Dataset provided by
    DataCite (https://www.datacite.org/)
    Figshare (http://figshare.com/)
    Authors
    Hongmin Li
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This folder contains the neuromorphic vision dataset named 'CIFAR10-DVS', obtained by displaying the moving images of the CIFAR-10 dataset (http://www.cs.toronto.edu/~kriz/cifar.html) on an LCD monitor. The dataset is used for event-driven scene classification and pattern recognition. These recordings can be displayed using the jAER software (http://sourceforge.net/p/jaer/wiki/Home) with the DVS128 filter.
    The files "dat2mat.m" and "mat2dat.m" in (http://www2.imse-cnm.csic.es/caviar/MNIST_DVS/) can be used to transfer lists of events between the jAER format (.dat or .aedat) and MATLAB.
    Please cite the dataset if you intend to use it: Li H, Liu H, Ji X, Li G and Shi L (2017) CIFAR10-DVS: An Event-Stream Dataset for Object Classification. Front. Neurosci. 11:309. doi: 10.3389/fnins.2017.00309


    The high-sensitivity DVS used in the recordings is reported in: P. Lichtsteiner, C. Posch, and T. Delbruck, "A 128×128 120 dB 15 μs latency asynchronous temporal contrast vision sensor," IEEE J. Solid-State Circuits, vol. 43, no. 2, pp. 566–576, Feb. 2008.
    A single 128×128 pixel DVS sensor was placed in front of a 24" LCD monitor. Images of CIFAR-10 were upscaled to 512×512 through bicubic interpolation and displayed on the LCD monitor with circulating smooth movement. A total of 10,000 event-stream recordings in 10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck), with 1,000 recordings per class, were obtained.
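Each recording is a stream of address-event records. The sketch below round-trips one event under the ASSUMPTION of an 8-byte, big-endian [32-bit address, 32-bit timestamp] record, as in the jAER AEDAT 2.0 layout, with a made-up bit packing for x, y, and polarity; the actual DVS128 bit assignments should be checked against the jAER documentation:

```python
import struct

# ASSUMED layout: 8 bytes per event, big-endian address then timestamp,
# with polarity in bit 0 and x/y packed into higher address bits.
def pack_event(x, y, polarity, timestamp):
    addr = (y << 8) | (x << 1) | polarity   # illustrative bit packing
    return struct.pack(">II", addr, timestamp)

def unpack_event(buf):
    addr, timestamp = struct.unpack(">II", buf)
    return (addr >> 1) & 0x7F, (addr >> 8) & 0x7F, addr & 1, timestamp

raw = pack_event(x=42, y=100, polarity=1, timestamp=123456)
print(unpack_event(raw))  # (42, 100, 1, 123456)
```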

  18. A GeoHealth vision of RAPID data and networks that keep working - even if...

    • hydroshare.org
    • beta.hydroshare.org
    zip
    Updated Feb 28, 2023
    Cite
    Christina Norton; Chris Lenhardt; Jill Falman; Elaine Faustman (2023). A GeoHealth vision of RAPID data and networks that keep working - even if the lights go out [Dataset]. https://www.hydroshare.org/resource/03e4b517ac1b413185790eac99208e12
    Available download formats: zip (27.6 MB)
    Dataset updated
    Feb 28, 2023
    Dataset provided by
    HydroShare
    Authors
    Christina Norton; Chris Lenhardt; Jill Falman; Elaine Faustman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Sep 25, 2017 - Feb 16, 2023
    Area covered
    Description

    Panel presentation on February 17, 2023, in San Juan, Puerto Rico, at the 14th CECIA-IAUPR Biennial Symposium on Potable Water Issues in Puerto Rico: Science, Technology and Regulation.

    Presenters:
    Dr. Christina Norton, University of Washington
    Christopher Lenhardt, RENCI, University of North Carolina at Chapel Hill
    Dr. Elaine Faustman, University of Washington
    Jill Falman, University of Washington

  19. City of Seattle

    • data.wu.ac.at
    csv, json, rdf, xml
    Updated Mar 13, 2018
    + more versions
    Cite
    City of Seattle (2018). City of Seattle [Dataset]. https://data.wu.ac.at/schema/data_gov/ZjliMTUxNzEtZmY1Ny00NzY3LWFjMWEtZWU3M2Q3YTk0MzJk
    Available download formats: xml, csv, rdf, json
    Dataset updated
    Mar 13, 2018
    Dataset provided by
    City of Seattle
    Area covered
    Seattle
    Description

    This dataset will be moving! The City is working on a new Open Data Portal for GIS data. This dataset will soon be available at https://data-seattlecitygis.opendata.arcgis.com/. We apologize for any inconvenience, but this new platform will allow us to regularly update our data and provide better tools for our spatial data. https://gisrevprxy.seattle.gov/arcgis/rest/services/SDOT_EXT/DSG_datasharing/MapServer/68

  20. Tract

    • hub.arcgis.com
    Updated Nov 16, 2020
    Cite
    Esri (2020). Tract [Dataset]. https://hub.arcgis.com/datasets/esri::tract-9?uiVersion=content-views
    Dataset updated
    Nov 16, 2020
    Dataset authored and provided by
    Esri (http://esri.com/)
    Area covered
    Description

    This layer shows six different types of disability. This is shown by tract, county, and state boundaries. This service is updated annually to contain the most currently released American Community Survey (ACS) 5-year data, and contains estimates and margins of error. There are also additional calculated attributes related to this topic, which can be mapped or used within analysis. This layer is symbolized to show the percent of population with a disability. To see the full list of attributes available in this service, go to the "Data" tab, and choose "Fields" at the top right.

    Current Vintage: 2019-2023
    ACS Table(s): B18101, B18102, B18103, B18104, B18105, B18106, B18107, C18108 (Not all lines of these ACS tables are available in this feature layer.)
    Data downloaded from: Census Bureau's API for American Community Survey
    Date of API call: December 12, 2024
    National Figures: data.census.gov
    The United States Census Bureau's American Community Survey (ACS): About the Survey, Geography & ACS, Technical Documentation, News & Updates

    This ready-to-use layer can be used within ArcGIS Pro, ArcGIS Online, its configurable apps, dashboards, Story Maps, custom apps, and mobile apps. Data can also be exported for offline workflows. For more information about ACS layers, visit the FAQ. Please cite the Census and ACS when using this data.

    Data Note from the Census:
    Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is represented through the use of a margin of error. The value shown here is the 90 percent margin of error. The margin of error can be interpreted as providing a 90 percent probability that the interval defined by the estimate minus the margin of error and the estimate plus the margin of error (the lower and upper confidence bounds) contains the true value. In addition to sampling variability, the ACS estimates are subject to nonsampling error (for a discussion of nonsampling variability, see Accuracy of the Data). The effect of nonsampling error is not represented in these tables.

    Data Processing Notes:
    This layer is updated automatically when the most current vintage of ACS data is released each year, usually in December. The layer always contains the latest available ACS 5-year estimates. It is updated annually within days of the Census Bureau's release schedule. Click here to learn more about ACS data releases.
    Boundaries come from the US Census TIGER geodatabases, specifically, the National Sub-State Geography Database (named tlgdb_(year)_a_us_substategeo.gdb). Boundaries are updated at the same time as the data updates (annually), and the boundary vintage appropriately matches the data vintage as specified by the Census. These are Census boundaries with water and/or coastlines erased for cartographic and mapping purposes. For census tracts, the water cutouts are derived from a subset of the 2020 Areal Hydrography boundaries offered by TIGER. Water bodies and rivers which are 50 million square meters or larger (mid to large sized water bodies) are erased from the tract level boundaries, as well as additional important features. For state and county boundaries, the water and coastlines are derived from the coastlines of the 2023 500k TIGER Cartographic Boundary Shapefiles. These are erased to more accurately portray the coastlines and Great Lakes. The original AWATER and ALAND fields are still available as attributes within the data table (units are square meters).
    The States layer contains 52 records: all US states, Washington D.C., and Puerto Rico.
    Census tracts with no population that occur in areas of water, such as oceans, are removed from this data service (census tracts beginning with 99).
    Percentages and derived counts, and associated margins of error, are calculated values (identifiable by the "_calc_" stub in the field name), and abide by the specifications defined by the American Community Survey.
    Field alias names were created based on the Table Shells file available from the American Community Survey Summary File Documentation page.
    Negative values (e.g., -4444...) have been set to null, with the exception of -5555..., which has been set to zero. These negative values exist in the raw API data to indicate the following situations:
    • The margin of error column indicates that either no sample observations or too few sample observations were available to compute a standard error and thus the margin of error. A statistical test is not appropriate.
    • Either no sample observations or too few sample observations were available to compute an estimate, or a ratio of medians cannot be calculated because one or both of the median estimates falls in the lowest interval or upper interval of an open-ended distribution.
    • The median falls in the lowest interval of an open-ended distribution, or in the upper interval of an open-ended distribution. A statistical test is not appropriate.
    • The estimate is controlled. A statistical test for sampling variability is not appropriate.
    • The data for this geographic area cannot be displayed because the number of sample cases is too small.
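The sentinel handling described in the processing notes (negative codes mapped to null, except the -5555... family mapped to zero) can be sketched as below. The notes abbreviate the full sentinel codes as "-4444..." and "-5555...", so the constants here are illustrative stand-ins, not the exact raw API values:

```python
# Illustrative stand-ins for the abbreviated Census sentinel families.
NULL_SENTINELS = {-444444444}   # "-4444..." family: set to null
ZERO_SENTINELS = {-555555555}   # "-5555..." family: set to zero

def clean(value):
    if value in NULL_SENTINELS:
        return None
    if value in ZERO_SENTINELS:
        return 0
    return value

print([clean(v) for v in [1250, -444444444, -555555555]])  # [1250, None, 0]
```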
