6 datasets found
  1. Unity Eyes Dataset

    • universe.roboflow.com
    zip
    Updated Oct 31, 2021
    Cite
    Unity (2021). Unity Eyes Dataset [Dataset]. https://universe.roboflow.com/unity-mm07d/unity-eyes
    Available download formats: zip
    Dataset updated
    Oct 31, 2021
    Dataset authored and provided by
    Unity
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Eyes Bounding Boxes
    Description

    Unity Eyes

    ## Overview
    
    Unity Eyes is a dataset for object detection tasks - it contains Eyes annotations for 9,000 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
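
    For example, a minimal download sketch using the `roboflow` pip package. The workspace and project slugs come from the dataset URL in the citation above; the API key, version number, and export format are placeholders/assumptions:

    ```python
    from roboflow import Roboflow

    rf = Roboflow(api_key="YOUR_API_KEY")  # placeholder API key
    # Workspace and project slugs taken from the dataset URL above
    project = rf.workspace("unity-mm07d").project("unity-eyes")
    # Version 1 and the COCO export format are assumptions; adjust as needed
    dataset = project.version(1).download("coco")
    print(dataset.location)  # local folder with images and annotations
    ```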
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  2. Replication Data for: Been There, Seen That: Visualization of Movement and 3D Eye Tracking Data from Real-World Environments

    • darus.uni-stuttgart.de
    Updated Jul 11, 2023
    Cite
    Nelusa Pathmanathan; Seyda Öney; Michael Becher; Michael Sedlmair; Daniel Weiskopf; Kuno Kurzhals (2023). Replication Data for: Been There, Seen That: Visualization of Movement and 3D Eye Tracking Data from Real-World Environments [Dataset]. http://doi.org/10.18419/DARUS-3383
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Jul 11, 2023
    Dataset provided by
    DaRUS
    Authors
    Nelusa Pathmanathan; Seyda Öney; Michael Becher; Michael Sedlmair; Daniel Weiskopf; Kuno Kurzhals
    License

    MIT: https://spdx.org/licenses/MIT.html

    Dataset funded by
    DFG
    Description

    The file contains a Unity project for testing the desktop-based visualization techniques introduced in the paper "Been There, Seen That: Visualization of Movement and 3D Eye Tracking Data from Real-World Environments". It allows you to analyze 3D gaze and movement data sets recorded using the HoloLens 2. It provides a gaze replay visualization, which is linked to a space-time cube visualization to show an overview of the behavioral data and to inspect important events in more detail. The project includes a folder called Assets, which contains the necessary scripts and data. The project can be opened using Unity; we recommend Unity version 2020.3.24f. The scenes GazeReplay and STC need to be dragged into the hierarchy window, and the GazeReplay scene must be unloaded. Afterward, the visualization can be viewed and tested within the game view by hitting the play button.

    .
    ├── MyScripts
    │   └── General
    │       ├── ButtonFunctionalities    # the code for UI elements
    │       ├── ReadData                 # the code for loading the data
    │       ├── Trajectory               # visualizes movement within the space-time cube (STC)
    │       ├── StackedHeatMap           # visualizes the cube heatmap within the STC
    │       ├── HeatmapWall              # visualizes the heatmap within the gaze replay
    │       └── ReplayManager_General    # visualizes participants within the gaze replay
    ├── Resources
    │   └── CSVFiles
    │       ├── AnchorFile               # contains the files needed to transform the data into one coordinate system
    │       └── GazeData                 # contains the recorded gaze data of the participants
    └── Scenes
        ├── GazeReplay                   # Scene for gaze replay
        └── STC                          # Scene for STC

    Please check the GitHub page for the latest version.
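
    A minimal exploration sketch in Python (our choice of language; the project itself is a C#/Unity codebase) for inspecting the recorded gaze CSVs before opening the scenes. The path follows the tree above, relative to the project root, and is an assumption; the column schema is not documented here, so the sketch only reports it:

    ```python
    from pathlib import Path
    import pandas as pd

    # Assumed location, following the Assets folder layout above
    gaze_dir = Path("Assets/Resources/CSVFiles/GazeData")

    for csv_file in sorted(gaze_dir.glob("*.csv")):
        df = pd.read_csv(csv_file)
        # Report shape and column names to discover the recording schema
        print(csv_file.name, df.shape, list(df.columns))
    ```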

  3. Unity project for VR user study on adaptive mobile indoor route guidance

    • zenodo.org
    zip
    Updated Apr 21, 2022
    Cite
    Laure De Cock (2022). Unity project for VR user study on adaptive mobile indoor route guidance [Dataset]. http://doi.org/10.1080/13658816.2022.2032080
    Available download formats: zip
    Dataset updated
    Apr 21, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Laure De Cock
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This Unity project was used to conduct a VR user study investigating the link between the cognitive load induced by route instruction types and building configuration during indoor route guidance. The project contains a 3D model of a fictional building, 10 scenes for 10 different routes through this building, 3 types of route instructions for these 10 routes, and code to conduct the experiment and to track participants' eye movements and location during the experiment.

  4. CEAP-360VR: A Continuous Physiological and Behavioral Emotion Annotation Dataset for 360 VR Videos

    • zenodo.org
    zip
    Updated Feb 23, 2022
    Cite
    Tong Xue; Abdallah El Ali; Tianyi Zhang; Gangyi Ding; Pablo Cesar (2022). CEAP-360VR: A Continuous Physiological and Behavioral Emotion Annotation Dataset for 360 VR Videos [Dataset]. http://doi.org/10.5281/zenodo.6136884
    Available download formats: zip
    Dataset updated
    Feb 23, 2022
    Dataset provided by
    Zenodo
    Authors
    Tong Xue; Abdallah El Ali; Tianyi Zhang; Gangyi Ding; Pablo Cesar
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    CEAP-360VR: A Continuous Physiological and Behavioral Emotion Annotation Dataset for 360° Videos


    ## General Information
    We develop the CEAP-360VR dataset to address the lack of continuously annotated behavioral and physiological datasets for 360° video VR affective computing. Accordingly, this dataset contains a) questionnaires (SSQ, IPQ, NASA-TLX); b) continuous valence-arousal annotations; c) head and eye movements, as well as left and right eye pupil diameters, recorded while watching videos; d) peripheral physiological responses (ACC, EDA, SKT, BVP, HR, IBI). Our dataset also includes the data pre-processing and data validation scripts, along with a dataset description and the key steps of data acquisition and pre-processing.


    ## Dataset Structure
    The CEAP-360VR folder contains the following six subfolders:

    1_Stimuli
    2_QuestionnaireData
    3_AnnotationData
    4_BehaviorData
    5_PhysioData
    6_Scripts
    The following is a detailed description of each subfolder:

    1_Stimuli

    - VideoThumbNails
    contains the thumbnails for each of the eight videos (.jpg)
    - VideoInfo.json
    contains the detailed information for eight videos

    2_QuestionnaireData

    - PXX_Questionnaire_Data.json (X = 1, 2, ..., 32)
    contains questionnaire data for each participant

    3_AnnotationData

    - Raw
    contains the raw annotation data captured from the Joy-Con joystick for each participant
    - Transformed
    contains the transformed valence-arousal data generated from the raw data for each participant
    - Frame
    contains the re-sampled annotation data from the transformed data for each participant

    4_BehaviorData

    - Raw
    contains the raw behavior data captured from the HTC VIVE Pro Eye Tobii Device for each participant
    - Transformed
    contains the transformed head/eye movement data (pitch/yaw) generated from the raw data, as well as pupil diameter data for each participant
    - Frame
    contains the re-sampled behavior data generated from the transformed data for each participant
    - HM_ScanPath
    contains the head scanpath data generated from the transformed data for each participant
    - EM_Fixation
    contains the eye gaze fixation data generated from the transformed data for each participant

    5_PhysioData

    - Raw
    contains the raw physiological data captured from the Empatica E4 wristband for each participant
    - Transformed
    contains the transformed physiological data generated from the raw data for each participant
    - Frame
    contains the re-sampled physiological data from the transformed data for each participant

    6_Scripts

    - Unity Project
    contains the complete project of our user-controlled experiment (Unity 2018.4.1f1, HTC VIVE Pro Eye HMD)
    - Data Processed
    contains scripts that perform the pre-processing steps for converting the raw data into the transformed/frame data in the Transformed and Frame folders.
    contains scripts for continuous annotation, behavior, and physiological data analysis and visualization.
    - CEAP-360VR_Baseline
    contains scripts to generate processed behavioral and physiological data with V-A labels for deep learning experiments and features for machine learning experiments.
    contains scripts to run ML and DL experiments under both subject-dependent and subject-independent models.


    ## Dataset Description
    The CEAP-360VR [Dataset Description.pdf](https://github.com/cwi-dis/CEAP-360VR-Dataset/blob/master/CEAP-Dataset%20Description.pdf) provides the dataset description and the key steps of data acquisition and pre-processing.


    ## Dataset License
    The CEAP-360VR dataset is licensed under a [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license](https://creativecommons.org/licenses/by-nc/4.0/).

    ## Citation

    Please cite our paper in any published work that uses this dataset as follows:
    - Plain Text
    T. Xue, A. El Ali, T. Zhang, G. Ding, and P. Cesar, "CEAP-360VR: A Continuous Physiological and Behavioral Emotion Annotation Dataset for 360° Videos," in IEEE Transactions on Multimedia, doi: 10.1109/TMM.2021.3124080.

    - BibTex
    @ARTICLE{Xue2021CEAP-360VR,
      author={Xue, Tong and Ali, Abdallah El and Zhang, Tianyi and Ding, Gangyi and Cesar, Pablo},
      journal={IEEE Transactions on Multimedia},
      title={CEAP-360VR: A Continuous Physiological and Behavioral Emotion Annotation Dataset for 360° Videos},
      year={2021},
      volume={},
      number={},
      pages={1-1},
      doi={10.1109/TMM.2021.3124080}}


    ## Usage

    1. We have performed the time alignment of the different types of data and
    videos for each participant, and we provide the processing scripts that
    can be used to generate both the transformed and frame data.
    Researchers can run their analysis methods on them.

    2. Researchers who want to try other data processing methods can directly use the raw data.
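
    A minimal loading sketch in Python (our choice of language; the dataset itself does not prescribe one), using the folder layout and file naming described above. The extraction path and zero-padded participant ID are assumptions, and the JSON schema is not documented in this excerpt, so the sketch only reports the top-level structure:

    ```python
    import json
    from pathlib import Path

    root = Path("CEAP-360VR")  # assumed local extraction folder

    # PXX_Questionnaire_Data.json per the structure above; zero padding assumed
    qfile = root / "2_QuestionnaireData" / "P01_Questionnaire_Data.json"
    with open(qfile) as f:
        questionnaire = json.load(f)

    # Report the top-level structure to discover the schema
    if isinstance(questionnaire, dict):
        print(list(questionnaire.keys()))
    else:
        print(type(questionnaire), len(questionnaire))
    ```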


    ## About
    The CEAP-360VR dataset is maintained by the Key Laboratory of Digital Performance and Simulation Technology at the Beijing Institute of Technology and the Distributed & Interactive Systems (DIS) research group at Centrum Wiskunde & Informatica.

    Contact the authors
    - Tong Xue: xuetong@bit.edu.cn, xue.tong@cwi.nl
    - Abdallah El Ali: abdallah.el.ali@cwi.nl

  5. Data from: Unveiling Variations: A Comparative Study of VR Headsets Regarding Eye Tracking Volume, Gaze Accuracy, and Precision

    • zenodo.org
    bin, zip
    Updated Apr 24, 2024
    Cite
    Baosheng James HOU; Yasmeen Abdrabou; Florian Weidner; Hans Gellersen (2024). Unveiling Variations: A Comparative Study of VR Headsets Regarding Eye Tracking Volume, Gaze Accuracy, and Precision [Dataset]. http://doi.org/10.5281/zenodo.10498170
    Available download formats: zip, bin
    Dataset updated
    Apr 24, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Baosheng James HOU; Yasmeen Abdrabou; Florian Weidner; Hans Gellersen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Mar 16, 2024
    Description

    This repository contains the supplementary material to the paper "Unveiling Variations: A Comparative Study of VR Headsets Regarding Eye Tracking Volume, Gaze Accuracy, and Precision".

    Functions for converting between Fick angles, 3D vectors, and visual angles are authored by Per Baekgaard and available at https://github.com/baekgaard/fickpy.
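
    For orientation, a hedged sketch of the Fick-angle/vector conversion (not taken from fickpy; see the repository above for the authors' actual functions). It assumes a right-handed frame with +z forward, +y up, +x right; sign and axis conventions vary between tools:

    ```python
    import math

    def vector_to_fick(x: float, y: float, z: float) -> tuple[float, float]:
        """Gaze direction vector -> Fick angles (horizontal, vertical) in degrees."""
        horizontal = math.degrees(math.atan2(x, z))                # about the vertical axis
        vertical = math.degrees(math.atan2(y, math.hypot(x, z)))   # nested horizontal axis
        return horizontal, vertical

    def fick_to_vector(h_deg: float, v_deg: float) -> tuple[float, float, float]:
        """Fick angles (degrees) -> unit gaze direction vector."""
        h, v = math.radians(h_deg), math.radians(v_deg)
        return math.cos(v) * math.sin(h), math.sin(v), math.cos(v) * math.cos(h)

    # Round trip: a gaze direction 10 degrees right and 5 degrees up of straight ahead
    print(vector_to_fick(*fick_to_vector(10.0, 5.0)))  # ~ (10.0, 5.0) up to float error
    ```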

    In detail:

    - Dataset
      • Analysis scripts
    - The Unity application:
      • testHTCTobiiPro (the TobiiPro licence is not included in the upload)
      • testViveProEye_sranipal
      • testAndroid
        • scene: eval_viveFocus3 (when building the apk, use only Wave as the XR provider)
        • scene: eval_metaQuestPro (Meta Quest Pro, standalone)
        • scene: eval_metaQuestPro_pcVr (Meta Quest Pro, tethered)
  6. Computer Progressive Lenses Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Jan 24, 2025
    Cite
    Data Insights Market (2025). Computer Progressive Lenses Report [Dataset]. https://www.datainsightsmarket.com/reports/computer-progressive-lenses-421930
    Available download formats: doc, ppt, pdf
    Dataset updated
    Jan 24, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global computer progressive lenses market is projected to reach USD 23.5 billion by 2033, exhibiting a CAGR of 7.2% during the forecast period 2025-2033. The market is driven by the rising prevalence of presbyopia, the increasing adoption of advanced vision correction procedures, and the growing popularity of online eyewear retailers. Rising awareness of eye care and the increasing availability of progressive lenses in various designs and materials also contribute to market growth.

    North America and Europe are expected to be the largest markets for computer progressive lenses due to the high prevalence of presbyopia and well-developed healthcare systems. Asia-Pacific is expected to witness significant growth over the forecast period owing to increasing disposable income, a growing population, and rising awareness of eye care.

    Key players in the computer progressive lenses market include Essilor, Nikon, Zeiss, Seiko, Shamir, Rodenstock, HOYA, Kodak, Specsavers, Caledonian Optical, Unity Lenses, Conant, VISION-EASE LENS, and Wanxin Lens. These companies are focusing on product innovation, strategic partnerships, and geographical expansion to maintain their position in the market.

    Computer progressive lenses are a type of corrective eyewear that provides clear vision at all distances, making them ideal for people who spend a lot of time working on computers or other electronic devices. By a separate 2021-2028 forecast, the global computer progressive lenses market is expected to reach USD 10.2 billion by 2028, growing at a CAGR of 4.5%. [Website Link]
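
    As a quick check of the quoted arithmetic (illustrative only; the 2025 base-year size is implied by the report excerpt, not stated in it):

    ```python
    def implied_base(future_value: float, cagr: float, years: int) -> float:
        """Back out the base-year value from FV = PV * (1 + r) ** n."""
        return future_value / (1 + cagr) ** years

    # USD 23.5 billion by 2033 at a 7.2% CAGR over 2025-2033 (8 years)
    print(round(implied_base(23.5, 0.072, 8), 1))  # ~13.5, i.e. an implied 2025 base of USD 13.5 billion
    ```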

