Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Unity Eyes is a dataset for object detection tasks. It contains "Eyes" annotations for 9,000 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
MIT License: https://spdx.org/licenses/MIT.html
The file contains a Unity project for testing the desktop-based visualization techniques introduced in the paper "Been there, Seen that: Visualization of Movement and 3D Eye Tracking Data from Real-World Environments". It allows you to analyze 3D gaze and movement data sets recorded with the HoloLens 2, and provides a gaze replay visualization linked to a space-time cube (STC) visualization to show an overview of the behavioral data and inspect important events in more detail.

The project includes a folder called Assets, which contains the necessary scripts and data, and can be opened with Unity; we recommend Unity version 2020.3.24f. Drag the scenes GazeReplay and STC into the hierarchy window, then unload the GazeReplay scene. Afterward, the visualization can be viewed and tested within the game view by hitting the play button.

```
.
└── MyScripts
    └── General
        ├── ButtonFunctionalities   # the code for UI elements
        ├── ReadData                # the code for loading the data
        ├── Trajectory              # visualizes movement within the space-time cube (STC)
        ├── StackedHeatMap          # visualizes the cube heatmap within the STC
        ├── HeatmapWall             # visualizes the heatmap within the gaze replay
        └── ReplayManager_General   # visualizes participants within the gaze replay
└── Resources
    └── CSVFiles
        ├── AnchorFile              # contains the files needed to transform the data into one coordinate system
        └── GazeData                # contains the recorded gaze data of the participants
└── Scenes
    ├── GazeReplay                  # scene for the gaze replay
    └── STC                        # scene for the STC
```

Please check the GitHub page for the latest version.
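As a rough illustration of what the AnchorFile step accomplishes (the form of the anchor transform here is an assumption for this sketch; the actual project performs this inside its Unity scripts), bringing per-participant samples into one shared coordinate system amounts to applying a rigid transform:

```python
import numpy as np

def to_common_frame(points, rotation, translation):
    """Map Nx3 position/gaze samples from a participant's local frame
    into a shared coordinate system via a rigid transform (R, t)."""
    points = np.asarray(points, dtype=float)
    R = np.asarray(rotation, dtype=float)
    t = np.asarray(translation, dtype=float)
    # Row-vector convention: p_shared = R @ p_local + t
    return points @ R.T + t

# Example: a 90-degree rotation about the y axis plus an offset
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0]])
t = np.array([1.0, 0.0, 0.0])
local = np.array([[1.0, 0.0, 0.0]])
shared = to_common_frame(local, R, t)
```

In practice the rotation and translation would be estimated from the anchor points shared across recordings; this sketch only shows how they are applied.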
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This Unity project was used to conduct a user study in VR investigating the link between the cognitive load induced by route instruction types and building configuration during indoor route guidance. The project contains a 3D model of a fictive building, 10 scenes for 10 different routes through this building, 3 types of route instructions for these routes, and code to conduct the experiment and to track participants' eye movements and location during the experiment.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CEAP-360VR: A Continuous Physiological and Behavioral Emotion Annotation Dataset for 360° Videos
## General Information
We developed the CEAP-360VR dataset to address the lack of continuously annotated behavioral and physiological datasets for 360° video VR affective computing. Accordingly, this dataset contains a) questionnaires (SSQ, IPQ, NASA-TLX); b) continuous valence-arousal annotations; c) head and eye movements as well as left and right eye pupil diameters recorded while watching the videos; d) peripheral physiological responses (ACC, EDA, SKT, BVP, HR, IBI). The dataset also includes the data pre-processing and validation scripts, along with a dataset description covering the key steps of data acquisition and pre-processing.
## Dataset Structure
The CEAP-360VR folder contains the following six subfolders:
1_Stimuli
2_QuestionnaireData
3_AnnotationData
4_BehaviorData
5_PhysioData
6_Scripts
The following is a detailed description of each subfolder:
1_Stimuli
- VideoThumbNails
contains a thumbnail for each of the eight videos (.jpg)
- VideoInfo.json
contains detailed information about the eight videos
2_QuestionnaireData
- PXX_Questionnaire_Data.json (X = 1, 2, ..., 32)
contains questionnaire data for each participant
3_AnnotationData
- Raw
contains the raw annotation data captured from the Joy-Con joystick for each participant
- Transformed
contains the transformed valence-arousal data generated from the raw data for each participant
- Frame
contains the re-sampled annotation data from the transformed data for each participant
4_BehaviorData
- Raw
contains the raw behavior data captured from the HTC VIVE Pro Eye Tobii Device for each participant
- Transformed
contains the transformed head/eye movement data (pitch/yaw) generated from the raw data, as well as pupil diameter data for each participant
- Frame
contains the re-sampled behavior data generated from the transformed data for each participant
- HM_ScanPath
contains the head scanpath data generated from the transformed data for each participant
- EM_Fixation
contains the eye gaze fixation data generated from the transformed data for each participant
5_PhysioData
- Raw
contains the raw physiological data captured from the Empatica E4 wristband for each participant
- Transformed
contains the transformed physiological data generated from the raw data for each participant
- Frame
contains the re-sampled physiological data from the transformed data for each participant
6_Scripts
- Unity Project
contains the complete project of our user-controlled experiment (Unity 2018.4.1f1, HTC VIVE Pro Eye HMD)
- Data Processed
contains scripts that undertake the pre-processing steps for converting the raw data into the data in the transformed and frame folders.
contains scripts for continuous annotation, behavior, and physiological data analysis and visualization.
- CEAP-360VR_Baseline
contains scripts to generate processed behavioral and physiological data with V-A labels for deep learning experiments and features for machine learning experiments.
contains scripts to run ML and DL experiments under both subject-dependent and subject-independent models.
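For illustration, the frame-level re-sampling that produces the Frame folders above can be sketched as follows (the 30 Hz target rate, the use of linear interpolation, and the data layout are assumptions for this example, not the dataset's documented procedure):

```python
import numpy as np

def resample_to_frames(timestamps, values, frame_rate=30.0):
    """Linearly interpolate an irregularly sampled signal onto a
    uniform frame grid. Timestamps are in seconds, frame_rate in Hz."""
    timestamps = np.asarray(timestamps, dtype=float)
    values = np.asarray(values, dtype=float)
    # Uniform frame times spanning the recording
    frame_times = np.arange(timestamps[0], timestamps[-1], 1.0 / frame_rate)
    frame_values = np.interp(frame_times, timestamps, values)
    return frame_times, frame_values

# Example: an unevenly sampled signal (here value = 10 * t) resampled to 30 Hz
t = [0.0, 0.05, 0.21, 0.4, 0.62, 1.0]
v = [0.0, 0.5, 2.1, 4.0, 6.2, 10.0]
ft, fv = resample_to_frames(t, v, frame_rate=30.0)
```

The same idea applies per channel to annotation, behavior, and physiological signals, aligning them onto a common frame grid for analysis.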
## Dataset Description
The CEAP-360VR Dataset [Description.pdf](https://github.com/cwi-dis/CEAP-360VR-Dataset/blob/master/CEAP-Dataset%20Description.pdf) introduces the dataset description and key steps in the stage of data acquisition and pre-processing.
## Dataset License
The CEAP-360VR dataset is licensed under a [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license](https://creativecommons.org/licenses/by-nc/4.0/).
## Citation
Please cite our paper in any published work that uses this dataset as follows:
- Plain Text
T. Xue, A. El Ali, T. Zhang, G. Ding, and P. Cesar, "CEAP-360VR: A Continuous Physiological and Behavioral Emotion Annotation Dataset for 360° Videos," in IEEE Transactions on Multimedia, doi: 10.1109/TMM.2021.3124080.
- BibTex
@ARTICLE{Xue2021CEAP-360VR,
author={Xue, Tong and Ali, Abdallah El and Zhang, Tianyi and Ding, Gangyi and Cesar, Pablo},
journal={IEEE Transactions on Multimedia},
title={CEAP-360VR: A Continuous Physiological and Behavioral Emotion Annotation Dataset for 360° Videos},
year={2021},
volume={},
number={},
pages={1-1},
doi={10.1109/TMM.2021.3124080}}
## Usage
1. We have performed the time alignment of the different types of data and videos for each participant, and we provide the processing scripts that can be used to generate both the transformed and frame data. Researchers can run their analysis methods on these.
2. Researchers who want to try other data processing methods can directly use the raw data.
## About
The CEAP-360VR Dataset is maintained by the Key Laboratory of Digital Performance and Simulation Technology at Beijing Institute of Technology and the Distributed & Interactive Systems (DIS) research group at Centrum Wiskunde & Informatica.
Contact the authors:
- Tong Xue: xuetong@bit.edu.cn, xue.tong@cwi.nl
- Abdallah El Ali: abdallah.el.ali@cwi.nl
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the supplementary material to the paper "Unveiling Variations: A Comparative Study of VR Headsets Regarding Eye Tracking Volume, Gaze Accuracy, and Precision".
Functions for converting between Fick angles, 3D vectors, and visual angles are authored by Per Baekgaard, available at https://github.com/baekgaard/fickpy
In detail:
- Dataset
- The Unity application:
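As background for the angle conversions mentioned above, a minimal sketch of mapping a 3D gaze direction to Fick-style azimuth/elevation angles and back (the axis convention of +z forward, +y up, +x right is an assumption for this illustration; the code is not taken from the fickpy implementation):

```python
import math

def vector_to_fick(x, y, z):
    """Convert a 3D gaze direction to Fick angles (azimuth, elevation)
    in degrees. Assumed convention: +z forward, +y up, +x right."""
    azimuth = math.degrees(math.atan2(x, z))                    # horizontal rotation
    elevation = math.degrees(math.atan2(y, math.hypot(x, z)))   # vertical rotation
    return azimuth, elevation

def fick_to_vector(azimuth, elevation):
    """Inverse mapping: Fick angles (degrees) back to a unit 3D vector."""
    az, el = math.radians(azimuth), math.radians(elevation)
    x = math.cos(el) * math.sin(az)
    y = math.sin(el)
    z = math.cos(el) * math.cos(az)
    return x, y, z
```

For example, a gaze straight ahead (0, 0, 1) maps to azimuth 0° and elevation 0°, and the two functions round-trip a direction up to floating-point error.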
The global computer progressive lenses market is projected to reach USD 23.5 billion by 2033, exhibiting a CAGR of 7.2% during the forecast period 2025-2033. The market is driven by the rising prevalence of presbyopia, increasing adoption of advanced vision correction procedures, and the growing popularity of online eyewear retailers. Rising awareness about eye care and the increasing availability of progressive lenses in various designs and materials are also contributing to market growth.

North America and Europe are expected to be the largest markets for computer progressive lenses due to the high prevalence of presbyopia and well-developed healthcare systems. Asia-Pacific is expected to witness significant growth over the forecast period owing to increasing disposable income, a growing population, and rising awareness about eye care.

Key players in the computer progressive lenses market include Essilor, Nikon, Zeiss, Seiko, Shamir, Rodenstock, HOYA, Kodak, Specsavers, Caledonian Optical, Unity Lenses, Conant, VISION-EASE LENS, and Wanxin Lens. These companies are focusing on product innovation, strategic partnerships, and geographical expansion to maintain their position in the market.

Computer progressive lenses are a type of corrective eyewear that provides clear vision at all distances, making them ideal for people who spend a lot of time working on computers or other electronic devices. An earlier estimate put the market at USD 10.2 billion by 2028, growing at a CAGR of 4.5% from 2021 to 2028.