100+ datasets found
  1. Data from: ETFP (Eye-Tracking and Fixation Points)

    • ieee-dataport.org
    Updated Mar 19, 2021
    Cite
    Alessandro Bruno (2021). ETFP (Eye-Tracking and Fixation Points) [Dataset]. http://doi.org/10.21227/0d1h-vb68
    Explore at:
    Dataset updated
    Mar 19, 2021
    Dataset provided by
    IEEE Dataport
    Authors
    Alessandro Bruno
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ETFP (Eye-Tracking and Fixation Points) consists of two eye-tracking datasets: EToCVD (Eye-Tracking of Colour Vision Deficiencies) and ETTO (Eye-Tracking Through Objects). The former is a collection of images, their corresponding eye-movement coordinates and fixation point maps, obtained from two cohorts: people with and without CVD (Colour Vision Deficiencies). The latter collects images with just one object lying on a homogeneous background, together with the corresponding eye-movement coordinates and fixation point maps gathered during eye-tracking sessions. The primary purposes of the two datasets are to study and analyse, respectively, colour blindness and object attention. A brief description of the experimental sessions and settings for both EToCVD and ETTO is given below.

    EToCVD: The experimental sessions for EToCVD involved eight subjects with fully efficient colour vision and eight participants with a colour-deficient vision system. More precisely, three subjects were affected by deuteranopia, while the other five were affected by protanopia. We conducted two experimental eye-tracking sessions: the first focused on detecting how different the fixation points are between the two cohorts; the second was needed to assess our method's effectiveness in enhancing the images for colour-blind people. Both eye-tracking sessions repeat the same procedures. The first session also included a test with Ishihara plates to evaluate which kind of colour vision deficiency each subject was affected by.

    ETTO: The primary purpose of ETTO is to investigate the relationships between saliency and object visual-attention processes. A computer showed each image at full resolution for three seconds, separated by one second of viewing a grey screen. The database consists of several pictures with single objects in the foreground and a homogeneous coloured background region. ETTO has been used to assess the effectiveness of saliency methods based on different computational and perceptual approaches with respect to the object-attention process.

    The experimental sessions were conducted in a half-light room. The participants were kept approximately 70 cm from a 22-inch monitor with a spatial resolution of 1,920 by 1,080 pixels. During the eye-tracking session, a Tobii EyeX device recorded the eye movements, the saccadic movements, and the scan paths of each subject while looking at the images shown on the screen. For each subject, a calibration step was needed in order to minimise saccadic-movement tracking errors, to compute and assess the geometry of the setup (e.g., screen size, distance), and to collect measurements of the light refraction and reflection properties of each subject's corneas. Rather than using the standard Tobii EyeX Engine calibration (a nine-point calibration), we used the Tobii MATLAB Toolbox 3.1 calibration, whose procedure relies on a set of 13 points. Viewers were shown each image for 3 seconds, while the Tobii EyeX acquired the spatial coordinates of the eye movements. The eye tracker collected, on average, 160 spatial coordinates per 3-second presentation because of its 55 Hz sampling rate. Before switching to the next image, the screen turned grey for 1 second to refresh the observer's retina from the previous image signal.
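    As a rough illustration of how fixation point maps like those in ETFP can be derived from raw gaze coordinates, the sketch below accumulates gaze samples on the 1,920 by 1,080 screen and smooths them with a Gaussian. The CSV layout (columns 'x' and 'y' in screen pixels) is an assumption, not the actual ETFP file format.

    ```python
    # Minimal sketch: raw gaze coordinates -> smoothed fixation/heat map.
    # Assumes a hypothetical CSV with 'x' and 'y' columns in screen pixels.
    import numpy as np
    import pandas as pd
    from scipy.ndimage import gaussian_filter

    SCREEN_W, SCREEN_H = 1920, 1080  # monitor resolution reported for ETFP

    def fixation_map(csv_path, sigma=30):
        gaze = pd.read_csv(csv_path)                      # columns 'x', 'y' assumed
        heat = np.zeros((SCREEN_H, SCREEN_W))
        xs = gaze["x"].round().astype(int).clip(0, SCREEN_W - 1)
        ys = gaze["y"].round().astype(int).clip(0, SCREEN_H - 1)
        np.add.at(heat, (ys, xs), 1)                      # accumulate gaze samples
        heat = gaussian_filter(heat, sigma=sigma)         # spread into a smooth map
        return heat / heat.max() if heat.max() > 0 else heat
    ```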

  2. GazeCapture Dataset

    • paperswithcode.com
    Updated Jun 13, 2016
    + more versions
    Cite
    Kyle Krafka; Aditya Khosla; Petr Kellnhofer; Harini Kannan; Suchendra Bhandarkar; Wojciech Matusik; Antonio Torralba (2016). GazeCapture Dataset [Dataset]. https://paperswithcode.com/dataset/gazecapture
    Explore at:
    Dataset updated
    Jun 13, 2016
    Authors
    Kyle Krafka; Aditya Khosla; Petr Kellnhofer; Harini Kannan; Suchendra Bhandarkar; Wojciech Matusik; Antonio Torralba
    Description

    From scientific research to commercial applications, eye tracking is an important tool across many domains. Despite its range of applications, eye tracking has yet to become a pervasive technology. We believe that we can put the power of eye tracking in everyone's palm by building eye tracking software that works on commodity hardware such as mobile phones and tablets, without the need for additional sensors or devices. We tackle this problem by introducing GazeCapture, the first large-scale dataset for eye tracking, containing data from over 1450 people consisting of almost 2.5M frames. Using GazeCapture, we train iTracker, a convolutional neural network for eye tracking, which achieves a significant reduction in error over previous approaches while running in real time (10-15 fps) on a modern mobile device. Our model achieves a prediction error of 1.7 cm and 2.5 cm without calibration on mobile phones and tablets, respectively. With calibration, this is reduced to 1.3 cm and 2.1 cm. Further, we demonstrate that the features learned by iTracker generalize well to other datasets, achieving state-of-the-art results.

  3. Using Eye-Tracking Data - Dataset (cleaned, N = 44)

    • ieee-dataport.org
    • test.ieee-dataport.org
    Updated Jan 13, 2022
    Cite
    Sasha Willis (2022). Using Eye-Tracking Data - Dataset (cleaned, N = 44) [Dataset]. http://doi.org/10.21227/yj5g-2w72
    Explore at:
    Dataset updated
    Jan 13, 2022
    Dataset provided by
    IEEE Dataport
    Authors
    Sasha Willis
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Efficient evaluation strategies are essential when reviewing computer code for trustworthiness and potential reuse. Previous researchers have examined factors that influence these assessments, and the HSMC proposes two information processing strategies to explain this process: heuristic and systematic processing. However, researchers have yet to empirically demonstrate the direct influence of the specific factors that affect cognitive effort, which can be inferred through eye-tracking metrics. Programmers (N = 52) were recruited to complete a Java code review task. We manipulated the source, readability, and organization of a single code piece to varying degrees and analyzed the effects of these factors on eye-tracking data (i.e., fixation count, average fixation duration, total fixation duration) and self-report data (i.e., perceived trustworthiness of the code, reuse intentions). Neither reuse intentions nor trustworthiness perceptions differed significantly across conditions. However, analyses of the eye-tracking data revealed that fixation counts and durations increased for degraded code, suggesting that more systematic processing occurred in degraded code conditions than for highly organized, highly readable code from a reputable source. An exploratory analysis of the AOIs containing readability and organization degradations revealed that misuse of case and misuse of declarations garnered the most attention from participants relative to the rest of the code piece. The implications of the current study extend to recommendations for writing code that is easily reusable by decreasing the cognitive effort needed for code review.
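    The three eye-tracking metrics named above can be computed per area of interest (AOI) from a fixation table. The sketch below assumes a hypothetical table with 'aoi' and 'duration_ms' columns; the actual dataset's column names may differ.

    ```python
    # Sketch: fixation count, average fixation duration, and total fixation
    # duration per AOI, from a hypothetical fixation table.
    import pandas as pd

    def aoi_metrics(fixations: pd.DataFrame) -> pd.DataFrame:
        grouped = fixations.groupby("aoi")["duration_ms"]
        return pd.DataFrame({
            "fixation_count": grouped.size(),
            "avg_fixation_duration_ms": grouped.mean(),
            "total_fixation_duration_ms": grouped.sum(),
        })

    # Example: metrics = aoi_metrics(pd.read_csv("fixations.csv"))
    ```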

  4. DELANA - An eyetracking dataset from facilitating a series of laptop-based...

    • data.niaid.nih.gov
    • search.datacite.org
    Updated Jan 21, 2020
    Cite
    Sharma, Kshitij (2020). DELANA - An eyetracking dataset from facilitating a series of laptop-based lessons [Dataset]. https://data.niaid.nih.gov/resources?id=ZENODO_16514
    Explore at:
    Dataset updated
    Jan 21, 2020
    Dataset provided by
    Sharma, Kshitij
    Prieto, Luis P.
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    This dataset contains eye-tracking data from two subjects (an expert and a novice teacher), facilitating three collaborative learning lessons (two for the expert, one for the novice) in a classroom with laptops and a projector, with real master-level students. These sessions were recorded during a course on the topic of digital education and learning analytics at [EPFL](http://epfl.ch).

    This dataset has been used in several scientific works, such as the [CSCL 2015](http://isls.org/cscl2015/) conference paper "The Burden of Facilitating Collaboration: Towards Estimation of Teacher Orchestration Load using Eye-tracking Measures", by Luis P. Prieto, Kshitij Sharma, Yun Wen & Pierre Dillenbourg. The analysis and usage of this dataset is available publicly at https://github.com/chili-epfl/cscl2015-eyetracking-orchestration

  5. Data from: Eye-Tracking Dataset to Support the Research on Autism Spectrum...

    • figshare.com
    zip
    Updated May 30, 2023
    Cite
    Federica Cilia; Romuald Carette; Mahmoud Elbattah; Jean-Luc Guérin; Gilles Dequen (2023). Eye-Tracking Dataset to Support the Research on Autism Spectrum Disorder [Dataset]. http://doi.org/10.6084/m9.figshare.20113592.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    May 30, 2023
    Dataset provided by
    figshare
    Authors
    Federica Cilia; Romuald Carette; Mahmoud Elbattah; Jean-Luc Guérin; Gilles Dequen
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract: This study aims to publish an eye-tracking dataset developed for the purpose of autism diagnosis. Eye-tracking methods are used intensively in that context, as abnormalities of eye gaze are widely recognised as a hallmark of autism. As such, it is believed that the dataset can allow for developing useful applications or discovering interesting insights. Machine learning is also a potential application, namely for developing diagnostic models that can help detect autism at an early stage of development.

    Dataset Description: The dataset is distributed over 25 CSV-formatted files. Each file represents the output of an eye-tracking experiment, and a single experiment usually included multiple participants. The participant ID is provided in each record in the ‘Participant’ column, which can be used to identify the class of participant (i.e., Typically Developing or ASD). Furthermore, a set of metadata files is included. The main metadata file, Participants.csv, describes the key characteristics of participants (e.g. gender, age, CARS). Every participant was also assigned a unique ID.
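    A minimal loading sketch for the structure described above: concatenate the experiment CSV files and join the Participants.csv metadata on the 'Participant' column. The folder layout ("experiments/") and any column other than 'Participant' are assumptions.

    ```python
    # Sketch: assemble the experiment CSVs and attach participant metadata.
    import glob
    import pandas as pd

    experiments = pd.concat(
        (pd.read_csv(f) for f in glob.glob("experiments/*.csv")),  # hypothetical folder
        ignore_index=True,
    )
    participants = pd.read_csv("Participants.csv")   # gender, age, CARS, class
    data = experiments.merge(participants, on="Participant", how="left")
    ```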

    Dataset Citation: Cilia, F., Carette, R., Elbattah, M., Guérin, J., & Dequen, G. (2022). Eye-Tracking Dataset to Support the Research on Autism Spectrum Disorder. In Proceedings of the IJCAI–ECAI Workshop on Scarce Data in Artificial Intelligence for Healthcare (SDAIH).

  6. Behavioral and Eye-tracking Data for Adaptive Circuit Dynamics Across Human...

    • figshare.com
    zip
    Updated May 30, 2023
    Cite
    Peter Murphy; Niklas Wilming; Diana Carolina Hernandez Bocanegra; Genis Prat Ortega; Tobias Donner (2023). Behavioral and Eye-tracking Data for Adaptive Circuit Dynamics Across Human Cortex During Evidence Accumulation in Changing Environments [Dataset]. http://doi.org/10.6084/m9.figshare.14035935.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    May 30, 2023
    Dataset provided by
    figshare
    Authors
    Peter Murphy; Niklas Wilming; Diana Carolina Hernandez Bocanegra; Genis Prat Ortega; Tobias Donner
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains behavioural and eye-tracking data for: Murphy PR, Wilming N, Hernandez Bocanegra DC, Prat Ortega G & Donner TH (2021). Adaptive circuit dynamics across human cortex during evidence accumulation in changing environments. Nature Neuroscience. Online ahead of print.

    Each .zip file contains all data for a single participant and is organized as follows: data from each experimental session are contained in their own folder (S1, S2, etc.); each session folder in turn contains separate Sample_seqs, Behaviour and Eyetracking subfolders.

    The Sample_seqs folder contains Matlab .mat files (labelled ID_SESS_BLOCK.mat, where ID is the participant ID, SESS is the experimental session and BLOCK is the block number within that session) with information about the trial-specific stimulus sequences presented to the participant. The variables in each of these files are:

    gen – structure containing the generative statistics of the task

    stim – structure containing details about the physical presentation of the stimuli (see task script on Donnerlab Github for explanation of these)

    timing – structure containing details about the timing of stimulus presentation (see task script on Donnerlab Github for explanation of these)

    pshort – proportion of trials with stimulus sequences that were shorter than the full sequence length

    stimIn – trials*samples matrix of stimulus locations (in polar angle with horizontal midline = 0 degrees; NaN marks trial sequences that were shorter than the full sequence length)

    distseqs – trials*samples matrix of which generative distribution was used to draw each sample location

    pswitch – trials*samples matrix of binary flags marking when a switch in generative distribution occurred

    The Behaviour folder contains Matlab .mat files (same naming scheme as above) with information about the behaviour produced by the participant on each trial of the task. The main variable in each file is a matrix called Behav for which each row is a trial and columns are the following:

    column 1 – the generative distribution used to draw the final sample location on each trial (and thus, the correct response)

    column 2 – the response given by the participant

    column 3 – the accuracy of the participant’s response

    column 4 – response time relative to Go cue

    column 5 – trial onset according to psychtoolbox clock

    column 6 – number of times participant broke fixation during trial, according to online detection algorithm

    Each .mat file also contains a trials*samples matrix (tRefresh) of the timings of monitor flips corresponding to the onsets of each sample (and made relative to trial onset), as provided by psychtoolbox.

    The Eyetracking folder contains both raw Eyelink 1000 (SR Research) .edf files, and their conversions to .asc text files using the manufacturer’s edf2asc utility (same naming scheme as above). For stimulus and response trigger information, see the task scripts on the Donnerlab GitHub.

    .zip file names ending in _2.zip correspond to the four participants from Experiment 2 of the paper, for whom the sample-onset asynchrony (SOA) was manipulated across two conditions (0.2 vs 0.6 s). All other participants are from Experiment 1, where the SOA was fixed at 0.4 s.

    For example code for analyzing behaviour, fitting behavioural models, and analyzing pupil data, see https://github.com/DonnerLab/2021_Murphy_Adaptive-Circuit-Dynamics-Across-Human-Cortex.
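    The Behaviour files can be read directly with SciPy using the variable and column layout described above. This is a sketch under those assumptions (file naming ID_SESS_BLOCK.mat); struct fields such as gen, stim and timing may need extra unpacking depending on how the .mat files were saved.

    ```python
    # Sketch: read one Behaviour file and pull out the documented columns.
    from scipy.io import loadmat

    mat = loadmat("Behaviour/ID_SESS_BLOCK.mat")
    behav = mat["Behav"]            # trials x 6 matrix (see column list above)
    correct_dist = behav[:, 0]      # generative distribution of the final sample
    response     = behav[:, 1]      # participant response
    accuracy     = behav[:, 2]      # response accuracy
    rt_from_go   = behav[:, 3]      # response time relative to Go cue
    t_refresh = mat["tRefresh"]     # trials x samples matrix of sample-onset times
    ```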

  7. Eye Tracking System Market Forecast by Remote and Wearable Eye Tracking...

    • futuremarketinsights.com
    csv, pdf
    Updated Apr 22, 2024
    Cite
    Future Market Insights (2024). Eye Tracking System Market Forecast by Remote and Wearable Eye Tracking Systems for 2024 to 2034 [Dataset]. https://www.futuremarketinsights.com/reports/eye-tracking-systems-market
    Explore at:
    Available download formats: csv, pdf
    Dataset updated
    Apr 22, 2024
    Dataset authored and provided by
    Future Market Insights
    License

    https://www.futuremarketinsights.com/privacy-policy

    Time period covered
    2024 - 2034
    Area covered
    Worldwide
    Description

    The eye tracking system market is envisioned to reach a value of US$ 1.90 billion in 2024 and register a CAGR of 26.40% from 2024 to 2034. The market is forecast to surpass US$ 19.76 billion by 2034. The adoption of vision capture technology in retail, research, automotive, healthcare, and consumer electronics has strongly propelled the eye tracking system industry.

    Market Value for 2024: US$ 1.90 billion
    Market Value for 2034: US$ 19.76 billion
    Market Forecast CAGR for 2024 to 2034: 26.40%
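    A quick back-of-the-envelope check (not from the report) that the quoted figures are mutually consistent:

    ```python
    # US$ 1.90 billion compounding at 26.40% per year for 10 years should land
    # near the projected 2034 value.
    start_value, cagr, years = 1.90, 0.264, 10
    projected = start_value * (1 + cagr) ** years
    print(round(projected, 2))   # ~19.78, close to the stated US$ 19.76 billion
    ```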

    2019 to 2023 Historical Analysis vs. 2024 to 2034 Market Forecast Projection

    Market Historical CAGR for 2019 to 2023: 24.20%

    Category-wise Insights

    Top System Orientation: Wearable Eye Tracking Systems (44.2% market share in 2024)
    Top Sampling Rate: 61 to 120 Hz (28.3% market share in 2024)

    Country-wise Insights

    CAGR from 2024 to 2034 by country:
    United States: 23.20%
    Germany: 21.80%
    China: 26.90%
    Japan: 21.10%
    Australia: 29.90%
  8. mobile-eye-tracking-dataset-v2

    • huggingface.co
    Updated Aug 12, 2023
    + more versions
    Cite
    Julien Mercier (2023). mobile-eye-tracking-dataset-v2 [Dataset]. https://huggingface.co/datasets/julienmercier/mobile-eye-tracking-dataset-v2
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Aug 12, 2023
    Authors
    Julien Mercier
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0) https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    The julienmercier/mobile-eye-tracking-dataset-v2 dataset is hosted on Hugging Face and was contributed by the HF Datasets community.

  9. JDC2014 - An eyetracking dataset from facilitating a semi-authentic...

    • data.niaid.nih.gov
    Updated Jan 21, 2020
    Cite
    Sharma, Kshitij (2020). JDC2014 - An eyetracking dataset from facilitating a semi-authentic multi-tabletop lesson [Dataset]. https://data.niaid.nih.gov/resources?id=ZENODO_16515
    Explore at:
    Dataset updated
    Jan 21, 2020
    Dataset provided by
    Sharma, Kshitij
    Prieto, Luis P.
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    This dataset contains eye-tracking data from a single subject (a researcher), facilitating three collaborative learning lessons in a multi-tabletop classroom, with real 10-12 year old students. These sessions were recorded during an "open doors day" at the [CHILI Lab](http://chili.epfl.ch).

    This dataset has been used in several scientific works, such as the [CSCL 2015](http://isls.org/cscl2015/) conference paper "The Burden of Facilitating Collaboration: Towards Estimation of Teacher Orchestration Load using Eye-tracking Measures", by Luis P. Prieto, Kshitij Sharma, Yun Wen & Pierre Dillenbourg. The analysis and usage of this dataset is available publicly at https://github.com/chili-epfl/cscl2015-eyetracking-orchestration

  10. Dual Eyetracking

    • beta.data.individualdevelopment.nl
    Updated Mar 28, 2023
    Cite
    (2023). Dual Eyetracking [Dataset]. https://beta.data.individualdevelopment.nl/dataset/cbd368688983cc838def8adc80bc73c7
    Explore at:
    Dataset updated
    Mar 28, 2023
    Description

    We used a dual eye-tracking setup that is capable of concurrently recording eye movements, frontal video, and audio during video-mediated face-to-face interactions between parents and their preadolescent children. Parent–child dyads engaged in conversations about cooperative and conflictive family topics. Each conversation lasted for approximately 5 minutes.

  11. Eye Movement Data Set for Desktop Activities

    • kaggle.com
    Updated Jan 3, 2022
    Cite
    Namrata Srivastava (2022). Eye Movement Data Set for Desktop Activities [Dataset]. https://www.kaggle.com/namratasri01/eye-movement-data-set-for-desktop-activities/discussion
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jan 3, 2022
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Namrata Srivastava
    Description

    Abstract

    The data set consists of raw gaze coordinates (x-y) of 24 participants while doing 8 desktop activities.

    Dataset Information

    The dataset consists of raw gaze coordinates of 24 participants while they were performing 8 desktop activities – Read, Browse, Play, Search, Watch, Write, Debug, and Interpret. All the activities except Watch were 5 minutes long. The eye movements were recorded using a desktop mounted Tobii X2-30 eye tracker and Tobii Pro Studio software.
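    As a hedged illustration of how raw x-y gaze streams like these are typically turned into inputs for activity recognition, the sketch below computes a few simple windowed gaze features. This is not the feature set of the cited paper; the DataFrame columns ('x', 'y') and window length are assumptions.

    ```python
    # Sketch: simple windowed gaze features from raw x-y coordinates.
    import numpy as np
    import pandas as pd

    def gaze_features(gaze: pd.DataFrame, window: int = 300) -> pd.DataFrame:
        feats = []
        for start in range(0, len(gaze) - window + 1, window):
            w = gaze.iloc[start:start + window]
            dx, dy = np.diff(w["x"]), np.diff(w["y"])
            step = np.hypot(dx, dy)                    # sample-to-sample movement
            feats.append({
                "mean_x": w["x"].mean(), "mean_y": w["y"].mean(),
                "path_length": step.sum(),             # total gaze travel in the window
                "mean_step": step.mean(),              # rough proxy for saccade amplitude
            })
        return pd.DataFrame(feats)
    ```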

    Citation Request

    Please cite the below paper if you are using this dataset.

    Srivastava, N., Newn, J., & Velloso, E. (2018). Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(4), 189.

  12. Global Eye Tracking Market – Industry Trends and Forecast to 2030

    • databridgemarketresearch.com
    Updated Oct 2023
    Cite
    Data Bridge Market Research (2023). Global Eye Tracking Market – Industry Trends and Forecast to 2030 [Dataset]. https://www.databridgemarketresearch.com/reports/global-eye-tracking-market
    Explore at:
    Dataset updated
    Oct 2023
    Dataset authored and provided by
    Data Bridge Market Research
    License

    https://www.databridgemarketresearch.com/privacy-policy

    Time period covered
    2023 - 2030
    Area covered
    Global
    Description

    Forecast Period: 2023 to 2030

    Base Year: 2022

    Historic Years: 2021 (Customizable to 2015-2020)

    Quantitative Units: Revenue in USD Million, Volumes in Units, Pricing in USD

    Segments Covered: Offering (Hardware, Software, Services, Research and Consulting Services), Tracking Type (Remote Tracking and Mobile Tracking), Application (Assistive Communication, Human Behavior and Market Research, Others), Vertical (Retail and Advertisement, Consumer Electronics, Healthcare and Research Labs, Government, Defense, and Aerospace, Automotive and Transportation, Others)

    Countries Covered: U.S., Canada, Mexico, Brazil, Argentina, Rest of South America, Germany, Italy, U.K., France, Spain, Netherlands, Belgium, Switzerland, Turkey, Russia, Rest of Europe, Japan, China, India, South Korea, Australia, Singapore, Malaysia, Thailand, Indonesia, Philippines, Rest of Asia-Pacific, Saudi Arabia, U.A.E, South Africa, Egypt, Israel, Rest of Middle East and Africa

    Market Players Covered: Tobii AB (Sweden), SR Research Ltd. (Canada), Seeing Machines (Australia), EyeTracking Inc. (U.S.), Ergoneers GmbH (Germany), Pupil Labs GmbH (Germany), PRS IN VIVO (U.S.), Lumen Research Ltd. (U.K.), BIOPAC Systems Inc. (U.S.), EyeTech Digital Systems, Inc. (U.S.), FOVE, Inc. (Japan), GAZE INTELLIGENCE (Canada), Gazepoint (Canada), iMotions (Denmark), LC TECHNOLOGIES (U.S.), Mirametrix Inc. (Canada), Noldus Information Technology (Netherlands), Smart Eye AB (Sweden), SMI GROUP (Germany)

    Market Opportunities:
    • Eye-tracking technology in AR/VR devices
    • Eye-tracking technology in the automotive and transportation industry
  13. ISL2015NOVEL - An eyetracking dataset from facilitating secondary...

    • data.niaid.nih.gov
    • explore.openaire.eu
    Updated Jan 24, 2020
    Cite
    Prieto, Luis P. (2020). ISL2015NOVEL - An eyetracking dataset from facilitating secondary multi-tabletop classrooms [Dataset]. https://data.niaid.nih.gov/resources?id=ZENODO_198681
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Sharma, Kshitij
    Prieto, Luis P.
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    IMPORTANT NOTE: One of the files in this dataset is incorrect, see this dataset's erratum at https://zenodo.org/record/203958

    This dataset contains eye-tracking data from a single subject (an experienced teacher), facilitating two geometry lessons in a secondary school classroom, with 11-12 year old students using tangible paper tabletops and a projector. These sessions were recorded in the frame of the MIOCTI project (http://chili.epfl.ch/miocti).

    This dataset has been used in several scientific works, such as a submitted journal paper, "Orchestration Load Indicators and Patterns: In-the-wild Studies Using Mobile Eye-tracking", by Luis P. Prieto, Kshitij Sharma, Lukasz Kidzinski & Pierre Dillenbourg (the analysis and usage of this dataset is available publicly at https://github.com/chili-epfl/paper-IEEETLT-orchestrationload).

  14. Data from: An extensive dataset of eye movements during viewing of complex...

    • search.dataone.org
    • datadryad.org
    Updated Jan 26, 2017
    + more versions
    Cite
    Wilming, Niklas; Onat, Selim; Ossandón, José; Acik, Alper; Kietzmann, Tim Christian; Kaspar, Kai; Gameiro, Ricardo R; Vormberg, Alexandra; König, Peter (2017). Data from: An extensive dataset of eye movements during viewing of complex images [Dataset]. http://doi.org/10.5061/dryad.9pf75
    Explore at:
    Dataset updated
    Jan 26, 2017
    Dataset provided by
    Dryad Digital Repository
    Authors
    Wilming, Niklas; Onat, Selim; Ossandón, José; Acik, Alper; Kietzmann, Tim Christian; Kaspar, Kai; Gameiro, Ricardo R; Vormberg, Alexandra; König, Peter
    Description

    We present a dataset of free-viewing eye-movement recordings that contains more than 2.7 million fixation locations from 949 observers on more than 1000 images from different categories. This dataset aggregates and harmonizes data from 23 different studies conducted at the Institute of Cognitive Science at Osnabrück University and the University Medical Center in Hamburg-Eppendorf. Trained personnel recorded all studies under standard conditions with homogeneous equipment and parameter settings. All studies allowed free eye movements and differed in the age range of participants (~7-80 years), stimulus sizes, stimulus modifications (phase scrambled, spatial filtering, mirrored), and stimulus categories (natural and urban scenes, web sites, fractal, pink-noise, and ambiguous artistic figures). The size and variability of viewing behavior within this dataset present a strong opportunity for evaluating and comparing computational models of overt attention, and furthermore, for thoroughly quantifying strategies of viewing behavior. This also makes the dataset a good starting point for investigating whether viewing strategies change in patient groups.
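    One common way a fixation dataset like this is used to evaluate attention models is to score a saliency map against recorded fixation locations with an ROC-AUC, treating fixated pixels as positives. The sketch below is illustrative only and is not the evaluation protocol of the associated paper.

    ```python
    # Sketch: AUC of a saliency map against fixation locations for one image.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def fixation_auc(saliency: np.ndarray, fix_x, fix_y) -> float:
        labels = np.zeros(saliency.shape, dtype=int)
        labels[np.asarray(fix_y, int), np.asarray(fix_x, int)] = 1   # fixated pixels
        return roc_auc_score(labels.ravel(), saliency.ravel())       # higher = better fit
    ```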

  15. An eye tracking dataset for building façade inspection

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Sep 30, 2022
    + more versions
    Cite
    Muhammad R. Saleem; Muhammad R. Saleem; Rebecca Napolitano; Rebecca Napolitano (2022). An eye tracking dataset for building façade inspection [Dataset]. http://doi.org/10.5281/zenodo.7125956
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 30, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Muhammad R. Saleem; Muhammad R. Saleem; Rebecca Napolitano; Rebecca Napolitano
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains eye-tracking data from ten participants (students) performing building facade inspection of two structures. The participants were between their mid-twenties and thirties. The sessions were recorded for a preliminary eye-tracking study to understand the inspector's reasoning and sense-making during damage assessment. The dataset was collected using the Pro Glasses 3 wearable eye-tracking system from Tobii Technology, and further post-processing was done using Pro Lab software for data analysis purposes.

  16. Data from: The Provo Corpus: A Large Eye-Tracking Corpus with Predictability...

    • osf.io
    Updated Nov 7, 2022
    Cite
    Steven Luke (2022). The Provo Corpus: A Large Eye-Tracking Corpus with Predictability Norms [Dataset]. https://osf.io/sjefs
    Explore at:
    Dataset updated
    Nov 7, 2022
    Dataset provided by
    Center For Open Science
    Authors
    Steven Luke
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Provo Corpus is a corpus of eye-tracking data with accompanying predictability norms. The predictability norms for the Provo Corpus differ from those of other corpora. In addition to traditional cloze scores that estimate the predictability of the full orthographic form of each word, the Provo Corpus also includes measures of the predictability of morpho-syntactic and semantic information for each word. This makes the Provo Corpus ideal for studying predictive processes in reading. Some analyses using these data have previously been reported elsewhere [Luke, S. G., and Christianson, K. (2016). Limits on lexical prediction during reading. Cognitive Psychology, 88, 22-60.]. Details about the content of the corpus can be found in our paper in Behavior Research Methods [Luke, S.G. and Christianson, K. (Submitted) The Provo Corpus: A Large Eye-Tracking Corpus with Predictability Norms].
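    For readers unfamiliar with cloze norming, a traditional cloze score is simply the proportion of norming participants whose completion matches the target word. The tiny sketch below illustrates that computation; the response format is hypothetical, not the corpus file layout.

    ```python
    # Sketch: traditional cloze score = proportion of matching completions.
    def cloze_score(responses, target):
        responses = [r.strip().lower() for r in responses]
        return sum(r == target.lower() for r in responses) / len(responses)

    # cloze_score(["house", "home", "house", "barn"], "house") -> 0.5
    ```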

  17. Data from: GSET Somi: A Game-Specific Eye Tracking Dataset for Somi

    • ieee-dataport.org
    • commons.datacite.org
    Updated Aug 1, 2020
    Cite
    Hamed Ahmadi (2020). GSET Somi: A Game-Specific Eye Tracking Dataset for Somi [Dataset]. http://doi.org/10.21227/bvt7-3b15
    Explore at:
    Dataset updated
    Aug 1, 2020
    Dataset provided by
    IEEE Dataport
    Authors
    Hamed Ahmadi
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is an eye tracking dataset of 84 computer game players who played the side-scrolling cloud game Somi. The game was streamed as video from the cloud to the player. The dataset consists of 135 raw videos (YUV) at 720p and 30 fps with eye tracking data for both eyes (left and right). Male and female players were asked to play the game in front of a remote eye-tracking device. For each player, we recorded gaze points, video frames of the gameplay, and mouse and keyboard commands. For each video frame, a list of its game objects with their locations and sizes was also recorded. This data, synchronized with the eye-tracking data, allows one to calculate the amount of attention that each object or group of objects draws from each player. This dataset can be used for designing and testing game-specific visual attention models.
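    A hedged sketch of the per-object attention computation described above: for each frame, count the gaze points falling inside each object's bounding box. The object record format (dicts with name, x, y, w, h) is an assumption, not the dataset's actual schema.

    ```python
    # Sketch: gaze points inside object bounding boxes -> attention counts.
    from collections import Counter

    def object_attention(gaze_points, objects):
        attention = Counter()
        for gx, gy in gaze_points:                       # gaze samples for one frame
            for obj in objects:                          # objects visible in that frame
                inside_x = obj["x"] <= gx <= obj["x"] + obj["w"]
                inside_y = obj["y"] <= gy <= obj["y"] + obj["h"]
                if inside_x and inside_y:
                    attention[obj["name"]] += 1
        return attention
    ```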

  18. Eyetracking 2018. Dataset 1 and 2.

    • figshare.com
    txt
    Updated Jul 30, 2018
    Cite
    Blinded Researcher (2018). Eyetracking 2018. Dataset 1 and 2. [Dataset]. http://doi.org/10.6084/m9.figshare.6876455.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Jul 30, 2018
    Dataset provided by
    figshare
    Authors
    Blinded Researcher
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Datasets described in the manuscript 'Empathy Modulates the Temporal Structure of Social Attention'.

    Dataset1.txt column names:
    1. X coordinate
    2. Y coordinate
    3. Timestamp (ms)
    4. Participant
    5. Trial
    6. Whether the stimulus is intact or scrambled (1 = intact, 2 = scrambled)
    7. Whether gaze is in the social AOI (boolean)
    8. Whether gaze is in the nonsocial AOI (boolean)
    9. Presence of trackloss (boolean)
    10. The observer's EQ score

    Dataset2.txt column names:
    1. X coordinate
    2. Y coordinate
    3. Side of the social stimulus
    4. Timestamp (ms)
    5. Participant
    6. Trial
    7. Whether gaze is in the left AOI (boolean)
    8. Whether gaze is in the right AOI (boolean)
    9. Whether the stimulus is intact or scrambled
    10. The AOI that gaze is directed to (see next 2 columns)
    11. Whether gaze is in the social AOI (boolean)
    12. Whether gaze is in the nonsocial AOI (boolean)
    13. Presence of trackloss (boolean)
    14. The observer's EQ score
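    A minimal sketch of reading Dataset1.txt with the column order listed above. The delimiter and the absence of a header row are assumptions, since the description does not state them.

    ```python
    # Sketch: load Dataset1.txt with the documented column order.
    import pandas as pd

    dataset1_cols = [
        "x", "y", "timestamp_ms", "participant", "trial",
        "intact_or_scrambled", "in_social_aoi", "in_nonsocial_aoi",
        "trackloss", "eq_score",
    ]
    df1 = pd.read_csv("Dataset1.txt", sep=r"\s+", header=None, names=dataset1_cols)
    ```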

  19. Eye Tracking Market Size and Share | Statistics - 2030

    • nextmsc.com
    csv
    Updated Nov 2023
    + more versions
    Cite
    Next Move Strategy Consulting (2023). Eye Tracking Market Size and Share | Statistics - 2030 [Dataset]. https://www.nextmsc.com/report/eye-tracking-market
    Explore at:
    Available download formats: csv
    Dataset updated
    Nov 2023
    Dataset authored and provided by
    Next Move Strategy Consulting
    License

    https://www.nextmsc.com/return-policy

    Description

    Market Definition

    The global Eye Tracking Market size was valued at USD 913.6 million in 2023 and is predicted to reach USD 4,909.7 million by 2030, with a CAGR of 26.0% from 2024 to 2030. Eye tracking, also known as gaze tracking, is a technology that measures and analyzes the movements, gaze direction, and fixation points of a person's eyes. This technology enables a comprehensive understanding of where a person's attention is directed and how their eyes move while observing their surroundings or engaging with visual stimuli. Researchers and professionals use it to glean insights into diverse aspects of human behavior, cognition, and visual perception.

    The applications of eye tracking span across different fields, such as psychology, market research, user experience testing, and human-computer interaction. In psychology, it aids in comprehending how individuals process visual information, make decisions, and respond to stimuli. Market researchers employ it to evaluate consumer preferences, discerning which aspects of advertisements or products garner the most attention. In user experience and human-computer interaction, eye tracking furnishes valuable insights into how users interact with digital interfaces and websites, leading to design enhancements for improved usability.

    By offering a meticulous and impartial analysis of visual attention, eye tracking stands as a pivotal tool for gaining profound insights into human visual behavior across various contexts. It has become an indispensable technology for researchers, designers, and professionals striving to refine their products, services, and user experiences.

    Growing Advertisement and Consumer Research Boost the Market Growth

    The retail industry, especially the fast-moving consumer goods (FMCG) sector, increasingly uses eye-tracking technology to boost sales revenue. Eye-tracking devices track where consumers look, and algorithms use this data to de

  20. Dual Eyetracking

    • commons.datacite.org
    Updated Jun 26, 2024
    Cite
    Youth of Utrecht (2024). Dual Eyetracking [Dataset]. http://doi.org/10.60641/cqgb-r705
    Explore at:
    Dataset updated
    Jun 26, 2024
    Dataset provided by
    DataCite (https://www.datacite.org/)
    Authors
    Youth of Utrecht
    License

    https://www.uu.nl/en/research/youth-cohort-study/data-access

    Description

    YOUth is a large-scale longitudinal cohort study following children from the city of Utrecht and its surrounding areas in their development from pregnancy until early adulthood. The YOUth cohort focuses on neurocognitive development involved in two core characteristics of behavioral development: social competence and behavioral control. YOUth includes children from the general population to cover the whole range of variation in behavioral development, ranging from uncomplicated development, through problem behavior, to psychiatric disorders. To understand why some children develop problematic behavior, and others show resilience, YOUth measures a broad range of biological, child-related and environmental determinants.

    YOUth conducts repeated measurements at regular intervals (i.e. 'waves'). Specifically, the study has two inclusion moments: YOUth Baby & Child and YOUth Child & Adolescent. YOUth applies a flexible longitudinal design to the cohorts, meaning that children are measured at broader age ranges (3-year age ranges) at each wave. The main benefit of the flexible age design is that it provides more detailed information on the neurodevelopmental curves over time.

    An extensive data set is generated, including 3D-ultrasound sweeps of the fetal brain, eye tracking, EEG, (f)MRI, computer tasks, cognitive measurements and parent-child observations. We also collect a broad range of questionnaires on behavior, personality, health, lifestyle, parenting, child development, use of (social) media and more. Finally, (umbilical) blood samples, buccal swabs, saliva and hair samples are collected at each visit, and stored in the UMC Utrecht Biobank.
