100+ datasets found
  1. Data from: ETFP (Eye-Tracking and Fixation Points)

    • ieee-dataport.org
    Updated Mar 19, 2021
  2. GazeCapture Dataset

    • paperswithcode.com
    Updated Jun 13, 2016
    + more versions
  3. Using Eye-Tracking Data - Dataset (cleaned, N = 44)

    • ieee-dataport.org
    Updated Jan 13, 2022
  4. Jetris - An eyetracking dataset from a Tetris-like task

    • zenodo.org
    bin, zip
    Updated Jan 24, 2020
  5. ISL2015NOVEL - An eyetracking dataset from facilitating secondary...

    • zenodo.org
    • explore.openaire.eu
    bin, zip
    Updated Jan 24, 2020
  6. eyetracking

    • huggingface.co
    Updated Apr 29, 2023
  7. Eye Tracking Dataset for the 12-Lead Electrocardiogram Interpretation of...

    • physionet.org
    Updated Mar 16, 2022
  8. Eye-tracking data while assessing

    • osf.io
    Updated Oct 13, 2022
  9. Eye Tracking System Market Forecast by Remote and Wearable Eye Tracking...

    • futuremarketinsights.com
    csv, pdf
    Updated Apr 22, 2024
  10. DELANA - An eyetracking dataset from facilitating a series of laptop-based...

    • zenodo.org
    • search.datacite.org
    bin, zip
    Updated Jan 21, 2020
  11. Behavioral and Eye-tracking Data for Adaptive Circuit Dynamics Across Human...

    • figshare.com
    zip
    Updated May 30, 2023
  12. Eye Tracker Data

    • figshare.com
    zip
    Updated Oct 15, 2022
  13. Data from: An extensive dataset of eye movements during viewing of complex...

    • datadryad.org
    • search.dataone.org
    zip
    Updated Dec 9, 2017
    + more versions
  14. mobile-eye-tracking-dataset-v3

    • huggingface.co
    Updated Aug 12, 2023
    + more versions
  15. Dual Eyetracking

    • beta.data.individualdevelopment.nl
    Updated Mar 28, 2023
  16. Global Eye Tracking Market – Industry Trends and Forecast to 2030

    • databridgemarketresearch.com
    Updated Oct 2023
  17. Eye Tracking Market Scope, Share to 2030

    • straitsresearch.com
    Updated Oct 15, 2019
  18. Data from: GSET Somi: A Game-Specific Eye Tracking Dataset for Somi

    • ieee-dataport.org
    Updated Aug 1, 2020
  19. Eyetracking 2018. Dataset 1 and 2.

    • figshare.com
    txt
    Updated Jul 30, 2018
  20. Eye Tracking Market Size & Share Report, 2022 - 2030

    • grandviewresearch.com
    pdf
    Updated May 9, 2022
    + more versions
Cite
Alessandro Bruno (2021). ETFP (Eye-Tracking and Fixation Points) [Dataset]. http://doi.org/10.21227/0d1h-vb68
Data from: ETFP (Eye-Tracking and Fixation Points)

Dataset updated
Mar 19, 2021
Dataset provided by
Institute of Electrical and Electronics Engineers (http://www.ieee.ro/)
Authors
Alessandro Bruno
License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically

Description

ETFP (Eye-Tracking and Fixation Points) consists of two eye-tracking datasets: EToCVD (Eye-Tracking of Colour Vision Deficiencies) and ETTO (Eye-Tracking Through Objects). The former is a collection of images, their corresponding eye-movement coordinates, and fixation-point maps, obtained from two cohorts: people with and without CVD (Colour Vision Deficiencies). The latter collects images with a single object lying on a homogeneous background, together with the corresponding eye-movement coordinates and fixation-point maps gathered during eye-tracking sessions. The primary purposes of the two datasets are to study and analyse colour blindness and object attention, respectively. A brief description of the experimental sessions and settings for EToCVD and ETTO is given below.

EToCVD: The experimental sessions for EToCVD involved eight subjects with fully efficient colour vision perception and eight participants with a colour-deficient vision system; of the latter, three were affected by deuteranopia and five by protanopia. We conducted two experimental eye-tracking sessions: the first focused on detecting how the fixation points differ between the two cohorts, while the second was needed to assess our method's effectiveness in enhancing the images for colour-blind people. Both eye-tracking sessions repeat the same procedures. The first session also included a test with Ishihara plates to determine which kind of colour vision deficiency each subject was affected by.

ETTO: The primary purpose of ETTO is to investigate the relationships between saliency and object visual-attention processes. The database consists of several pictures with a single object in the foreground and a homogeneously coloured background region. A computer showed each image at full resolution for a time frame of three seconds, separated by one second of viewing a grey screen.

ETTO has been used to assess the effectiveness of saliency methods based on different computational and perceptual approaches with respect to the object-attention process. The experimental sessions were conducted in a half-light room. The participants sat about 70 cm from a 22-inch monitor with a spatial resolution of 1,920 by 1,080 pixels. During each eye-tracking session, a Tobii EyeX device recorded the eye movements, the saccadic movements, and the scan paths of each subject while looking at the images projected on the screen. A calibration step was needed for each subject in order to minimise saccadic-movement tracking errors, to compute and assess the geometry of the setup (e.g., screen size, distance), and to collect measurements of light refraction and reflection properties of each subject's corneas. Rather than the standard Tobii EyeX Engine calibration (a nine-point procedure), we used the Tobii MATLAB Toolbox 3.1 calibration, which relies on a set of 13 points. Viewers were shown each image for 3 seconds while Tobii EyeX acquired the spatial coordinates of the eye movements; owing to its 55 Hz sampling rate, the eye tracker collected on average about 160 spatial coordinates per 3-second viewing. Before switching to the next image, the screen turned grey for 1 second to refresh the observer's retina from the previous image signal.
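A fixation-point map of the kind described above can be derived from raw gaze coordinates by binning the samples into a 2-D histogram over the screen and smoothing it. The sketch below is a minimal, hypothetical illustration, not code from the ETFP release: the function name, the reduced grid resolution, and the Gaussian width are all assumptions.

```python
import numpy as np

# Hypothetical helper (not part of ETFP): turn raw (x, y) gaze samples
# into a normalised fixation-point density map.
def fixation_map(gaze_xy, screen=(1920, 1080), grid=(192, 108), sigma=3.0):
    """gaze_xy: (N, 2) array of pixel coordinates on a `screen`-sized display.
    Returns a (grid_h, grid_w) map normalised to [0, 1]."""
    w, h = screen
    gw, gh = grid
    # Bin gaze samples into a coarse 2-D histogram (rows = y, cols = x).
    hist, _, _ = np.histogram2d(
        gaze_xy[:, 1], gaze_xy[:, 0],
        bins=(gh, gw), range=[[0, h], [0, w]],
    )
    # Separable Gaussian blur approximating foveal spread (no SciPy needed).
    k = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    g = np.exp(-k.astype(float) ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    hist = np.apply_along_axis(np.convolve, 1, hist, g, mode="same")
    hist = np.apply_along_axis(np.convolve, 0, hist, g, mode="same")
    peak = hist.max()
    return hist / peak if peak > 0 else hist

# A 3-second viewing at 55 Hz yields at most 165 samples; the paper reports
# about 160 on average. Synthetic gaze data centred on the screen:
rng = np.random.default_rng(0)
samples = rng.normal(loc=(960, 540), scale=80, size=(165, 2))
fmap = fixation_map(samples)
```

The normalisation to [0, 1] makes maps from different viewers comparable; the blur width would in practice be chosen from the viewing distance (about one degree of visual angle at 70 cm).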
