100+ datasets found
  1. Data from: ETFP (Eye-Tracking and Fixation Points)

    • ieee-dataport.org
    Updated Mar 19, 2021
  2. Eyetracking 2018. Dataset 1 and 2.

    • figshare.com
    txt
    Updated Jul 30, 2018
  3. GazeCapture Dataset

    • paperswithcode.com
    Updated Jun 13, 2016
    + more versions
  4. Jetris - An eyetracking dataset from a Tetris-like task

    • zenodo.org
    bin, zip
    Updated Jan 24, 2020
  5. Using Eye-Tracking Data - Dataset (cleaned, N = 44)

    • ieee-dataport.org
    Updated Jan 13, 2022
  6. Data from: Eye-Tracking Dataset to Support the Research on Autism Spectrum...

    • figshare.com
    zip
    Updated May 30, 2023
  7. eyetracking

    • huggingface.co
    Updated Apr 29, 2023
  8. JDC2014 - An eyetracking dataset from facilitating a semi-authentic...

    • zenodo.org
    bin, zip
    Updated Jan 21, 2020
  9. dataset-eyetracking

    • kaggle.com
    zip
    Updated Jun 23, 2020
  10. Market Survey on Eye Tracking System Market Covering Sales Outlook,...

    • futuremarketinsights.com
    csv, pdf
    Updated Jul 13, 2022
  11. Eye-tracking data while assessing

    • osf.io
    Updated Oct 13, 2022
  12. Eyetracking, sound localization latency in infants and young children

    • figshare.com
    xlsx
    Updated May 18, 2020
  13. DELANA - An eyetracking dataset from facilitating a series of laptop-based...

    • zenodo.org
    • search.datacite.org
    bin, zip
    Updated Jan 21, 2020
  14. Eye Tracking Dataset for the 12-Lead Electrocardiogram Interpretation of...

    • physionet.org
    Updated Mar 16, 2022
  15. Data from: An extensive dataset of eye movements during viewing of complex...

    • search.dataone.org
    • datadryad.org
    Updated May 10, 2018
    + more versions
  16. mobile-eye-tracking-dataset-v2

    • huggingface.co
    Updated Aug 12, 2023
    + more versions
  17. ISL2015NOVEL - An eyetracking dataset from facilitating secondary...

    • zenodo.org
    • explore.openaire.eu
    bin, zip
    Updated Jan 24, 2020
  18. Dual Eyetracking

    • beta.data.individualdevelopment.nl
    Updated Mar 28, 2023
  19. Global Eye Tracking Market – Industry Trends and Forecast to 2030

    • databridgemarketresearch.com
    Updated Oct 2023
  20. Eye Tracking Market Size and Share | Statistics - 2030

    • nextmsc.com
    csv
    Updated Nov 2023
Cite
Alessandro Bruno (2021). ETFP (Eye-Tracking and Fixation Points) [Dataset]. http://doi.org/10.21227/0d1h-vb68

Data from: ETFP (Eye-Tracking and Fixation Points)

Dataset updated
Mar 19, 2021
Dataset provided by
Institute of Electrical and Electronics Engineers (http://www.ieee.ro/)
Authors
Alessandro Bruno
License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically

Description

ETFP (Eye-Tracking and Fixation Points) consists of two eye-tracking datasets: EToCVD (Eye-Tracking of Colour Vision Deficiencies) and ETTO (Eye-Tracking Through Objects). The former is a collection of images, their corresponding eye-movement coordinates, and fixation point maps, obtained from two cohorts: people with and without CVD (Colour Vision Deficiencies). The latter collects images with a single object lying on a homogeneous background, together with the corresponding eye-movement coordinates and fixation point maps gathered during eye-tracking sessions. The primary purposes of the two datasets are to study and analyse, respectively, colour blindness and object attention. A brief description of the experimental sessions and settings for both EToCVD and ETTO is given below.

EToCVD: The experimental sessions for EToCVD involved eight subjects with fully efficient colour vision and eight participants with a colour-deficient visual system. More precisely, three subjects were affected by deuteranopia and the other five by protanopia. We conducted two experimental eye-tracking sessions: the first focused on detecting how the fixation points differ between the two cohorts; the second was needed to assess our method's effectiveness in enhancing the images for colour-blind people. Both eye-tracking sessions follow the same procedure. The first session also included a test with Ishihara plates to evaluate which kind of colour vision deficiency affected each subject.

ETTO: The primary purpose of ETTO is to investigate the relationships between saliency and object visual attention processes. A computer showed each image at full resolution for a time frame of three seconds, separated by one second of viewing a grey screen. The database consists of several pictures, each with a single object in the foreground and a homogeneously coloured background region. ETTO has been used to assess the effectiveness of saliency methods based on different computational and perceptual approaches with respect to the object attention process.

The experimental sessions were conducted in a half-light room. Participants were seated approximately 70 cm from a 22-inch monitor with a spatial resolution of 1,920 by 1,080 pixels. During the eye-tracking sessions, a Tobii EyeX device recorded the eye movements, the saccadic movements, and the scan paths of each subject while they looked at the images shown on the screen. A calibration step was needed for each subject in order to minimise saccadic-movement tracking errors, to compute and assess the geometry of the setup (e.g., screen size, distance), and to collect measurements of the light refraction and reflection properties of each subject's corneas. Rather than using the standard Tobii EyeX Engine calibration (a nine-point procedure), we used the Tobii MATLAB Toolbox 3.1 calibration, whose procedure relies on a set of 13 points. Viewers were shown each image for 3 seconds while Tobii EyeX acquired the spatial coordinates of their eye movements; given the 55 Hz sampling rate, the eye-tracker collected on average about 160 coordinate samples per 3-second viewing. Before switching to the next image, the screen turned grey for 1 second to refresh the observer's retina from the previous image signal.
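As an illustration of how raw gaze recordings like these are commonly turned into fixation point maps, the sketch below accumulates gaze samples into a 2D histogram over the 1,920 by 1,080 screen and smooths it with a Gaussian kernel. This is a minimal sketch of the general technique, not the authors' pipeline: the input column layout, the smoothing bandwidth, and the synthetic example data are all assumptions for illustration.

# Minimal sketch: build a fixation map from gaze samples.
# Assumptions (not from the ETFP documentation): gaze samples are given as
# (x, y) screen coordinates in pixels; sigma_px is an arbitrary choice.
import numpy as np
from scipy.ndimage import gaussian_filter

SCREEN_W, SCREEN_H = 1920, 1080
SAMPLE_RATE_HZ = 55   # Tobii EyeX rate reported in the description
VIEW_TIME_S = 3       # each image was shown for 3 seconds
# 55 Hz * 3 s = 165 samples, consistent with the ~160 reported on average.

def fixation_map(gaze_xy: np.ndarray, sigma_px: float = 30.0) -> np.ndarray:
    """Accumulate gaze samples into a smoothed, normalised fixation map.

    gaze_xy: (N, 2) array of (x, y) screen coordinates in pixels.
    sigma_px: Gaussian bandwidth in pixels (an assumed, typical value).
    """
    heat = np.zeros((SCREEN_H, SCREEN_W), dtype=np.float64)
    # Keep only samples that fall on the screen; tracking losses go off-range.
    valid = (
        (gaze_xy[:, 0] >= 0) & (gaze_xy[:, 0] < SCREEN_W)
        & (gaze_xy[:, 1] >= 0) & (gaze_xy[:, 1] < SCREEN_H)
    )
    xs = gaze_xy[valid, 0].astype(int)
    ys = gaze_xy[valid, 1].astype(int)
    np.add.at(heat, (ys, xs), 1.0)              # 2D histogram of gaze samples
    heat = gaussian_filter(heat, sigma=sigma_px)  # smooth into a fixation map
    return heat / heat.max() if heat.max() > 0 else heat

# Example with synthetic data standing in for one 3-second trial:
rng = np.random.default_rng(0)
fake_gaze = rng.normal([960, 540], 80, size=(SAMPLE_RATE_HZ * VIEW_TIME_S, 2))
fmap = fixation_map(fake_gaze)

The normalised map can then be compared against saliency-model output (for ETTO) or between cohorts (for EToCVD) with standard similarity measures.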
