Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ETFP (Eye-Tracking and Fixation Points) consists of two eye-tracking datasets: EToCVD (Eye-Tracking of Colour Vision Deficiencies) and ETTO (Eye-Tracking Through Objects). The former is a collection of images, their corresponding eye-movement coordinates, and fixation point maps, obtained from two cohorts: people with and without CVD (Colour Vision Deficiencies). The latter collects images with a single object lying on a homogeneous background, together with the corresponding eye-movement coordinates and fixation point maps gathered during eye-tracking sessions. The primary purposes of the two datasets are to study and analyse, respectively, colour blindness and object attention. A brief description of the experimental sessions and settings for both EToCVD and ETTO is given below.

EToCVD: The experimental sessions for EToCVD involved eight subjects with fully efficient colour vision perception and eight participants with a colour-deficient vision system; more precisely, three subjects were affected by deuteranopia, while the other five were affected by protanopia. We conducted two experimental eye-tracking sessions: the first focused on detecting how the fixation points differ between the two cohorts, while the second assessed our method's effectiveness in enhancing the images for colour-blind people. Both eye-tracking sessions followed the same procedure. The first session also included a test with Ishihara plates to determine which kind of colour vision deficiency each subject was affected by.

ETTO: The primary purpose of ETTO is to investigate the relationship between saliency and object visual attention processes. A computer showed each image at full resolution for a time frame of three seconds, separated by one second of viewing a grey screen. The database consists of several pictures, each with a single object in the foreground on a homogeneously coloured background region.
ETTO has been used to assess the effectiveness of saliency methods based on different computational and perceptual approaches to the object attention process. The experimental sessions were conducted in a half-light room. Participants sat approximately 70 cm from a 22-inch monitor with a spatial resolution of 1,920 by 1,080 pixels. During the eye-tracking session, a Tobii EyeX device recorded the eye movements, saccadic movements, and scan paths of each subject while they looked at the images shown on the screen. For each subject, a calibration step was needed to minimise saccadic-movement tracking errors, to compute and assess the geometry of the setup (e.g., screen size, distance), and to collect measurements of the light refraction and reflection properties of each subject's corneas. Rather than using the standard Tobii EyeX Engine calibration (a nine-point procedure), we used the Tobii MATLAB Toolbox 3.1 calibration, which relies on a set of 13 points. Viewers were shown each image for 3 seconds while the Tobii EyeX acquired the spatial coordinates of their eye movements. The eye tracker collected, on average, 160 spatial coordinates per 3 seconds, owing to its 55 Hz sampling rate. Before switching to the next image, the screen turned grey for 1 second to refresh the observer's retina from the previous image signal.
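Gaze coordinates of the kind collected here are commonly turned into fixation point maps by accumulating the samples into a screen-sized grid and smoothing with a Gaussian. A minimal sketch (NumPy/SciPy; the function name and the sigma value are illustrative, not part of the dataset):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(coords, width=1920, height=1080, sigma=30):
    """Accumulate raw gaze coordinates into a Gaussian-smoothed,
    max-normalised fixation map.

    coords: iterable of (x, y) gaze points in pixels.
    sigma:  smoothing bandwidth in pixels (an assumption; often chosen
            to approximate 1 degree of visual angle at the viewing
            distance).
    """
    acc = np.zeros((height, width))
    for x, y in coords:
        ix, iy = int(round(x)), int(round(y))
        if 0 <= ix < width and 0 <= iy < height:
            acc[iy, ix] += 1          # count gaze samples per pixel
    smoothed = gaussian_filter(acc, sigma=sigma)
    return smoothed / smoothed.max() if smoothed.max() > 0 else smoothed
```

With roughly 160 samples per 3-second presentation at 55 Hz, each image yields a few hundred such points per viewer before smoothing.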
From scientific research to commercial applications, eye tracking is an important tool across many domains. Despite its range of applications, eye tracking has yet to become a pervasive technology. We believe that we can put the power of eye tracking in everyone's palm by building eye tracking software that works on commodity hardware such as mobile phones and tablets, without the need for additional sensors or devices. We tackle this problem by introducing GazeCapture, the first large-scale dataset for eye tracking, containing data from over 1,450 people and almost 2.5M frames. Using GazeCapture, we train iTracker, a convolutional neural network for eye tracking, which achieves a significant reduction in error over previous approaches while running in real time (10-15 fps) on a modern mobile device. Our model achieves a prediction error of 1.7 cm and 2.5 cm without calibration on mobile phones and tablets, respectively. With calibration, this is reduced to 1.3 cm and 2.1 cm. Further, we demonstrate that the features learned by iTracker generalize well to other datasets, achieving state-of-the-art results.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Efficient evaluation strategies are essential when reviewing computer code for trustworthiness and potential reuse. Previous researchers have examined factors that influence these assessments, and the HSMC proposes two information-processing strategies to explain this process: heuristic and systematic processing. However, researchers have yet to empirically demonstrate the direct influence of the specific factors that affect cognitive effort, which can be inferred through eye-tracking metrics. Programmers (N = 52) were recruited to complete a Java code review task. We manipulated the source, readability, and organization of a single code piece to varying degrees and analyzed the effects of these factors on eye-tracking data (i.e., fixation count, average fixation duration, total fixation duration) and self-report data (i.e., perceived trustworthiness of the code, reuse intentions). Neither reuse intentions nor trustworthiness perceptions differed significantly across conditions. However, analyses of the eye-tracking data revealed that fixation counts and durations increased for degraded code, suggesting that more systematic processing occurred in the degraded-code conditions than with highly organized, highly readable code from a reputable source. An exploratory analysis of the AOIs containing readability and organization degradations revealed that misuse of case and misuse of declarations garnered the most attention from participants relative to the rest of the code piece. The implications of the current study extend to recommendations for writing code that is easily reusable by decreasing the cognitive effort needed for code review.
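The three eye-tracking measures named above are simple aggregates once fixations have been segmented per AOI. A hedged sketch over a hypothetical list of per-fixation durations (the function is illustrative, not the study's analysis code):

```python
def fixation_metrics(durations_ms):
    """Compute the three metrics used in the study from a list of
    per-fixation durations (in ms) inside one AOI.

    Returns (fixation count, average fixation duration,
    total fixation duration).
    """
    count = len(durations_ms)
    total = sum(durations_ms)
    average = total / count if count else 0.0
    return count, average, total
```

For example, three fixations of 200, 250, and 300 ms on an AOI yield a count of 3, an average of 250 ms, and a total of 750 ms.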
Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset contains eye-tracking data from two subjects (an expert and a novice teacher), facilitating three collaborative learning lessons (2 for the expert, 1 for the novice) in a classroom with laptops and a projector, with real master-level students. These sessions were recorded during a course on the topic of digital education and learning analytics at [EPFL](http://epfl.ch).
This dataset has been used in several scientific works, such as the [CSCL 2015](http://isls.org/cscl2015/) conference paper "The Burden of Facilitating Collaboration: Towards Estimation of Teacher Orchestration Load using Eye-tracking Measures", by Luis P. Prieto, Kshitij Sharma, Yun Wen & Pierre Dillenbourg. The analysis and usage of this dataset is available publicly at https://github.com/chili-epfl/cscl2015-eyetracking-orchestration
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: This study aims to publish an eye-tracking dataset developed for the purpose of autism diagnosis. Eye-tracking methods are used intensively in that context, as abnormalities of eye gaze are widely recognised as a hallmark of autism. As such, it is believed that the dataset can allow for developing useful applications or discovering interesting insights. Machine learning is also a potential application, for developing diagnostic models that can help detect autism at an early stage of development.
Dataset Description: The dataset is distributed over 25 CSV-formatted files. Each file represents the output of an eye-tracking experiment; a single experiment usually included multiple participants. The participant ID is provided in each record in the ‘Participant’ column, which can be used to identify the class of participant (i.e., Typically Developing or ASD). Furthermore, a set of metadata files is included. The main metadata file, Participants.csv, describes the key characteristics of the participants (e.g., gender, age, CARS). Every participant was also assigned a unique ID.
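Working with the 25 per-experiment files typically means concatenating them and joining the participant metadata on the shared ID. A minimal pandas sketch (only the 'Participant' column name comes from the description above; everything else about file handling is an assumption):

```python
import pandas as pd

def load_experiments(experiment_files, participants_file):
    """Concatenate the per-experiment CSV files and attach participant
    metadata (gender, age, CARS, ...) via the shared 'Participant' ID.

    experiment_files: paths or file-like objects for the 25 CSVs.
    participants_file: path or file-like object for Participants.csv.
    """
    frames = [pd.read_csv(f) for f in experiment_files]
    data = pd.concat(frames, ignore_index=True)
    participants = pd.read_csv(participants_file)
    # Left-join so every eye-tracking record keeps its metadata row.
    return data.merge(participants, on="Participant", how="left")
```

From the merged frame, records can then be grouped by the participant class (Typically Developing vs. ASD) for modelling.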
Dataset Citation: Cilia, F., Carette, R., Elbattah, M., Guérin, J., & Dequen, G. (2022). Eye-Tracking Dataset to Support the Research on Autism Spectrum Disorder. In Proceedings of the IJCAI–ECAI Workshop on Scarce Data in Artificial Intelligence for Healthcare (SDAIH).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains behavioural and eye-tracking data for: Murphy PR, Wilming N, Hernandez Bocanegra DC, Prat Ortega G & Donner TH (2021). Adaptive circuit dynamics across human cortex during evidence accumulation in changing environments. Nature Neuroscience. Online ahead of print.
Each .zip file contains all data for a single participant and is organized as follows: data from each experimental session are contained in their own folder (S1, S2, etc.); each session folder in turn contains separate Sample_seqs, Behaviour and Eyetracking subfolders.
The Sample_seqs folder contains Matlab .mat files (labelled ID_SESS_BLOCK.mat, where ID is the participant ID, SESS is the experimental session and BLOCK is the block number within that session) with information about the trial-specific stimulus sequences presented to the participant. The variables in each of these files are:
gen – structure containing the generative statistics of the task
stim – structure containing details about the physical presentation of the stimuli (see task script on Donnerlab Github for explanation of these)
timing – structure containing details about the timing of stimulus presentation (see task script on Donnerlab Github for explanation of these)
pshort – proportion of trials with stimulus sequences that were shorter than the full sequence length
stimIn – trials*samples matrix of stimulus locations (in polar angle with horizontal midline = 0 degrees; NaN marks trials sequences that were shorter than the full sequence length)
distseqs – trials*samples matrix of which generative distribution was used to draw each sample location
pswitch – trials*samples matrix of binary flags marking when a switch in generative distribution occurred
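The variables above live in Matlab .mat files, so they can be read in Python with `scipy.io.loadmat`. A sketch deriving two per-trial quantities from the matrices documented above (the file path is a placeholder; only `stimIn` and `pswitch` names come from the description):

```python
import numpy as np
from scipy.io import loadmat

def sequence_summary(matfile_path):
    """Summarise one Sample_seqs file (ID_SESS_BLOCK.mat).

    Returns per-trial sequence lengths (non-NaN entries of stimIn,
    since NaN pads trials shorter than the full sequence) and
    per-trial counts of generative-distribution switches (pswitch).
    """
    d = loadmat(matfile_path, squeeze_me=True)
    stimIn = d["stimIn"]      # trials x samples, polar angle in degrees
    pswitch = d["pswitch"]    # trials x samples, binary switch flags
    seq_len = np.sum(~np.isnan(stimIn), axis=1)
    n_switches = np.nansum(pswitch, axis=1)
    return seq_len, n_switches
```

The `gen`, `stim`, and `timing` structures load the same way; as noted above, the task scripts on the Donnerlab Github explain their fields.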
The Behaviour folder contains Matlab .mat files (same naming scheme as above) with information about the behaviour produced by the participant on each trial of the task. The main variable in each file is a matrix called Behav for which each row is a trial and columns are the following:
column 1 – the generative distribution used to draw the final sample location on each trial (and thus, the correct response)
column 2 – the response given by the participant
column 3 – the accuracy of the participant’s response
column 4 – response time relative to Go cue
column 5 – trial onset according to psychtoolbox clock
column 6 – number of times participant broke fixation during trial, according to online detection algorithm
Each .mat file also contains a trials*samples matrix (tRefresh) of the timings of monitor flips corresponding to the onsets of each sample (and made relative to trial onset), as provided by psychtoolbox.
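Given the column layout documented above, simple behavioural summaries fall out of the Behav matrix directly. A hedged sketch (zero-based indexing of the one-based column list; the function is illustrative, not the authors' analysis code):

```python
import numpy as np

def behav_summary(Behav):
    """Summarise one block's Behav matrix using the column layout above:
    column 3 (index 2) is response accuracy, column 4 (index 3) is
    response time relative to the Go cue. NaN-safe means are used in
    case of missed responses."""
    accuracy = np.nanmean(Behav[:, 2])   # proportion correct
    mean_rt = np.nanmean(Behav[:, 3])    # mean RT from Go cue
    return accuracy, mean_rt
```

The same row indexing aligns with the tRefresh matrix, so sample-onset times can be matched to each trial's response.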
The Eyetracking folder contains both raw Eyelink 1000 (SR Research) .edf files and their conversions to .asc text files using the manufacturer's edf2asc utility (same naming scheme as above). For stimulus and response trigger information, see the task scripts on Donnerlab Github.

.zip file names ending in _2.zip correspond to the four participants from Experiment 2 of the paper, for whom sample-onset asynchrony (SOA) was manipulated across two conditions (0.2 vs 0.6 s). All other participants are from Experiment 1, where SOA was fixed at 0.4 s.

For example code for analyzing behaviour, fitting behavioural models, and analyzing pupil data, see https://github.com/DonnerLab/2021_Murphy_Adaptive-Circuit-Dynamics-Across-Human-Cortex.
https://www.futuremarketinsights.com/privacy-policy
The eye tracking system market is envisioned to reach a value of US$ 1.90 billion in 2024 and register an incredible CAGR of 26.40% from 2024 to 2034. The market is foreseen to surpass US$ 19.76 billion by 2034. The emergence of vision capture technology services in retail, research, automotive, healthcare, and consumer electronics has immensely propelled the eye tracking system industry.
Attributes | Details |
---|---|
Market Value for 2024 | US$ 1.90 billion |
Market Value for 2034 | US$ 19.76 billion |
Market Forecast CAGR for 2024 to 2034 | 26.40% |
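The figures in the table above are internally consistent under the standard CAGR definition, (end / start)^(1/years) − 1; compounding the 2024 value over the 10-year window recovers the stated rate (a sanity-check sketch, not part of the report):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1 / years) - 1

# 2024 -> 2034 window from the table above (values in US$ billion).
implied = cagr(1.90, 19.76, 10)
print(f"{implied:.1%}")  # approximately 26.4%
```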
2019 to 2023 Historical Analysis vs. 2024 to 2034 Market Forecast Projection
Attributes | Details |
---|---|
Market Historical CAGR for 2019 to 2023 | 24.20% |
Category-wise Insights
Attributes | Details |
---|---|
Top System Orientation | Wearable Eye Tracking Systems |
Market share in 2024 | 44.2% |
Attributes | Details |
---|---|
Top Sampling Rate | 61 to 120 Hz |
Market share in 2024 | 28.3% |
Country-wise Insights
Countries | CAGR from 2024 to 2034 |
---|---|
United States | 23.20% |
Germany | 21.80% |
China | 26.90% |
Japan | 21.10% |
Australia | 29.90% |
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0) https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
julienmercier/mobile-eye-tracking-dataset-v2 dataset hosted on Hugging Face and contributed by the HF Datasets community
Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset contains eye-tracking data from a single subject (a researcher), facilitating three collaborative learning lessons in a multi-tabletop classroom, with real 10-12 year old students. These sessions were recorded during an "open doors day" at the [CHILI Lab](http://chili.epfl.ch).
This dataset has been used in several scientific works, such as the [CSCL 2015](http://isls.org/cscl2015/) conference paper "The Burden of Facilitating Collaboration: Towards Estimation of Teacher Orchestration Load using Eye-tracking Measures", by Luis P. Prieto, Kshitij Sharma, Yun Wen & Pierre Dillenbourg. The analysis and usage of this dataset is available publicly at https://github.com/chili-epfl/cscl2015-eyetracking-orchestration
We used a dual eye-tracking setup that is capable of concurrently recording eye movements, frontal video, and audio during video-mediated face-to-face interactions between parents and their preadolescent children. Parent–child dyads engaged in conversations about cooperative and conflictive family topics. Each conversation lasted for approximately 5 minutes.
The dataset consists of raw gaze coordinates (x-y) of 24 participants while they were performing 8 desktop activities: Read, Browse, Play, Search, Watch, Write, Debug, and Interpret. All the activities except Watch were 5 minutes long. The eye movements were recorded using a desktop-mounted Tobii X2-30 eye tracker and Tobii Pro Studio software.
Please cite the below paper if you are using this dataset.
Srivastava, N., Newn, J., & Velloso, E. (2018). Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(4), 189.
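Low-level gaze features of the kind combined in the cited paper can be derived directly from a raw x-y stream. A sketch computing crude saccade amplitudes from consecutive samples (the pixel threshold is illustrative, not the paper's; real pipelines use velocity- or dispersion-based event detection):

```python
import math

def saccade_amplitudes(gaze, min_px=30):
    """Distances between consecutive raw gaze samples that exceed an
    (illustrative) threshold, a crude proxy for saccade amplitude
    in pixels.

    gaze: list of (x, y) tuples sampled at the tracker rate
    (30 Hz for the Tobii X2-30 used here).
    """
    amps = []
    for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
        d = math.hypot(x1 - x0, y1 - y0)
        if d >= min_px:          # small jitter is treated as fixation
            amps.append(d)
    return amps
```

Statistics over such amplitudes (mean, count per minute, etc.) are typical inputs to activity classifiers like the one described in the paper.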
https://www.databridgemarketresearch.com/privacy-policy
Report Metric | Details |
---|---|
Forecast Period | 2023 to 2030 |
Base Year | 2022 |
Historic Years | 2021 (Customizable to 2015-2020) |
Quantitative Units | Revenue in USD Million, Volumes in Units, Pricing in USD |
Segments Covered | Offering (Hardware, Software, Services, Research and Consulting Services), Tracking Type (Remote Tracking and Mobile Tracking), Application (Assistive Communication, and Human Behavior and Market Research, Others), Vertical (Retail and Advertisement, Consumer Electronics, Healthcare and Research Labs, Government, Defense, and Aerospace, Automotive and Transportation, Others) |
Countries Covered | U.S., Canada, Mexico, Brazil, Argentina, Rest of South America, Germany, Italy, U.K., France, Spain, Netherlands, Belgium, Switzerland, Turkey, Russia, Rest of Europe, Japan, China, India, South Korea, Australia, Singapore, Malaysia, Thailand, Indonesia, Philippines, Rest of Asia-Pacific, Saudi Arabia, U.A.E, South Africa, Egypt, Israel, Rest of Middle East and Africa |
Market Players Covered | Tobii AB (Sweden), SR Research Ltd. (Canada), Seeing Machines (Australia), EyeTracking Inc. (U.S.), Ergoneers GmbH (Germany), Pupil Labs GmbH (Germany), PRS IN VIVO (U.S.), Lumen Research Ltd. (U.K.), BIOPAC Systems Inc. (U.S.), EyeTech Digital Systems, Inc. (U.S.), FOVE, Inc. (Japan), GAZE INTELLIGENCE (Canada), Gazepoint (Canada), iMotions (Denmark), LC TECHNOLOGIES (U.S.), Mirametrix Inc. (Canada), Noldus Information Technology (Netherlands), Smart Eye AB (Sweden), SMI GROUP (Germany) |
Market Opportunities | |
Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
IMPORTANT NOTE: One of the files in this dataset is incorrect, see this dataset's erratum at https://zenodo.org/record/203958
This dataset contains eye-tracking data from a single subject (an experienced teacher), facilitating two geometry lessons in a secondary school classroom, with 11-12 year old students using tangible paper tabletops and a projector. These sessions were recorded in the frame of the MIOCTI project (http://chili.epfl.ch/miocti).
This dataset has been used in several scientific works, such as the submitted journal paper "Orchestration Load Indicators and Patterns: In-the-wild Studies Using Mobile Eye-tracking", by Luis P. Prieto, Kshitij Sharma, Lukasz Kidzinski & Pierre Dillenbourg (the analysis and usage of this dataset is available publicly at https://github.com/chili-epfl/paper-IEEETLT-orchestrationload)
We present a dataset of free-viewing eye-movement recordings that contains more than 2.7 million fixation locations from 949 observers on more than 1000 images from different categories. This dataset aggregates and harmonizes data from 23 different studies conducted at the Institute of Cognitive Science at Osnabrück University and the University Medical Center in Hamburg-Eppendorf. Trained personnel recorded all studies under standard conditions with homogeneous equipment and parameter settings. All studies allowed for free eye-movements, and differed in the age range of participants (~7-80 years), stimulus sizes, stimulus modifications (phase scrambled, spatial filtering, mirrored), and stimuli categories (natural and urban scenes, web sites, fractal, pink-noise, and ambiguous artistic figures). The size and variability of viewing behavior within this dataset presents a strong opportunity for evaluating and comparing computational models of overt attention, and furthermore, for thoroughly quantifying strategies of viewing behavior. This also makes the dataset a good starting point for investigating whether viewing strategies change in patient groups.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains eye-tracking data of ten participants (students) performing building facade inspection of two structures. The participants are between their mid-twenties and mid-thirties. The sessions were recorded for a preliminary eye-tracking study to understand the inspector's reasoning and sense-making during damage assessment. The dataset was collected using the Pro Glasses 3 wearable eye-tracking system from Tobii Technology, and post-processing was done using Pro Lab software for data analysis purposes.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Provo Corpus is a corpus of eye-tracking data with accompanying predictability norms. The predictability norms for the Provo Corpus differ from those of other corpora. In addition to traditional cloze scores that estimate the predictability of the full orthographic form of each word, the Provo Corpus also includes measures of the predictability of morpho-syntactic and semantic information for each word. This makes the Provo Corpus ideal for studying predictive processes in reading. Some analyses using these data have previously been reported elsewhere [Luke, S. G., and Christianson, K. (2016). Limits on lexical prediction during reading. Cognitive Psychology, 88, 22-60.]. Details about the content of the corpus can be found in our paper in Behavior Research Methods [Luke, S.G. and Christianson, K. (Submitted) The Provo Corpus: A Large Eye-Tracking Corpus with Predictability Norms].
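A traditional cloze score of the kind the corpus provides is simply the proportion of norming participants whose completion matches the target word. A minimal sketch (case-insensitive matching is an assumption here, not a statement about the corpus's norming procedure):

```python
def cloze_score(responses, target):
    """Traditional cloze predictability: the proportion of norming
    participants whose completion matches the target word's full
    orthographic form."""
    if not responses:
        return 0.0
    matches = sum(1 for r in responses
                  if r.strip().lower() == target.lower())
    return matches / len(responses)
```

The corpus's morpho-syntactic and semantic predictability measures generalise this idea by scoring partial matches (e.g., same part of speech, or semantically related completions) rather than exact word forms.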
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is an eye tracking dataset of 84 computer game players who played the side-scrolling cloud game Somi. The game was streamed in the form of video from the cloud to the player. The dataset consists of 135 raw videos (YUV) at 720p and 30 fps with eye tracking data for both eyes (left and right). Male and female players were asked to play the game in front of a remote eye-tracking device. For each player, we recorded gaze points, video frames of the gameplay, and mouse and keyboard commands. For each video frame, a list of its game objects with their locations and sizes was also recorded. This data, synchronized with eye-tracking data, allows one to calculate the amount of attention that each object or group of objects draw from each player. This dataset can be used for designing and testing game-specific visual attention models.
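Since each frame's record pairs gaze points with object locations and sizes, per-object attention can be computed by counting gaze points inside each object's bounding box. A hedged sketch (the field layout below is illustrative, not the dataset's exact schema):

```python
def gaze_hits_per_object(gaze_points, objects):
    """Count gaze points falling inside each object's bounding box for
    one frame - the per-object attention measure the dataset enables.

    gaze_points: list of (gx, gy) pixel coordinates.
    objects:     dict name -> (x, y, w, h) bounding box.
    """
    hits = {name: 0 for name in objects}
    for gx, gy in gaze_points:
        for name, (x, y, w, h) in objects.items():
            if x <= gx <= x + w and y <= gy <= y + h:
                hits[name] += 1
    return hits
```

Summed over frames and normalised by presentation time, these counts give the attention each object or object group draws from a player, which is the raw material for the game-specific visual attention models mentioned above.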
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Datasets described in the manuscript: 'Empathy Modulates the Temporal Structure of Social Attention'.

Dataset1.txt. Column names:
1. X coordinate
2. Y coordinate
3. Timestamp (ms)
4. Participant
5. Trial
6. Codes whether the stimulus is intact or scrambled (1 = intact, 2 = scrambled)
7. Codes whether gaze is in the social AOI (boolean)
8. Codes whether gaze is in the nonsocial AOI (boolean)
9. Codes the presence of trackloss (boolean)
10. The observer's EQ score

Dataset2.txt. Column names:
1. X coordinate
2. Y coordinate
3. Codes the side of the social stimulus
4. Timestamp (ms)
5. Participant
6. Trial
7. Codes whether gaze is in the left AOI (boolean)
8. Codes whether gaze is in the right AOI (boolean)
9. Codes whether the stimulus is intact or scrambled
10. Codes the AOI that gaze is directed in (see next 2 columns)
11. Whether the gaze is in the social AOI (boolean)
12. Whether the gaze is in the nonsocial AOI (boolean)
13. A column indicating the presence of trackloss (boolean)
14. The observer's EQ score
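Since the .txt files carry no header row, the documented column order has to be supplied when loading. A pandas sketch for Dataset1.txt (the tab delimiter and the short column names are assumptions; only the column order comes from the description above):

```python
import pandas as pd

# Column order as documented for Dataset1.txt; the short names here
# are illustrative, not the manuscript's.
DATASET1_COLS = [
    "x", "y", "timestamp_ms", "participant", "trial",
    "intact_or_scrambled",   # 1 = intact, 2 = scrambled
    "in_social_aoi", "in_nonsocial_aoi", "trackloss", "eq_score",
]

def load_dataset1(path_or_buffer):
    """Read Dataset1.txt, attaching the documented column layout.
    (Assumes tab-delimited values and no header row.)"""
    return pd.read_csv(path_or_buffer, sep="\t", header=None,
                       names=DATASET1_COLS)
```

Dataset2.txt loads the same way with its own 14-name list in the documented order.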
https://www.nextmsc.com/return-policy
Market Definition
The global Eye Tracking Market size was valued at USD 913.6 million in 2023, and is predicted to reach USD 4909.7 million by 2030, with a CAGR of 26.0% from 2024 to 2030. Eye tracking, also known as gaze tracking, is a sophisticated technology that measures and analyzes the movements, gaze direction, and fixation points of a person's eyes. This technology enables a comprehensive understanding of where a person's attention is directed and how their eyes move while observing their surroundings or engaging with visual stimuli. Researchers and professionals utilize this technology to glean insights into diverse aspects of human behavior, cognition, and visual perception.
The applications of eye tracking span across different fields, such as psychology, market research, user experience testing, and human-computer interaction. In psychology, it aids in comprehending how individuals process visual information, make decisions, and respond to stimuli. Market researchers employ it to evaluate consumer preferences, discerning which aspects of advertisements or products garner the most attention. In user experience and human-computer interaction, eye tracking furnishes valuable insights into how users interact with digital interfaces and websites, leading to design enhancements for improved usability.
By offering a meticulous and impartial analysis of visual attention, eye tracking stands as a pivotal tool for gaining profound insights into human visual behavior across various contexts. It has become an indispensable technology for researchers, designers, and professionals striving to refine their products, services, and user experiences.
Growing Advertisement and Consumer Research Boost the Market Growth
The retail industry, especially the fast-moving consumer goods (FMCG) sector, increasingly uses eye-tracking technology to boost sales revenue. Eye-tracking devices track where consumers look, and algorithms use this data to de
https://www.uu.nl/en/research/youth-cohort-study/data-access
YOUth is a large-scale longitudinal cohort study following children from the city of Utrecht and its surrounding areas in their development from pregnancy until early adulthood. The YOUth cohort focuses on neurocognitive development involved in two core characteristics of behavioral development: social competence and behavioral control. YOUth includes children from the general population to cover the whole range of variation in behavioral development, ranging from uncomplicated development, through problem behavior, to psychiatric disorders. To understand why some children develop problematic behavior, and others show resilience, YOUth measures a broad range of biological, child-related and environmental determinants.
YOUth conducts repeated measurements at regular intervals (i.e. 'waves'). Specifically, the study has two inclusion moments: YOUth Baby & Child and YOUth Child & Adolescent. YOUth applies a flexible longitudinal design to the cohorts, meaning that children are measured at broader age ranges (3-year age ranges) at each wave. The main benefit of the flexible age design is that it provides more detailed information on the neurodevelopmental curves over time.
An extensive data set is generated, including 3D-ultrasound sweeps of the fetal brain, eye tracking, EEG, (f)MRI, computer tasks, cognitive measurements and parent-child observations. We also collect a broad range of questionnaires on behavior, personality, health, lifestyle, parenting, child development, use of (social) media and more. Finally, (umbilical) blood samples, buccal swabs, saliva and hair samples are collected at each visit, and stored in the UMC Utrecht Biobank.