Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset contains eye-tracking data from two subjects (an expert and a novice teacher), facilitating three collaborative learning lessons (two for the expert, one for the novice) in a classroom with laptops and a projector, with real master-level students. These sessions were recorded during a course on digital education and learning analytics at [EPFL](http://epfl.ch).
This dataset has been used in several scientific works, such as the [CSCL 2015](http://isls.org/cscl2015/) conference paper "The Burden of Facilitating Collaboration: Towards Estimation of Teacher Orchestration Load using Eye-tracking Measures", by Luis P. Prieto, Kshitij Sharma, Yun Wen & Pierre Dillenbourg. The analysis and usage of this dataset is available publicly at https://github.com/chili-epfl/cscl2015-eyetracking-orchestration
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Datasets described in the manuscript: 'Empathy Modulates the Temporal Structure of Social Attention'

Dataset1.txt column names:
1. X coordinate
2. Y coordinate
3. Timestamp (ms)
4. Participant
5. Trial
6. Whether the stimulus is intact or scrambled (1 = intact, 2 = scrambled)
7. Whether gaze is in the social AOI (boolean)
8. Whether gaze is in the nonsocial AOI (boolean)
9. Presence of trackloss (boolean)
10. The observer's EQ score

Dataset2.txt column names:
1. X coordinate
2. Y coordinate
3. Side of the social stimulus
4. Timestamp (ms)
5. Participant
6. Trial
7. Whether gaze is in the left AOI (boolean)
8. Whether gaze is in the right AOI (boolean)
9. Whether the stimulus is intact or scrambled
10. The AOI that gaze is directed to (see next 2 columns)
11. Whether gaze is in the social AOI (boolean)
12. Whether gaze is in the nonsocial AOI (boolean)
13. Presence of trackloss (boolean)
14. The observer's EQ score
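Given the column layout above, a minimal loader sketch for Dataset1.txt. This assumes the file is tab-separated with no header row; the short field names below are my own labels for illustration, not taken from the files themselves.

```python
import csv

# Column order as documented for Dataset1.txt; the names are illustrative.
DATASET1_COLUMNS = [
    "x", "y", "timestamp_ms", "participant", "trial",
    "stimulus_type",      # 1 = intact, 2 = scrambled
    "in_social_aoi",      # boolean
    "in_nonsocial_aoi",   # boolean
    "trackloss",          # boolean
    "eq_score",           # observer's Empathy Quotient
]

def read_dataset1(path):
    """Yield one dict per gaze sample, keyed by the documented column order."""
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            yield dict(zip(DATASET1_COLUMNS, row))
```

Values are returned as strings; numeric conversion and the actual delimiter should be checked against the real files.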
MIT Licensehttps://opensource.org/licenses/MIT
License information was derived automatically
## Overview
Eye Tracking is a dataset for object detection tasks - it contains Pupil annotations for 301 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [MIT license](https://opensource.org/licenses/MIT).
From scientific research to commercial applications, eye tracking is an important tool across many domains. Despite its range of applications, eye tracking has yet to become a pervasive technology. We believe that we can put the power of eye tracking in everyone's palm by building eye tracking software that works on commodity hardware such as mobile phones and tablets, without the need for additional sensors or devices. We tackle this problem by introducing GazeCapture, the first large-scale dataset for eye tracking, containing data from over 1450 people and consisting of almost 2.5M frames. Using GazeCapture, we train iTracker, a convolutional neural network for eye tracking, which achieves a significant reduction in error over previous approaches while running in real time (10-15 fps) on a modern mobile device. Our model achieves a prediction error of 1.7 cm and 2.5 cm without calibration on mobile phones and tablets, respectively. With calibration, this is reduced to 1.3 cm and 2.1 cm. Further, we demonstrate that the features learned by iTracker generalize well to other datasets, achieving state-of-the-art results.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0)https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
julienmercier/mobile-eye-tracking-dataset-v2 dataset hosted on Hugging Face and contributed by the HF Datasets community
Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
IMPORTANT NOTE: One of the files in this dataset is incorrect, see this dataset's erratum at https://zenodo.org/record/203958
This dataset contains eye-tracking data from a single subject (an experienced teacher), facilitating two geometry lessons in a secondary school classroom, with 11-12 year old students using tangible paper tabletops and a projector. These sessions were recorded in the frame of the MIOCTI project (http://chili.epfl.ch/miocti).
This dataset has been used in several scientific works, such as a submitted journal paper "Orchestration Load Indicators and Patterns: In-the-wild Studies Using Mobile Eye-tracking", by Luis P. Prieto, Kshitij Sharma, Lukasz Kidzinski & Pierre Dillenbourg (the analysis and usage of this dataset is available publicly at https://github.com/chili-epfl/paper-IEEETLT-orchestrationload)
The data set consists of raw gaze coordinates (x-y) of 24 participants while doing 8 desktop activities.
The dataset consists of raw gaze coordinates of 24 participants while they were performing 8 desktop activities – Read, Browse, Play, Search, Watch, Write, Debug, and Interpret. All the activities except Watch were 5 minutes long. The eye movements were recorded using a desktop mounted Tobii X2-30 eye tracker and Tobii Pro Studio software.
Please cite the following paper if you use this dataset.
Srivastava, N., Newn, J., & Velloso, E. (2018). Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(4), 189.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset contains eye-tracking data from a single subject (a researcher), facilitating three collaborative learning lessons in a multi-tabletop classroom, with real 10-12 year old students. These sessions were recorded during an "open doors day" at the CHILI Lab.
This dataset has been used in several scientific works, such as the CSCL 2015 conference paper "The Burden of Facilitating Collaboration: Towards Estimation of Teacher Orchestration Load using Eye-tracking Measures", by Luis P. Prieto, Kshitij Sharma, Yun Wen & Pierre Dillenbourg. The analysis and usage of this dataset is available publicly at https://github.com/chili-epfl/cscl2015-eyetracking-orchestration
Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset contains eye-tracking data from a single subject (a researcher), facilitating two geometry lessons in a secondary school classroom, with 11-12 year old students using laptops and a projector. These sessions were recorded in the frame of the MIOCTI project (http://chili.epfl.ch/miocti).
This dataset has been used in several scientific works, such as the ECTEL 2015 (http://ectel2015.httc.de/) conference paper "Studying Teacher Orchestration Load in Technology-Enhanced Classrooms: A Mixed-method Approach and Case Study", by Luis P. Prieto, Kshitij Sharma, Yun Wen & Pierre Dillenbourg (the analysis and usage of this dataset is available publicly at https://github.com/chili-epfl/ectel2015-orchestration-school)
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset includes multimodal data (EEG, eye tracking, GPS) collected during several field tests in the SmartHelm research project (FKZ: 19F2105B). In the project, an intelligent bicycle helmet was developed, equipped with an augmented reality (AR) display, an eye-tracking module for detecting eye movements, and electroencephalography (EEG) electrodes. The aim is to present relevant information (e.g. addresses, parcels) to employees in the courier express parcel (CEP) industry in the area of bicycle delivery, according to the measured level of attention. The SmartHelm project was funded by the mFUND programme of the Federal Ministry of Digital Affairs and Transport (BMDV) from November 2019 to the end of May 2023. The published data come from several studies conducted at three sites in Oldenburg (Lower Saxony) and Bremen between August and November 2021 and in October 2022. In 2021, the above-mentioned data were recorded while riding a (load) bicycle, simulating a fictitious parcel delivery situation. The studies in 2022 used a modified experimental setup with an alternative EEG system and eye-tracking module. In the latest studies, the subjects were able to actively provide feedback in situations in which they felt distracted. More information about the SmartHelm studies can be seen in a YouTube video: https://www.youtube.com/watch?v=2qlAzJY6frU

The provided dataset includes:

- EEG data from seven electrodes (eeg_value_1, eeg_value_2, eeg_value_3, eeg_value_4, eeg_value_5, eeg_value_6, eeg_value_7) with a measurement frequency of 250 Hz, recorded at defined head positions relevant for detecting the subjects' attention (Cz, C3, T7, P3, P4, PO7, PO8) (2021). For 2022, the EEG data also include an eighth electrode (eeg_value_8), an accelerometer (acc_x, acc_y, acc_z) and a gyroscope (gyro_x, gyro_y, gyro_z).
- Eye-tracking data (eye movements) with the distance of an object from the eye of the beholder (eye_gaze_x, eye_gaze_y, eye_gaze_z) and the reference points for eye positioning (eye_gaze_origin_x, eye_gaze_origin_y, eye_gaze_origin_z) in three directions, with a measurement frequency of 30 Hz. For 2022, eye_gaze_x and eye_gaze_y are available in the video coordinate system at a frequency of approximately 60 Hz, as the distance of objects to the viewer's eye was measured in only two directions (with a non-constant sampling rate); the reference points for eye positioning are not included for this period.
- GPS data with coordinates (latitude, longitude) and altitude, with a measurement frequency of approx. 2 Hz. For 2021, speed (velocity) and accuracy (accuracy) are also indicated. The 2022 dataset additionally includes markers for annotated distractions with the type of distraction (visual, acoustic, mental).

The provided dataset consists of a total of 44 trips with a total data volume of about 4 GB.
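Because the streams run at different rates (250 Hz EEG, ~30 Hz gaze, ~2 Hz GPS), analyses typically need to align them on a common clock. A minimal sketch of nearest-timestamp alignment, assuming each stream carries monotonically increasing timestamps in seconds (the representation is hypothetical, not taken from the dataset's files):

```python
import bisect

def nearest_gaze(eeg_ts, gaze_ts):
    """For each EEG timestamp, return the index of the closest gaze sample.

    Both inputs are assumed to be sorted lists of timestamps in seconds.
    """
    out = []
    for t in eeg_ts:
        i = bisect.bisect_left(gaze_ts, t)
        if i == 0:
            out.append(0)                       # before first gaze sample
        elif i == len(gaze_ts):
            out.append(len(gaze_ts) - 1)        # after last gaze sample
        else:
            # pick whichever neighbour is closer in time
            out.append(i if gaze_ts[i] - t < t - gaze_ts[i - 1] else i - 1)
    return out
```

The same lookup works for mapping GPS fixes onto either stream; interpolation rather than nearest-neighbour may be preferable for the slow GPS channel.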
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: This study aims to publish an eye-tracking dataset developed for the purpose of autism diagnosis. Eye-tracking methods are used intensively in that context, as abnormalities of eye gaze are widely recognised as a hallmark of autism. The dataset can therefore support the development of useful applications and the discovery of interesting insights; in particular, Machine Learning could be applied to develop diagnostic models that help detect autism at an early stage of development.
Dataset Description: The dataset is distributed over 25 CSV-formatted files. Each file represents the output of an eye-tracking experiment; however, a single experiment usually included multiple participants. The participant ID is clearly provided in each record in the ‘Participant’ column, which can be used to identify the class of the participant (i.e., Typically Developing or ASD). Furthermore, a set of metadata files is included. The main metadata file, Participants.csv, describes the key characteristics of the participants (e.g. gender, age, CARS). Every participant was also assigned a unique ID.
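A small sketch of joining experiment records to participant classes via Participants.csv. The 'Participant' column is documented above; the 'Class' column name is an assumption for illustration and should be checked against the actual metadata file.

```python
import csv

def load_participant_classes(path):
    """Map participant ID -> class label (e.g. 'TD' or 'ASD').

    Assumes Participants.csv has a header row with 'Participant' and a
    hypothetical 'Class' column.
    """
    with open(path, newline="") as f:
        return {row["Participant"]: row["Class"] for row in csv.DictReader(f)}
```

With this lookup in hand, each record of the 25 experiment files can be labelled by its participant's class for training a classifier.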
Dataset Citation: Cilia, F., Carette, R., Elbattah, M., Guérin, J., & Dequen, G. (2022). Eye-Tracking Dataset to Support the Research on Autism Spectrum Disorder. In Proceedings of the IJCAI–ECAI Workshop on Scarce Data in Artificial Intelligence for Healthcare (SDAIH).
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We present eSEEd, the emotional State Estimation based on Eye-tracking database. Eye movements of 48 participants were recorded as they watched 10 emotion-evoking videos, each followed by a neutral video. Participants rated five emotions (tenderness, anger, disgust, sadness, neutral) on a scale from 0 to 10, later translated into emotional arousal and valence levels. Furthermore, each participant filled in 3 self-assessment questionnaires. An extensive analysis of the participants' answers to the self-assessment questionnaires, as well as their ratings during the experiments, is presented. Moreover, eye and gaze features were extracted from the low-level recorded eye metrics, and their correlations with the participants' ratings are investigated. Finally, analysis and results are presented for machine learning approaches to classifying various arousal and valence levels based solely on eye and gaze features. The dataset is made publicly available, and we encourage other researchers to use it for testing new methods and analytic pipelines for the estimation of an individual's affective state.
Important note: Version 0 contains only video files and readme. The eye-tracking data are going to be uploaded in the next version.
No description was included in this Dataset collected from the OSF
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Ps6 Eyetracking is a dataset for object detection tasks - it contains Object In Hospital Room annotations for 826 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
The data consist of response times and accuracies, and eye-movement parameters (e.g., latency to final fixation) from human participants during a series of laboratory tasks. The aim of this project was to advance our understanding of when and why humans succeed and fail to use their "theory of mind" abilities to support the interpretation of language. The director task (Apperly et al., 2010; Keysar et al., 2003) was employed in all of our experiments to capture egocentrism during communication in terms of behavioural responses and eye movements.

Part 1: effect of overt task instruction on the degree of egocentrism (Exp1: 1-step instruction, Exp2: 2-step instruction). The magnitude of common ground varied from 3 to 9, but data were collapsed over this factor for analysis of the effects of instruction.

Part 2: cognitive factors associated with perspective-taking (Exp1: magnitude of common ground ranging from 3 to 9 items; Exp2: relative magnitude of common ground versus privileged ground, ranging from 5 to 11 items; Exp3: linguistic complexity in the director's speech).

Part 3: linguistic complexity manipulation with a developmental sample (in press in JECP).

Part 4: cross-cultural similarities and differences between English and Taiwanese participants (including an adapted version of the director task with an informed and an ignorant director, and a perspective-switching component between these directors). This study also included a level-1 visual perspective-taking task (Samson, Apperly, et al., 2010). We were able to measure altercentric interference (spontaneous accounting of another's perspective) on both tasks.

Part 5: memory factor in perspective-taking (there was an additional memory demand on the director task, along with systematic variation of the relative magnitude of common ground versus privileged ground between 3 and 9 items; OSPAN was carried out as a working memory measure).
Theory of Mind (ToM) is the ability to think about what others see, know, think, want and intend, and is thought to be a fundamental basis of social interaction and communication. ToM has been widely studied in young children and infants, and more recently its cognitive and neural basis has begun to be studied in adults. The main paradigm for this work requires participants to follow instructions from a speaker who does not fully share their visual perspective on the scene under discussion. Critical instructions have different meanings depending on whether or not participants successfully take the speaker's perspective into account. Previous work by the applicants, their collaborator at the University of Chicago, and others, shows that adults frequently make errors when following such instructions, and such difficulty is also observed in more sensitive measures based on participants' eye movements during the tasks. By adapting these tasks, the new findings will provide insights about: (1) the extent and limits of adults' abilities to use their ToM; (2) how these limits vary between cultures (Western versus Chinese); (3) how they change through children's development into adults; (4) whether people who are good at ToM-use have generally better social abilities.
etdb_v1.0 – Fixation-based eye-tracking data.

Metadata – additional metadata for each dataset: meta.csv

Read gaze data with Python (fixmat.py). Example usage:

    import fixmat
    baseline, meta = fixmat.load('etdb_v1.0.hdf5', 'Baseline')

Read gaze data with Matlab (get_fixmat.m). Example usage:

    baseline = get_fixmat('etdb_v1.0.hdf5', 'Baseline')

Additional metadata – additional_meta.zip contains additional metadata for those studies in the database that provide it.

Stimuli – one zip archive per stimulus category: Stimuli_6.zip, Stimuli_7.zip, Stimuli_8.zip, Stimuli_10.zip, Stimuli_11.zip, Stimuli_12.zip, Stimuli_14.zip, Stimuli_15.zip, Stimuli_16.zip, Stimuli_17.zip, Stimuli_18.zip, Stimuli_...
https://www.cognitivemarketresearch.com/privacy-policyhttps://www.cognitivemarketresearch.com/privacy-policy
According to Cognitive Market Research, the global eye tracking market size is USD 864.12 million in 2024 and will expand at a compound annual growth rate (CAGR) of 33.4% from 2024 to 2031.

Market Dynamics of the Eye Tracking Market
Key Drivers for Eye Tracking Market
Eye-Tracking Devices are Increasingly Being Used in Consumer Research and Advertising - One of the main reasons the eye tracking market is growing is that eye-tracking devices are increasingly being used in consumer research and advertising. The use of eye-tracking technologies in market research has grown significantly. Eye-tracking technology is becoming more and more popular in retail, especially in the FMCG industry, as a way to boost sales. In a retail setting, eye-tracking sensors and associated algorithms are used to gather information about customer behavior. These tools and algorithms help determine factors such as the length of time a customer spends perusing a product, the ideal product arrangement to encourage a purchase in a store, and the packaging that offers the most useful product information to the customer. Eye tracking is also used in web marketing campaigns and in the assessment of print, digital, and signage promotions.
This growing use of eye-tracking technology in consumer research and advertising is expected to drive the eye tracking market's expansion in the years ahead.
Key Restraints for Eye Tracking Market
Expanding market for gesture recognition technology poses a serious threat to the eye tracking industry.
The market also faces significant difficulties related to disturbance within the supply chain.
Introduction of the Eye Tracking Market
Keeping a watch on eye movement is the essence of eye tracking. It investigates our gaze patterns, attention spans, and blink rates, and examines the pupil's responses to various stimuli. Although the theory is straightforward, the application and justification may not be. The primary tool used in eye-tracking data collection is an "eye tracker", which can be remote or head-mounted and connected to a computer. The main components of these eye trackers are a light source and a camera. The light source, which is primarily infrared, is directed toward the eye, and the camera records its reflection along with visible features of the eye, such as the pupil. Thanks to technological advances, these eye trackers are now non-intrusive.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains behavioural and eye-tracking data for: Murphy PR, Wilming N, Hernandez Bocanegra DC, Prat Ortega G & Donner TH (2021). Adaptive circuit dynamics across human cortex during evidence accumulation in changing environments. Nature Neuroscience. Online ahead of print.
Each .zip file contains all data for a single participant and is organized as follows: data from each experimental session are contained in their own folder (S1, S2, etc.); each session folder in turn contains separate Sample_seqs, Behaviour and Eyetracking subfolders.
The Sample_seqs folder contains Matlab .mat files (labelled ID_SESS_BLOCK.mat, where ID is the participant ID, SESS is the experimental session and BLOCK is the block number within that session) with information about the trial-specific stimulus sequences presented to the participant. The variables in each of these files are:
gen – structure containing the generative statistics of the task
stim – structure containing details about the physical presentation of the stimuli (see task script on Donnerlab Github for explanation of these)
timing – structure containing details about the timing of stimulus presentation (see task script on Donnerlab Github for explanation of these)
pshort – proportion of trials with stimulus sequences that were shorter than the full sequence length
stimIn – trials*samples matrix of stimulus locations (in polar angle with horizontal midline = 0 degrees; NaN marks trial sequences that were shorter than the full sequence length)
distseqs – trials*samples matrix of which generative distribution was used to draw each sample location
pswitch – trials*samples matrix of binary flags marking when a switch in generative distribution occurred
The Behaviour folder contains Matlab .mat files (same naming scheme as above) with information about the behaviour produced by the participant on each trial of the task. The main variable in each file is a matrix called Behav for which each row is a trial and columns are the following:
column 1 – the generative distribution used to draw the final sample location on each trial (and thus, the correct response)
column 2 – the response given by the participant
column 3 – the accuracy of the participant’s response
column 4 – response time relative to Go cue
column 5 – trial onset according to psychtoolbox clock
column 6 – number of times participant broke fixation during trial, according to online detection algorithm
Each .mat file also contains a trials*samples matrix (tRefresh) of the timings of monitor flips corresponding to the onsets of each sample (and made relative to trial onset), as provided by psychtoolbox.
The Eyetracking folder contains both raw Eyelink 1000 (SR Research) .edf files and their conversions to .asc text files using the manufacturer's edf2asc utility (same naming scheme as above). For stimulus and response trigger information, see the task scripts on the Donnerlab Github.

.zip file names ending in _2.zip correspond to the four participants from Experiment 2 of the paper, for whom the sample-onset asynchrony (SOA) was manipulated across two conditions (0.2 vs 0.6 s). All other participants are from Experiment 1, where the SOA was fixed at 0.4 s.

For example code for analyzing behaviour, fitting behavioural models, and analyzing pupil data, see https://github.com/DonnerLab/2021_Murphy_Adaptive-Circuit-Dynamics-Across-Human-Cortex.
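As an illustration of the Behav column layout described above (my own sketch, not the authors' analysis code), a short helper that computes mean response accuracy from column 3 of a Behav-style matrix:

```python
# Column indices follow the documented Behav layout (0-based):
# 0 correct response, 1 given response, 2 accuracy,
# 3 RT relative to Go cue, 4 trial onset, 5 fixation breaks.
ACCURACY_COL = 2

def mean_accuracy(behav_rows):
    """Mean of the accuracy column over trials, skipping missing entries."""
    vals = [row[ACCURACY_COL] for row in behav_rows
            if row[ACCURACY_COL] is not None]
    return sum(vals) / len(vals) if vals else float("nan")
```

In practice the Behav matrix would be loaded from the .mat files (e.g. with scipy.io.loadmat) rather than built by hand.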
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The ETDD70 dataset comprises eye-tracking recordings from 70 Czech participants, equally divided into dyslexic and non-dyslexic readers, all aged 9–10 years. The dataset captures eye movements during three text-reading tasks in Czech: syllable reading (Task 1), meaningful text reading (Task 4), and pseudo-text reading (Task 5).
This dataset is the result of the project “Diagnostics of Dyslexia Using Eye-Tracking and Artificial Intelligence” conducted by our research team. The project aims to leverage artificial intelligence tools and advanced technical equipment (eye tracking) to more effectively diagnose dyslexia, one of the most common specific learning disorders, and thereby significantly improve re-education strategies for dyslexic students. The primary goal is to develop models that accurately distinguish between dyslexic and non-dyslexic readers based on eye movement patterns recorded during these tasks.
Data collection took place between October 2022 and August 2023, adhering to ethical standards. The project was approved by the Research Ethics Committee of Masaryk University in Brno, Czech Republic.
Please contact us if you have any questions or feedback at nicol.dostalova@mail.muni.cz or at svaricek@phil.muni.cz.
The ETDD70 dataset is freely available for research purposes.
PARTICIPANTS
The eye-tracking data were captured from 70 participants: 35 dyslexic and 35 non-dyslexic readers. In all cases, participants are elementary school pupils aged 9-10 years (i.e., 4th grade of elementary school). Recruitment of suitable participants was conducted in cooperation with a psychological counseling center, which facilitated the recruitment of pupils diagnosed with dyslexia. The non-dyslexic readers, who showed no symptoms of dyslexia, were recruited in cooperation with the counseling facilities of selected elementary schools. The dataset was collected from October 2022 to August 2023. The legal representatives of all participants were properly informed about the research procedure and agreed to participate in the study, for which they subsequently received compensation.
STIMULI
We designed three verbal tasks based on standardized paper-based dyslexia diagnostics used in the Czech Republic. These source texts were transferred to a digital version in a controlled form (e.g., amount of text, font size, line spacing, background color, etc.) for the requirements of eye-tracking measurements.
The task called Syllables contains 90 syllables arranged in a 9 x 10 matrix. The syllables are commonly encountered in the Czech language. The individual rows of syllables were categorized according to syllable composition (based on linguistic aspects) as follows: open syllables with no meaning, i.e., consonant + vowel (e.g., "ta," "na"), closed syllables with a central vowel bearing a meaning, i.e., consonant + vowel + consonant (e.g., "suk," "mák"), meaningless syllables consisting of two consonants (e.g., "vl," "bz"), a meaningless syllable formed by a cluster of two consonants ending in a vowel (e.g., "tle," "mra"), and finally a meaningful syllable formed by a cluster of three consonants with one vowel in the 3rd position (e.g., "mrak," "vlak"). All syllables were presented in black Times New Roman font on a gray background. The objective of the task is to read aloud all syllables from left to right and from top to bottom. A fixation cross was placed in the lower right corner for gaze-contingent task closure: when the participant looks at this cross, the recording is automatically terminated.
The task called MeaningfulText consists of a passage about a young boy who watches a squirrel from his window. This text is intended for elementary school readers in grades 3 and 4. The stimulus text contains a total of seven text lines with six logical sentences. The text is again written in black font with double line spacing on a gray background, with the fixation cross in the lower right corner. The aim of the task is to read the entire text aloud.
The task called PseudoText comprises fictional, meaningless words. This text has a total of seven lines with 15 artificial sentences. The text formatting, as well as the ending fixation cross, is the same as in the MeaningfulText task. The objective of the task is to read the entire text aloud as smoothly as possible.
EYE-TRACKING FEATURES
The raw eye-tracking data recorded for each task were further processed to extract event-based characteristics: fixations, saccades, and dozens of derived statistical characteristics. The fixations were detected using the I2MC algorithm (Hessels et al., 2017), as it was specifically designed to be noise-robust for measurements in children. The minimum fixation duration was set to 40 ms. The derived characteristics provide additional information about how participants interact with the text. These characteristics are divided into whole-task and region-of-interest (ROI) characteristics. While the whole-task characteristics describe the semantics at the global level of the whole screen, the ROI ones characterize the semantics at the local level of a small rectangular area.
Feature-based characteristics for each task:
Syllables
First fixation duration, average fixation duration, number of fixations, number of fixations and saccades without the incoming/outgoing saccade, number of revisits (incoming saccades hitting this ROI from outside).
MeaningfulText, PseudoText
Whole-task (features extracted from the whole trial): number of regressions, ratio of progressive to regressive saccades, average saccadic amplitude, total reading duration, average fixation duration, number of fixations.
ROI (features extracted for separate regions of interest, i.e. lines and words): average fixation duration, number of fixations, number of revisits (incoming saccades hitting this ROI from outside), landing position of the first fixation.
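Two of the whole-task characteristics listed above can be sketched directly from detected fixations. This is my own illustration, not the ETDD70 pipeline; fixations are assumed to be (start_ms, end_ms, x, y) tuples as produced by some fixation detector.

```python
def whole_task_features(fixations):
    """Number of fixations and average fixation duration (ms).

    Each fixation is assumed to be a (start_ms, end_ms, x, y) tuple.
    """
    durations = [end - start for start, end, _x, _y in fixations]
    if not durations:
        return {"number_of_fixations": 0, "average_fixation_duration_ms": 0.0}
    return {
        "number_of_fixations": len(durations),
        "average_fixation_duration_ms": sum(durations) / len(durations),
    }
```

The remaining whole-task features (regressions, saccadic amplitude, reading duration) would additionally require the saccade events and screen geometry.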
AI CLASSIFICATION APPROACH
The AI-based methods used for the classification of dyslexia are available at https://gitlab.fi.muni.cz/xsedmid/dyslex
CITE THIS DATASET
Dostalova, N., Svaricek, R., Sedmidubsky, J., Culemann, W., Sasinka, C., Zezula, P., & Cenek, J. (2024). ETDD70: Eye-tracking Dyslexia Dataset [Data set]. Zenodo. https://doi.org/10.5281/zenodo.13332134
CITE THE ASSOCIATED PAPER
Sedmidubsky, J., Dostalova, N., Svaricek, R., & Culemann, W. (2024). ETDD70: Eye-tracking dataset for classification of dyslexia using AI-based methods. In Proceedings of the 17th International Conference on Similarity Search and Applications (SISAP) (pp. 1-14). Springer.
Open Database License (ODbL) v1.0https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
This dataset contains eye tracking data collected from medical practitioners and students with different expertise levels, to understand their electrocardiogram interpretation behavior. These medical practitioners encounter electrocardiograms during their daily medical practice. The end goal of collecting this dataset was to analyze it to uncover insights about key best practices, as well as pitfalls, in the ECG interpretation process. The data consist of quantitative eye tracking measurements of different eye gaze features for 63 participants. These participants belong to six expertise categories, namely: medical students, nurses, technicians, residents, fellows and finally consultants. Each of these participants contributed to the data collection by interpreting ten different electrocardiograms. The eye tracking data include the eye fixation count and duration, the eye gaze duration, and finally the fixation revisitations. They were collected at 60 frames per second using a Tobii Pro X2-60. Each participant was allowed a timeframe of 30 seconds to interpret each of the ten ECGs. The collected data were processed taking into consideration two defined area-of-interest (AOI) distributions. This enabled the extraction of metrics specifically tailored to the 12-lead electrocardiogram.
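The revisitation metric mentioned above can be illustrated with a small sketch (my own, not the study's processing code): given a per-fixation sequence of AOI labels, a revisit is an entry into an AOI after the first entry.

```python
def count_revisits(aoi_labels, target):
    """Count re-entries into `target` AOI from a per-fixation label sequence.

    An entry is a fixation on `target` whose predecessor was elsewhere;
    the first entry is not a revisit.
    """
    entries = 0
    prev = None
    for label in aoi_labels:
        if label == target and prev != target:
            entries += 1
        prev = label
    return max(entries - 1, 0)
```

For the 12-lead ECG case, `aoi_labels` would name the lead (or other AOI) each fixation falls in, under one of the two AOI distributions.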