9 datasets found
  1. Eye Tracking Autism

    • kaggle.com
    Updated Oct 9, 2023
    + more versions
    Cite
    Mohamadreza Momeni (2023). Eye Tracking Autism [Dataset]. https://www.kaggle.com/datasets/imtkaggleteam/eye-tracking-autism
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 9, 2023
    Dataset provided by
    Kaggle
    Authors
    Mohamadreza Momeni
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Abstract:

    This study publishes an eye-tracking dataset developed for the purpose of autism diagnosis. Eye-tracking methods are used intensively in this context, since abnormalities of eye gaze are widely recognized as a hallmark of autism. The dataset may therefore support useful applications and interesting insights; in particular, machine learning could be applied to develop diagnostic models that help detect autism at an early stage of development.

    Dataset Description:

    The dataset is distributed over 25 CSV-formatted files, each representing the output of one eye-tracking experiment; a single experiment usually included multiple participants. The participant ID is provided in each record in the ‘Participant’ column and can be used to identify the class of the participant (i.e., Typically Developing or ASD). Furthermore, a set of metadata files is included. The main metadata file, Participants.csv, describes the key characteristics of the participants (e.g., gender, age, CARS score). Every participant was also assigned a unique ID.
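    Since all 25 experiment files share the ‘Participant’ column that keys into Participants.csv, a typical first step is to concatenate the experiment outputs and join the metadata to obtain class-labeled records. A minimal pandas sketch (the column names beyond ‘Participant’ and all values are invented stand-ins, not taken from the dataset):

```python
import io
import pandas as pd

# Hypothetical stand-ins for two of the 25 experiment CSVs; the real
# files would be read with pd.read_csv(path) instead.
exp1 = pd.read_csv(io.StringIO("Participant,GazeX,GazeY\nP01,512,300\nP02,498,310\n"))
exp2 = pd.read_csv(io.StringIO("Participant,GazeX,GazeY\nP03,505,295\n"))

# Hypothetical stand-in for Participants.csv (gender, age, CARS, class).
meta = pd.read_csv(io.StringIO(
    "Participant,Gender,Age,CARS,Class\n"
    "P01,M,5,35.0,ASD\n"
    "P02,F,6,15.5,TD\n"
    "P03,M,4,33.5,ASD\n"))

# Concatenate all experiment outputs and attach participant metadata,
# yielding one class-labeled row per gaze record.
gaze = pd.concat([exp1, exp2], ignore_index=True)
labeled = gaze.merge(meta, on="Participant", how="left")
print(labeled[["Participant", "Class"]].to_dict("records"))
```

    The left join keeps every gaze record even if a participant were missing from the metadata file, which makes such gaps easy to spot.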

    Dataset Citation:

    Cilia, F., Carette, R., Elbattah, M., Guérin, J., & Dequen, G. (2022). Eye-Tracking Dataset to Support the Research on Autism Spectrum Disorder. In Proceedings of the IJCAI–ECAI Workshop on Scarce Data in Artificial Intelligence for Healthcare (SDAIH).

    Authors:

    Federica Cilia; Romuald Carette; Mahmoud Elbattah; Jean-Luc Guérin; Gilles Dequen

  2. Visual Enjoyment Workshop Data

    • figshare.unimelb.edu.au
    xlsx
    Updated Jun 4, 2025
    Cite
    ANDREW ANDERSON (2025). Visual Enjoyment Workshop Data [Dataset]. http://doi.org/10.26188/29232611.v1
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jun 4, 2025
    Dataset provided by
    The University of Melbourne
    Authors
    ANDREW ANDERSON
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Participant data (eye condition, age, sex, visual acuity and Impact of Visual Impairment survey responses) for a study involving facilitated group discussions about visual enjoyment (The University of Melbourne Human Research Ethics Committee ID #28687).

  3. Data from: Eye-Tracking Recordings from a Pilot Study of WMT-style MT...

    • live.european-language-grid.eu
    binary format
    Updated Mar 30, 2016
    + more versions
    Cite
    (2016). Eye-Tracking Recordings from a Pilot Study of WMT-style MT Outputs Ranking [Dataset]. https://live.european-language-grid.eu/catalogue/corpus/1123
    Explore at:
    Available download formats: binary format
    Dataset updated
    Mar 30, 2016
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    This package contains the eye-tracker recordings of 8 subjects evaluating English-to-Czech machine translation quality using the WMT-style ranking of sentences.

    We provide the set of sentences evaluated, the exact screens presented to the annotators (including bounding box information for every area of interest and even for individual letters in the text) and finally the raw EyeLink II files with gaze trajectories.
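    Because the package includes bounding boxes for every area of interest, gaze samples from the EyeLink trajectories can be mapped to AoIs by simple rectangle containment. A sketch of that idea (the AoI names and all coordinates below are invented for illustration; the real boxes come from the screen files in the package):

```python
from dataclasses import dataclass

@dataclass
class AoI:
    # Axis-aligned bounding box in screen pixels.
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

# Hypothetical boxes for two ranking candidates on one screen.
aois = [AoI("candidate_1", 100, 200, 600, 240),
        AoI("candidate_2", 100, 260, 600, 300)]

def hit(x: float, y: float):
    # Return the first AoI containing the gaze point, else None.
    for a in aois:
        if a.contains(x, y):
            return a.name
    return None

print([hit(300, 220), hit(300, 280), hit(50, 50)])
```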

    The description of the experiment can be found in the paper:

    Ondřej Bojar, Filip Děchtěrenko, Maria Zelenina. A Pilot Eye-Tracking Study of WMT-Style Ranking Evaluation. Proceedings of the LREC 2016 Workshop “Translation Evaluation – From Fragmented Tools and Data Sets to an Integrated Ecosystem”, Georg Rehm, Aljoscha Burchardt et al. (eds.), pp. 20-26. May 2016, Portorož, Slovenia.

    This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 645452 (QT21). This work was partially financially supported by the Government of the Russian Federation, Grant 074-U01.

    This work has been using language resources developed, stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2010013).

  4. Dry eye disease diagnosis based on criteria recommended in Dry Eye Workshop...

    • repod.icm.edu.pl
    tsv, txt, zip
    Updated Mar 20, 2025
    Cite
    Garaszczuk, Izabela; Jarosz, Karolina (2025). Dry eye disease diagnosis based on criteria recommended in Dry Eye Workshop II report [Dataset]. http://doi.org/10.18150/TT8JVD
    Explore at:
    Available download formats: zip(179291163), tsv(7935), zip(183635496), txt(45537), zip(3488958476), zip(252054095), zip(79950720)
    Dataset updated
    Mar 20, 2025
    Dataset provided by
    RepOD
    Authors
    Garaszczuk, Izabela; Jarosz, Karolina
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    National Science Centre (Poland)
    Description

    The dataset was generated during the experimental phase of a project aimed at developing a new, objective method of diagnosing dry eye disease based on Tear Clearance Rate (https://doi.org/10.18150/NZG1ZS). This particular dataset was used to categorize participants into two subgroups: the control group (CNTRL) and the dry eye disease (DED) group. Subjects were divided based on the protocol recommended in the Dry Eye Workshop II report (reference: Wolffsohn, James S., et al., "TFOS DEWS II diagnostic methodology report", The Ocular Surface 15.3 (2017): 539-574). Each subject was given a code in the following format: XX_GROUP, where XX is the participant's unique number and GROUP indicates their subgroup (DED or CNTRL).

    The dataset contains six files:

    (1) Lipid_Layer.ZIP: Video recordings (MKV format) of the ocular surface visualizing the lipid layer of the tear film using thin-layer white-light interferometry, capturing several blinks and the movement of the lipid layer. These recordings were used to classify lipid layer thickness as thin, normal or thick based on the observed interference pattern.

    (2) Meibography.ZIP: Black-and-white images (BMP format) of the everted lower and upper eyelids (marked as Lower or Upper, respectively), captured using non-contact infrared meibography with the Oculus Keratograph 5M, allowing visualization of the Meibomian glands. Images were scored on the Meiboscale by Heiko Pult.

    (3) NIKBUT.ZIP: Frame-by-frame recordings (BMP format) of a Placido-disk pattern reflected from the ocular surface, used to determine NIKBUT (Noninvasive Keratograph Tear Film Break-Up Time) in seconds. Two measurements per subject were performed (marked as NIKBUT_1 and NIKBUT_2), and the average First and Mean NIKBUT were saved for each subject. The number of frames (BMPs) in each sequence varies with how long the subject kept their eyes open, up to a maximum of 394 frames, which corresponds to a 24-second recording.

    (4) Ocular_Redness.ZIP: Images of the ocular surface (PNG format), used to assess the ocular surface redness score with the Oculus Keratograph 5M.

    (5) Tear_Meniscus_Height.ZIP: Black-and-white images of the ocular surface (PNG format) from which tear meniscus height (in millimeters) was measured with a built-in feature of the Oculus Keratograph 5M.

    (6) Results.csv: A summary of numerical data collected with ocular symptom questionnaires (including the Ocular Surface Disease Index and Dry Eye Questionnaire-5), demographic data, laboratory stats, tear osmolarity, NIKBUT estimation, Meiboscale scores and other results recorded for this dataset.
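    Since subject codes follow the XX_GROUP convention described above, group membership can be recovered directly from a table such as Results.csv. A small pandas sketch (the measurement columns and all values are invented placeholders, not the dataset's actual schema):

```python
import io
import pandas as pd

# Hypothetical rows mimicking a Results.csv-style summary table.
results = pd.read_csv(io.StringIO(
    "Subject,OSDI,NIKBUT_mean\n"
    "01_DED,42.5,4.8\n"
    "02_CNTRL,8.3,16.2\n"
    "03_DED,37.1,5.6\n"))

# Split the XX_GROUP code into the participant number and subgroup label.
results[["ID", "Group"]] = results["Subject"].str.split("_", n=1, expand=True)

# Compare mean tear-film break-up time between the DED and control subgroups.
print(results.groupby("Group")["NIKBUT_mean"].mean().round(1).to_dict())
```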

  5. PASO: Porcine Anterior Segment OCT Dataset

    • dataverse.harvard.edu
    Updated Jul 21, 2025
    Cite
    Jonas Nienhaus; Rebekka Peter; Florian Kapeller; Katharina Dettelbacher; Ryan Sentosa; Eleonora Tagliabue; Hessam Roodaki; Wolfgang Drexler; Thomas Schlegl; Franziska Mathis-Ullrich; Tilman Schmoll; Rainer Leitgeb (2025). PASO: Porcine Anterior Segment OCT Dataset [Dataset]. http://doi.org/10.7910/DVN/HUCGAE
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 21, 2025
    Dataset provided by
    Harvard Dataverse
    Authors
    Jonas Nienhaus; Rebekka Peter; Florian Kapeller; Katharina Dettelbacher; Ryan Sentosa; Eleonora Tagliabue; Hessam Roodaki; Wolfgang Drexler; Thomas Schlegl; Franziska Mathis-Ullrich; Tilman Schmoll; Rainer Leitgeb
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    If you find this dataset useful, please cite the peer-reviewed paper, which has recently been accepted to the 12th OMIA Workshop at MICCAI 2025 (the final citation will follow): J. Nienhaus, R. Peter et al., "PASO: A Multipurpose Porcine Anterior Segment Dataset Featuring Spectral and Reconstructed OCT Volume Scans and Surgical Instrument Segmentation Masks". The Porcine Anterior Segment OCT (PASO) dataset contains 141 raw and reconstructed (averaged) volume scans of ex-vivo porcine eyes, imaged with a microscope-integrated SS-OCT prototype, together with Python code for data loading, reconstruction and adjustable averaging. For a subset of 1020 scans from 19 porcine eye volumes, the dataset further offers annotations for Surgical Instrument Segmentation (SIS).
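    The "adjustable averaging" that the dataset's tooling refers to reduces speckle noise by averaging repeated scans. A minimal NumPy sketch of the idea (array shapes and the averaging window are invented for illustration; the dataset ships its own loading and reconstruction code):

```python
import numpy as np

# Hypothetical stack of repeated B-scans at one position:
# (repeats, depth, width), filled with random values for illustration.
rng = np.random.default_rng(0)
raw = rng.normal(size=(8, 4, 16))

def average_scans(volume: np.ndarray, n: int) -> np.ndarray:
    # Average the first n repeats; a larger n trades motion tolerance
    # for stronger speckle suppression.
    return volume[:n].mean(axis=0)

avg = average_scans(raw, 4)
print(avg.shape)  # one averaged B-scan per position
```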

  6. Data from: Association between systemic activity ındex and dry eye severity...

    • scielo.figshare.com
    xls
    Updated May 30, 2023
    Cite
    Dilay Ozek; Ozlem Evren Kemer; Ahmet Omma (2023). Association between systemic activity ındex and dry eye severity in patients with primary Sjögren syndrome [Dataset]. http://doi.org/10.6084/m9.figshare.7304441.v1
    Explore at:
    Available download formats: xls
    Dataset updated
    May 30, 2023
    Dataset provided by
    SciELO journals
    Authors
    Dilay Ozek; Ozlem Evren Kemer; Ahmet Omma
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ABSTRACT Purpose: The aim of the present study was to compare the severity of ocular and systemic findings among patients with primary Sjögren syndrome.

    Methods: The study followed a prospective controlled design and comprised two groups: the test group included 58 eyes of 58 patients newly diagnosed with primary Sjögren syndrome with poor dry eye test findings, and the control group included 45 right eyes of 45 healthy age- and sex-matched individuals. The ocular surface disease index score, tear osmolarity, Schirmer I test without anesthesia, fluorescein tear breakup time, and cornea-conjunctiva staining with lissamine green (van Bijsterveld scoring) were used to examine tear function in the patients via a complete ophthalmological examination. The results were graded and classified on the basis of the Dry Eye WorkShop report and the results of the corneal and conjunctival staining test, Schirmer’s test, and fluorescein tear breakup time test. Discomfort, severity and frequency of symptoms, visual symptoms, conjunctival injection, eyelid-meibomian gland findings, and corneal-tear signs were interpreted. Disease activity was scored per the EULAR Sjögren’s syndrome disease activity index (ESSDAI) via systemic examination and laboratory evaluations, and the EULAR Sjögren’s syndrome patient-reported index (ESSPRI) was assessed via a survey of patient responses.

    Results: Mean patient age was 48.15 ± 16.34 years in the primary Sjögren syndrome group and 44.06 ± 9.15 years in the control group. Mean fluorescein tear breakup time was 4.51 ± 2.89 s in the primary Sjögren syndrome group and 10.20 ± 2.39 s in the control group. Mean Schirmer I test result was 3.51 ± 3.18 mm/5 min in the primary Sjögren syndrome group and 9.77 ± 2.30 mm/5 min in the control group. Mean ocular surface disease index score was 18.56 ± 16.09 in the primary Sjögren syndrome group and 19.92 ± 7.16 in the control group. Mean osmolarity was 306.48 ± 19.35 in the primary Sjögren syndrome group and 292.54 ± 10.67 in the control group. Mean lissamine green staining score was 2.17 ± 2.76 in the primary Sjögren syndrome group and 0.00 in the control group. Statistically significant differences were found between the primary Sjögren syndrome group and the control group in terms of fluorescein tear breakup time, Schirmer’s test, lissamine green staining, and osmolarity tests (p=0.036, p=0.041, p=0.001, and p=0.001, respectively). The Dry Eye WorkShop score was 2.15 ± 0.98, the EULAR Sjögren’s syndrome disease activity index score was 11.18 ± 4.05, and the EULAR Sjögren’s syndrome patient-reported index score was 5.20 ± 2.63. Associations of the Dry Eye WorkShop scores and osmolarity scores with the EULAR Sjögren’s syndrome disease activity index scores were statistically significant (p=0.001 and p=0.001, respectively).

    Conclusion: The results showed an association between dry eye severity and the systemic activity index in patients with primary Sjögren syndrome.

  7. IMC Leeds 2022 workshop: Bridging the Borders: Fifty Shades of Black (Ink) -...

    • b2find.eudat.eu
    Updated Feb 14, 2024
    Cite
    (2024). IMC Leeds 2022 workshop: Bridging the Borders: Fifty Shades of Black (Ink) - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/9b46a0ca-80e6-5082-80a2-5927fa434df9
    Explore at:
    Dataset updated
    Feb 14, 2024
    Area covered
    Leeds
    Description

    Bridging the Borders: Fifty Shades of Black (Ink) - A Workshop

    Medieval manuscripts are written with different black and brown ink types, which usually vary between scribes. Analysing different scribal hands is a basic tool in the study of medieval texts; such analyses are, however, strongly limited by what the human eye can see. This workshop provides a hands-on introduction to how scholars can benefit from a more profound understanding of inks. Technical developments in the medieval production of such inks were rarely constrained by borders or cultural divisions. Therefore, the content and formal characteristics of ink recipes are mostly independent of the culture that created them and can be used to explore how knowledge and techniques are transmitted, both within the same cultural environments and from one culture to another. This workshop aims to bring participants across another border which is often improperly considered intimidating: the divide between Science and Humanities. Experimentation is crucial in order to truly understand textual recipes from different manuscript cultures, to fully appreciate which ingredients are needed and in which proportions, to assess feasibility, and even to spot errors in the transmission process. Moreover, analytical techniques are needed to identify the materials employed in inks and to see how inks used in manuscripts compare with their recipes. Using such techniques, scientific methods can support scholars in differentiating hands or stages of production within the same manuscript, or in comparing and identifying copies from the same scribe or scriptorium, by discriminating among diverse ink typologies.

    In the first part of this workshop, the tutors will investigate the nature of ink recipes produced during medieval times from China to Europe, by different cultures and written in different languages (Chinese, Arabic, Hebrew, Greek, Latin, German, Italian), to observe similarities and differences and to bring attention to the issues and challenges that those texts pose to the practical replication of their recipes. In the second part, the participants will receive a practical demonstration of ink production and will look at the raw ingredients used. Then, everyone will be invited to test ink samples on a variety of supports (papyrus, parchment and papers) with various writing implements (brush, reed pen, feather). Finally, a practical introduction to reflectography and the hands-on use of the Dino Lite microscope will allow participants to try out their own ink detection by analysing known and unknown ink samples with the supplied equipment. Participants are invited to bring examples from their own manuscripts.

    The workshop is organised by the Cluster of Excellence ‘Understanding Written Artefacts’, which follows a comparative approach to studying how the production of written artefacts has shaped human societies and cultures, and how these in turn have adapted written artefacts to their needs. The research for this workshop was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2176 ‘Understanding Written Artefacts: Material, Interaction and Transmission in Manuscript Cultures’, project no. 390893796. The research was conducted within the scope of the Centre for the Study of Manuscript Cultures (CSMC) at Universität Hamburg.

  8. Data from: Translation and validation of the Portuguese version of a dry eye...

    • scielo.figshare.com
    jpeg
    Updated Jun 1, 2023
    Cite
    Julia Silvestre de Castro; Iara Borin Selegatto; Rosane Silvestre de Castro; José Paulo Cabral de Vasconcelos; Carlos Eduardo Leite Arieta; Mônica Alves (2023). Translation and validation of the Portuguese version of a dry eye disease symptom questionnaire [Dataset]. http://doi.org/10.6084/m9.figshare.7101956.v1
    Explore at:
    Available download formats: jpeg
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    SciELO journals
    Authors
    Julia Silvestre de Castro; Iara Borin Selegatto; Rosane Silvestre de Castro; José Paulo Cabral de Vasconcelos; Carlos Eduardo Leite Arieta; Mônica Alves
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ABSTRACT Purpose: A symptom questionnaire is an important tool used to quantify and qualify the impact of a disease on a patient's related quality of life and to estimate the prevalence of a certain condition within a population. Ophthalmologists frequently encounter patients with dry eye disease (DED), and therefore, evaluating the symptoms reported by these patients influences diagnosis, therapeutic monitoring, and evaluations of disease progression. The latest consensus on dry eye (Dry Eye Workshop, DEWS), published in 2007, led to the standardization of several questionnaires and a better understanding of the prevalence, severity, and overall effect of DED on the patient's quality of life.

    Methods: In this study, we translated into Portuguese a symptom questionnaire from DEWS that has already been used in several other population-based studies. For subsequent validation, the translated questionnaire was applied by two independent observers to a population of 30 subjects, and the results were compared in a concordance analysis.

    Results: The processes of translating to Portuguese and back-translating the dry eye symptom questionnaire were conducted without difficulty. The high correlation coefficients obtained when comparing the results of the initial application and the re-administration of this questionnaire to a sample of 30 individuals indicated excellent concordance with regard to results, repeatability, and reliability.

    Conclusions: This translated and validated questionnaire can be applied to a larger population to determine the prevalence of DED symptoms in the overall Brazilian population, as well as in distinct regions of the country.

  9. Speaker Gaze and Trust

    • figshare.com
    • search.datacite.org
    txt
    Updated Oct 4, 2016
    Cite
    Helene Kreysa; Luise Kessler; Stefan R. Schweinberger (2016). Speaker Gaze and Trust [Dataset]. http://doi.org/10.6084/m9.figshare.3100120.v5
    Explore at:
    Available download formats: txt
    Dataset updated
    Oct 4, 2016
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Helene Kreysa; Luise Kessler; Stefan R. Schweinberger
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The data and supplementary materials presented here form the basis of our paper "Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements" (H. Kreysa, L. Kessler, & S. R. Schweinberger, 2016, PLoS ONE, 11(9), e0162291. doi:10.1371/journal.pone.0162291). As described in the paper (downloadable here), 35 student participants indicated by button press whether or not they believed a truth-ambiguous statement, uttered by a speaker in one of 36 short video clips. Importantly, the speaker sometimes looked directly into the camera; at other times she averted her gaze.

    1. Data

    We present four datasets as tab-delimited text:

    1.1 Gaze_RESPrts.txt (responses and RTs for the main experiment), with the following variables:
    - SubjectCode (N = 35)
    - Video (.avi)
    - Item (N = 36)
    - RT from response screen in ms
    - Response (yes/no)
    - Orientation (direct gaze / averted right / averted left)
    - debrief (mention of gaze direction in debrief questionnaire)

    1.2 Audioonly_RESPrts.txt (responses and RTs for the control experiment), with the following variables:
    - SubjectCode (N = 37)
    - Audio (.wav)
    - Item (N = 36)
    - RT from response screen in ms
    - Response (yes/no)
    - Orientation_original (direct gaze / averted)

    1.3 Gaze_ratings.txt (post-experimental ratings of the speaker's attributes, main experiment), with the following variables:
    - SubjectCode (N = 35)
    - Attribute (6 levels)
    - Rating-recoded (1: lowest - 6: highest)
    - Response_time
    - Rating_trial (6 per participant)
    - Num.yes (number of yes-responses in the main experiment per participant, out of 36)

    1.4 Gaze_fixations.txt (total fixation time to each AoI during video presentation), with the following variables:
    - SubjectCode (N = 35)
    - Video (.avi)
    - AoI (Area of Interest assigned in SMI BeGaze, see the attached example image "AOIs.bmp": eyes, bottomright, bottomleft, topright, topleft, whitespace)
    - Fixtime (total fixation time per region)

    Eye movements were recorded using an SMI iViewX Hi-Speed 500 tracker, and fixation events were extracted for each participant using SMI BeGaze (v. 3.4.52).

    2. Stimulus examples

    Two videos, one with direct gaze (1_Hund_direct.avi) and one with averted gaze (mirrored to create right-averted gaze: 1_Hund_right.avi).

    3. Further material

    - AOIs.bmp: image showing the assignment of areas of interest on the videos
    - HK_Poster_gaze&trust_PPRU2015.pdf: poster presented at the XIth Workshop of the Person Perception Research Unit, Jena, Germany (Workshop XI, April 9-10, 2015: “Human Communication: From Person Perception to Social Action”)

    Please email me if you require any further information (helene.kreysa@uni-jena.de).
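    Given the variable lists above, the tab-delimited files load directly into pandas; for example, the paper's central contrast (belief rates by speaker gaze orientation) can be computed from Gaze_RESPrts.txt. A sketch with invented rows in the documented column layout:

```python
import io
import pandas as pd

# Hypothetical rows in the documented layout of Gaze_RESPrts.txt;
# the real file would be read with pd.read_csv(path, sep="\t").
rows = ("SubjectCode\tVideo\tItem\tRT\tResponse\tOrientation\tdebrief\n"
        "S01\t1_Hund_direct.avi\t1\t1320\tyes\tdirect gaze\tno\n"
        "S01\t1_Hund_right.avi\t1\t1510\tno\taverted right\tno\n"
        "S02\t1_Hund_direct.avi\t1\t1105\tyes\tdirect gaze\tyes\n")
data = pd.read_csv(io.StringIO(rows), sep="\t")

# Proportion of 'yes' (believed) responses per speaker gaze orientation.
yes_rate = (data["Response"] == "yes").groupby(data["Orientation"]).mean()
print(yes_rate.to_dict())
```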
