3 datasets found
  1. Perception of vowel sounds in children with autism spectrum disorders and typically developing children (MEG/ERF study)

    • openneuro.org
    Updated Jun 11, 2024
    Kirill A. Fadeev; Ilacai V. Romero Reyes; Dzerassa D. Goiaeva; Tatyana S. Obukhova; Tatyana M. Ovsiannikova; Andrey O. Prokofyev; Tatyana A. Stroganova; Elena V. Orekhova (2024). Perception of vowel sounds in children with autism spectrum disorders and typically developing children (MEG/ERF study) [Dataset]. http://doi.org/10.18112/openneuro.ds005234.v2.0.0
    Dataset updated
    Jun 11, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Kirill A. Fadeev; Ilacai V. Romero Reyes; Dzerassa D. Goiaeva; Tatyana S. Obukhova; Tatyana M. Ovsiannikova; Andrey O. Prokofyev; Tatyana A. Stroganova; Elena V. Orekhova
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Project name:

    Perception of vowel features (formant structure, pitch) in children with autism spectrum disorders and typically developing children (MEG/ERF study).

    Year(s) that the project ran:

    2017-2024.

    Brief overview:

    This dataset was obtained by the team at the Center for Neurocognitive Research (MEG Center) of Moscow State University of Psychology and Education as part of a study of the perception of vowels and their properties in children (Fadeev et al., 2024, in press). It includes MEG recordings from 35 children with autism spectrum disorders and 39 typically developing children.

    Experimental procedure:

    The participants were instructed to watch a silent video (movie/cartoon) of their choice and to ignore the auditory stimuli. The stimuli were delivered binaurally via plastic ear tubes inserted into the ear canals. The tubes were fixed to the MEG helmet to minimize possible noise from contact with the subject's clothing. The intensity was set at 90 dB SPL. The experiment consisted of three blocks of 360 trials each, with each block lasting around 9 minutes and short breaks between blocks.

    Stimuli:

    The experimental paradigm used in this study is identical to that described in Orekhova et al. (2023). We used four types of synthetic vowel-like stimuli previously employed by Uppenkamp et al. (2006, 2011) and downloaded from http://medi.uni-oldenburg.de/members/stefan/phonology_1/. A copy of the sound files can also be found in the stimuli directory on this dataset page. Five strong vowels were used:
    - /a/ (caw, dawn)
    - /e/ (ate, bait)
    - /i/ (beat, peel)
    - /o/ (coat, wrote)
    - /u/ (boot, pool)

    A total of 270 stimuli from each of the four classes were presented, with three stimulus variants equally represented within each class (N = 90). All stimuli were presented in random order. Each stimulus lasted 812 ms, including rise/fall times of 10 ms each. The interstimulus intervals (ISI) were randomly chosen from a range of 500 to 800 ms.

    Short names of stimuli (used in code, filenames, and directory names):
    - dv: periodic vowels
    - rv: non-periodic vowels
    - mp: periodic non-vowels
    - mr: non-periodic non-vowels
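
    Not part of the original materials: the trial structure described above (270 stimuli per class presented in random order, 812 ms stimulus duration, ISIs drawn from 500-800 ms) can be sketched in a few lines of Python. The constants mirror the description; the script itself is illustrative only and is not the original presentation code.

    import random

    # Stimulus classes as named above; constants follow the dataset description.
    CLASSES = ["dv", "rv", "mp", "mr"]   # periodic/non-periodic vowels and non-vowels
    N_PER_CLASS = 270                    # 3 variants x 90 presentations per class
    STIM_DUR_MS = 812                    # stimulus duration, incl. 10 ms rise/fall
    ISI_RANGE_MS = (500, 800)            # interstimulus interval range

    trials = [c for c in CLASSES for _ in range(N_PER_CLASS)]
    random.shuffle(trials)               # all stimuli presented in random order

    onset_ms = 0.0
    schedule = []                        # list of (onset in ms, stimulus class)
    for stim in trials:
        schedule.append((round(onset_ms), stim))
        onset_ms += STIM_DUR_MS + random.uniform(*ISI_RANGE_MS)

    print(len(schedule), "trials, total duration about",
          round(onset_ms / 60000), "minutes")

    With these numbers the sequence comes to 1080 trials and roughly 26 minutes of stimulation, consistent with three blocks of about 9 minutes each.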

    Additional data acquired:

    The following tests were also administered to the study participants:
    - Pure tone air conduction audiometry
    - Russian Child Language Assessment Battery (Lopukhina et al., 2019)
    - Words-in-Noise (WiN) test (Fadeev et al., 2023)
    - Social Responsiveness Scale for children (Constantino, 2013)
    - Social Communication Questionnaire (SCQ-Lifetime) (Berument et al., 1999)
    - KABC-II, Mental Processing Index (MPI) as an IQ equivalent (Kaufman & Kaufman, 2004)

    Dataset content:

    MEG data

    sub-<label>/meg/...meg.fif -- 3 runs per subject (in some cases the number of runs may differ, depending on the subject). MEG data were recorded using an Elekta VectorView Neuromag 306-channel MEG detector array (Helsinki, Finland) with built-in 0.1-330 Hz filters and a 1000 Hz sampling frequency.
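
    For orientation only, one such run could be read with MNE-Python (the library the project code is based on). The file name and the trigger-channel name below are assumptions, not values taken from the dataset; substitute the actual file from sub-<label>/meg/.

    import mne

    # Hypothetical file name; replace with a real run from sub-<label>/meg/.
    raw = mne.io.read_raw_fif("sub-XXXX_task-vowels_run-01_meg.fif", preload=True)
    print(raw.info["sfreq"])          # expected 1000 Hz sampling frequency
    print(len(raw.info["ch_names"]))  # 306 MEG channels plus auxiliary channels

    # Stimulus onsets are usually marked on a trigger channel; "STI101" is the
    # conventional name on Elekta/Neuromag systems but may differ here.
    events = mne.find_events(raw, stim_channel="STI101", shortest_event=1)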

    MRI data

    sub-<label>/anat/ -- T1-weighted MRI images for the subject.

    Freesurfer

    derivatives/freesurfer/ -- outputs of running the FreeSurfer recon-all pipeline on the MRI data with no additional command-line options (only defaults were used):

    $ recon-all -i sub-Z201_T1w.nii.gz -s Z201 -all

    After the recon-all call, further FreeSurfer-related calls were made via the MNE command-line API:

    $ mne make_scalp_surfaces -s Z201 --force
    $ mne watershed_bem -s Z201

    Code

    The project code has the following structure (directory names indicate their contents):

    code
    ├── analysis
    │     ├── 0-preprocessing_for_clustering
    │     │     ├── ...
    │     ├── 1-tfce_clustering
    │     │     ├── ...
    │     ├── 2-clustering_results_analytics
    │     │     ├── ...
    │     └── modules
    │       ├── clustering.py
    │       └── data_ops.py
    ├── envs
    │     ├── envs_for_between_groups_clustering_in_auditory_cortex_with_morphological_sign_flip.json
    │     ├── envs_for_interaction_clustering_in_auditory_cortex_with_morphological_sign_flip.json
    │     └── envs_for_within_groups_clustering_in_auditory_cortex_with_morphological_sign_flip.json
    ├── preprocessing
    │     ├── 00-maxwell_filtering.py
    │     ├── ...
    │     └── 10-make_stc.py
    ├── README
    ├── requirements_for_ubuntu_2x.txt
    └── requirements_for_windows_1x.txt
    

    Please read the code/README file for more detailed instructions.

    Requirements for Code Usage (MNE-Python & Additional Python Libraries)

    1. For installation, we recommend the Anaconda distribution. Find the installation guide here: Anaconda Installation Guide.

    2. After you have a working version of Python 3, create a new virtual environment via this command in the Ubuntu terminal or Anaconda Prompt for Windows OS:

    conda create --name=mne1.4 --channel=conda-forge python=3.10 mne=1.4.1 numpy=1.23.1 spyder pyvista pyvistaqt pingouin rpy2 mne-bids numpy openpyxl autoreject

    Alternatively, use the requirements_xxxxxx.txt files located in the code directory:
    - For Ubuntu OS (version 20 and above), please use requirements_for_ubuntu_2x.txt.
    - For Windows OS (version 10 and above), please use requirements_for_windows_1x.txt.

    To do this, you can run the following command in the Ubuntu terminal or Anaconda Prompt for Windows OS:

    conda create --name mne1.4 --file requirements_filename

    3. Activate the created environment: conda activate mne1.4 (a quick import check is sketched after this list).

    4. Start your favorite IDE in this environment. For example, to start the Spyder IDE, run spyder after activating the environment.

    5. To start processing data, go to the directory with the downloaded data and open the Python scripts of interest. (If you are using the Spyder IDE, use the "Files" pane located in the upper right corner of the workspace.)
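
    As a quick sanity check (not part of the original instructions), you can verify inside the activated environment that the intended MNE version is importable:

    import mne

    print(mne.__version__)  # expected to be 1.4.1 for the environment created above
    mne.sys_info()          # prints Python, NumPy and backend versions, useful for troubleshooting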

    Clustering results directories

    All output files from the scripts of the code directory are saved in the derivatives directory.

    Clustering version mnemonics (phrases used in Python script names and directory names):
    - v13: TFCE clustering based on the groups ⨯ conditions difference response
    - v14: TFCE clustering based on the between-conditions difference response
    - v15: TFCE clustering based on the between-groups evoked response
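
    The analysis scripts implement these contrasts. As a rough sketch only, a between-groups TFCE permutation test can be set up with MNE-Python along the following lines; the source-space file path, the data arrays and the TFCE start/step values are placeholders rather than the project's actual settings, and only the 5000 permutations and the overall procedure follow the description above.

    import numpy as np
    import mne
    from mne.stats import spatio_temporal_cluster_test

    # Source-space adjacency; in the real analysis this would come from the fsaverage
    # source space restricted to the auditory-cortex labels. The path is hypothetical.
    src = mne.read_source_spaces("fsaverage-ico-5-src.fif")
    adjacency = mne.spatial_src_adjacency(src)
    n_vertices = adjacency.shape[0]

    # Placeholder group data shaped (n_subjects, n_times, n_vertices).
    rng = np.random.default_rng(0)
    X_td = rng.standard_normal((39, 400, n_vertices))    # stand-in for TD data
    X_asd = rng.standard_normal((35, 400, n_vertices))   # stand-in for ASD data

    tfce = dict(start=0, step=0.2)   # TFCE in place of a fixed cluster-forming threshold
    T_obs, clusters, cluster_pv, H0 = spatio_temporal_cluster_test(
        [X_td, X_asd], threshold=tfce, n_permutations=5000, adjacency=adjacency)
    print((cluster_pv < 0.05).sum(), "clusters below the 0.05 threshold")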

    The project derivatives have the following structure (directory names indicate their contents):

    derivatives/preprocessing/
    ├── fsaverage_labels_of_analytics
    │     ├── auditory_cortex_region-lh.label
    │     └── auditory_cortex_region-rh.label
    └── fsaverage_stcs_after_morph_flip_in_labels_of_analytics
      ├── subjects_info_for_morphological_sign_flipped_data_1000Hz
      └── subjects_stc_for_morphological_sign_flipped_data_1000Hz
    
    derivatives/analysis/
    ├── 20240607_74subj_v13_500Hz_5000_perm_DV-MR_TD_vs_ASD_0-800_msec_tfce_interaction_in_auditory_cortex_morph_flip_with_5e-02_clusters_p_thresh
    ├── 20240608_74subj_v13_500Hz_5000_perm_MP-MR_TD_vs_ASD_0-800_msec_tfce_interaction_in_auditory_cortex_morph_flip_with_5e-02_clusters_p_thresh
    ├── 20240608_74subj_v13_500Hz_5000_perm_RV-MR_TD_vs_ASD_0-800_msec_tfce_interaction_in_auditory_cortex_morph_flip_with_5e-02_clusters_p_thresh
    ├── 20240608_74subj_v14_500Hz_5000perm_ASD_DV-MR_0-800_msec_tfce_1samp_within_groups_in_auditory_cortex_morph_flip_5e-02_clusters_p_thresh
    ├── 20240608_74subj_v14_500Hz_5000perm_ASD_MP-MR_0-800_msec_tfce_1samp_within_groups_in_auditory_cortex_morph_flip_5e-02_clusters_p_thresh
    ├── 20240608_74subj_v14_500Hz_5000perm_ASD_RV-MR_0-800_msec_tfce_1samp_within_groups_in_auditory_cortex_morph_flip_5e-02_clusters_p_thresh
    ├── 20240608_74subj_v14_500Hz_5000perm_TD_DV-MR_0-800_msec_tfce_1samp_within_groups_in_auditory_cortex_morph_flip_5e-02_clusters_p_thresh
    ├── 20240608_74subj_v14_500Hz_5000perm_TD_MP-MR_0-800_msec_tfce_1samp_within_groups_in_auditory_cortex_morph_flip_5e-02_clusters_p_thresh
    ├── 20240608_74subj_v14_500Hz_5000perm_TD_RV-MR_0-800_msec_tfce_1samp_within_groups_in_auditory_cortex_morph_flip_5e-02_clusters_p_thresh
    ├── 20240608_74subj_v15_500Hz_5000_perm_DV_TD_vs_ASD_in_0-800_msec_tfce_between_groups_in_auditory_cortex_morph_flip_with_5e-02_cluster_p_threshold
    ├── 20240608_74subj_v15_500Hz_5000_perm_MP_TD_vs_ASD_in_0-800_msec_tfce_between_groups_in_auditory_cortex_morph_flip_with_5e-02_cluster_p_threshold
    └── 20240608_74subj_v15_500Hz_5000_perm_RV_TD_vs_ASD_in_0-800_msec_tfce_between_groups_in_auditory_cortex_morph_flip_with_5e-02_cluster_p_threshold
    
    

    Data user agreement:

    The dataset is distributed under the CC BY license.

    Acknowledgements:

    We sincerely thank all of the volunteers who participated in this study.

    Funding:

    The study was funded within the framework of the state assignment of the Ministry of Education of the Russian Federation (N 073-00037-24-01).

    References:

    1. Gutschalk, A., & Uppenkamp, S. (2011). Sustained responses for pitch and vowels map to similar sites in human auditory cortex. Neuroimage, 56(3), 1578-1587. doi:10.1016/j.neuroimage.2011.02.026

    2. Orekhova, E. V., Fadeev, K. A., Goiaeva, D. E., Obukhova, T. S., Ovsiannikova, T. M., Prokofyev, A. O., & Stroganova, T. A. (2023). Different Hemispheric Lateralization for Periodicity and Formant Structure of Vowels in the Auditory Cortex and Its Changes between Childhood and Adulthood. Cortex. doi:10.1016/j.cortex.2023.10.020

    3. Uppenkamp, S., Johnsrude, I. S., Norris, D., Marslen-Wilson, W., & Patterson, R. D. (2006). Locating the initial stages of speech-sound processing in human temporal cortex. Neuroimage, 31(3), 1284-1296. doi:10.1016/j.neuroimage.2006.01.004

    4. Fadeev, K. A., Goyaeva, D. E., Obukhova, T. S.,

  2. DataSheet1_Generative artificial intelligence model for simulating structural brain changes in schizophrenia

    • frontiersin.figshare.com
    pdf
    Updated Oct 4, 2024
    Hiroyuki Yamaguchi; Genichi Sugihara; Masaaki Shimizu; Yuichi Yamashita (2024). DataSheet1_Generative artificial intelligence model for simulating structural brain changes in schizophrenia.pdf [Dataset]. http://doi.org/10.3389/fpsyt.2024.1437075.s001
    Dataset updated
    Oct 4, 2024
    Dataset provided by
    Frontiers
    Authors
    Hiroyuki Yamaguchi; Genichi Sugihara; Masaaki Shimizu; Yuichi Yamashita
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Background: Recent advancements in generative artificial intelligence (AI) for image generation have presented significant opportunities for medical imaging, offering a promising way to generate realistic virtual medical images while ensuring patient privacy. The generation of a large number of virtual medical images through AI has the potential to augment training datasets for discriminative AI models, particularly in fields with limited data availability, such as neuroimaging. Current studies on generative AI in neuroimaging have mainly focused on disease discrimination; however, its potential for simulating complex phenomena in psychiatric disorders remains unknown. In this study, as examples of a simulation, we aimed to present a novel generative AI model that transforms magnetic resonance imaging (MRI) images of healthy individuals into images that resemble those of patients with schizophrenia (SZ) and explore its application.

    Methods: We used anonymized public datasets from the Center for Biomedical Research Excellence (SZ, 71 patients; healthy subjects [HSs], 71 patients) and the Autism Brain Imaging Data Exchange (autism spectrum disorder [ASD], 79 subjects; HSs, 105 subjects). We developed a model to transform MRI images of HSs into MRI images of SZ using cycle generative adversarial networks. The efficacy of the transformation was evaluated using voxel-based morphometry to assess the differences in brain region volumes and the accuracy of age prediction pre- and post-transformation. In addition, the model was examined for its applicability in simulating disease comorbidities and disease progression.

    Results: The model successfully transformed HS images into SZ images and identified brain volume changes consistent with existing case-control studies. We also applied this model to ASD MRI images, where simulations comparing SZ with and without ASD backgrounds highlighted the differences in brain structures due to comorbidities. Furthermore, simulating disease progression while preserving individual characteristics showcased the model's ability to reflect realistic disease trajectories.

    Discussion: The results suggest that our generative AI model can capture subtle changes in brain structures associated with SZ, providing a novel tool for visualizing brain changes in different diseases. The potential of this model extends beyond clinical diagnosis to advances in the simulation of disease mechanisms, which may ultimately contribute to the refinement of therapeutic strategies.

  3. Transdiagnostic Connectome Project

    • openneuro.org
    Updated Jun 20, 2024
    Sidhant Chopra; Carrisa V. Cocuzza; Connor Lawhead; Jocelyn A. Ricard; Loïc Labache; Lauren Patrick; Poornima Kumar; Arielle Rubenstein; Julia Moses; Lia Chen; Crystal Blankenbaker; Bryce Gillis; Laura T. Germine; Ilan Harpaz-Rote; BT Thomas Yeo; Justin T. Baker; Avram J. Holmes (2024). Transdiagnostic Connectome Project [Dataset]. http://doi.org/10.18112/openneuro.ds005237.v1.0.1
    Dataset updated
    Jun 20, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Sidhant Chopra; Carrisa V. Cocuzza; Connor Lawhead; Jocelyn A. Ricard; Loïc Labache; Lauren Patrick; Poornima Kumar; Arielle Rubenstein; Julia Moses; Lia Chen; Crystal Blankenbaker; Bryce Gillis; Laura T. Germine; Ilan Harpaz-Rote; BT Thomas Yeo; Justin T. Baker; Avram J. Holmes
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    The Transdiagnostic Connectome Project

    A richly phenotyped transdiagnostic dataset with behavioral and Magnetic Resonance Imaging (MRI) data from 241 individuals aged 18 to 70, comprising 148 individuals meeting diagnostic criteria for a broad range of psychiatric illnesses and a healthy comparison group of 93 individuals.

    These data include high-resolution anatomical scans, 6 resting-state functional MRI runs, and 3 task-based functional MRI runs (2 Stroop, 1 Faces/Shapes). Participants completed over 50 psychological and cognitive questionnaires, as well as a semi-structured clinical interview.

    Data was collected at the Brain Imaging Center, Yale University, New Haven, CT and McLean Hospital, Belmont, MA. This dataset will allow investigation into brain function and transdiagnostic psychopathology in a community sample.

    Inclusion Criteria

    Participants in the study met the following inclusion criteria:

    • Aged 18 to 64 years and spoke English
    • No metal contraindications, no history of concussion or prior neurological problems, no color-blindness
    • Prior history of affective or psychotic illness or no psychiatric history

    Exclusion criteria

    Participants meeting any of the criteria listed below were excluded from the study:

    • Neurological disorders
    • Pervasive developmental disorders (e.g., autism spectrum disorder)
    • Any medical condition that increases risk for MRI (e.g., pacemaker, dental braces)
    • MRI contraindications (e.g., claustrophobia, pregnancy)

    Consent

    Institutional Review Board approval and consent were obtained. To characterise the sample, we collected data on race/ethnicity, income, use of psychotropic medication, and family history of medical or psychiatric conditions.

    Clinical Measures

    Completed by Participants:

    • Health and demographics questionnaire
    • Alcohol Tobacco Caffeine Use Questionnaire (ATC)
    • Broad Autism Phenotype Questionnaire (BAPQ-2)
    • Barratt Impulsiveness Scale (BIS)
    • Behavioral Inhibition/Activation Scale (BISBAS)
    • Childhood Trauma Questionnaire (CTQ)
    • Domain Specific Risk Taking (DOSPERT)
    • Fagerstrom Test for Nicotine Dependence (FTND)
    • NEO Five Factor Inventory (NEO-FFI)
    • Quick Inventory of Depressive Symptomatology (QIDS)
    • Multidimensional Scale for Perceived Social Support (MSPSS)
    • State-Trait Anxiety Inventory (STAI)
    • Temperament Character Inventory (TCI)
    • Anxiety Sensitivity Index (ASI)
    • Depression Anxiety Stress Scale (DASS)
    • Profile of Mood States (POMS)
    • Perceived Stress Scale (PSS)
    • Shipley Institute of Living Scale (Shipley)
    • Temporal Experience of Pleasure Scale (TEPS)
    • Cognitive Emotion Regulation Questionnaire (CERQ)
    • Cognitive Failures Questionnaire (CFQ)
    • Cognitive Reflections Test (CRT)
    • Experiences in Close Relationships Inventory (ECR)
    • Positive Urgency Measure (PUM)
    • Ruminative Responses Scale (RRS)
    • Retrospective Self-Report of Inhibition (RSRI)
    • Snaith-Hamilton Pleasure Scale (SHAPS)
    • Test My Brain (TMB)
    • Stroop Task (during fMRI)
    • Hammer Task (during fMRI)

    Completed by Clinicians:

    • Structured Clinical Interview for DSM-5 Disorders (SCID-5)
    • Clinical Global Impression (CGI)
    • Anxiety Symptom Chronicity (ASC)
    • Columbia Suicide Severity Rating Scale (CSSR-S)
    • Range of Impaired Functioning Tool (LIFE-RIFT)
    • Montgomery-Asberg Depression Rating Scale (MADRS)
    • Multnomah Community Ability Scale (MCAS)
    • Positive and Negative Syndrome Scale (PANSS)
    • Panic Disorder Severity Scale (PDSS)
    • Young Mania Rating Scale (YMRS)

    Clinical Measures Data

    Relevant clinical measures can be found in the phenotype folder, with each measure and its items described in the relevant _definition .csv file. The 'qc' columns indicate quality-control checks done for each measure (i.e., the number of unanswered items left by a participant). '999' values indicate missing or skipped data.
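
    As an illustration only (the file name below is hypothetical; actual measures and column names are listed in the corresponding _definition .csv files), such a measure could be loaded with pandas while mapping the '999' codes to missing values:

    import pandas as pd

    # Hypothetical file name; substitute an actual measure from the phenotype folder.
    df = pd.read_csv("phenotype/qids.tsv", sep="\t", na_values=[999, "999"])

    # Columns whose names contain 'qc' hold the quality-control counts described above.
    qc_cols = [c for c in df.columns if "qc" in c.lower()]
    print(df[qc_cols].describe())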

    Detailed information and imaging protocols regarding the dataset can be found here: [Add preprint Link]

