CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Perception of vowel features (formant structure, pitch) in children with autism spectrum disorders and typically developing children (MEG/ERF study).
2017-2024.
This dataset was obtained by the team at the Center for Neurocognitive Research (MEG Center) of Moscow State University of Psychology and Education as part of a study of the perception of vowels and their features in children (Fadeev et al., 2024, in press). It includes MEG recordings from 35 children with autism spectrum disorders and 39 typically developing children.
The participants were instructed to watch a silent video (movie/cartoon) of their choice and to ignore the auditory stimuli. The stimuli were delivered binaurally via plastic ear tubes inserted into the ear canals. The tubes were fixed to the MEG helmet to minimize possible noise from contact with the subject’s clothing. The intensity was set at a sound pressure level of 90 dB SPL. The experiment consisted of three blocks of 360 trials, each block lasting around 9 minutes with short breaks between blocks.
The experimental paradigm used in this study is identical to that described in Orekhova et al. (2023). We used four types of synthetic vowel-like stimuli previously employed by Uppenkamp et al. (2006, 2011) and downloaded from http://medi.uni-oldenburg.de/members/stefan/phonology_1/. A copy of the sound files is also available in the stimuli directory of this dataset. Five strong vowels were used:
- /a/ (caw, dawn),
- /e/ (ate, bait),
- /i/ (beat, peel),
- /o/ (coat, wrote) and
- /u/ (boot, pool).
A total of 270 stimuli from each of the four classes were presented, with three stimulus variants equally represented within each class (N = 90). All stimuli were presented in random order. Each stimulus lasted 812 ms, including rise/fall times of 10 ms each. The interstimulus intervals (ISI) were randomly chosen from a range of 500 to 800 ms.
Short names of stimuli (used in code, filenames, and directory names):
- dv: periodic vowels
- rv: non-periodic vowels
- mp: periodic non-vowels
- mr: non-periodic non-vowels
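For orientation only, the following minimal Python sketch shows how the four stimulus classes and the randomized trial sequence described above could be represented; the variable names, the random seed, and the per-block breakdown are illustrative assumptions, not part of the released code.

import numpy as np

# Short names of the four stimulus classes, as used throughout the code,
# filenames, and directory names of this dataset.
CONDITIONS = {
    "dv": "periodic vowels",
    "rv": "non-periodic vowels",
    "mp": "periodic non-vowels",
    "mr": "non-periodic non-vowels",
}

# Illustrative reconstruction of one block's trial list: 90 stimuli per class
# per block (360 trials/block, 270 per class over three blocks), random order,
# 812 ms stimulus duration, ISI drawn from 500-800 ms.
rng = np.random.default_rng(seed=0)
trials = np.repeat(list(CONDITIONS), 90)
rng.shuffle(trials)
isis_ms = rng.integers(500, 801, size=trials.size)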
The following tests were also administered to the study participants:
- Pure tone air conduction audiometry;
- Russian Child Language Assessment Battery (Lopukhina et al., 2019);
- Words-in-Noise (WiN) test (Fadeev et al., 2023);
- Social Responsiveness Scale for children (Constantino, 2013);
- Social Communication Questionnaire (SCQ-Lifetime) (Berument et al., 1999);
- KABC-II, Mental Processing Index (MPI) as an IQ equivalent (Kaufman & Kaufman, 2004).
sub-<label>/meg/...meg.fif
-- 3 runs per subject (in some cases the number of runs differs due to the subjects' individual characteristics). MEG data were recorded using an Elekta VectorView Neuromag 306-channel detector array (Helsinki, Finland) with built-in 0.1-330 Hz filters and a 1000 Hz sampling frequency.
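As a minimal illustration of how these recordings can be loaded, the sketch below uses MNE-Python; the file path is hypothetical, and the cross-talk and fine-calibration files that Maxwell filtering normally uses are omitted. The dataset's own pipeline starts with code/preprocessing/00-maxwell_filtering.py.

import mne

# Hypothetical path to one run of one subject's recording (task/run names
# are illustrative, not taken from the dataset).
raw = mne.io.read_raw_fif("sub-Z201/meg/sub-Z201_task-vowels_run-01_meg.fif",
                          preload=True)

# Signal-space separation (Maxwell filtering) for the Elekta/Neuromag system.
# The released scripts may also use cross-talk and fine-calibration files;
# only the defaults are shown here.
raw_sss = mne.preprocessing.maxwell_filter(raw)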
sub-<label>/anat/
-- T1-weighted MRI images for each subject.
derivatives/freesurfer/
-- outputs of running the FreeSurfer recon-all pipeline on the MRI data with no additional command-line options (defaults only):
$ recon-all -i sub-Z201_T1w.nii.gz -s Z201 -all
After the recon-all call, further FreeSurfer-based commands were run via the MNE command-line interface:
$ mne make_scalp_surfaces -s Z201 --force
$ mne watershed_bem -s Z201
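To reproduce these anatomical steps for every subject, the same commands can be wrapped in a small loop; this is a sketch under the assumption that FreeSurfer and MNE are on the PATH and that SUBJECTS_DIR points at derivatives/freesurfer.

import subprocess

subjects = ["Z201"]  # extend with the remaining subject labels

for subj in subjects:
    # FreeSurfer cortical reconstruction with default options only.
    subprocess.run(["recon-all", "-i", f"sub-{subj}_T1w.nii.gz",
                    "-s", subj, "-all"], check=True)
    # Scalp and BEM surfaces via the MNE command-line tools.
    subprocess.run(["mne", "make_scalp_surfaces", "-s", subj, "--force"],
                   check=True)
    subprocess.run(["mne", "watershed_bem", "-s", subj], check=True)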
The project code has the following structure (directory names describe their contents):
code
├── analysis
│ ├── 0-preprocessing_for_clustering
│ │ ├── ...
│ ├── 1-tfce_clustering
│ │ ├── ...
│ ├── 2-clustering_results_analytics
│ │ ├── ...
│ └── modules
│ ├── clustering.py
│ └── data_ops.py
├── envs
│ ├── envs_for_between_groups_clustering_in_auditory_cortex_with_morphological_sign_flip.json
│ ├── envs_for_interaction_clustering_in_auditory_cortex_with_morphological_sign_flip.json
│ └── envs_for_within_groups_clustering_in_auditory_cortex_with_morphological_sign_flip.json
├── preprocessing
│ ├── 00-maxwell_filtering.py
│ ├── ...
│ └── 10-make_stc.py
├── README
├── requirements_for_ubuntu_2x.txt
└── requirements_for_windows_1x.txt
Please read the code/README file for more detailed instructions.
For installation, we recommend the Anaconda distribution; its installation guide is available on the Anaconda website.
Once you have a working Python 3 installation, create a new virtual environment with this command in the Ubuntu terminal or the Anaconda Prompt on Windows:
conda create --name=mne1.4 --channel=conda-forge python=3.10 mne=1.4.1 numpy=1.23.1 spyder pyvista pyvistaqt pingouin rpy2 mne-bids openpyxl autoreject
Or use the requirements_xxxxxx.txt files located in the code directory:
- For Ubuntu OS (version 20 and above), please use requirements_for_ubuntu_2x.txt.
- For Windows OS (version 10 and above), please use requirements_for_windows_1x.txt.
To do this, you can run the following command in the Ubuntu terminal or Anaconda Prompt for Windows OS:
conda create --name mne1.4 --file requirements_filename
Activate the created environment:
conda activate mne1.4
Start your favorite IDE with this environment. For example, to start Spyder IDE, use this command after activating the environment:
spyder
To start processing data, go to the directory with the downloaded data and open the Python scripts of interest. (If you are using Spyder IDE, use the "Files" pane located in the upper right corner of the workspace.)
All output files from the scripts in the code directory are saved in the derivatives directory.
Clustering version mnemonics (phrases used in Python script names and directory names):
- v13: TFCE clustering based on the groups ⨯ conditions difference response
- v14: TFCE clustering based on the between-conditions difference response
- v15: TFCE clustering based on the between-groups evoked response
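For reference, TFCE-based spatio-temporal cluster statistics of the kind these mnemonics refer to are available in MNE-Python. The sketch below shows the within-group case (v14, a one-sample test on a between-conditions difference); the data array shape, the random placeholder data, and the adjacency choice are assumptions rather than values taken from the released scripts.

import numpy as np
from mne.stats import spatio_temporal_cluster_1samp_test

# X: per-subject condition-difference source estimates (e.g., DV minus MR),
# shaped (n_subjects, n_times, n_vertices); random placeholder data here.
X = np.random.randn(20, 50, 100)

# Passing a dict as the threshold requests TFCE instead of a fixed threshold.
tfce = dict(start=0.0, step=0.2)

t_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_1samp_test(
    X,
    threshold=tfce,
    n_permutations=5000,   # matches the "5000_perm" tag in the result directories
    adjacency=None,        # the real scripts would pass mne.spatial_src_adjacency(src)
    n_jobs=1,
)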
The project derivatives have the following structure (directory names describe their contents):
derivatives/preprocessing/
├── fsaverage_labels_of_analytics
│ ├── auditory_cortex_region-lh.label
│ └── auditory_cortex_region-rh.label
└── fsaverage_stcs_after_morph_flip_in_labels_of_analytics
├── subjects_info_for_morphological_sign_flipped_data_1000Hz
└── subjects_stc_for_morphological_sign_flipped_data_1000Hz
derivatives/analysis/
├── 20240607_74subj_v13_500Hz_5000_perm_DV-MR_TD_vs_ASD_0-800_msec_tfce_interaction_in_auditory_cortex_morph_flip_with_5e-02_clusters_p_thresh
├── 20240608_74subj_v13_500Hz_5000_perm_MP-MR_TD_vs_ASD_0-800_msec_tfce_interaction_in_auditory_cortex_morph_flip_with_5e-02_clusters_p_thresh
├── 20240608_74subj_v13_500Hz_5000_perm_RV-MR_TD_vs_ASD_0-800_msec_tfce_interaction_in_auditory_cortex_morph_flip_with_5e-02_clusters_p_thresh
├── 20240608_74subj_v14_500Hz_5000perm_ASD_DV-MR_0-800_msec_tfce_1samp_within_groups_in_auditory_cortex_morph_flip_5e-02_clusters_p_thresh
├── 20240608_74subj_v14_500Hz_5000perm_ASD_MP-MR_0-800_msec_tfce_1samp_within_groups_in_auditory_cortex_morph_flip_5e-02_clusters_p_thresh
├── 20240608_74subj_v14_500Hz_5000perm_ASD_RV-MR_0-800_msec_tfce_1samp_within_groups_in_auditory_cortex_morph_flip_5e-02_clusters_p_thresh
├── 20240608_74subj_v14_500Hz_5000perm_TD_DV-MR_0-800_msec_tfce_1samp_within_groups_in_auditory_cortex_morph_flip_5e-02_clusters_p_thresh
├── 20240608_74subj_v14_500Hz_5000perm_TD_MP-MR_0-800_msec_tfce_1samp_within_groups_in_auditory_cortex_morph_flip_5e-02_clusters_p_thresh
├── 20240608_74subj_v14_500Hz_5000perm_TD_RV-MR_0-800_msec_tfce_1samp_within_groups_in_auditory_cortex_morph_flip_5e-02_clusters_p_thresh
├── 20240608_74subj_v15_500Hz_5000_perm_DV_TD_vs_ASD_in_0-800_msec_tfce_between_groups_in_auditory_cortex_morph_flip_with_5e-02_cluster_p_threshold
├── 20240608_74subj_v15_500Hz_5000_perm_MP_TD_vs_ASD_in_0-800_msec_tfce_between_groups_in_auditory_cortex_morph_flip_with_5e-02_cluster_p_threshold
└── 20240608_74subj_v15_500Hz_5000_perm_RV_TD_vs_ASD_in_0-800_msec_tfce_between_groups_in_auditory_cortex_morph_flip_with_5e-02_cluster_p_threshold
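As a pointer for working with derivatives/preprocessing, the sketch below shows how an fsaverage-space source estimate could be restricted to the auditory-cortex labels with MNE-Python; the file name of the source estimate is hypothetical, only the two .label files are taken from the listing above.

import mne

base = "derivatives/preprocessing"

# The two auditory-cortex labels shipped in fsaverage_labels_of_analytics.
lh = mne.read_label(f"{base}/fsaverage_labels_of_analytics/"
                    "auditory_cortex_region-lh.label", subject="fsaverage")
rh = mne.read_label(f"{base}/fsaverage_labels_of_analytics/"
                    "auditory_cortex_region-rh.label", subject="fsaverage")

# Hypothetical file name for one subject's morphed, sign-flipped source estimate.
stc = mne.read_source_estimate(
    f"{base}/fsaverage_stcs_after_morph_flip_in_labels_of_analytics/"
    "subjects_stc_for_morphological_sign_flipped_data_1000Hz/sub-Z201_dv")

# Restrict the estimate to the combined left + right auditory-cortex region.
stc_aud = stc.in_label(lh + rh)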
The dataset is distributed under the CC BY license.
We sincerely thank all of the volunteers who participated in this study.
The study was funded within the framework of the state assignment of the Ministry of Education of the Russian Federation (N 073-00037-24-01).
Gutschalk, A., & Uppenkamp, S. (2011). Sustained responses for pitch and vowels map to similar sites in human auditory cortex. Neuroimage, 56(3), 1578-1587. doi:10.1016/j.neuroimage.2011.02.026
Orekhova, E. V., Fadeev, K. A., Goiaeva, D. E., Obukhova, T. S., Ovsiannikova, T. M., Prokofyev, A. O., & Stroganova, T. A. (2023). Different Hemispheric Lateralization for Periodicity and Formant Structure of Vowels in the Auditory Cortex and Its Changes between Childhood and Adulthood. Cortex. doi:10.1016/j.cortex.2023.10.020
Uppenkamp, S., Johnsrude, I. S., Norris, D., Marslen-Wilson, W., & Patterson, R. D. (2006). Locating the initial stages of speech-sound processing in human temporal cortex. Neuroimage, 31(3), 1284-1296. doi:10.1016/j.neuroimage.2006.01.004
Fadeev, K. A., Goyaeva, D. E., Obukhova, T. S.,
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Recent advancements in generative artificial intelligence (AI) for image generation have presented significant opportunities for medical imaging, offering a promising way to generate realistic virtual medical images while ensuring patient privacy. The generation of a large number of virtual medical images through AI has the potential to augment training datasets for discriminative AI models, particularly in fields with limited data availability, such as neuroimaging. Current studies on generative AI in neuroimaging have mainly focused on disease discrimination; however, its potential for simulating complex phenomena in psychiatric disorders remains unknown. In this study, as an example of such a simulation, we aimed to present a novel generative AI model that transforms magnetic resonance imaging (MRI) images of healthy individuals into images that resemble those of patients with schizophrenia (SZ) and explore its application.
Methods: We used anonymized public datasets from the Center for Biomedical Research Excellence (SZ, 71 patients; healthy subjects [HSs], 71 subjects) and the Autism Brain Imaging Data Exchange (autism spectrum disorder [ASD], 79 subjects; HSs, 105 subjects). We developed a model to transform MRI images of HSs into MRI images of SZ using cycle generative adversarial networks. The efficacy of the transformation was evaluated using voxel-based morphometry to assess the differences in brain region volumes and the accuracy of age prediction pre- and post-transformation. In addition, the model was examined for its applicability in simulating disease comorbidities and disease progression.
Results: The model successfully transformed HS images into SZ images and identified brain volume changes consistent with existing case-control studies. We also applied this model to ASD MRI images, where simulations comparing SZ with and without ASD backgrounds highlighted the differences in brain structures due to comorbidities. Furthermore, simulating disease progression while preserving individual characteristics showcased the model's ability to reflect realistic disease trajectories.
Discussion: The results suggest that our generative AI model can capture subtle changes in brain structures associated with SZ, providing a novel tool for visualizing brain changes in different diseases. The potential of this model extends beyond clinical diagnosis to advances in the simulation of disease mechanisms, which may ultimately contribute to the refinement of therapeutic strategies.
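As a rough illustration of the cycle-consistency idea underlying this approach, the following minimal PyTorch sketch pairs toy generators with a toy discriminator; nothing here reflects the authors' actual architecture, data, or training settings.

import torch
from torch import nn

# Toy generators for the two mapping directions (HS -> SZ and SZ -> HS)
# and a toy discriminator for the SZ domain.
def tiny_cnn():
    return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1))

G_hs2sz, G_sz2hs = tiny_cnn(), tiny_cnn()
D_sz = nn.Conv2d(1, 1, 3, padding=1)

hs = torch.randn(4, 1, 64, 64)   # placeholder "healthy subject" image slices
fake_sz = G_hs2sz(hs)            # translate HS -> SZ
rec_hs = G_sz2hs(fake_sz)        # translate back SZ -> HS

# Least-squares adversarial loss plus cycle-consistency (L1) loss.
adv_loss = ((D_sz(fake_sz) - 1.0) ** 2).mean()
cycle_loss = (rec_hs - hs).abs().mean()
generator_loss = adv_loss + 10.0 * cycle_loss   # lambda = 10 is a common choice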
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
A richly phenotyped transdiagnostic dataset with behavioral and Magnetic Resonance Imaging (MRI) data from 241 individuals aged 18 to 70, comprising 148 individuals meeting diagnostic criteria for a broad range of psychiatric illnesses and a healthy comparison group of 93 individuals.
These data include high-resolution anatomical scans and 6 x resting-state, and 3 x task-based (2 x Stroop, 1 x Faces/Shapes) functional MRI runs. Participants completed over 50 psychological and cognitive questionnaires, as well as a semi-structured clinical interview.
Data was collected at the Brain Imaging Center, Yale University, New Haven, CT and McLean Hospital, Belmont, MA. This dataset will allow investigation into brain function and transdiagnostic psychopathology in a community sample.
Participants in the study met the following inclusion criteria:
Participants meeting any of the criteria listed below were excluded from the study:
- Neurological disorders
- Pervasive developmental disorders (e.g., autism spectrum disorder)
- Any medical condition that increases risk for MRI (e.g., pacemaker, dental braces)
- MRI contraindications (e.g., claustrophobia, pregnancy)
Institutional Review Board approval and consent were obtained. To characterise the sample, we collected data on race/ethnicity, income, use of psychotropic medication, and family history of medical or psychiatric conditions.
Relevant clinical measures can be found in the phenotype folder, with each measure and its items described in the corresponding _definition .csv file. The 'qc' columns indicate quality-control checks done on each measure (i.e., the number of unanswered items by a participant). '999' values indicate missing or skipped data.
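To work with the phenotype files described here, a measure can be loaded and the missing-data marker recoded roughly as follows; the file and column names are hypothetical, and only the '999' convention is taken from the description above.

import pandas as pd

# Hypothetical phenotype file; the matching _definition .csv describes its items.
df = pd.read_csv("phenotype/example_measure.csv")

# Recode the '999' marker (missing or skipped responses) as proper missing values.
df = df.replace(999, pd.NA)

# The 'qc' columns count unanswered items per participant; summarize them.
qc_cols = [c for c in df.columns if "qc" in c.lower()]
print(df[qc_cols].describe())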
Detailed information and imaging protocols regarding the dataset can be found here: [Add preprint Link]