Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This archive contains a raw DICOM dataset acquired (with informed consent) using the ReproIn naming convention on a Siemens Skyra 3T MRI scanner. The dataset includes a T1-weighted anatomical image, four functional runs with the “prettymouth” spoken story stimulus, and one functional run with a block-design emotional faces task, as well as auxiliary scans (e.g., scout, soundcheck). The “prettymouth” story stimulus was created by Yeshurun et al. (2017) and is available as part of the Narratives collection; the emotional faces task is similar to Chai et al. (2015). These data are intended for use with the Princeton Handbook for Reproducible Neuroimaging. The handbook provides guidelines for BIDS conversion and execution of BIDS apps (e.g., fMRIPrep, MRIQC). The brain data are contributed by author S.A.N. and are authorized for non-anonymized distribution.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Modality-agnostic files were copied over and the CHANGES file was updated. Data were aggregated using:
python phenotype.py aggregate subject -i segregated_subject -o aggregated_subject
The phenotype.py script comes from the GitHub repository: https://github.com/ericearl/bids-phenotype
A comprehensive clinical, MRI, and MEG collection characterizing healthy research volunteers, collected at the National Institute of Mental Health (NIMH) Intramural Research Program (IRP) in Bethesda, Maryland. The collection comprises medical and mental health assessments, diagnostic and dimensional measures of mental health, cognitive and neuropsychological functioning, structural and functional magnetic resonance imaging (MRI), diffusion tensor imaging (DTI), and a comprehensive magnetoencephalography (MEG) battery.
In addition, blood samples are currently banked for future genetic analysis. All data collected in this protocol are broadly shared in the OpenNeuro repository, in the Brain Imaging Data Structure (BIDS) format, and task paradigms and basic pre-processing scripts are shared on GitHub. This dataset is unique in its depth of characterization of a healthy population in terms of brain health and will contribute to a wide array of secondary investigations of non-clinical and clinical research questions.
This dataset is licensed under the Creative Commons Zero (CC0) v1.0 License.
Inclusion criteria for the study require that participants are adults (18 years of age or older) in good health, with the ability to read, speak, understand, and provide consent in English. All participants provided electronic informed consent for online screening and written informed consent for all other procedures. Exclusion criteria include:
Study participants are recruited through direct mailings, bulletin boards and listservs, outreach exhibits, print advertisements, and electronic media.
All potential volunteers first visit the study website (https://nimhresearchvolunteer.ctss.nih.gov), check a box indicating consent, and complete preliminary self-report screening questionnaires. The study website is HIPAA-compliant and therefore does not collect PII; instead, participants are instructed to contact the study team to provide their identity and contact information. The questionnaires include demographics, clinical history including medications, disability status (WHODAS 2.0), mental health symptoms (modified DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure), substance use survey (DSM-5 Level 2), alcohol use (AUDIT), handedness (Edinburgh Handedness Inventory), and perceived health ratings. At the conclusion of the questionnaires, participants are again prompted to send an email to the study team. Survey results, supplemented by a review of NIH medical records (if available), are reviewed by the study team, who determine whether the participant is likely eligible for the protocol. These participants are then scheduled for an in-person assessment. Follow-up phone screenings are also used to determine whether participants are eligible for in-person screening.
At this visit, participants undergo a comprehensive clinical evaluation to determine final eligibility for inclusion as a healthy research volunteer. The mental health evaluation consists of a psychiatric diagnostic interview (the Structured Clinical Interview for DSM-5 Disorders, SCID-5), along with self-report surveys of mood (Beck Depression Inventory-II, BDI-II) and anxiety (Beck Anxiety Inventory, BAI) symptoms. An intelligence quotient (IQ) estimate is obtained with the Kaufman Brief Intelligence Test, Second Edition (KBIT-2), a brief (20-30 minute) assessment of intellectual functioning administered by a trained examiner. There are three subtests: verbal knowledge, riddles, and matrices.
Medical evaluation includes medical history elicitation and a systematic review of systems. Biological and physiological measures include vital signs (blood pressure, pulse), as well as weight, height, and BMI. Blood and urine samples are taken, and a complete blood count, acute care panel, hepatic panel, thyroid stimulating hormone, viral markers (HCV, HBV, HIV), C-reactive protein, creatine kinase, urine drug screen, and urine pregnancy tests are performed. In addition, blood samples that can be used for future genomic analysis, development of lymphoblastoid cell lines, or other biomarker measures are collected and banked with the NIMH Repository and Genomics Resource (Infinity BiologiX). The Family Interview for Genetic Studies (FIGS) was later added to the assessment in order to provide better pedigree information; the Adverse Childhood Events (ACEs) survey was also added to better characterize potential risk factors for psychopathology. The in-person assessment not only collects information relevant for eligibility determination, but also provides a comprehensive set of standardized clinical measures of volunteer health that can be used for secondary research.
Participants are given the option to consent for a magnetic resonance imaging (MRI) scan, which can serve as a baseline clinical scan to determine normative brain structure, and also as a research scan with the addition of functional sequences (resting state and diffusion tensor imaging). The MR protocol used was initially based on the ADNI-3 basic protocol, but was later modified to include portions of the ABCD protocol in the following manner:
At the time of the MRI scan, volunteers are administered a subset of tasks from the NIH Toolbox Cognition Battery. The four tasks include:
An optional MEG study was added to the protocol approximately one year after the study was initiated; there are therefore relatively fewer MEG recordings than MRI datasets. MEG studies are performed on a 275-channel CTF MEG system (CTF MEG, Coquitlam, BC, Canada). The position of the head was localized at the beginning and end of each recording using three fiducial coils. These coils were placed 1.5 cm above the nasion, and at each ear, 1.5 cm from the tragus on a line between the tragus and the outer canthus of the eye. For 48 participants (as of 2/1/2022), photographs were taken of the three coils and used to mark the points on the T1-weighted structural MRI scan for co-registration. For the remainder of the participants (n=16 as of 2/1/2022), a Brainsight neuronavigation system (Rogue Research, Montréal, Québec, Canada) was used to co-register the MRI and fiducial localizer coils in real time prior to MEG data acquisition.
Online and in-person behavioral and clinical measures, along with the corresponding phenotype file name, sorted first by measurement location and then by file name.
Location | Measure | File Name
---|---|---
Online | Alcohol Use Disorders Identification Test (AUDIT) | audit
Online | Demographics | demographics
Online | DSM-5 Level 2 Substance Use - Adult | drug_use
Online | Edinburgh Handedness Inventory (EHI) | ehi
Online | Health History Form | health_history_questions
Online | Perceived Health Rating - self | health_rating
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This folder contains data from a fictional participant that you can use to test BIDS Manager (https://github.com/Dynamap/BIDS_Manager).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This folder contains data organised in BIDS format to test BIDS Manager-Pipeline (https://github.com/Dynamap/BIDS_Manager/tree/dev).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Material related to the blog post that reports on the parrot LUT.
The blog post was published on the Node: http://thenode.biologists.com/parrot-lut/research/
-Source
The ‘morgenstemning’ LUT was originally described in:
M. Geissbuehler and T. Lasser, "How to display data by color schemes compatible with red-green color perception deficiencies", Optics Express, 2013
The ‘inferno’ LUT was originally created by Stéfan van der Walt and Nathaniel Smith (http://bids.github.io/colormap/).
The ‘pseudocolorMM’ LUT was derived from MetaMorph software (version 7.6).
The ‘royal’ and ‘Fire’ LUTs are available in ImageJ (version 1.49j).
The ‘parrot’ LUT was designed by Joachim Goedhart and first described here: http://thenode.biologists.com/parrot-lut/research/
-Distribution
The colormaps Magma, Inferno, Plasma and Viridis are available under a CC0 "no rights reserved" license (https://creativecommons.org/share-your-work/public-domain/cc0) and are present in FIJI.
The colormaps Morgenstemning & Parrot are free software: you can redistribute them and/or modify them under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
These colormaps are distributed in the hope that they will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset contains the MNE-somato-data in BIDS format.
The conversion can be reproduced through the Python script stored in the /code directory of this dataset. See the README in that directory.
The /derivatives directory contains the outputs of running the FreeSurfer pipeline recon-all on the MRI data with no additional command-line options (only defaults were used):
$ recon-all -i sub-01_T1w.nii.gz -s 01 -all
After the recon-all call, there were further FreeSurfer calls from the MNE API:
$ mne make_scalp_surfaces -s 01 --force
$ mne watershed_bem -s 01
The derivatives also contain the forward model *-fwd.fif, which was produced using the source space definition, a *-trans.fif file, and the boundary element model (i.e., the conductor model) that lives in freesurfer/subjects/01/bem/*-bem-sol.fif.
The *-trans.fif file is not saved, but can be recovered from the anatomical landmarks in the sub-01/anat/T1w.json file and MNE-BIDS' get_head_mri_trans function.
See: https://github.com/mne-tools/mne-bids for more information.
Note that the FreeSurfer pipeline recon-all was re-run for the sake of converting the somato data to BIDS format. This needed to be done to change the "somato" subject name to the BIDS subject label "01". Note that this is NOT "sub-01", because in BIDS the "sub-" is just a prefix, whereas "01" is the subject label.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This archive contains sample output files for the sample data accompanying the Princeton Handbook for Reproducible Neuroimaging. Outputs include the NIfTI images converted using HeuDiConv (v0.5.dev1) and organized according to the BIDS standard, quality control evaluation using MRIQC (v0.10.4), data preprocessed using fMRIPrep (v1.4.1rc1), and other auxiliary files. All outputs were created according to the procedures outlined in the handbook, and are intended to serve as a didactic reference for use with the handbook. The sample data from which the outputs are derived were acquired (with informed consent) using the ReproIn naming convention on a Siemens Skyra 3T MRI scanner. The sample data include a T1-weighted anatomical image, four functional runs with the “prettymouth” spoken story stimulus, and one functional run with a block-design emotional faces task, as well as auxiliary scans (e.g., scout, soundcheck). The “prettymouth” story stimulus was created by Yeshurun et al. (2017) and is available as part of the Narratives collection; the emotional faces task is similar to Chai et al. (2015). The brain data are contributed by author S.A.N. and are authorized for non-anonymized distribution.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Dataset description: This dataset is part of a larger intracranial EEG (iEEG) dataset called RESPect (Registry for Epilepsy Surgery Patients), recorded at the University Medical Center Utrecht, the Netherlands. It consists of 13 patients with long-term recordings (5 patients recorded with electrocorticography and 8 patients recorded with stereo-electroencephalography). For a detailed description, see Jelsma S.B. et al. (2024), Structural and effective brain connectivity in focal epilepsy.
These data are organized according to the Brain Imaging Data Structure specification: a community-driven specification for organizing neurophysiology data along with its metadata. For more information on this data specification, see https://bids-specification.readthedocs.io/en/stable/
Each patient has their own folder (e.g., sub-STREEF01), which contains the iEEG recordings of that patient, as well as the metadata to understand the raw data and event timing.
In long-term recordings, data that are recorded within one monitoring period are logically grouped in the same BIDS session and stored across runs indicating the day and time point of recording in the monitoring period. We use the optional run key-value pair to specify the day and the start time of the recording (e.g., run-021315: day 2 after implantation, which is day 1 of the monitoring period, at 13:15). The task key-value pair in long-term iEEG recordings describes the patient's state during the recording of this file. A specific task called “SPESclin” is defined when the clinical SPES protocol has been performed.
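As a small illustration (this snippet is not part of the dataset), the day and start time encoded in such a run entity can be decoded with a few lines of Python:

import re

def parse_run_entity(filename):
    """Decode a run-DDHHMM entity into (day after implantation, hour, minute)."""
    match = re.search(r"run-(\d{2})(\d{2})(\d{2})", filename)
    if match is None:
        return None
    return tuple(int(g) for g in match.groups())

# A hypothetical file name following the convention described above:
print(parse_run_entity("sub-STREEF01_ses-1_task-SPESclin_run-021315_ieeg.vhdr"))
# (2, 13, 15): day 2 after implantation, recording started at 13:15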
License: This dataset is made available under the CC0 1.0 Universal Public Domain Dedication, whose full text can be found at https://creativecommons.org/publicdomain/zero/1.0/. We hope that all users will follow the ODC Attribution/Share-Alike Community Norms (http://www.opendatacommons.org/norms/odc-by-sa/). In particular, while not legally required, we hope that all users of the data will acknowledge its use in any publications by citing: 1. Demuru M, van Blooijs D, Zweiphenning W, Hermes D, Leijten F, Zijlmans M, on behalf of the RESPect group, “A practical workflow for organizing clinical intraoperative and long-term iEEG data in BIDS”, NeuroInformatics, 2022; and 2. Jelsma S.B. et al. (2024), Structural and effective brain connectivity in focal epilepsy.
Code available at: https://github.com/UMCU-EpiLAB/umcuEpi_CCEP_DTI.
Acknowledgements We thank the SEIN-UMCU RESPect database group (C.J.J. van Asch, L. van de Berg, S. Blok, M.D. Bourez, K.P.J. Braun, J.W. Dankbaar, C.H. Ferrier, T.A. Gebbink, P.H. Gosselaar, R. van Griethuysen, M.G.G. Hobbelink, F.W.A. Hoefnagels, N.E.C. van Klink, M.A. van ‘t Klooster, G.A.P. deKort, M.H.M. Mantione, A. Muhlebner, J.M. Ophorst, P.C. van Rijen, S.M.A. van der Salm, E.V. Schaft, M.M.J. van Schooneveld, H. Smeding, D. Sun, A. Velders, M.J.E. van Zandvoort, G.J.M. Zijlmans, E. Zuidhoek and J. Zwemmer) for their contributions and help in collecting the data.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The raw BIDS data were created using BIDScoin 3.6.3. All provenance information and settings can be found in ./code/bidscoin. For more information see: https://github.com/Donders-Institute/bidscoin
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This data collection from Yale University's Fundamentals of the Adolescent Brain (FAB) Lab contains an accelerated adult equivalent of the ABCD Study® neuroimaging dataset. The Brain Imaging Data Structure (BIDS) directory includes 5 sessions for each of 7 participants, spaced approximately 1 week apart.
BIDS input data were converted from DICOMs using Dcm2Bids (https://github.com/cbedetti/Dcm2Bids). BIDS derivatives data were derived from the OHSU DCAN Labs ABCD-BIDS MRI processing pipeline which outputs Human Connectome Project (HCP) Minimal Preprocessing Pipelines-style data in both volume and surface spaces (https://doi.org/10.5281/zenodo.2587210, https://doi.org/10.1016/j.neuroimage.2013.04.127). This collection is independent from ABCD Data Collection 2573.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the EEG recordings of 30 participants in a study conducted by the IT University of Copenhagen brAIn lab, designed to investigate the origins of the Uncanny Valley phenomenon. The study is a follow-up to our pilot study on the Uncanny Valley, also available on Zenodo at https://zenodo.org/records/7948158.
The dataset contains the images that have been shown to the participants, the events, and all the details about the timing and the EEG data. The structure of the dataset follows the Brain Imaging Data Structure specification.
The dataset can be analysed using the scripts available at https://github.com/itubrainlab/uncanny-valley-eeg-study-full-analysis.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The UCLH Stroke EIT Dataset - Radiology Reports
Each folder contains the anonymised radiology data and clinical reports for all patients in the study. The latest version follows the BIDS structure.
Full details on the use of these files are given in the repository: https://github.com/EIT-team/Stroke_EIT_Dataset
Version 4 - Latest BIDS version. Use this version for NIFTI files
Version 3 - Initial BIDS version
Version 2 - Updated DICOM. Use this version if you wish to use the original DICOM files
Version 1 - Initial upload
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Combined PET MR longitudinal study
Participants with PET (>=18yo at visit 1) enter the scanner twice on the same day (collapsed within the same BIDS session):
* PET:raclopride + MR:functional (t1w x 2, task x 6, rest x 2, mt x 4)
* PET:dtbz + MR:structural (t1w, dsi, r2prime, rest x 1)
MR-only participants (<18yo at visit 1) enter the scanner only once on a given day. Task x 6, rest x 2, mt, r2prime, and DWI are collected.
At least 2 fieldmaps are collected, one before task and one before rest. One fieldmap is collected in the PET:dtbz session before rest.
In any session, there will be at most 2 T1w acquisitions: one slow and the other accelerated (G2). For PET participants, there could be up to 4 T1w runs. At return visits, only the G2 acquisition is collected for MR-only participants.
The BIDS dataset was populated with https://github.com/LabNeuroCogDevel/mMR_PETDA/tree/master/bids (/Volumes/Phillips/mMR_PETDA/scripts/bids/).
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This is the data repository for the BOLD Moments Dataset. This dataset contains brain responses to 1,102 3-second videos across 10 subjects. Each subject saw the 1,000 video training set 3 times and the 102 video testing set 10 times. Each video is additionally human-annotated with 15 object labels, 5 scene labels, 5 action labels, 5 sentence text descriptions, 1 spoken transcription, 1 memorability score, and 1 memorability decay rate.
Overview of contents:
The home folder (everything except the derivatives/ folder) contains the raw data in BIDS format before any preprocessing. Download this folder if you want to run your own preprocessing pipeline (e.g., fMRIPrep, HCP pipeline).
To comply with licensing requirements, the stimulus set is not available here on OpenNeuro (hence the dataset does not pass BIDS validation). See the GitHub repository (https://github.com/blahner/BOLDMomentsDataset) to download the stimulus set and stimulus set derivatives (like frames). To make this dataset fully BIDS-compliant for use with other BIDS apps, you may need to copy the 'stimuli' folder from the downloaded stimulus set into the parent directory, as in the sketch below.
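For example, a minimal sketch of that copy step in Python (both paths are placeholders):

import shutil

# Copy the 'stimuli' folder from the separately downloaded stimulus set
# into the parent directory of the BIDS dataset.
shutil.copytree("/path/to/downloaded/stimuli", "/path/to/BOLDMomentsDataset/stimuli")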
The derivatives folder contains all data derivatives, including the stimulus annotations (./derivatives/stimuli_metadata/annotations.json), model weight checkpoints for a TSM ResNet50 model trained on a subset of Multi-Moments in Time, and prepared beta estimates from two different fMRIPrep preprocessing pipelines (./derivatives/versionA and ./derivatives/versionB).
VersionA was used in the main manuscript, and versionB is detailed in the manuscript's supplementary materials. If you are starting a new project, we highly recommend using the prepared data in ./derivatives/versionB/ because of its better registration, use of GLMsingle, and availability in more standard and non-standard output spaces. Code used in the manuscript is located at the derivatives version level; for example, the code used in the main manuscript is located under ./derivatives/versionA/scripts. Note that the versionA prepared data are very large due to beta estimates for 9 TRs per video. See this GitHub repo for starter code demonstrating basic usage and dataset download scripts: https://github.com/blahner/BOLDMomentsDataset. See this GitHub repo for the TSM ResNet50 model training and inference code: https://github.com/pbw-Berwin/M4-pretrained
Data collection notes: All data collection notes explained below are detailed here for the purpose of full transparency and should be of no concern to researchers using the data; i.e., these inconsistencies have been attended to and integrated into the BIDS format as if these exceptions did not occur. The correct pairings between field maps and functional runs are detailed in the .json sidecars accompanying each field map scan (see the reading sketch after the subject notes below).
Subject 2: Session 1: Subject repositioned head for comfort after the third resting state scan, approximately 1 hour into the session. New scout and field map scans were taken. In the case of applying a susceptibility distortion correction analysis, session 1 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.
Session 4: Completed over two separate days due to subject feeling sleepy. All 3 testing runs and 6/10 training runs were completed on the first day, and the last 4 training runs were completed on the second day. Each of the two days for session 4 had its own field map. This did not interfere with session 5. All scans across both days belonging to session 4 were analyzed as if they were collected on the same day. In the case of applying a susceptibility distortion correction analysis, session 4 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.
Subject 4: Sessions 1 and 2: The fifth (out of 5) localizer run from session 1 was completed at the end of session 2 due to a technical error. This localizer run therefore used the field map from session 2. In the case of applying a susceptibility distortion correction analysis, session 1 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.
Subject 10: Session 5: Subject moved a lot to readjust earplug after the third functional run (1 test and 2 training runs completed). New field map scans were collected. In the case of applying a susceptibility distortion correction analysis, session 5 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.
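As an illustration of reading these pairings programmatically (the sidecar file name below is hypothetical; only the "IntendedFor" field is taken from the notes above):

import json

# Read a field map's JSON sidecar and list the functional runs it covers.
with open("sub-02/ses-1/fmap/sub-02_ses-1_run-1_phasediff.json") as f:
    sidecar = json.load(f)
for target in sidecar.get("IntendedFor", []):
    print(target)  # functional scans to be corrected with this field map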
Open Data Commons Attribution License (ODC-By) v1.0 https://www.opendatacommons.org/licenses/by/1.0/
License information was derived automatically
This dataset will form the basis of a forthcoming publication on streamlining the quantitative BOLD (qBOLD) approach to measuring brain oxygenation. In the absence of a reference to this publication, the methods used are outlined here. Please reference this dataset if you use it in your work.
Stone AJ, Blockley NP. Data acquired to demonstrate a streamlined approach to mapping and quantifying brain oxygenation using quantitative BOLD. Oxford University Research Archive 2016. doi:
Summary
This dataset was acquired during the development of a streamlined qBOLD technique for making measurements of brain oxygenation. The aim here was to see whether confounding partial volume effects of multiple tissue types could be removed using an inversion recovery preparation. Inversion times were optimised to null cerebrospinal fluid (CSF), grey matter (GM) or white matter (WM) at the time of image acquisition [1]. Images were acquired using an Asymmetric Spin Echo (ASE) pulse sequence to introduce varying amounts of R2′ weighting to the images [2]. R2′ (R-2-prime) is the reversible relaxation rate, a component of transverse signal decay and the reciprocal of T2′ (T-2-prime). Gradient Echo Slice Excitation Profile Imaging (GESEPI) was incorporated into the ASE acquisition to minimise the effect of through-slice magnetic field gradients, which would otherwise artificially elevate R2′ [3].
MRI data
Images were acquired using a Siemens Magnetom Verio scanner at 3T. The body coil was used for transmission and the manufacturer's 32-channel head coil was used for reception. GESEPI ASE (GASE) data were acquired with a field of view of 240x240 mm², a 64x64 matrix, ten 5 mm slices, TR/TE = 3 s/74 ms, and an EPI bandwidth of 2004 Hz/px. ASE images are acquired with varying amounts of R2′ weighting determined by the spin echo displacement time, tau, i.e. S = S0 exp(-tau R2′) exp(-TE R2). Twenty-four values of tau were acquired for each GASE scan: -28, -24, -20, -16, -12, -8, -4, 0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60 and 64 ms. The GESEPI magnetic field gradient correction technique required each 5 mm slice to be encoded into multiple thin partitions, each 1.25 mm thick. Furthermore, partitions were oversampled by 100%, leading to the acquisition of 8 partitions per slice. Oversampled slices were discarded during reconstruction, resulting in 40 slices being acquired for each tau value. To regain signal-to-noise ratio we suggest summing the slices in blocks of four, thereby recovering the original ten prescribed slices (see the sketch after this paragraph). A slice-selective inversion recovery preparation was used to null the signal of a target tissue compartment. The appropriate inversion time for each compartment was optimised based on literature values for CSF, GM and WM [4], to give values of 1.21 s, 0.702 s and 0.511 s, respectively. In addition, one dataset was acquired without an inversion recovery preparation with the same range of tau values, and one dataset was acquired with expanded foot-head coverage for only tau = 0 ms - the spin echo - to help with registration. High resolution T1-weighted anatomical images were also acquired for registration and the generation of tissue-specific masks. Anatomicals are "defaced" using the shell script in the code directory [5].
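Two of the steps above lend themselves to a short numerical sketch (synthetic values only; this is not the authors' analysis code): summing the 40 reconstructed slices in blocks of four, and estimating R2′ from the mono-exponential tau dependence of the ASE signal model S = S0 exp(-tau R2′) exp(-TE R2):

import numpy as np

# Sum the 40 thin partitions in blocks of four to recover the ten prescribed 5 mm slices.
gase = np.random.rand(64, 64, 40)                   # synthetic GASE volume (64x64 matrix, 40 slices)
summed = gase.reshape(64, 64, 10, 4).sum(axis=-1)   # shape (64, 64, 10)

# Estimate R2' by a log-linear fit of the signal against tau
# (synthetic mono-exponential decay with R2' = 3.5 s^-1).
tau = np.arange(0, 68, 4) * 1e-3                    # tau values from 0 to 64 ms
signal = 1000.0 * np.exp(-tau * 3.5)
slope, _ = np.polyfit(tau, np.log(signal), 1)
print(f"estimated R2' = {-slope:.2f} s^-1")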
Data curation
The structure in which these data have been placed is based on the Brain Imaging Data Structure (BIDS) format [6]. This format (BIDS version 1.0.0-rc2) does not support ASE data, but we have followed the guiding principles of the specification.
References
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This repository contains raw MRI data of 127 subjects with varying language backgrounds and proficiencies. Below is a detailed outline of the file structure used:
Each of these directories contains the BIDS-formatted anatomical and functional MRI data, with the name of the directory corresponding to the subject's unique identifier.
For more information on the subdirectories, see BIDS information at https://bids-specification.readthedocs.io/en/stable/appendices/entity-table.html
This directory contains outputs of common processing pipelines run on the raw MRI data from "data/sub-EBE****".
These are the results of the CAT12 toolbox (Computational Anatomy Toolbox), which is used to calculate brain region volumes using voxel-based morphometry (VBM). A few items must be downloaded for this process.
CONN is used to generate data on functional connectivity from brain fMRI sequences. A few items must be downloaded for this process.
We used FMRIB's Diffusion Toolbox (FDT) for extracting values from diffusion-weighted images. To use FDT, you need to download the following modules through the CLI:
For more information on the toolbox, visit https://fsl.fmrib.ox.ac.uk/fsl/docs/#/diffusion/index.
fMRIPrep is a preprocessing pipeline for task-based and resting-state functional MRI. We use it to generate data for connectivity analyses.
We used fMRIPrep v23.0.2. For more information, visit https://fmriprep.org/en/stable/index.html.
FreeSurfer is a software package for the analysis and visualization of structural and functional neuroimaging data, which we use to extract region volumes through surface-based morphometry (SBM).
We used FreeSurfer v7.4.1. For more information, visit https://surfer.nmr.mgh.harvard.edu/fswiki.
This directory contains data and code used in the analysis of Chen, Salvadore, Blanco-Elorrieta (submitted).
This directory contains python and R code used in the analysis of Chen, Salvadore, Blanco-Elorrieta (submitted), with each python notebook corresponding to a different part of the paper's analysis. For more details on each file and subdirectories, see "analysis/code/README.md".
This directory contains language data on each subject, including a composite multilingualism score from Chen & Blanco-Elorrieta (submitted), information on language knowledge, exposure, mixing, use in education, and family members’ language ability in the participants’ known languages from early childhood to the present day. For more information on the files and their fields, see "analysis/participant_data/metadata.xlsx".
This directory contains MRI data, both anatomical and functional, that is the final result of processing raw MRI data. This includes brain volumes, cortical thickness, fractional anisotropy values, and connectivity measures. For more information on the files within this directory, see "analysis/processed_mri_data/metadata.xlsx".
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
README
This data set consists of two sessions of resting-state spinal cord fMRI data from 48 healthy participants. This data set was collected as a part of a larger methodological project (see https://doi.org/10.1002/hbm.26018). Data from these 48 participants have already been shared (https://openneuro.org/datasets/ds004068/versions/1.0.3), but here we provide previously unavailable resting-state fMRI data associated with the following manuscript: PREPRINT LINK. For each participant, we share i) a T2-weighted anatomical image, ii) two resting-state fMRI acquisitions of 250 volumes each (acquired with manual and automated slice-specific z-shimming), and iii) associated peripheral physiological data (ECG and respiratory recordings). For a detailed description, please see:
PREPRINT LINK
Should you make use of this data set in any publication, please cite the following article:
PREPRINT LINK
This data set is made available under the Creative Commons CC0 license. For more information, see https://creativecommons.org/share-your-work/public-domain/cc0/
This data set is organized according to the Brain Imaging Data Structure (BIDS) specification. For more information on BIDS, see https://bids-specification.readthedocs.io/en/stable/

Each participant’s data are in one subdirectory (e.g., sub-ZS001), which contains the raw NIfTI data (after DICOM to NIfTI conversion) for this particular participant, as well as the associated metadata. Raw and processed peripheral physiological data can be found in each participant’s subdirectory under the “derivatives” folder. Manually obtained or adjusted MRI-based derivatives (e.g., spinal cord masks, segmental labels) are also shared for each participant and can be found in each participant’s subdirectory of the “derivatives” folder. For more details about the preprocessing pipeline and the description of each derivative, please see the following links: https://github.com/eippertlab/restingstate-reliability-spinalcord/ and PREPRINT LINK.

Please note that data from three participants (sub-ZS009, sub-ZS018, sub-ZS030) are excluded from all analyses due to technical errors in the acquisition of peripheral physiological data, but their datasets are still provided for the sake of completeness. Should you have any questions about this data set, please contact mkaptan@stanford.edu or eippert@cbs.mpg.de.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The raw BIDS data were created using BIDScoin 3.0.8. All provenance information and settings can be found in ./code/bidscoin. For more information see: https://github.com/Donders-Institute/bidscoin
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
We present a multimodal dataset of intracranial recordings, fMRI, and eye tracking in 20 participants during movie watching. Recordings consist of single neurons, local field potential, and intracranial EEG activity acquired from depth electrodes targeting the amygdala, hippocampus, and medial frontal cortex, implanted for monitoring of epileptic seizures. Participants watched an 8-minute excerpt from the video "Bang! You're Dead" and performed a recognition memory test for movie content. 3 T fMRI activity was recorded prior to surgery in 11 of these participants while they performed the same task. This NWB- and BIDS-formatted dataset includes spike times, field potential activity, behavior, eye tracking, electrode locations, demographics, and functional and structural MRI scans. For technical validation, we provide signal quality metrics, assess eye tracking quality, behavior, and the tuning of cells and high-frequency broadband power field potentials to familiarity and event boundaries, and show brain-wide inter-subject correlations for fMRI. This dataset will facilitate the investigation of brain activity during movie watching, recognition memory, and the neural basis of the fMRI-BOLD signal.
This dataset accompanies the following data descriptor: Keles, U., Dubois, J., Le, K.J.M., Tyszka, J.M., Kahn, D.A., Reed, C.M., Chung, J.M., Mamelak, A.N., Adolphs, R. and Rutishauser, U. Multimodal single-neuron, intracranial EEG, and fMRI brain responses during movie watching in human patients. Sci Data 11, 214 (2024).
Related code: https://github.com/rutishauserlab/bmovie-release-NWB-BIDS
Intracranial recording data: https://dandiarchive.org/dandiset/000623
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Description of the Study Design
The running intervention of this study was organized into seven units of about 50-60 minutes each, conducted over a period of two weeks. The standardized running route led through a mostly forested area at a local recreation area and was about five kilometers long. The design of this study included two groups of participants who were tested at three time points of assessment. A sample of 68 participants was recruited for this study. Of this sample, 48 participants completed all required MRI scans and psychometric assessments and participated in the running intervention. We primarily recruited rather unathletic people showing no or only low regular engagement in sports activities. Participants indicated that they exercise about half an hour per week (M = 0.53; SD = 1.2). Participants were randomly assigned to two intervention groups, which received the running intervention time-delayed. The first group ("intervention group") performed the running intervention between the first (t1) and the second test session (t2), while the second group ("wait group") received the training between t2 and the third test session (t3). At each time point of assessment (t1, t2, and t3), the German version of the Center for Epidemiological Studies Depression Scale (CES-D; Hautzinger et al., 2012) was administered to test intervention-related changes in depressive symptoms.
Available Data / Folder Structure / Data Dictionary
The dataset includes the MRI data of n=48 participants who completed all three MRI scans (at t1, t2, t3). Specifically, for each participant (e.g., “sub-season101”), 3 subfolders of MRI data (“ses-1”, “ses-2”, and “ses-3”, one per time point of assessment) are available. The CES-D scores for each participant and time point of assessment (CES-D_1, CES-D_2, CES-D_3) can be found in the “phenotype” folder (“CES-D.tsv”), as in the sketch below. In addition, the file “participants.tsv” contains the grouping variable (either “1” for training group 1 or “2” for training group 2), along with age (in years), sex (“F” female, “M” male), height (in meters), and weight (in kg) of the participants. For quality assurance we ran the MRIQC pipeline (https://mriqc.readthedocs.io/en/latest/) for all subjects; the outputs can be found under the folder “derivatives”, subfolder “mriqc”. The raw BIDS data were created using BIDScoin 3.0.8. All provenance information and settings can be found in ./code/bidscoin. For more information see: https://github.com/Donders-Institute/bidscoin
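A minimal loading sketch in Python (the "participant_id" and "group" column names are assumptions based on BIDS conventions; the CES-D column names are as listed above):

import pandas as pd

# Load the participants table and the CES-D phenotype file described above.
participants = pd.read_csv("participants.tsv", sep="\t")
cesd = pd.read_csv("phenotype/CES-D.tsv", sep="\t")

# Join on the BIDS participant identifier and inspect scores per time point.
merged = participants.merge(cesd, on="participant_id")
print(merged[["participant_id", "group", "CES-D_1", "CES-D_2", "CES-D_3"]].head())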