39 datasets found
  1. Princeton Handbook for Reproducible Neuroimaging: Sample Data

    • zenodo.org
    • data.niaid.nih.gov
    application/gzip
    Updated Mar 27, 2020
    Cite
    Samuel A. Nastase; Anne C. Mennen; Paula P. Brooks; Elizabeth A. McDevitt (2020). Princeton Handbook for Reproducible Neuroimaging: Sample Data [Dataset]. http://doi.org/10.5281/zenodo.3677090
    Dataset updated
    Mar 27, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Samuel A. Nastase; Anne C. Mennen; Paula P. Brooks; Elizabeth A. McDevitt
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This archive contains a raw DICOM dataset acquired (with informed consent) using the ReproIn naming convention on a Siemens Skyra 3T MRI scanner. The dataset includes a T1-weighted anatomical image, four functional runs with the “prettymouth” spoken story stimulus, and one functional run with a block design emotional faces task, as well as auxiliary scans (e.g., scout, soundcheck). The “prettymouth” story stimulus was created by Yeshurun et al., 2017, and is available as part of the Narratives collection; the emotional faces task is similar to Chai et al., 2015. These data are intended for use with the Princeton Handbook for Reproducible Neuroimaging. The handbook provides guidelines for BIDS conversion and execution of BIDS apps (e.g., fMRIPrep, MRIQC). The brain data are contributed by author S.A.N. and are authorized for non-anonymized distribution.

  2. BIDS Phenotype Aggregation Example Dataset

    • openneuro.org
    Updated Jun 4, 2022
    + more versions
    Cite
    Samuel Guay; Eric Earl; Hao-Ting Wang; Remi Gau; Dorota Jarecka; David Keator; Melissa Kline Struhl; Satra Ghosh; Louis De Beaumont; Adam G. Thomas (2022). BIDS Phenotype Aggregation Example Dataset [Dataset]. http://doi.org/10.18112/openneuro.ds004130.v1.0.0
    Dataset updated
    Jun 4, 2022
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Samuel Guay; Eric Earl; Hao-Ting Wang; Remi Gau; Dorota Jarecka; David Keator; Melissa Kline Struhl; Satra Ghosh; Louis De Beaumont; Adam G. Thomas
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    BIDS Phenotype Aggregation Example COPY OF "The NIMH Healthy Research Volunteer Dataset" (ds003982)

    Modality-agnostic files were copied over and the CHANGES file was updated. Data was aggregated using:

    python phenotype.py aggregate subject -i segregated_subject -o aggregated_subject

    phenotype.py came from the GitHub repository: https://github.com/ericearl/bids-phenotype
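    The aggregation step merges each subject's segregated phenotype tables into a single record per participant. A minimal sketch of that idea in Python, using hypothetical measure and column names (the actual logic lives in phenotype.py from the repository above):

```python
# Sketch of subject-level phenotype aggregation: merge per-measure
# tables into one record per participant, keyed on participant_id.
# Measure files and column names here are hypothetical examples.

segregated = {
    "audit.tsv": [{"participant_id": "sub-01", "audit_total": "3"}],
    "ehi.tsv": [{"participant_id": "sub-01", "ehi_handedness": "right"}],
}

def aggregate(tables):
    """Combine rows from all measure tables into one dict per participant."""
    merged = {}
    for rows in tables.values():
        for row in rows:
            pid = row["participant_id"]
            merged.setdefault(pid, {}).update(row)
    return merged

print(aggregate(segregated)["sub-01"])
```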

    THE ORIGINAL DATASET ds003982 README FOLLOWS

    A comprehensive clinical, MRI, and MEG collection characterizing healthy research volunteers collected at the National Institute of Mental Health (NIMH) Intramural Research Program (IRP) in Bethesda, Maryland using medical and mental health assessments, diagnostic and dimensional measures of mental health, cognitive and neuropsychological functioning, structural and functional magnetic resonance imaging (MRI), along with diffusion tensor imaging (DTI), and a comprehensive magnetoencephalography battery (MEG).

    In addition, blood samples of healthy volunteers are banked for future genetic analyses. All data collected in this protocol are broadly shared here in the OpenNeuro repository, in the Brain Imaging Data Structure (BIDS) format, and task paradigms and basic pre-processing scripts are shared on GitHub. This dataset is unique in its depth of characterization of a healthy population in terms of brain health and will contribute to a wide array of secondary investigations of non-clinical and clinical research questions.

    This dataset is licensed under the Creative Commons Zero (CC0) v1.0 License.

    Recruitment

    Inclusion criteria for the study require that participants are adults at or over 18 years of age in good health with the ability to read, speak, understand, and provide consent in English. All participants provided electronic informed consent for online screening and written informed consent for all other procedures. Exclusion criteria include:

    • A history of significant or unstable medical or mental health condition requiring treatment
    • Current self-injury, suicidal thoughts or behavior
    • Current illicit drug use by history or urine drug screen
    • Abnormal physical exam or laboratory result at the time of in-person assessment
    • Less than an 8th grade education or IQ below 70
    • Current employees, or first-degree relatives of NIMH employees

    Study participants are recruited through direct mailings, bulletin boards and listservs, outreach exhibits, print advertisements, and electronic media.

    Clinical Measures

    All potential volunteers first visit the study website (https://nimhresearchvolunteer.ctss.nih.gov), check a box indicating consent, and complete preliminary self-report screening questionnaires. The study website is HIPAA compliant and therefore does not collect PII; instead, participants are instructed to contact the study team to provide their identity and contact information. The questionnaires include demographics, clinical history including medications, disability status (WHODAS 2.0), mental health symptoms (modified DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure), substance use survey (DSM-5 Level 2), alcohol use (AUDIT), handedness (Edinburgh Handedness Inventory), and perceived health ratings. At the conclusion of the questionnaires, participants are again prompted to send an email to the study team. Survey results, supplemented by NIH medical records review (if present), are reviewed by the study team, who determine if the participant is likely eligible for the protocol. These participants are then scheduled for an in-person assessment. Follow-up phone screenings were also used to determine if participants were eligible for in-person screening.

    In-person Assessments

    At this visit, participants undergo a comprehensive clinical evaluation to determine final eligibility to be included as a healthy research volunteer. The mental health evaluation consists of a psychiatric diagnostic interview (Structured Clinical Interview for DSM-5 Disorders, SCID-5), along with self-report surveys of mood (Beck Depression Inventory-II, BDI-II) and anxiety (Beck Anxiety Inventory, BAI) symptoms. An intelligence quotient (IQ) estimation is determined with the Kaufman Brief Intelligence Test, Second Edition (KBIT-2). The KBIT-2 is a brief (20-30 minute) assessment of intellectual functioning administered by a trained examiner. There are three subtests, including verbal knowledge, riddles, and matrices.

    Medical Evaluation

    Medical evaluation includes medical history elicitation and systematic review of systems. Biological and physiological measures include vital signs (blood pressure, pulse), as well as weight, height, and BMI. Blood and urine samples are taken and a complete blood count, acute care panel, hepatic panel, thyroid stimulating hormone, viral markers (HCV, HBV, HIV), C-reactive protein, creatine kinase, urine drug screen and urine pregnancy tests are performed. In addition, blood samples that can be used for future genomic analysis, development of lymphoblastic cell lines or other biomarker measures are collected and banked with the NIMH Repository and Genomics Resource (Infinity BiologiX). The Family Interview for Genetic Studies (FIGS) was later added to the assessment in order to provide better pedigree information; the Adverse Childhood Events (ACEs) survey was also added to better characterize potential risk factors for psychopathology. The entirety of the in-person assessment not only collects information relevant for eligibility determination, but it also provides a comprehensive set of standardized clinical measures of volunteer health that can be used for secondary research.

    MRI Scan

    Participants are given the option to consent for a magnetic resonance imaging (MRI) scan, which can serve as a baseline clinical scan to determine normative brain structure, and also as a research scan with the addition of functional sequences (resting state and diffusion tensor imaging). The MR protocol used was initially based on the ADNI-3 basic protocol, but was later modified to include portions of the ABCD protocol in the following manner:

    1. The T1 scan from ADNI3 was replaced by the T1 scan from the ABCD protocol.
    2. The Axial T2 2D FLAIR acquisition from ADNI2 was added, and fat saturation turned on.
    3. Fat saturation was turned on for the pCASL acquisition.
    4. The high-resolution in-plane hippocampal 2D T2 scan was removed and replaced with the whole brain 3D T2 scan from the ABCD protocol (which is resolution and bandwidth matched to the T1 scan).
    5. The slice-select gradient reversal method was turned on for DTI acquisition, and reconstruction interpolation turned off.
    6. Scans for distortion correction were added (reversed-blip scans for DTI and resting state scans).
    7. The 3D FLAIR sequence was made optional and replaced by one where the prescription and other acquisition parameters provide resolution and geometric correspondence between the T1 and T2 scans.

    At the time of the MRI scan, volunteers are administered a subset of tasks from the NIH Toolbox Cognition Battery. The four tasks include:

    1. Flanker inhibitory control and attention task assesses the constructs of attention and executive functioning.
    2. Executive functioning is also assessed using a dimensional change card sort test.
    3. Episodic memory is evaluated using a picture sequence memory test.
    4. Working memory is evaluated using a list sorting test.

    MEG

    An optional MEG study was added to the protocol approximately one year after the study was initiated, thus there are relatively fewer MEG recordings in comparison to the MRI dataset. MEG studies are performed on a 275 channel CTF MEG system (CTF MEG, Coquiltam BC, Canada). The position of the head was localized at the beginning and end of each recording using three fiducial coils. These coils were placed 1.5 cm above the nasion, and at each ear, 1.5 cm from the tragus on a line between the tragus and the outer canthus of the eye. For 48 participants (as of 2/1/2022), photographs were taken of the three coils and used to mark the points on the T1 weighted structural MRI scan for co-registration. For the remainder of the participants (n=16 as of 2/1/2022), a Brainsight neuronavigation system (Rogue Research, Montréal, Québec, Canada) was used to coregister the MRI and fiducial localizer coils in realtime prior to MEG data acquisition.

    Specific Measures within Dataset

    Online and In-person behavioral and clinical measures, along with the corresponding phenotype file name, sorted first by measurement location and then by file name.

    Location | Measure | File Name
    Online | Alcohol Use Disorders Identification Test (AUDIT) | audit
    Online | Demographics | demographics
    Online | DSM-5 Level 2 Substance Use - Adult | drug_use
    Online | Edinburgh Handedness Inventory (EHI) | ehi
    Online | Health History Form | health_history_questions
    Online | Perceived Health Rating - self | health_rating
  3. Example Dataset for BIDS Manager

    • figshare.com
    zip
    Updated May 31, 2023
    Cite
    Nicolas Roehri; Aude Jegou; Samuel Medina Villalon (2023). Example Dataset for BIDS Manager [Dataset]. http://doi.org/10.6084/m9.figshare.11687064.v5
    Dataset updated
    May 31, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Nicolas Roehri; Aude Jegou; Samuel Medina Villalon
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This folder contains data from a fictional participant that you can use to test BIDS Manager (https://github.com/Dynamap/BIDS_Manager).

  4. BIDS dataset for BIDS Manager-Pipeline

    • figshare.com
    zip
    Updated May 31, 2023
    Cite
    Aude Jegou; Nicolas Roehri; Samuel Medina Villalon (2023). BIDS dataset for BIDS Manager-Pipeline [Dataset]. http://doi.org/10.6084/m9.figshare.19046345.v1
    Dataset updated
    May 31, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Aude Jegou; Nicolas Roehri; Samuel Medina Villalon
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This folder contains data organised in BIDS format to test BIDS Manager-Pipeline (https://github.com/Dynamap/BIDS_Manager/tree/dev).

  5. Material related to the blog that reports on the parrot LUT

    • data.niaid.nih.gov
    • zenodo.org
    Updated Aug 2, 2024
    Cite
    Goedhart, J. (2024). Material related to the blog that reports on the parrot LUT [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_1211689
    Dataset updated
    Aug 2, 2024
    Dataset authored and provided by
    Goedhart, J.
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Material related to the blog that reports on the parrot LUT

    The blog was published at the Node: http://thenode.biologists.com/parrot-lut/research/

    -Source

    The ‘morgenstemning’ LUT was originally described in:

    M. Geissbuehler and T. Lasser, “How to display data by color schemes compatible with red-green color perception deficiencies”, Optics Express, 2013

    The ‘inferno’ LUT was originally created by Stéfan van der Walt and Nathaniel Smith (http://bids.github.io/colormap/).

    The ‘pseudocolorMM’ LUT was derived from MetaMorph software (version 7.6).

    The ‘royal’ and ‘Fire’ LUTs are available in ImageJ (version 1.49j).

    The ‘parrot’ LUT was designed by Joachim Goedhart and first described here: http://thenode.biologists.com/parrot-lut/research/

    -Distribution

    The colormaps Magma, Inferno, Plasma and Viridis are available under a CC0 "no rights reserved" license (https://creativecommons.org/share-your-work/public-domain/cc0) and are present in FIJI.

    The colormaps Morgenstemning & Parrot are free software: you can redistribute them and/or modify them under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

    These colormaps are distributed in the hope that they will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

  6. MNE-somato-data-bids (anonymized)

    • openneuro.org
    Updated Aug 31, 2020
    Cite
    Lauri Parkkonen; Stefan Appelhoff; Alexandre Gramfort; Mainak Jas; Richard Höchenberger (2020). MNE-somato-data-bids (anonymized) [Dataset]. http://doi.org/10.18112/openneuro.ds003104.v1.0.0
    Dataset updated
    Aug 31, 2020
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Lauri Parkkonen; Stefan Appelhoff; Alexandre Gramfort; Mainak Jas; Richard Höchenberger
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    MNE-somato-data-bids

    This dataset contains the MNE-somato-data in BIDS format.

    The conversion can be reproduced through the Python script stored in the /code directory of this dataset. See the README in that directory.

    The /derivatives directory contains the outputs of running the FreeSurfer pipeline recon-all on the MRI data with no additional commandline options (only defaults were used):

    $ recon-all -i sub-01_T1w.nii.gz -s 01 -all

    After the recon-all call, there were further FreeSurfer calls from the MNE API:

    $ mne make_scalp_surfaces -s 01 --force
    $ mne watershed_bem -s 01

    The derivatives also contain the forward model *-fwd.fif, which was produced using the source space definition, a *-trans.fif file, and the boundary element model (=conductor model) that lives in freesurfer/subjects/01/bem/*-bem-sol.fif.

    The *-trans.fif file is not saved, but can be recovered from the anatomical landmarks in the sub-01/anat/T1w.json file and MNE-BIDS' function get_head_mri_transform.

    See: https://github.com/mne-tools/mne-bids for more information.

    Notes on FreeSurfer

    The FreeSurfer pipeline recon-all was run anew for the sake of converting the somato data to BIDS format. This needed to be done to change the "somato" subject name to the BIDS subject label "01". Note that this is NOT "sub-01", because in BIDS, the "sub-" is just a prefix, whereas the "01" is the subject label.
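    The prefix/label distinction can be made concrete with a tiny helper (hypothetical, for illustration only):

```python
# In BIDS, "sub-" is a fixed prefix and "01" is the subject label.
# Hypothetical helper that strips the prefix to recover the bare label,
# i.e. the name the FreeSurfer subjects directory uses here.
def subject_label(entity: str) -> str:
    prefix = "sub-"
    return entity[len(prefix):] if entity.startswith(prefix) else entity

print(subject_label("sub-01"))  # -> 01
```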

  7. Princeton Handbook for Reproducible Neuroimaging: Sample Output

    • zenodo.org
    • data.niaid.nih.gov
    application/gzip
    Updated Dec 10, 2020
    Cite
    Elizabeth A. McDevitt; Anne C. Mennen; Paula P. Brooks; Samuel A. Nastase (2020). Princeton Handbook for Reproducible Neuroimaging: Sample Output [Dataset]. http://doi.org/10.5281/zenodo.3727775
    Dataset updated
    Dec 10, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Elizabeth A. McDevitt; Anne C. Mennen; Paula P. Brooks; Samuel A. Nastase
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This archive contains sample output files for the sample data accompanying the Princeton Handbook for Reproducible Neuroimaging. Outputs include the NIfTI images converted using HeuDiConv (v0.5.dev1) and organized according to the BIDS standard, quality control evaluation using MRIQC (v0.10.4), data preprocessed using fMRIPrep (v1.4.1rc1), and other auxiliary files. All outputs were created according to the procedures outlined in the handbook, and are intended to serve as a didactic reference for use with the handbook. The sample data from which the outputs are derived were acquired (with informed consent) using the ReproIn naming convention on a Siemens Skyra 3T MRI scanner. The sample data include a T1-weighted anatomical image, four functional runs with the “prettymouth” spoken story stimulus, and one functional run with a block design emotional faces task, as well as auxiliary scans (e.g., scout, soundcheck). The “prettymouth” story stimulus was created by Yeshurun et al., 2017, and is available as part of the Narratives collection; the emotional faces task is similar to Chai et al., 2015. The brain data are contributed by author S.A.N. and are authorized for non-anonymized distribution.

  8. STReEF

    • openneuro.org
    Updated Oct 8, 2024
    + more versions
    Cite
    Jelsma S.B.; Zijlmans M.; Heijink I.B.; Hoefnagels F.W.A.; Raemakers M; Bourez-Swart M.D.; Otte W.M; van Blooijs D.; van Klink N.E.C. (2024). STReEF [Dataset]. http://doi.org/10.18112/openneuro.ds005448.v1.0.0
    Dataset updated
    Oct 8, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Jelsma S.B.; Zijlmans M.; Heijink I.B.; Hoefnagels F.W.A.; Raemakers M; Bourez-Swart M.D.; Otte W.M; van Blooijs D.; van Klink N.E.C.
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Dataset description

    This dataset is part of a bigger dataset of intracranial EEG (iEEG) called RESPect (Registry for Epilepsy Surgery Patients), a dataset recorded at the University Medical Center of Utrecht, the Netherlands. This dataset consists of 13 patients with long-term recordings (5 patients recorded with electrocorticography and 8 patients recorded with stereo-electroencephalography). For a detailed description, see Jelsma S.B. et al. 2024, Structural and effective brain connectivity in focal epilepsy.

    This data is organized according to the Brain Imaging Data Structure specification: A community-driven specification for organizing neurophysiology data along with its metadata. For more information on this data specification, see https://bids-specification.readthedocs.io/en/stable/

    Each patient has their own folder (e.g., sub-STREEF01) which contains the iEEG recordings of that patient, as well as the metadata to understand the raw data and event timing.

    In long-term recordings, data that are recorded within one monitoring period are logically grouped in the same BIDS session and stored across runs indicating the day and time point of recording in the monitoring period. We use the optional run key-value pair to specify the day and the start time of the recording (e.g. run-021315, day 2 after implantation, which is day 1 of the monitoring period, at 13:15). The task key-value pair in long-term iEEG recordings describes the patient's state during the recording of this file. A specific task called “SPESclin“ is defined when the clinical SPES protocol has been performed.
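    Under this convention, run-021315 encodes day 02 at 13:15. A sketch of a decoder for this naming scheme (the function name is made up for illustration):

```python
# Decode the run-<DDHHMM> convention described above: the first two
# digits give the day after implantation, the remaining four give the
# HH:MM start time of the recording.
def decode_run(run_entity: str):
    digits = run_entity.removeprefix("run-")
    day = int(digits[:2])
    start = f"{digits[2:4]}:{digits[4:6]}"
    return day, start

print(decode_run("run-021315"))  # -> (2, '13:15')
```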

    License

    This dataset is made available under the Public Domain Dedication and License CC0 v1.0, whose full text can be found at https://creativecommons.org/publicdomain/zero/1.0/. We hope that all users will follow the ODC Attribution/Share-Alike Community Norms (http://www.opendatacommons.org/norms/odc-by-sa/). In particular, while not legally required, we hope that all users of the data will acknowledge the following in any publications:

    1. Demuru M, van Blooijs D, Zweiphenning W, Hermes D, Leijten F, Zijlmans M, on behalf of the RESPect group. “A practical workflow for organizing clinical intraoperative and long-term iEEG data in BIDS“, published in NeuroInformatics in 2022
    2. Jelsma S.B. et al. 2024, Structural and effective brain connectivity in focal epilepsy

    Code available at: https://github.com/UMCU-EpiLAB/umcuEpi_CCEP_DTI.

    Acknowledgements

    We thank the SEIN-UMCU RESPect database group (C.J.J. van Asch, L. van de Berg, S. Blok, M.D. Bourez, K.P.J. Braun, J.W. Dankbaar, C.H. Ferrier, T.A. Gebbink, P.H. Gosselaar, R. van Griethuysen, M.G.G. Hobbelink, F.W.A. Hoefnagels, N.E.C. van Klink, M.A. van ‘t Klooster, G.A.P. deKort, M.H.M. Mantione, A. Muhlebner, J.M. Ophorst, P.C. van Rijen, S.M.A. van der Salm, E.V. Schaft, M.M.J. van Schooneveld, H. Smeding, D. Sun, A. Velders, M.J.E. van Zandvoort, G.J.M. Zijlmans, E. Zuidhoek and J. Zwemmer) for their contributions and help in collecting the data.

  9. Rat_rest_standardRAT

    • openneuro.org
    Updated Nov 17, 2021
    Cite
    Eveline Gelderman; Daphne Naessens; Roel Vrooman; Andor Veltien; Bram Coolen; Joanes Grandjean (2021). Rat_rest_standardRAT [Dataset]. http://doi.org/10.18112/openneuro.ds003928.v1.0.0
    Dataset updated
    Nov 17, 2021
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Eveline Gelderman; Daphne Naessens; Roel Vrooman; Andor Veltien; Bram Coolen; Joanes Grandjean
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    The raw BIDS data was created using BIDScoin 3.6.3. All provenance information and settings can be found in ./code/bidscoin. For more information, see: https://github.com/Donders-Institute/bidscoin

  10. Data from: An open-access accelerated adult equivalent of the ABCD Study...

    • openneuro.org
    Updated May 2, 2022
    Cite
    Kristina M. Rapuano; May I. Conley; Anthony C. Juliano; Gregory M. Conan; Maria T. Maza; Kylie Woodman; Steven A. Martinez; Eric Earl; Anders Perrone; Eric Feczko; Damien A. Fair; Richard Watts; BJ Casey; Monica D. Rosenberg (2022). An open-access accelerated adult equivalent of the ABCD Study neuroimaging dataset (a-ABCD) [Dataset]. http://doi.org/10.18112/openneuro.ds004097.v1.0.1
    Dataset updated
    May 2, 2022
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Kristina M. Rapuano; May I. Conley; Anthony C. Juliano; Gregory M. Conan; Maria T. Maza; Kylie Woodman; Steven A. Martinez; Eric Earl; Anders Perrone; Eric Feczko; Damien A. Fair; Richard Watts; BJ Casey; Monica D. Rosenberg
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    This data collection from Yale University's Fundamentals of the Adolescent Brain (FAB) Lab contains an accelerated adult equivalent of the ABCD Study(R) neuroimaging dataset. The Brain Imaging Data Structure (BIDS) directory includes 5 sessions for each of 7 participants, spaced approximately 1 week apart.

    BIDS input data were converted from DICOMs using Dcm2Bids (https://github.com/cbedetti/Dcm2Bids). BIDS derivatives data were derived from the OHSU DCAN Labs ABCD-BIDS MRI processing pipeline which outputs Human Connectome Project (HCP) Minimal Preprocessing Pipelines-style data in both volume and surface spaces (https://doi.org/10.5281/zenodo.2587210, https://doi.org/10.1016/j.neuroimage.2013.04.127). This collection is independent from ABCD Data Collection 2573.
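    Dcm2Bids drives DICOM-to-BIDS conversion with a JSON configuration that maps scanner series onto BIDS entities. A minimal v2-style sketch of such a configuration; the series-description pattern is a hypothetical example, not taken from this dataset:

```json
{
  "descriptions": [
    {
      "dataType": "anat",
      "modalityLabel": "T1w",
      "criteria": { "SeriesDescription": "*T1w*" }
    }
  ]
}
```

    Each matched series is renamed and placed under the corresponding BIDS folder (here, sub-<label>/anat/sub-<label>_T1w.nii.gz).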

  11. EEG Study of the Uncanny Valley Phenomenon

    • zenodo.org
    zip
    Updated Feb 13, 2025
    Cite
    Mihaela Hristova; Laurits Dixen; Paolo Burelli (2025). EEG Study of the Uncanny Valley Phenomenon [Dataset]. http://doi.org/10.5281/zenodo.14864689
    Dataset updated
    Feb 13, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Mihaela Hristova; Laurits Dixen; Paolo Burelli
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This dataset contains the EEG recordings of 30 participants in a study conducted by the IT University of Copenhagen brAIn lab, designed to investigate the origins of the Uncanny Valley phenomenon. The study is a follow-up to our pilot study on the Uncanny Valley, also available on Zenodo at https://zenodo.org/records/7948158.

    The dataset contains the images that have been shown to the participants, the events, and all the details about the timing and the EEG data. The structure of the dataset follows the Brain Imaging Data Structure specification.

    The dataset can be analysed using the scripts available at https://github.com/itubrainlab/uncanny-valley-eeg-study-full-analysis.

  12. UCLH Stroke EIT Dataset - Radiology Data

    • data.niaid.nih.gov
    • explore.openaire.eu
    • +1more
    Updated Jan 24, 2020
    Cite
    Goren, Nir (2020). UCLH Stroke EIT Dataset - Radiology Data [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_838704
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Dowrick, Thomas
    Holder, David
    Avery, James
    Goren, Nir
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    The UCLH Stroke EIT Dataset - Radiology Reports

    Each folder contains the anonymised radiology data and clinical reports for all patients in the study. The latest version follows the BIDS structure.

    Full details on the use of these files are given in the repository https://github.com/EIT-team/Stroke_EIT_Dataset

    Version 4 - Latest BIDS version. Use this version for NIFTI files

    Version 3 - Initial BIDS version

    Version 2 - Updated DICOM. Use this version if you wish to use the original DICOM files

    Version 1 - Initial upload

  13. PETfrog

    • openneuro.org
    Updated Jan 18, 2020
    Cite
    Beatriz Luna (2020). PETfrog [Dataset]. http://doi.org/10.18112/openneuro.ds002385.v1.0.0
    Dataset updated
    Jan 18, 2020
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Beatriz Luna
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Combined PET MR longitudinal study

    Participants with PET (>=18yo at visit 1) enter the scanner twice on the same day (collapsed within the same BIDS session):

    • PET:raclopride + MR:functional (t1w x 2, task x 6, rest x 2, mt x 4)
    • PET:dtbz + MR:structural (t1w, dsi, r2prime, rest x 1)

    MR only participants (<18yo at visit 1) enter the scanner only once on a given day. Task x 6, rest x 2, mt, r2prime, and DWI are collected.

    At least 2 fieldmaps are collected, one before task and one before rest. One fieldmap is collected in the PET:dtbz session before rest.

    In any session, there will be at most 2 T1w acquisitions: one slow and the other accelerated (G2). For PET participants, there could be up to 4 T1w runs. At return visits, only G2 is collected for MR-only participants.

    BIDS populated with https://github.com/LabNeuroCogDevel/mMR_PETDA/tree/master/bids (/Volumes/Phillips/mMR_PETDA/scripts/bids/)

  14. Data from: Modeling short visual events through the BOLD Moments video fMRI...

    • openneuro.org
    Updated Jul 21, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Benjamin Lahner; Kshitij Dwivedi; Polina Iamshchinina; Monika Graumann; Alex Lascelles; Gemma Roig; Alessandro Thomas Gifford; Bowen Pan; SouYoung Jin; N.Apurva Ratan Murty; Kendrick Kay; Radoslaw Cichy*; Aude Oliva* (2024). Modeling short visual events through the BOLD Moments video fMRI dataset and metadata. [Dataset]. http://doi.org/10.18112/openneuro.ds005165.v1.0.4
    Explore at:
    Dataset updated
    Jul 21, 2024
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Benjamin Lahner; Kshitij Dwivedi; Polina Iamshchinina; Monika Graumann; Alex Lascelles; Gemma Roig; Alessandro Thomas Gifford; Bowen Pan; SouYoung Jin; N.Apurva Ratan Murty; Kendrick Kay; Radoslaw Cichy*; Aude Oliva*
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This is the data repository for the BOLD Moments Dataset. This dataset contains brain responses to 1,102 3-second videos across 10 subjects. Each subject saw the 1,000 video training set 3 times and the 102 video testing set 10 times. Each video is additionally human-annotated with 15 object labels, 5 scene labels, 5 action labels, 5 sentence text descriptions, 1 spoken transcription, 1 memorability score, and 1 memorability decay rate.
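    The viewing design above fixes the number of stimulus presentations per subject; as a quick arithmetic check (pure bookkeeping derived from the counts in the description, no dataset access needed):

```python
# Presentation counts implied by the design described above:
# 1,000 training videos seen 3 times, 102 testing videos seen 10 times.
train_videos, train_reps = 1000, 3
test_videos, test_reps = 102, 10

per_subject = train_videos * train_reps + test_videos * test_reps
print(per_subject)        # 4020 presentations per subject
print(per_subject * 10)   # 40200 across the 10 subjects
```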

    Overview of contents:

    The home folder (everything except the derivatives/ folder) contains the raw data in BIDS format before any preprocessing. Download this folder if you want to run your own preprocessing pipeline (e.g., fMRIPrep, HCP pipeline).

    To comply with licensing requirements, the stimulus set is not available here on OpenNeuro (hence the failed BIDS validation). See the GitHub repository (https://github.com/blahner/BOLDMomentsDataset) to download the stimulus set and stimulus set derivatives (like frames). To make this dataset perfectly BIDS compliant for use with other BIDS-apps, you may need to copy the 'stimuli' folder from the downloaded stimulus set into the parent directory.

    The derivatives folder contains all data derivatives, including the stimulus annotations (./derivatives/stimuli_metadata/annotations.json), model weight checkpoints for a TSM ResNet50 model trained on a subset of Multi-Moments in Time, and prepared beta estimates from two different fMRIPrep preprocessing pipelines (./derivatives/versionA and ./derivatives/versionB).

    VersionA was used in the main manuscript, and versionB is detailed in the manuscript's supplementary material. If you are starting a new project, we highly recommend you use the prepared data in ./derivatives/versionB/ because of its better registration, use of GLMsingle, and availability in more standard/non-standard output spaces. Code used in the manuscript is located at the derivatives version level. For example, the code used in the main manuscript is located under ./derivatives/versionA/scripts. Note that versionA prepared data is very large due to beta estimates for 9 TRs per video. See this GitHub repo for starter code demonstrating basic usage and dataset download scripts: https://github.com/blahner/BOLDMomentsDataset. See this GitHub repo for the TSM ResNet50 model training and inference code: https://github.com/pbw-Berwin/M4-pretrained

    Data collection notes: All data collection notes explained below are detailed here for the purpose of full transparency and should be of no concern to researchers using the data; i.e., these inconsistencies have been attended to and integrated into the BIDS format as if these exceptions had not occurred. The correct pairings between field maps and functional runs are detailed in the .json sidecars accompanying each field map scan.

    Subject 2: Session 1: Subject repositioned head for comfort after the third resting state scan, approximately 1 hour into the session. New scout and field map scans were taken. In the case of applying a susceptibility distortion correction analysis, session 1 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.

    Session 4: Completed over two separate days due to subject feeling sleepy. All 3 testing runs and 6/10 training runs were completed on the first day, and the last 4 training runs were completed on the second day. Each of the two days for session 4 had its own field map. This did not interfere with session 5. All scans across both days belonging to session 4 were analyzed as if they were collected on the same day. In the case of applying a susceptibility distortion correction analysis, session 4 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.

    Subject 4: Sessions 1 and 2: The fifth (out of 5) localizer run from session 1 was completed at the end of session 2 due to a technical error. This localizer run therefore used the field map from session 2. In the case of applying a susceptibility distortion correction analysis, session 1 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.

    Subject 10: Session 5: Subject moved a lot to readjust earplug after the third functional run (1 test and 2 training runs completed). New field map scans were collected. In the case of applying a susceptibility distortion correction analysis, session 5 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.
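    The "IntendedFor" bookkeeping described in these notes can be expressed concretely. A minimal sketch of a BIDS field map JSON sidecar, assuming hypothetical filenames (the dataset's actual run and sidecar names will differ; consult the shipped sidecars):

```python
import json

# Sketch of a field map JSON sidecar pairing a "run-2" field map with the
# functional runs acquired after it. Filenames are hypothetical examples;
# BIDS "IntendedFor" paths are given relative to the subject directory.
sidecar = {
    "PhaseEncodingDirection": "j-",
    "IntendedFor": [
        "ses-5/func/sub-10_ses-5_task-train_run-04_bold.nii.gz",
        "ses-5/func/sub-10_ses-5_task-train_run-05_bold.nii.gz",
    ],
}

path = "sub-10_ses-5_run-2_magnitude1.json"  # hypothetical sidecar name
with open(path, "w") as f:
    json.dump(sidecar, f, indent=2)

# A susceptibility distortion correction tool reads the sidecar back to
# learn which functional runs this field map applies to:
with open(path) as f:
    paired_runs = json.load(f)["IntendedFor"]
print(len(paired_runs))  # 2
```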

  15. Data acquired to demonstrate a streamlined approach to mapping and...

    • ora.ox.ac.uk
    zip
    Updated Jan 1, 2016
    Cite
    Stone, A; Blockley, N (2016). Data acquired to demonstrate a streamlined approach to mapping and quantifying brain oxygenation using quantitative BOLD [Dataset]. http://doi.org/10.5287/bodleian:E24JbXQwO
    Explore at:
    zip(265290081)Available download formats
    Dataset updated
    Jan 1, 2016
    Dataset provided by
    University of Oxford
    Authors
    Stone, A; Blockley, N
    License

    Open Data Commons Attribution License (ODC-By) v1.0https://www.opendatacommons.org/licenses/by/1.0/
    License information was derived automatically

    Description

    This dataset will form the basis of a forthcoming publication regarding streamlining of the quantitative BOLD (qBOLD) approach to measuring brain oxygenation. As that publication is not yet available to reference, the methods used are outlined here. Please reference this dataset if you use it in your work.

    Stone AJ, Blockley NP. Data acquired to demonstrate a streamlined approach to mapping and quantifying brain oxygenation using quantitative BOLD. Oxford University Research Archive 2016. doi:

    Summary

    This dataset was acquired during the development of a streamlined qBOLD technique for making measurements of brain oxygenation. The aim here was to see whether confounding partial volume effects of multiple tissue types could be removed using an inversion recovery preparation. Inversion times were optimised to null cerebrospinal fluid (CSF), grey matter (GM) or white matter (WM) at the time of image acquisition [1]. Images were acquired using an Asymmetric Spin Echo (ASE) pulse sequence to introduce varying amounts of R2′ weighting to the images [2]. R2′ (R-2-prime) is the reversible relaxation rate, a component of transverse signal decay and the reciprocal of T2′ (T-2-prime). Gradient Echo Slice Excitation Profile Imaging (GESEPI) was incorporated into the ASE acquisition to minimise the effect of through-slice magnetic field gradients which would otherwise artificially elevate R2′ [3].

    MRI data

    Images were acquired using a Siemens Magnetom Verio scanner at 3T. The body coil was used for transmission and the manufacturer's 32-channel head coil was used for reception. GESEPI ASE (GASE) data were acquired with a field of view of 240x240 mm2, a 64x64 matrix, ten 5mm slices, TR/TE=3s/74ms and an EPI bandwidth of 2004 Hz/px. ASE images are acquired with varying amounts of R2′ weighting determined by the spin echo displacement time, tau, i.e. S = S0 exp(-tau R2′) exp(-TE R2). Twenty-four values of tau were acquired for each GASE scan: -28, -24, -20, -16, -12, -8, -4, 0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60 and 64ms. The GESEPI magnetic field gradient correction technique required each 5mm slice to be encoded into multiple thin partitions each 1.25mm thick. Furthermore, partitions were oversampled by 100%, leading to the acquisition of 8 partitions per slice. Oversampled slices were discarded during reconstruction, resulting in 40 slices being acquired for each tau value. To regain signal to noise ratio we suggest summing the slices in blocks of four, thereby recovering the original ten prescribed slices. A slice selective inversion recovery preparation was used to null the signal of a target tissue compartment. The appropriate inversion time for each compartment was optimised based on literature values for CSF, GM and WM [4], to give values of 1.21s, 0.702s and 0.511s, respectively. In addition, one dataset was acquired without an inversion recovery preparation with the same range of tau values and one dataset was acquired with an expanded foot-head coverage for only tau=0ms - the spin echo - to help with registration. High resolution T1 weighted anatomical images were also acquired for registration and the generation of tissue specific masks. Anatomicals are "defaced" using the shell script in the code directory [5].
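    The signal model quoted above, S = S0 exp(-tau R2′) exp(-TE R2), makes R2′ recoverable from the tau-dependence alone, since the TE term cancels between acquisitions at fixed TE. A minimal sketch with illustrative relaxation values (assumed for demonstration, not fitted to this dataset; real qBOLD fitting uses all 24 tau points and restricts to the monoexponential regime):

```python
import math

def ase_signal(s0, tau, te, r2prime, r2):
    """ASE signal model from the text: S = S0 * exp(-tau * R2') * exp(-TE * R2)."""
    return s0 * math.exp(-tau * r2prime) * math.exp(-te * r2)

def r2prime_from_two_taus(s_a, s_b, tau_a, tau_b):
    """At fixed TE the R2 term cancels: R2' = ln(S_a / S_b) / (tau_b - tau_a)."""
    return math.log(s_a / s_b) / (tau_b - tau_a)

te = 0.074                 # TE = 74 ms, as in the acquisition above
r2prime, r2 = 3.5, 11.0    # s^-1, illustrative values only

# Simulate two tau points and recover R2' from their ratio.
s_a = ase_signal(100.0, 0.016, te, r2prime, r2)  # tau = 16 ms
s_b = ase_signal(100.0, 0.064, te, r2prime, r2)  # tau = 64 ms
print(round(r2prime_from_two_taus(s_a, s_b, 0.016, 0.064), 3))  # recovers 3.5
```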

    Data curation

    The structure in which this data has been placed is based on the Brain Imaging Data Structure (BIDS) format [6]. However, that version of the format (BIDS 1.0.0-rc2) does not support ASE data, so we have followed the guiding principles of the specification.

    References

    1. Hajnal JV, Bryant DJ, Kasuboski L, Pattany PM, De Coene B, Lewis PD, Pennock JM, Oatridge A, Young IR, Bydder GM. Use of fluid attenuated inversion recovery (FLAIR) pulse sequences in MRI of the brain. J Comput Assist Tomogr 1992;16:841–844.
    2. Wismer GL, Buxton RB, Rosen BR, Fisel CR, Oot RF, Brady TJ, Davis KR. Susceptibility induced MR line broadening: applications to brain iron mapping. J Comput Assist Tomogr 1988;12:259–265.
    3. Blockley NP, Stone AJ. Improving the specificity of R2′ to the deoxyhaemoglobin content of brain tissue: Prospective correction of macroscopic magnetic field gradients. Neuroimage 2016, in press. doi: 10.1016/j.neuroimage.2016.04.013
    4. Shen Y, Kauppinen RA, Vidyasagar R, Golay X. A functional magnetic resonance imaging technique based on nulling extravascular gray matter signal. Journal of Cerebral Blood Flow & Metabolism 2008;29:144–156. doi: 10.1038/jcbfm.2008.96.
    5. https://github.com/hanke/gumpdata/blob/master/scripts/conversion/convert_dicoms_anatomy
    6. http://bids.neuroimaging.io
  16. BAD: Bilingual Adaptations Dataset

    • openneuro.org
    Updated Jul 15, 2025
    Cite
    Xuanyi Jessica Chen; Maxwell Salvadore; Esti Blanco-Elorrieta (2025). BAD: Bilingual Adaptations Dataset [Dataset]. http://doi.org/10.18112/openneuro.ds006391.v1.0.0
    Explore at:
    Dataset updated
    Jul 15, 2025
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Xuanyi Jessica Chen; Maxwell Salvadore; Esti Blanco-Elorrieta
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    README

    This repository contains raw MRI data of 127 subjects with varying language backgrounds and proficiencies. Below is a detailed outline of the file structure used:


    sub-EBE****

    Each of these directories contains the BIDS-formatted anatomical and functional MRI data, with the name of the directory corresponding to the subject's unique identifier.

    For more information on the subdirectories, see BIDS information at https://bids-specification.readthedocs.io/en/stable/appendices/entity-table.html


    derivatives

    This directory contains outputs of common processing pipelines run on the raw MRI data from "data/sub-EBE****".

    derivatives/CAT12

    These are the results of CAT12 (Computational Anatomy Toolbox), which is used to calculate brain region volumes using voxel-based morphometry (VBM). The following software is required for this process:

    1. MATLAB v. R2023a (https://www.mathworks.com/products/new_products/release2023a.html)
    2. SPM (https://www.fil.ion.ucl.ac.uk/spm/software/spm12/)
    3. CAT12 (https://neuro-jena.github.io/cat/index.html#DOWNLOAD)

    derivatives/conn

    CONN is used to generate data on functional connectivity from brain fMRI sequences. The following software is required for this process:

    1. MATLAB v. R2023a (https://www.mathworks.com/products/new_products/release2023a.html)
    2. SPM (https://www.fil.ion.ucl.ac.uk/spm/software/spm12/)
    3. CAT12 (https://neuro-jena.github.io/cat/index.html#DOWNLOAD)
    4. Conn – MATLAB toolbox for functional connectivity (https://web.conn-toolbox.org/)

    derivatives/fdt

    We used FMRIB's Diffusion Toolbox (FDT) for extracting values from diffusion-weighted images. To use FDT, load the following modules from the command line:

    1. module load fsl/6.0.2
    2. module load freesurfer/7.4.1

    For more information on the toolbox, visit https://fsl.fmrib.ox.ac.uk/fsl/docs/#/diffusion/index.

    derivatives/fMRIprep

    fMRIprep is a preprocessing pipeline for task-based and resting-state functional MRI. We use it to generate data for the connectivity analyses.

    We used fMRIprep v23.0.2. For more information, visit https://fmriprep.org/en/stable/index.html.

    derivatives/freesurfer

    FreeSurfer is a software package for the analysis and visualization of structural and functional neuroimaging data, which we use to extract region volumes through surface-based morphometry (SBM).

    We used freesurfer v7.4.1. For more information, visit https://surfer.nmr.mgh.harvard.edu/fswiki.



    analysis/

    This directory contains data and code used in the analysis of Chen, Salvadore, Blanco-Elorrieta (submitted).

    analysis/code

    This directory contains Python and R code used in the analysis of Chen, Salvadore, Blanco-Elorrieta (submitted), with each Python notebook corresponding to a different part of the paper's analysis. For more details on each file and subdirectories, see "analysis/code/README.md".

    analysis/participant_data

    This directory contains language data on each subject, including a composite multilingualism score from Chen & Blanco-Elorrieta (submitted), information on language knowledge, exposure, mixing, use in education, and family members’ language ability in the participants’ known languages from early childhood to the present day. For more information on the files and their fields, see "analysis/participant_data/metadata.xlsx".

    analysis/processed_mri_data

    This directory contains MRI data, both anatomical and functional, that is the final result of processing raw MRI data. This includes brain volumes, cortical thickness, fractional anisotropy values, and connectivity measures. For more information on the files within this directory, see "analysis/processed_mri_data/metadata.xlsx".

  17. fc-reliability-spinalcord

    • openneuro.org
    Updated Jan 22, 2023
    Cite
    Merve Kaptan; Falk Eippert (2023). fc-reliability-spinalcord [Dataset]. http://doi.org/10.18112/openneuro.ds004386.v1.0.0
    Explore at:
    Dataset updated
    Jan 22, 2023
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Merve Kaptan; Falk Eippert
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    README

    Description

    This data set consists of two sessions of resting-state spinal cord fMRI data from 48 healthy participants. This data set was collected as a part of a larger methodological project (see https://doi.org/10.1002/hbm.26018). Data from these 48 participants have already been shared (https://openneuro.org/datasets/ds004068/versions/1.0.3), but here we provide previously unavailable resting-state fMRI data associated with the following manuscript: PREPRINT LINK. For each participant, we share i) a T2-weighted anatomical image, ii) two resting-state fMRI acquisitions of 250 volumes each (acquired with manual and automated slice-specific z-shimming), and iii) associated peripheral physiological data (ECG and respiratory recordings). For a detailed description, please see:

    PREPRINT LINK

    Citing this dataset

    Should you make use of this data set in any publication, please cite the following article:

    PREPRINT LINK

    License

    This data set is made available under the Creative Commons CC0 license. For more information, see https://creativecommons.org/share-your-work/public-domain/cc0/

    Data set

    This data set is organized according to the Brain Imaging Data Structure (BIDS) specification. For more information on BIDS, see https://bids-specification.readthedocs.io/en/stable/ Each participant’s data are in one subdirectory (e.g., sub-ZS001), which contains the raw NIfTI data (after DICOM to NIfTI conversion) for this particular participant, as well as the associated metadata.

    Raw and processed peripheral physiological data can be found in each participant’s subdirectory under the “derivatives” folder. Manually obtained or adjusted MRI-based derivatives (e.g., spinal cord masks, segmental labels) are also shared for each participant and can be found in each participant’s subdirectory of the “derivatives” folder. For more details about the preprocessing pipeline and the description of each derivative, please see the following links: https://github.com/eippertlab/restingstate-reliability-spinalcord/ and PREPRINT LINK.

    Please note that data from three participants (sub-ZS009, sub-ZS018, sub-ZS030) are excluded from all analyses due to technical errors in the acquisition of peripheral physiological data, but their datasets are still provided for the sake of completeness. Should you have any questions about this data set, please contact mkaptan@stanford.edu or eippert@cbs.mpg.de.

  18. Data from: Target processing in overt serial visual search involves the...

    • openneuro.org
    Updated Nov 9, 2021
    Cite
    Anja Ischebeck; Hannah Hiebel; Joe Miller; Margit Höfler; Iain D. Gilchrist; Christof Körner (2021). Target processing in overt serial visual search involves the dorsal attention network: A fixation-based event-related fMRI study. [Dataset]. http://doi.org/10.18112/openneuro.ds003470.v2.0.0
    Explore at:
    Dataset updated
    Nov 9, 2021
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Anja Ischebeck; Hannah Hiebel; Joe Miller; Margit Höfler; Iain D. Gilchrist; Christof Körner
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description


    The raw BIDS data was created using BIDScoin 3.0.8 All provenance information and settings can be found in ./code/bidscoin For more information see: https://github.com/Donders-Institute/bidscoin

  19. fMRI data for: Multimodal single-neuron, intracranial EEG, and fMRI brain...

    • openneuro.org
    Updated Feb 27, 2024
    Cite
    Umit Keles; Julien Dubois; Kevin J. M. Le; J. Michael Tyszka; David A. Kahn; Chrystal M. Reed; Jeffrey M. Chung; Adam N. Mamelak; Ralph Adolphs; Ueli Rutishauser (2024). fMRI data for: Multimodal single-neuron, intracranial EEG, and fMRI brain responses during movie watching in human patients [Dataset]. http://doi.org/10.18112/openneuro.ds004798.v1.0.5
    Explore at:
    Dataset updated
    Feb 27, 2024
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Umit Keles; Julien Dubois; Kevin J. M. Le; J. Michael Tyszka; David A. Kahn; Chrystal M. Reed; Jeffrey M. Chung; Adam N. Mamelak; Ralph Adolphs; Ueli Rutishauser
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    README

    We present a multimodal dataset of intracranial recordings, fMRI, and eye tracking in 20 participants during movie watching. Recordings consist of single neurons, local field potential, and intracranial EEG activity acquired from depth electrodes targeting the amygdala, hippocampus, and medial frontal cortex implanted for monitoring of epileptic seizures. Participants watched an 8-min long excerpt from the video "Bang! You're Dead" and performed a recognition memory test for movie content. 3 T fMRI activity was recorded prior to surgery in 11 of these participants while performing the same task. This NWB- and BIDS-formatted dataset includes spike times, field potential activity, behavior, eye tracking, electrode locations, demographics, and functional and structural MRI scans. For technical validation, we provide signal quality metrics, assess eye tracking quality, behavior, the tuning of cells and high-frequency broadband power field potentials to familiarity and event boundaries, and show brain-wide inter-subject correlations for fMRI. This dataset will facilitate the investigation of brain activity during movie watching, recognition memory, and the neural basis of the fMRI-BOLD signal.

    This dataset accompanies the following data descriptor: Keles, U., Dubois, J., Le, K.J.M., Tyszka, J.M., Kahn, D.A., Reed, C.M., Chung, J.M., Mamelak, A.N., Adolphs, R. and Rutishauser, U. Multimodal single-neuron, intracranial EEG, and fMRI brain responses during movie watching in human patients. Sci Data 11, 214 (2024). Link to paper

    Related code: https://github.com/rutishauserlab/bmovie-release-NWB-BIDS

    Intracranial recording data: https://dandiarchive.org/dandiset/000623


  20. MRI Lab Graz: A Two-Week Running Intervention Reduces Symptoms Related to...

    • openneuro.org
    Updated Aug 25, 2021
    Cite
    Andreas Fink; Karl Koschutnig; Thomas Zussner; Corinna M. Perchtold-Stefan; Christian Rominger; Mathias Benedek; Ilona Papousek (2021). MRI Lab Graz: A Two-Week Running Intervention Reduces Symptoms Related to Depression and Increases Hippocampal Volume in Young Adults [Dataset]. http://doi.org/10.18112/openneuro.ds003799.v1.0.0
    Explore at:
    Dataset updated
    Aug 25, 2021
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Andreas Fink; Karl Koschutnig; Thomas Zussner; Corinna M. Perchtold-Stefan; Christian Rominger; Mathias Benedek; Ilona Papousek
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Graz
    Description

    Description of the Study Design

    The running intervention of this study was organized into seven units of about 50-60 minutes each, conducted over a period of two weeks. The standardized running route led through a mostly forested area at a local recreation area and was about five kilometers long. The design of this study included two groups of participants who were tested at three time points of assessment. A sample of 68 participants was recruited for this study. Of this sample, 48 participants completed all required MRI scans and psychometric assessments and participated in the running intervention. We primarily recruited rather unathletic people showing no or only low regular engagement in sports activities. Participants indicated that they exercise about half an hour per week (M = 0.53; SD = 1.2). Participants were randomly assigned to two intervention groups, which received the running intervention time-delayed. The first group ("intervention group") performed the running intervention between the first (t1) and the second test session (t2), while the second group ("wait group") received the training between t2 and the third test session (t3). At each time point of assessment (t1, t2, and t3), the German version of the Center for Epidemiological Studies Depression Scale (CES-D; Hautzinger et al., 2012) was administered to test intervention-related changes in depressive symptoms.

    Available Data / Folder Structure / Data Dictionary

    The dataset includes the MRI data of n = 48 participants who completed all three MRI scans (at t1, t2, t3). Specifically, for each participant (e.g., “sub-season101”), 3 subfolders of MRI data (“ses-1”, “ses-2”, and “ses-3”, one per time point of assessment) are available. The CES-D scores for each participant and time point of assessment (CES-D_1, CES-D_2, CES-D_3) can be found in the “phenotype” folder (“CES-D.tsv”). In addition, the file “participants.tsv” contains the grouping variable (either “1” for training group 1 or “2” for training group 2), along with age (in years), sex (“F” female, “M” male), size (in meters) and weight (in kg) of the participants. For quality assurance we ran the MRIQC pipeline (https://mriqc.readthedocs.io/en/latest/) for all subjects; the results can be found under the folder “derivatives”, subfolder “mriqc”. The raw BIDS data was created using BIDScoin 3.0.8. All provenance information and settings can be found in ./code/bidscoin. For more information see: https://github.com/Donders-Institute/bidscoin
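    The phenotype file described above is a plain BIDS-style TSV, so reading it needs only the standard library. A sketch assuming the column names given in the description (CES-D_1 through CES-D_3, plus a participant_id column as is standard in BIDS phenotype files), with made-up scores standing in for the real data:

```python
import csv
import io

# Hypothetical contents standing in for phenotype/CES-D.tsv; the real file
# has one row per participant with columns CES-D_1, CES-D_2, CES-D_3.
tsv = (
    "participant_id\tCES-D_1\tCES-D_2\tCES-D_3\n"
    "sub-season101\t18\t11\t10\n"
    "sub-season102\t15\t14\t9\n"
)

rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))

# Change in depressive symptoms from t1 to t2 for each participant
# (negative values indicate a reduction in CES-D score).
change_t1_t2 = {
    r["participant_id"]: int(r["CES-D_2"]) - int(r["CES-D_1"]) for r in rows
}
print(change_t1_t2)  # {'sub-season101': -7, 'sub-season102': -1}
```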
