19 datasets found
  1. Journals code.

    • plos.figshare.com
    xlsx
    Updated Sep 2, 2025
    Cite
    Pinge Zhao; Xin Zhang; Liandi Dai; Baoguo Ma; Yuting Duan; Yan Xu; Hongmei Wei; Shengwei Wu; Linghui Xiong (2025). Journals code. [Dataset]. http://doi.org/10.1371/journal.pone.0331697.s001
    Available download formats: xlsx
    Dataset updated
    Sep 2, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Pinge Zhao; Xin Zhang; Liandi Dai; Baoguo Ma; Yuting Duan; Yan Xu; Hongmei Wei; Shengwei Wu; Linghui Xiong
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Responsible data sharing in clinical research can enhance the transparency and reproducibility of research evidence, thereby increasing the overall value of research. Since 2024, more than 5,000 journals have adhered to the International Committee of Medical Journal Editors (ICMJE) Data Sharing Statement (DSS) to promote data sharing. However, due to the significant effort required for data sharing and the scarcity of academic rewards, data availability in clinical research remains suboptimal. This study aims to explore the impact of biomedical journal policies and available supporting information on the implementation of data availability in clinical research publications. This cross-sectional study will select 303 journals and their latest publications as samples from the biomedical journals listed in the Web of Science Journal Citation Reports, based on stratified random sampling according to the 2023 Journal Impact Factor (JIF). Two researchers will independently extract journal data-sharing policies from the submission guidelines of eligible journals, and data-sharing details from publications, using a pre-designed form from Apr 2025 to Dec 2025. The data-sharing levels of publications will be based on the openness of the data-sharing mechanism. Binomial logistic regression analyses will be used to identify potential journal factors that affect publication data-sharing levels. This protocol has been registered in the Open Science Framework (OSF) Registries: https://doi.org/10.17605/OSF.IO/EX6DV.

  2. Data from: Open Science Framework

    • rrid.site
    • scicrunch.org
    Updated Dec 22, 2011
    Cite
    (2011). Open Science Framework [Dataset]. http://doi.org/10.25504/FAIRsharing.g4z879
    Dataset updated
    Dec 22, 2011
    Description

    Platform to support research and enable collaboration. Used to discover projects, data, materials, and collaborators helpful to your own research.

  3. Data from: A survey of researchers' needs and priorities for data sharing

    • figshare.com
    • datasetcatalog.nlm.nih.gov
    pdf
    Updated May 31, 2023
    Cite
    James Harney; Iain Hrynaszkiewicz; Lauren Cadwallader (2023). Data from: A survey of researchers' needs and priorities for data sharing [Dataset]. http://doi.org/10.6084/m9.figshare.13858763.v2
    Available download formats: pdf
    Dataset updated
    May 31, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    James Harney; Iain Hrynaszkiewicz; Lauren Cadwallader
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In May-June 2020 PLOS surveyed researchers from Europe and North America to rate tasks associated with data sharing on (i) their importance to researchers and (ii) researchers' satisfaction with their ability to complete those tasks. Researchers were recruited via direct email campaigns, promoted Facebook and Twitter posts, a post on the PLOS Blog, and emails to industry contacts who distributed the survey on our behalf. Participation was incentivized with 3 random prize draws, which were managed separately to maintain anonymity.

    This dataset consists of:

    1) The survey sent to researchers (pdf).
    2) The anonymised data export of survey results (xlsx).

    The data export has been processed to retain the anonymity of participants. The comments left in the final question of the survey (question 17) have been removed. Answers to questions 12 to 16 have been recoded to give each answer a numerical value (see 'Scores' tab of spreadsheet). The counts, means, standard deviations and confidence intervals used in the associated manuscript for each factor are given in rows 619-622.

    Version 2 contains only the completed responses, i.e. those from participants who answered all the questions in the survey. The version 1 dataset contains a higher number of responses categorised as 'completed', but this was reviewed for version 2. Version 1 data was used for the preprint: https://doi.org/10.31219/osf.io/njr5u.

  4. Cognitive Electrophysiology in Socioeconomic Context in Adulthood: An EEG...

    • openneuro.org
    Updated May 13, 2025
    + more versions
    Cite
    Elif Isbell; Amanda N. Peters; Dylan M. Richardson; Nancy E. R. De León (2025). Cognitive Electrophysiology in Socioeconomic Context in Adulthood: An EEG dataset [Dataset]. http://doi.org/10.18112/openneuro.ds006018.v1.2.1
    Dataset updated
    May 13, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Elif Isbell; Amanda N. Peters; Dylan M. Richardson; Nancy E. R. De León
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Cognitive Electrophysiology in Socioeconomic Context in Adulthood Dataset

    Data Description

    This dataset comprises electroencephalogram (EEG) data collected from 127 young adults (18-30 years), along with retrospective objective and subjective indicators of childhood family socioeconomic status (SES), as well as SES indicators in adulthood, such as educational attainment, individual and household income, food security, and home and neighborhood characteristics. The EEG data were recorded with tasks directly acquired from the Event-Related Potentials Compendium of Open Resources and Experiments, ERP CORE (Kappenman et al., 2021), or adapted from these tasks (Isbell et al., 2024). These tasks were optimized to capture neural activity manifested in perception, cognition, and action in neurotypical young adults. Furthermore, the dataset includes a symptoms checklist, consisting of questions that were found to be predictive of symptoms consistent with attention-deficit/hyperactivity disorder (ADHD) in adulthood, which can be used to investigate the links between ADHD symptoms and neural activity in a socioeconomically diverse young adult sample. A detailed description of the dataset has been accepted for publication in Scientific Data under the title "Cognitive Electrophysiology in Socioeconomic Context in Adulthood."

    EEG Recording

    EEG data were recorded using the Brain Products actiCHamp Plus system, in combination with BrainVision Recorder (Version 1.25.0101). We used a 32-channel actiCAP slim active electrode system, with electrodes mounted on elastic snap caps (Brain Products GmbH, Gilching, Germany). The ground electrode was placed at FPz. From the electrode bundle, we repurposed 2 electrodes by placing them on the mastoid bones behind the left and right ears to be used for re-referencing after data collection. We also repurposed 3 additional electrodes to record the electrooculogram (EOG). To capture eye artifacts, we placed the horizontal EOG (HEOG) electrodes lateral to the external canthus of each eye. We also placed one vertical EOG (VEOG) electrode below the right eye. The remaining 27 electrodes were used as scalp electrodes, which were mounted per the international 10/20 system. EEG data were recorded at a sampling rate of 500 Hz and referenced to the Cz electrode. StimTrak was used to assess stimulus presentation delays for both the monitor and headphones. The results indicated that both the visual and auditory stimuli had a delay of approximately 20 ms. Therefore, users should shift the event codes by 20 ms when conducting stimulus-locked analyses.
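
    Where stimulus-locked analyses are planned, the 20 ms correction can be applied by shifting event samples after reading the data. A minimal MNE-Python sketch (not the authors' pipeline; the file name is hypothetical):

      import mne

      # BrainVision recordings load via the .vhdr header file.
      raw = mne.io.read_raw_brainvision("sub-01_task-erpcore_eeg.vhdr", preload=False)
      events, event_id = mne.events_from_annotations(raw)

      # 20 ms at the 500 Hz sampling rate = 10 samples.
      shift = int(round(0.020 * raw.info["sfreq"]))
      events[:, 0] += shift  # delay event onsets to match true stimulus onset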

    Notes

    Before the data were publicly shared, all identifiable information was removed, including date of birth, date of session, race/ethnicity, zip code, occupation (self and parent), and names of the languages the participants reported speaking and understanding fluently. Date of birth and date of session were used to compute age in years, which is included in the dataset. Furthermore, several variables were recoded based on re-identification risk assessments. Users who would like to establish secure access to components of the dataset that we could not publicly share due to re-identification risks should contact the corresponding researcher as described below. The dataset consists of participants recruited for studies on adult cognition in context. To provide the largest sample size, we included all participants who completed at least one of the EEG tasks of interest. Each participant completed each EEG task only once. The original participant IDs with which the EEG data were saved were recoded, and the raw EEG files were renamed to make the dataset BIDS compatible.

    The ERP CORE experimental tasks can be found on OSF, under Experiment Control Files: https://osf.io/thsqg/

    Examples of EEGLAB/ERPLAB data processing scripts that can be used with the EEG data shared here can be found on OSF:

    • osf.io/thsqg
    • osf.io/43H75

    Contact

    If you have any questions, comments, or requests, please contact Elif Isbell: eisbell@ucmerced.edu

    Copyright and License

    This dataset is licensed under CC0.

    References

    Isbell, E., De León, N. E. R., & Richardson, D. M. (2024). Childhood family socioeconomic status is linked to adult brain electrophysiology. PloS One, 19(8), e0307406.

    Isbell, E., De León, N. E. R. & Richardson, D. M. Childhood family socioeconomic status is linked to adult brain electrophysiology - accompanying analytic data and code. OSF https://doi.org/10.17605/osf.io/43H75 (2024).

    Kappenman, E. S., Farrens, J. L., Zhang, W., Stewart, A. X., & Luck, S. J. (2021). ERP CORE: An open resource for human event-related potential research. NeuroImage, 225, 117465.

    Kappenman, E. S., Farrens, J., Zhang, W., Stewart, A. X. & Luck, S. J. ERP CORE. https://osf.io/thsqg (2020).

    Kappenman, E., Farrens, J., Zhang, W., Stewart, A. & Luck, S. Experiment control files. https://osf.io/47uf2 (2020).

  5. Collaboratory Data on Community Engagement & Public Service in Higher...

    • openicpsr.org
    Updated Mar 30, 2021
    + more versions
    Cite
    Kristin D. Medlin; Manmeet Singh (2021). Collaboratory Data on Community Engagement & Public Service in Higher Education [Dataset]. http://doi.org/10.3886/E136322V5
    Dataset updated
    Mar 30, 2021
    Dataset provided by
    Collaboratory / Arizona State University Office of Social Embeddedness
    Authors
    Kristin D. Medlin; Manmeet Singh
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Area covered
    United States
    Description

    Collaboratory is a software product developed and maintained by HandsOn Connect Cloud Solutions. It is intended to help higher education institutions accurately and comprehensively track their relationships with the community through engagement and service activities. Institutions that use Collaboratory are given the option to opt in to a data sharing initiative at the time of onboarding, which grants us permission to de-identify their data and make it publicly available for research purposes. HandsOn Connect is committed to making Collaboratory data accessible to scholars for research, toward the goal of advancing the field of community engagement and social impact.

    Collaboratory is not a survey, but a dynamic software tool designed to facilitate comprehensive, longitudinal data collection on community engagement and public service activities conducted by faculty, staff, and students in higher education. We provide a standard questionnaire that was developed by Collaboratory's co-founders (Janke, Medlin, and Holland) in the Institute for Community and Economic Engagement at UNC Greensboro, and it continues to be closely monitored and adapted by staff at HandsOn Connect and academic colleagues. It includes descriptive characteristics (what, where, when, with whom, to what end) of activities and invites participants to periodically update their information in accordance with activity progress over time. Examples of individual questions include the focus areas addressed, populations served, on- and off-campus collaborators, connections to teaching and research, and location information, among others.

    The Collaboratory dataset contains data from 45 institutions beginning in March 2016 and continues to grow as more institutions adopt Collaboratory and expand its use. The data represent over 6,200 published activities (and additional associated content) across our user base.

    Please cite this data as: Medlin, Kristin and Singh, Manmeet. Dataset on Higher Education Community Engagement and Public Service Activities, 2016-2023. Collaboratory [producer], 2021. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2023-07-07. https://doi.org/10.3886/E136322V1

    When you cite this data, please also include: Janke, E., Medlin, K., & Holland, B. (2021, November 9). To What End? Ten Years of Collaboratory. https://doi.org/10.31219/osf.io/a27nb

  6. MRI data from 20 adults in response to videos of dialogue and monologue from...

    • openneuro.org
    Updated Feb 7, 2023
    Cite
    Halie Olson; Emily Chen; Kirsten Lydic; Rebecca Saxe (2023). MRI data from 20 adults in response to videos of dialogue and monologue from Sesame Street [Dataset]. http://doi.org/10.18112/openneuro.ds004467.v1.0.0
    Dataset updated
    Feb 7, 2023
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Halie Olson; Emily Chen; Kirsten Lydic; Rebecca Saxe
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Experiment

    20 adult participants (18 participants consented to open data sharing and are included here) watched video clips from Sesame Street, in which the audio was played either forward or reversed. Code and stimuli descriptions shared here: https://osf.io/whsb7/. We also scanned participants on two localizer tasks.

    SS-BlockedLang Language Task (litshort): a 2x2 block design with four conditions: Forward Dialogue, Forward Monologue, Backward Dialogue, and Backward Monologue. Participants were asked to watch the 20-second videos and press a button on an in-scanner button box when they saw a still image of Elmo appear on the screen after each 20-second block. Participants completed 4 runs, each 6 min 18 sec long. Each run contained unique clips, and participants never saw a Forward and Backward version of the same clip. Each run contained 3 sets of 4 blocks, one of each condition (total of 12 blocks), with 22-second rest blocks after each set of 4 blocks. Forward and Backward versions of each clip were counterbalanced between participants (randomly assigned Set A or Set B). Run order was randomized for each participant.

    SS-IntDialog Language Task (litlong): 1–3-minute dialogue clips of Sesame Street in which one character’s audio stream was played Forward and the other was played Backward. Additional sounds in the video (e.g., blowing bubbles, a crash from something falling) were played forwards. Participants watched the videos and pressed a button on an in-scanner button box when they saw a still image of Elmo appear on the screen immediately after each block. Participants completed 2 runs, each approximately 8 min 52 sec long. Each run contained unique clips, and participants never saw a version of the same clip with the Forward/Backward streams reversed. Each run contained 3 clips, 1-3 minutes each, presented in the same order. Between each video, as well as at the beginning and end of the run, there was a 22-second fixation block. Versions of each clip with the opposite character Forward and Backward were counterbalanced between participants (randomly assigned Set A or Set B). 11 participants saw version A, and 9 participants saw version B (1 run from group A was excluded due to a participant falling asleep, and 1 run from group B was excluded due to motion). Run order was randomized for each participant (random sequence 1-2).

    Auditory Language Localizer (langloc): We used a localizer task previously validated for identifying high-level language processing regions (Scott et al., 2017). Participants listened to Intact and Degraded 18-second blocks of speech. The Intact condition consisted of audio clips of spoken English (e.g., clips from interviews in which one person is speaking), and the Degraded condition consisted of acoustically degraded versions of these clips. Participants viewed a black dot on a white background during the task while passively listening to the auditory stimuli. 14-second fixation blocks (no sound) were present after every 4 speech blocks, as well as at the beginning and end of each run (5 fixation blocks per run). Participants completed two runs, each approximately 6 min 6 sec long. Each run contained 16 blocks of speech (8 intact, 8 degraded).

    Theory of Mind Localizer (tomloc): We used a task previously validated for identifying regions that are involved in ToM and social cognition (Dodell-Feder et al., 2011). Participants read short stories in two conditions: False Beliefs and False Photos. Stories in the False Beliefs condition described scenarios in which a character holds a false belief. Stories in the False Photos condition described outdated photographs and maps. Each story was displayed in white text on a black screen for 10 seconds, followed by a 4-second true/false question based on the story (which participants responded to via the in-scanner button box), followed by 12 seconds of a blank screen (rest). Each run contained 10 blocks. Participants completed two runs, each approximately 4 min 40 sec long.

  7. Code and data: Understanding temporal variability across trophic levels and...

    • zenodo.org
    zip
    Updated Oct 30, 2023
    + more versions
    Cite
    Tadeu Siqueira; Charles P. Hawkins; Julian Olden; Jonathan Tonkin; Lise Comte; Victor S. Saito; Thomas L. Anderson; Gedimar P. Barbosa; Núria Bonada; Claudia C. Bonecker; Miguel Cañedo-Argüelles; Thibault Datry; Michael B. Flinn; Pau Fortuño; Gretchen A. Gerrish; Peter Haase; Matthew J. Hill; James M. Hood; Kaisa-Leena Huttunen; Michael J. Jeffries; Timo Muotka; Daniel R. O'Donnell; Riku Paavola; Petr Paril; Michael J. Paterson; Christopher J. Patrick; Gilmar Perbiche-Neves; Luzia C. Rodrigues; Susanne C. Schneider; Michal Straka; Albert Ruhi (2023). Code and data: Understanding temporal variability across trophic levels and spatial scales in freshwater ecosystems [Dataset]. http://doi.org/10.5281/zenodo.8333128
    Available download formats: zip
    Dataset updated
    Oct 30, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Tadeu Siqueira; Charles P. Hawkins; Julian Olden; Jonathan Tonkin; Lise Comte; Victor S. Saito; Thomas L. Anderson; Gedimar P. Barbosa; Núria Bonada; Claudia C. Bonecker; Miguel Cañedo-Argüelles; Thibault Datry; Michael B. Flinn; Pau Fortuño; Gretchen A. Gerrish; Peter Haase; Matthew J. Hill; James M. Hood; Kaisa-Leena Huttunen; Michael J. Jeffries; Timo Muotka; Daniel R. O'Donnell; Riku Paavola; Petr Paril; Michael J. Paterson; Christopher J. Patrick; Gilmar Perbiche-Neves; Luzia C. Rodrigues; Susanne C. Schneider; Michal Straka; Albert Ruhi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Code and data to reproduce the results in Siqueira et al. (submitted), published as a preprint (https://doi.org/10.32942/osf.io/mpf5x).

    The full set of results, including those made available as supplementary material, can be reproduced by running five scripts in the R_codes folder following this sequence:

    • 01_Dataprep_stability_metrics.R
    • 02_SEM_analyses.R
    • 03_Stab_figs.R
    • 04_Stab_supp_m.R
    • 05_Sensit_analysis.R

    and using the data available in the Input_data folder.
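
    As a convenience, a minimal sketch (an assumption, not part of the archive: it presumes Python with Rscript on the PATH, run from the project root) of executing the five scripts in the stated order:

      import subprocess

      scripts = [
          "01_Dataprep_stability_metrics.R",
          "02_SEM_analyses.R",
          "03_Stab_figs.R",
          "04_Stab_supp_m.R",
          "05_Sensit_analysis.R",
      ]
      for script in scripts:
          # Stop at the first failure so later scripts never run on stale inputs.
          subprocess.run(["Rscript", f"R_codes/{script}"], check=True)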

    The original raw data made available include the abundance (individual counts, biomass, coverage area) of a given taxon, at a given site, in a given year. See details here https://doi.org/10.32942/osf.io/mpf5x

    However, this is a collaborative effort and not all authors are allowed to share their raw data. One data set (LEPAS), out of 30, was not made available due to the data sharing policies of The Ohio Division of Wildlife (ODOW). Thus, the script "01_Dataprep_stability_metrics.R" imports all data made available, except the LEPAS data set. For this specific data set, "01_Dataprep_stability_metrics.R" imports variability and synchrony components estimated using the methods described in Wang et al. (2019, Ecography; doi:10.1111/ecog.04290), diversity metrics (alpha and gamma diversity), and some variables describing the data set.

    A protocol for requesting access to the LEPAS data sets can be found here:
    https://ael.osu.edu/researchprojects/lake-erie-plankton-abundance-study-lepas

    Dataset owner: Ohio Department of Natural Resources – Division of Wildlife, managed by Jim Hood, Dept. of Evolution, Ecology, and Organismal Biology, The Ohio State University. Email: hood.211@osu.edu

    Anyone who wants to reproduce the results described in the preprint can download the whole R project (which includes code and data) and run scripts 01 through 05.

    The whole R project folder, with everything needed to reproduce the results, is made available as a compressed file.

  8. Open data: Visual load effects on the auditory steady-state responses to...

    • demo.researchdata.se
    • researchdata.se
    • +2more
    Updated Nov 8, 2020
    Cite
    Stefan Wiens; Malina Szychowska (2020). Open data: Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones [Dataset]. http://doi.org/10.17045/STHLMUNI.12582002
    Dataset updated
    Nov 8, 2020
    Dataset provided by
    Stockholm University
    Authors
    Stefan Wiens; Malina Szychowska
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The main results files are saved separately:

    • ASSR2.html: R output of the main analyses (N = 33)
    • ASSR2_subset.html: R output of the main analyses for the smaller sample (N = 25)

    FIGSHARE METADATA

    Categories

    • Biological psychology
    • Neuroscience and physiological psychology
    • Sensory processes, perception, and performance

    Keywords

    • crossmodal attention
    • electroencephalography (EEG)
    • early-filter theory
    • task difficulty
    • envelope following response


    GENERAL INFORMATION

    1. Title of Dataset: Open data: Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones

    2. Author Information A. Principal Investigator Contact Information Name: Stefan Wiens Institution: Department of Psychology, Stockholm University, Sweden Internet: https://www.su.se/profiles/swiens-1.184142 Email: sws@psychology.su.se

      B. Associate or Co-investigator Contact Information Name: Malina Szychowska Institution: Department of Psychology, Stockholm University, Sweden Internet: https://www.researchgate.net/profile/Malina_Szychowska Email: malina.szychowska@psychology.su.se

    3. Date of data collection: Subjects (N = 33) were tested between 2019-11-15 and 2020-03-12.

    4. Geographic location of data collection: Department of Psychology, Stockholm, Sweden

    5. Information about funding sources that supported the collection of the data: Swedish Research Council (Vetenskapsrådet) 2015-01181

    SHARING/ACCESS INFORMATION

    1. Licenses/restrictions placed on the data: CC BY 4.0

    2. Links to publications that cite or use the data: Szychowska M., & Wiens S. (2020). Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones. Submitted manuscript.

    The study was preregistered: https://doi.org/10.17605/OSF.IO/6FHR8

    3. Links to other publicly accessible locations of the data: N/A

    4. Links/relationships to ancillary data sets: N/A

    5. Was data derived from another source? No

    6. Recommended citation for this dataset: Wiens, S., & Szychowska M. (2020). Open data: Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones. Stockholm: Stockholm University. https://doi.org/10.17045/sthlmuni.12582002

    DATA & FILE OVERVIEW

    File List: The files contain the raw data, scripts, and results of main and supplementary analyses of an electroencephalography (EEG) study. Links to the hardware and software are provided under methodological information.

    ASSR2_experiment_scripts.zip: contains the Python files to run the experiment.

    ASSR2_rawdata.zip: contains raw datafiles for each subject

    • data_EEG: EEG data in bdf format (generated by Biosemi)
    • data_log: logfiles of the EEG session (generated by Python)

    ASSR2_EEG_scripts.zip: Python-MNE scripts to process the EEG data

    ASSR2_EEG_preprocessed_data.zip: EEG data in fif format after preprocessing with Python-MNE scripts

    ASSR2_R_scripts.zip: R scripts to analyze the data together with the main datafiles. The main files in the folder are:

    • ASSR2.html: R output of the main analyses
    • ASSR2_subset.html: R output of the main analyses but after excluding eight subjects who were recorded as pilots before preregistering the study

    ASSR2_results.zip: contains all figures and tables that are created by Python-MNE and R.

    METHODOLOGICAL INFORMATION

    1. Description of methods used for collection/generation of data: The auditory stimuli were amplitude-modulated tones with a carrier frequency (fc) of 500 Hz and modulation frequencies (fm) of 20.48 Hz, 40.96 Hz, or 81.92 Hz. The experiment was programmed in Python (https://www.python.org/) and used extra functions from https://github.com/stamnosslin/mn
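
    For illustration, a minimal NumPy sketch (not the authors' stimulus code; the sample rate and duration are assumptions) of one such 100% amplitude-modulated tone:

      import numpy as np

      fs, dur = 44100, 1.0   # sample rate (Hz) and duration (s): assumed values
      fc, fm = 500.0, 40.96  # carrier and modulation frequencies from the text
      t = np.arange(int(fs * dur)) / fs
      # Carrier multiplied by a 0..1 envelope at the modulation frequency.
      tone = 0.5 * (1.0 + np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)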

    The EEG data were recorded with an Active Two BioSemi system (BioSemi, Amsterdam, Netherlands; www.biosemi.com) and saved in .bdf format. For more information, see linked publication.

    2. Methods for processing the data: We conducted frequency analyses and computed event-related potentials. See the linked publication.

    3. Instrument- or software-specific information needed to interpret the data: MNE-Python (Gramfort A., et al., 2013): https://mne.tools/stable/index.html# ; RStudio used with R (R Core Team, 2020): https://rstudio.com/products/rstudio/ ; Wiens, S. (2017). Aladins Bayes Factor in R (Version 3). https://www.doi.org/10.17045/sthlmuni.4981154.v3

    4. Standards and calibration information, if appropriate: For information, see the linked publication.

    5. Environmental/experimental conditions: For information, see the linked publication.

    6. Describe any quality-assurance procedures performed on the data: For information, see the linked publication.

    7. People involved with sample collection, processing, analysis and/or submission:

    • Data collection: Malina Szychowska with assistance from Jenny Arctaedius.
    • Data processing, analysis, and submission: Malina Szychowska and Stefan Wiens

    DATA-SPECIFIC INFORMATION: All relevant information can be found in the MNE-Python and R scripts (in EEG_scripts and analysis_scripts folders) that process the raw data. For example, we added notes to explain what different variables mean.

  9. FridayNightLights_Study2

    • openneuro.org
    Updated Jan 11, 2022
    Cite
    Luke Chang; Eshin Jolly; Jin Hyun Cheong; Kristina Rapuano; Nathan Greenstein; Pin-Hao Chen; Jeremy Manning (2022). FridayNightLights_Study2 [Dataset]. http://doi.org/10.18112/openneuro.ds003521.v2.1.0
    Dataset updated
    Jan 11, 2022
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Luke Chang; Eshin Jolly; Jin Hyun Cheong; Kristina Rapuano; Nathan Greenstein; Pin-Hao Chen; Jeremy Manning
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Overview

    This dataset is Study 2 from:

    Chang, L. J., Jolly, E., Cheong, J. H., Rapuano, K. M., Greenstein, N., Chen, P. H. A., & Manning, J. R. (2021). Endogenous variation in ventromedial prefrontal cortex state dynamics during naturalistic viewing reflects affective experience. Science Advances, 7(17), eabf7129.

    Participants (n=35) watched the first episode of Friday Night Lights while undergoing fMRI.

    We are also sharing additional data from this paper:

    • Study 1 (n=13) fMRI Data - Available on OpenNeuro. Participants watched FNL episode 1 & 2.

    • Study 3 (n=20) Face Expression Data - Available on OSF. We are only sharing extracted Action Unit Values. We do not have permission to share raw video data.

    • Study 4 (n=192) Emotion Ratings - Available on OSF. Rating data was collected on Amazon Mechanical Turk using a custom Flask web application built by Nathan Greenstein.

    Preprocessing

    Results included in this OpenNeuro repository come from preprocessing performed using fMRIPrep 20.2.1 (Esteban, Markiewicz, et al. (2018); Esteban, Blair, et al. (2018); RRID:SCR_016216), which is based on Nipype 1.5.1 (Gorgolewski et al. (2011); Gorgolewski et al. (2018); RRID:SCR_002502). See derivatives/code/fmriprep.sh for the script used to run preprocessing.

    Anatomical data preprocessing

    A total of 1 T1-weighted (T1w) image was found within the input BIDS dataset. The T1-weighted (T1w) image was corrected for intensity non-uniformity (INU) with N4BiasFieldCorrection (Tustison et al. 2010), distributed with ANTs 2.3.3 (Avants et al. 2008, RRID:SCR_004757), and used as T1w-reference throughout the workflow. The T1w-reference was then skull-stripped with a Nipype implementation of the antsBrainExtraction.sh workflow (from ANTs), using OASIS30ANTs as target template. Brain tissue segmentation of cerebrospinal fluid (CSF), white-matter (WM) and gray-matter (GM) was performed on the brain-extracted T1w using fast (FSL 5.0.9, RRID:SCR_002823, Zhang, Brady, and Smith 2001). Volume-based spatial normalization to one standard space (MNI152NLin2009cAsym) was performed through nonlinear registration with antsRegistration (ANTs 2.3.3), using brain-extracted versions of both T1w reference and the T1w template. The following template was selected for spatial normalization: ICBM 152 Nonlinear Asymmetrical template version 2009c (Fonov et al. (2009), RRID:SCR_008796; TemplateFlow ID: MNI152NLin2009cAsym).

    Functional data preprocessing

    For each of the 1 BOLD runs found per subject (across all tasks and sessions), the following preprocessing was performed. First, a reference volume and its skull-stripped version were generated using a custom methodology of fMRIPrep. Susceptibility distortion correction (SDC) was omitted. The BOLD reference was then co-registered to the T1w reference using flirt (FSL 5.0.9, Jenkinson and Smith 2001) with the boundary-based registration (Greve and Fischl 2009) cost-function. Co-registration was configured with nine degrees of freedom to account for distortions remaining in the BOLD reference. Head-motion parameters with respect to the BOLD reference (transformation matrices, and six corresponding rotation and translation parameters) are estimated before any spatiotemporal filtering using mcflirt (FSL 5.0.9, Jenkinson et al. 2002). The BOLD time-series (including slice-timing correction when applied) were resampled onto their original, native space by applying the transforms to correct for head-motion. These resampled BOLD time-series will be referred to as preprocessed BOLD in original space, or just preprocessed BOLD. The BOLD time-series were resampled into standard space, generating a preprocessed BOLD run in MNI152NLin2009cAsym space.

    Several confounding time-series were calculated based on the preprocessed BOLD: framewise displacement (FD), DVARS and three region-wise global signals. FD was computed using two formulations following Power (absolute sum of relative motions, Power et al. (2014)) and Jenkinson (relative root mean square displacement between affines, Jenkinson et al. (2002)). FD and DVARS are calculated for each functional run, both using their implementations in Nipype (following the definitions by Power et al. 2014). The three global signals are extracted within the CSF, the WM, and the whole-brain masks. Additionally, a set of physiological regressors were extracted to allow for component-based noise correction (CompCor, Behzadi et al. 2007). Principal components are estimated after high-pass filtering the preprocessed BOLD time-series (using a discrete cosine filter with 128s cut-off) for the two CompCor variants: temporal (tCompCor) and anatomical (aCompCor). tCompCor components are then calculated from the top 2% variable voxels within the brain mask. For aCompCor, three probabilistic masks (CSF, WM and combined CSF+WM) are generated in anatomical space. The implementation differs from that of Behzadi et al. in that instead of eroding the masks by 2 pixels on BOLD space, the aCompCor masks are subtracted a mask of pixels that likely contain a volume fraction of GM. This mask is obtained by thresholding the corresponding partial volume map at 0.05, and it ensures components are not extracted from voxels containing a minimal fraction of GM. Finally, these masks are resampled into BOLD space and binarized by thresholding at 0.99 (as in the original implementation). Components are also calculated separately within the WM and CSF masks. For each CompCor decomposition, the k components with the largest singular values are retained, such that the retained components’ time series are sufficient to explain 50 percent of variance across the nuisance mask (CSF, WM, combined, or temporal). The remaining components are dropped from consideration.

    The head-motion estimates calculated in the correction step were also placed within the corresponding confounds file. The confound time series derived from head motion estimates and global signals were expanded with the inclusion of temporal derivatives and quadratic terms for each (Satterthwaite et al. 2013). Frames that exceeded a threshold of 0.5 mm FD or 1.5 standardised DVARS were annotated as motion outliers. All resamplings can be performed with a single interpolation step by composing all the pertinent transformations (i.e. head-motion transform matrices, susceptibility distortion correction when available, and co-registrations to anatomical and output spaces). Gridded (volumetric) resamplings were performed using antsApplyTransforms (ANTs), configured with Lanczos interpolation to minimize the smoothing effects of other kernels (Lanczos 1964). Non-gridded (surface) resamplings were performed using mri_vol2surf (FreeSurfer).

    Many internal operations of fMRIPrep use Nilearn 0.6.2 (Abraham et al. 2014, RRID:SCR_001362), mostly within the functional processing workflow. For more details of the pipeline, see the section corresponding to workflows in fMRIPrep’s documentation.
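
    As one illustration of working with these outputs, a minimal pandas sketch (the file name is hypothetical; the column names follow fMRIPrep's standard confounds table) for pulling the regressors described above:

      import pandas as pd

      conf = pd.read_table("sub-01_task-movie_desc-confounds_timeseries.tsv")
      acompcor = conf.filter(regex=r"^a_comp_cor_")     # anatomical CompCor components
      outliers = conf.filter(regex=r"^motion_outlier")  # FD/DVARS spike regressors
      fd = conf["framewise_displacement"]               # Power-style FD per volume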

    Denoising

    We have also performed denoising on the preprocessed data by running a univariate GLM, which included the following regressors:

    • Intercept, Linear & Quadratic Trends (3)
    • Expanded Realignment Parameters (24): derivatives, quadratics, quadratic derivatives
    • Spikes based on global outliers greater than 3 SD and also based on average frame differencing
    • CSF regressor

    See derivatives/code/Denoise_Preprocessed_Data.ipynb file for full details.

    We have included denoised data both unsmoothed and smoothed with a 6 mm FWHM Gaussian kernel.

    We have included nifti and hdf5 versions of the data. HDF5 files are much faster to load if you are using our nltools toolbox for your data analyses.
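
    For readers who want to inspect the HDF5 files before committing to a toolbox, a minimal h5py sketch (the file name is hypothetical):

      import h5py

      # Print every group and dataset in the file.
      with h5py.File("sub-01_denoised_smoothed6mm.h5", "r") as f:
          f.visititems(lambda name, obj: print(name, obj))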

    Notes

    Subject sub-sid000496 had bad normalization using this preprocessing pipeline, so we have not included this participant in the denoised data. We note that we did not see this issue using the original preprocessing pipeline reported in the original paper.

  10. Quantitative assessment of research data management practice - University of...

    • zenodo.org
    bin, csv
    Updated Jan 24, 2020
    Cite
    Frédérique Flamerie (2020). Quantitative assessment of research data management practice - University of Bordeaux [Dataset]. http://doi.org/10.5281/zenodo.3241239
    Available download formats: csv, bin
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Frédérique Flamerie
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Bordeaux
    Description

    This survey was run at the University of Bordeaux in January 2019 using the questionnaire "Quantitative assessment of research data management practice":

    Teperek, M., Krause, J., Lambeng, N., Blumer, E., van Dijck, J., Eggermont, R., … der Velden, Y. T. (2019). Quantitative assessment of research data management practice. Retrieved from https://osf.io/mz3fx/

    The questionnaire included all the primary and secondary common questions, institution-specific questions regarding services and file sharing (EPFL questions), and institution-specific questions for profile information.

    Data from the 425 responses collected are published here.

    Details regarding data collection and curation are included in the README file.

  11. Representation of navigational affordances and ego-motion in the occipital...

    • openneuro.org
    Updated Jan 2, 2025
    Cite
    Frederik S. Kamps; Emily M. Chen; Nancy Kanwisher; Rebecca Saxe (2025). Representation of navigational affordances and ego-motion in the occipital place area - dataset [Dataset]. http://doi.org/10.18112/openneuro.ds005783.v1.0.1
    Dataset updated
    Jan 2, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Frederik S. Kamps; Emily M. Chen; Nancy Kanwisher; Rebecca Saxe
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    17 adult participants (17 participants consented to open data sharing and are included here) watched short, ~3s video clips of the first-person perspective of walking through rooms. Rooms had a door on either the left, right, or both, and ego-motion was forward, backward, left or right. Code and stimuli descriptions shared here: https://osf.io/6yehp. We also scanned participants on a dynamic scene, face, and object localizer task.

    Main Experiment (EXP): Experimental stimuli consisted of 14 conditions (Conditions 1-10 are shown in Figure 2; example stimuli and results from conditions 11-14 are shown in Supplemental Figure 3). All stimuli were created using Unity software and depicted 3 second clips of the first-person experience of walking through scenes. Navigational affordances were manipulated by including an open doorway to either the left side, right side, or both sides. To help control for low-level visual confounds, the non-doorway side always included a distractor object, either a painting (conditions 1-10) or an inverted doorway (conditions 11-14). Furthermore, the textures applied to the painting and the walls through the doorways were counterbalanced, such that each texture appeared equally on either side across the full stimulus set. Ego-motion was manipulated by changing the direction of ego-motion through the scene, which could either be forward (conditions 1-3, 11-14), backward (conditions 4-6), a left turn (conditions 7-8) or a right turn (conditions 9-10). To help prevent visual adaptation over the course of the experiment, the 14 experimental conditions were counterbalanced across 8 room types, which differed from one another based on the textures applied to the walls, floor and ceiling, and to a lesser extent, by the size and shape of the doorways and corresponding distractor (Figure 5). Stimuli were presented at 13.1 x 18.6 DVA in an event-related paradigm. Each stimulus was presented for 2.5s, followed by a minimum inter-stimulus-interval (ISI) of 3.5s and a maximum ISI of 9.5s, optimized separately for each run using OptSeq2. Participants viewed 4 repetitions of each condition per run, and completed 8 experimental runs, yielding 32 total repetitions per condition across the experiment. To help ensure participants paid attention throughout the experiment, participants performed a one-back task, responding via button press whenever the exact same video stimulus repeated on back-to-back trials. Participants were also instructed to lie still, keep their eyes open, and try to pay attention to and immerse themselves in the stimuli.

    Localizer (LOC): Localizer stimuli consisted of 3s videos of dynamic Scenes, Objects, Faces, and Scrambled Objects, as described previously in Kamps et al., 2016 and 2020. Stimuli were presented using a block design at 13.7 x 18.1 degrees of visual angle. Each run was 315s long and contained 4 blocks per stimulus category. The order of the first set of blocks was pseudorandomized across runs (e.g., Faces, Objects, Scenes, Scrambled) and the order of the second set of blocks was the palindrome of the first (e.g., Scrambled, Scenes, Objects, Faces). Each block consisted of 5 2.8s video clips from a single condition, with an ISI of 0.2s, resulting in 15s blocks. Each run also included 5 fixation blocks: one at the beginning, three evenly spaced throughout the run, and one at the end. Participants completed 3 Localizer runs, interleaved between every 2 Experimental Runs.

  12. Time-sharing Experiments for the Social Sciences, TESS73 Djupe, The...

    • thearda.com
    Updated Aug 15, 2011
    Cite
    The Association of Religion Data Archives (2011). Time-sharing Experiments for the Social Sciences, TESS73 Djupe, The Political Impact of Message Attributes from Religious Elites [Dataset]. http://doi.org/10.17605/OSF.IO/9QGYD
    Dataset updated
    Aug 15, 2011
    Dataset provided by
    Association of Religion Data Archives
    Dataset funded by
    National Science Foundation
    Description

    TESS conducts general population experiments on behalf of investigators throughout the social sciences. General population experiments allow investigators to assign representative subject populations to experimental conditions of their choosing. Faculty and graduate students from the social sciences and related fields (such as law and public health) propose experiments. A comprehensive online submission and peer-review process screens proposals for the importance of their contribution to science and society.

    The study focuses on the effect religious attributes may have on messages about global warming. Respondents will receive information about 1) the religious affiliation of a public official and 2) the way he made his decision to take a stance on global warming. This is a 2x2 between-subjects design, where the first factor is the source cue (Present/Absent) and the second factor is the decision process (Present/Absent). In total, there are four conditions, and respondents are assigned to them with equal probability.
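
    For illustration only (this is not TESS's assignment software), a minimal sketch of equal-probability assignment to the four cells:

      import random

      # The four cells of the 2x2 design described above.
      conditions = [(cue, process)
                    for cue in ("source cue present", "source cue absent")
                    for process in ("decision process present", "decision process absent")]

      # Each respondent is assigned to one cell with probability 1/4.
      assignments = [random.choice(conditions) for _ in range(1000)]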

  13. Open data: The early but not the late neural correlate of auditory awareness...

    • researchdata.se
    • demo.researchdata.se
    • +1more
    Updated Jun 2, 2021
    Cite
    Stefan Wiens; Rasmus Eklund; Billy Gerdfeldter (2021). Open data: The early but not the late neural correlate of auditory awareness reflects lateralized experiences [Dataset]. http://doi.org/10.17045/STHLMUNI.13067018
    Dataset updated
    Jun 2, 2021
    Dataset provided by
    Stockholm University
    Authors
    Stefan Wiens; Rasmus Eklund; Billy Gerdfeldter
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    GENERAL INFORMATION

    1. Title of Dataset: Open data: The early but not the late neural correlate of auditory awareness reflects lateralized experiences.

    2. Author Information A. Principal Investigator Contact Information Name: Stefan Wiens Institution: Department of Psychology, Stockholm University, Sweden Internet: https://www.su.se/profiles/swiens-1.184142 Email: sws@psychology.su.se

      B. Associate or Co-investigator Contact Information Name: Rasmus Eklund Institution: Department of Psychology, Stockholm University, Sweden Internet: https://www.su.se/profiles/raek2031-1.223133 Email: rasmus.eklund@psychology.su.se

      C. Associate or Co-investigator Contact Information Name: Billy Gerdfeldter Institution: Department of Psychology, Stockholm University, Sweden Internet: https://www.su.se/profiles/bige1544-1.403208 Email: billy.gerdfeldter@psychology.su.se

    3. Date of data collection: Subjects (N = 28) were tested between 2020-03-04 and 2020-09-18.

    4. Geographic location of data collection: Department of Psychology, Stockholm, Sweden

    5. Information about funding sources that supported the collection of the data: Marianne and Marcus Wallenberg (Grant 2019-0102)

    SHARING/ACCESS INFORMATION

    1. Licenses/restrictions placed on the data: CC BY 4.0

    2. Links to publications that cite or use the data: Eklund R., Gerdfeldter B., & Wiens S. (2021). The early but not the late neural correlate of auditory awareness reflects lateralized experiences. Neuropsychologia. https://doi.org/

    The study was preregistered: https://doi.org/10.17605/OSF.IO/PSRJF

    3. Links to other publicly accessible locations of the data: N/A

    4. Links/relationships to ancillary data sets: N/A

    5. Was data derived from another source? No

    6. Recommended citation for this dataset: Eklund R., Gerdfeldter B., & Wiens S. (2020). Open data: The early but not the late neural correlate of auditory awareness reflects lateralized experiences. Stockholm: Stockholm University. https://doi.org/10.17045/sthlmuni.13067018

    DATA & FILE OVERVIEW

    File List: The files contain the downsampled data in BIDS format, scripts, and results of main and supplementary analyses of the electroencephalography (EEG) study. Links to the hardware and software are provided under methodological information.

    AAN_LRclick_experiment_scripts.zip: contains the Python files to run the experiment

    AAN_LRclick_bids_EEG.zip: contains EEG data files for each subject in .eeg format.

    AAN_LRclick_behavior_log.zip: contains log files of the EEG session (generated by Python)

    AAN_LRclick_EEG_scripts.zip: Python-MNE scripts to process and to analyze the EEG data

    AAN_LRclick_results.zip: contains summary data files, figures, and tables that are created by Python-MNE.

    METHODOLOGICAL INFORMATION

    1. Description of methods used for collection/generation of data: The auditory stimuli were 4-ms clicks. The experiment was programmed in Python (https://www.python.org/) and used extra functions from https://github.com/stamnosslin/mn. The EEG data were recorded with an Active Two BioSemi system (BioSemi, Amsterdam, Netherlands; www.biosemi.com) and converted to .eeg format. For more information, see the linked publication.

    2. Methods for processing the data: We computed event-related potentials. See the linked publication.

    3. Instrument- or software-specific information needed to interpret the data: MNE-Python (Gramfort A., et al., 2013): https://mne.tools/stable/index.html#

    4. Standards and calibration information, if appropriate: For information, see linked publication.

    5. Environmental/experimental conditions: For information, see linked publication.

    6. Describe any quality-assurance procedures performed on the data: For information, see linked publication.

    7. People involved with sample collection, processing, analysis and/or submission:

    • Data collection: Rasmus Eklund with assistance from Billy Gerdfeldter.
    • Data processing, analysis, and submission: Rasmus Eklund

    DATA-SPECIFIC INFORMATION: All relevant information can be found in the MNE-Python scripts (in EEG_scripts folder) that process the EEG data. For example, we added notes to explain what different variables mean.

    The folder structure needs to be as follows:

      AAN_LRclick (main folder)
        data
          bids (AAN_LRclick_bids_EEG)
          log (AAN_LRclick_behavior_log)
        MNE (AAN_LRclick_EEG_scripts)
        results (AAN_LRclick_results)

    To run the MNE-Python scripts: Anaconda was used with MNE-Python 0.22 (see installation at https://mne.tools/stable/index.html#). For preprocess.py and analysis.py, the complete scripts should be run (from the Anaconda prompt).
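
    A minimal sketch (paths taken from the folder structure above) to verify the layout before running preprocess.py and analysis.py:

      from pathlib import Path

      root = Path("AAN_LRclick")
      for sub in ("data/bids", "data/log", "MNE", "results"):
          p = root / sub
          print(p, "OK" if p.is_dir() else "MISSING")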

  14. fMRI dataset: Violations of psychological and physical expectations in human...

    • openneuro.org
    Updated Jan 17, 2024
    Cite
    Shari Liu; Kirsten Lydic; Lingjie Mei; Rebecca Saxe (2024). fMRI dataset: Violations of psychological and physical expectations in human adult brains [Dataset]. http://doi.org/10.18112/openneuro.ds004934.v1.0.0
    Dataset updated
    Jan 17, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Shari Liu; Kirsten Lydic; Lingjie Mei; Rebecca Saxe
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Dataset description

    This dataset contains fMRI data from adults from one paper, with two experiments in it:

    Liu, S., Lydic, K., Mei, L., & Saxe, R. (in press, Imaging Neuroscience). Violations of physical and psychological expectations in the human adult brain. Preprint: https://doi.org/10.31234/osf.io/54x6b

    All subjects who contributed data to this repository consented explicitly to share their de-faced brain images publicly on OpenNeuro. Experiment 1 has 16 subjects who gave consent to share (17 total), and Experiment 2 has 29 subjects who gave consent to share (32 total). Experiment 1 subjects have subject IDs starting with "SAXNES*", and Experiment 2 subjects have subject IDs starting with "SAXNES2*".

    • code/ contains contrast files used in published work
    • sub-SAXNES*/ contains anatomical and functional images, and event files for each functional image. Event files contain the onset, duration, and condition labels
    • CHANGES will be logged in this file

    Tasks

    • VOE (Experiment 1 version): Novel task using hand-crafted stimuli from developmental psychology, showing violations of object solidity and support, and violations of goal-directed and efficient action. There were only 4 sets of stimuli in this experiment, which repeated across runs. Shown in mini-blocks of familiarization + two test events.
    • VOE (Experiment 2 version): Novel task including all stimuli from Experiment 1 except for support, showing violations of object permanence and continuity (from ADEPT dataset; Smith et al. 2019) and violations of goal-directed and efficient action (from AGENT dataset; Shu et al. 2021). Shown in pairs of familiarization + one test event (either expected or unexpected). All subjects saw one set of stimuli in runs 1-2, and a second set of stimuli in runs 3-4. If someone saw an expected outcome from a scenario in one run, they saw the unexpected outcome from the same scenario in the other run.
    • DOTS (2 runs, both Exp 1-2): Task contrasting social and physical interaction (Fischer et al. 2016, PNAS). Designed to localize regions like the STS and SMG.
    • Motion: Task contrasting coherent and incoherent motion (Robertson et al. 2014, Brain). Designed to localize area MT.
    • spWM: Task contrasting a hard vs easy spatial working memory task (Fedorenko et al., 2013, PNAS). Designed to localize multiple demand regions.

    There are (anonymized) event files associated with each run, subject and task, and contrast files.

    Event files

    All event files, for all tasks, have the following cols: onset_time, duration, trial_type and response_time. Below are notes about subject-specific event files.

    • sub-SAXNES2s001: The original MotionLoc outputs list the first block, 10s into the experiment, as the first event. This was preceded by a 10s fixation. For s001, prior to updating the script to reflect this 10s lag, we had to do some estimation - we saw that on average, each block was 11.8s but there was usually a .05s delay, such that each block started ~11.85s after the previous one. Thus we calculated start times as ~11.85s after the previous block. For the rest of the subjects, the outputs were not manipulated - we just added an event to the start of the run.
    • sub-SAXNES2s013: no event files for DOTS run2; event files use approximate timings instead based on inferred information about block order
    • sub-SAXNES2s018 (excluded from sample): no event files, because this subject stopped participating without having contributed a complete, low-motion run, for which it was clear they were following the instructions for the task
• sub-SAXNES2s019: no time to do run 2 of DOTS or Motion, so there is only 1 run for each of those two tasks
• sub-SAXNES2s023: the event files from spWM run 1 did not save during scanning. We use timings from the default settings of condition 1, but we do not have trial-level data for this subject.
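As referenced in the note on sub-SAXNES2s001 above, here is a rough sketch of that onset reconstruction; the block count is hypothetical, with the true count in the MotionLoc outputs.

    # First block starts after a 10s fixation; each later block starts
    # ~11.85s after the previous one (11.8s block + ~0.05s inter-block delay).
    N_BLOCKS = 12          # hypothetical number of blocks in a run
    FIXATION = 10.0        # initial fixation, in seconds
    BLOCK_SPACING = 11.85  # estimated block-to-block interval, in seconds

    estimated_onsets = [FIXATION + i * BLOCK_SPACING for i in range(N_BLOCKS)]
    print(estimated_onsets)  # [10.0, 21.85, 33.7, ...]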

    For the DOTS and VOE event files from Experiment 1, we have the additional columns:

    • experimentName ('DotsSocPhys' or 'VOESocPhys')
• correct: at the end of the trial, subjects made a response. In DOTS, they indicated whether the dot that disappeared reappeared at a plausible location. In VOE, they pressed a button when the fixation appeared as a cross rather than a plus sign. This column indicates whether the subject responded correctly (1/0)
    • stim_path: path to the stimuli, relative to the root BIDS directory, i.e. BIDS/stimuli/DOTS/xxxx

    For the DOTS event files from Experiment 2, we have the additional columns:

    • participant: redundant with the file name
    • experiment_name: name of the task, redundant with file name
    • block_order: which order the dots trials happened in (1 or 2)
    • prop_correct: the proportion of correct responses over the entire run

    For the Motion event files from Experiment 2, we have the additional columns:

    • experiment_name: name of the task, redundant with file name
• block_order: which order the blocks happened in (1 or 2)
    • event: the index of the current event (0-22)

    For the spWM event files from Experiment 2, we have the additional columns:

    • experiment_name: name of the task, redundant with file name
    • participant: redundant with the file name
• block_order: which order the blocks happened in (1 or 2)
    • run_accuracy_hard: the proportion of correct responses for the hard trials in this run
    • run_accuracy_easy: the proportion of correct responses for the easy trials in this run

    For the VOE event files from Experiment 2, we have the additional columns:

• trial_type_specific: identifies trials at one more level of granularity, with respect to domain, task, and event (e.g. psychology_efficiency_unexp); see the sketch after this list
• trial_type_morespecific: similar to trial_type_specific but includes information about domain, task, scenario, and event (e.g. psychology_efficiency_trial-15-over_unexp)
    • experiment_name: name of the task, redundant with file name
    • participant: redundant with the file name
• correct: whether the response for this trial was correct (1 or 0)
• time_elapsed: how much time has elapsed by the end of this trial, in ms
    • trial_n: the index of the current event
    • correct_answer: what the correct answer was for the attention check (yes or no)
    • subject_correct: whether the subject in fact was correct in their response
    • event: fam, expected, or unexpected
• identical_tests: whether the test events were identical for this trial
    • stim_ID: numerical string picking out each unique stimulus
    • scenario_string: string identifying each scenario within each task
    • domain: physics, psychology (psychology-action), both (psychology-environment)
    • task: solidity, permanence, goal, efficiency, infer-constraints, or agent-solidity
• prop_correct: the proportion of correct responses over the entire run
    • stim_path: path to the stimuli, relative to the root BIDS directory, i.e. BIDS/stimuli/VOE/xxxx
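The event files already contain trial_type_specific and trial_type_morespecific; the following sketch is purely illustrative of the naming scheme, assuming the event labels abbreviate to fam/exp/unexp as in the example above.

    import pandas as pd

    # Toy rows; the real values come from the event files themselves.
    df = pd.DataFrame({
        "domain": ["psychology", "physics"],
        "task": ["efficiency", "solidity"],
        "event": ["unexpected", "expected"],
    })

    # e.g. "psychology_efficiency_unexp"
    abbrev = {"fam": "fam", "expected": "exp", "unexpected": "unexp"}
    df["trial_type_specific"] = (
        df["domain"] + "_" + df["task"] + "_" + df["event"].map(abbrev)
    )
    print(df["trial_type_specific"].tolist())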


  15. Open data: Is Auditory Awareness Negativity Confounded by Performance?

    • data.europa.eu
    • demo.researchdata.se
• +2 more
    unknown
    Updated Jun 3, 2020
    Stockholms universitet (2020). Open data: Is Auditory Awareness Negativity Confounded by Performance? [Dataset]. https://data.europa.eu/data/datasets/https-doi-org-10-17045-sthlmuni-9724280?locale=cs
    Explore at:
unknown
Available download formats
    Dataset updated
    Jun 3, 2020
    Dataset authored and provided by
    Stockholms universitet
    Description

    The main file is performance_correction.html in AAN3_analysis_scripts.zip. It contains the results of the main analyses.

See AAN3_readme_figshare.txt:

1. Title of Dataset: Open data: Is auditory awareness negativity confounded by performance?

2. Author Information
A. Principal Investigator Contact Information
Name: Stefan Wiens
Institution: Department of Psychology, Stockholm University, Sweden
Internet: https://www.su.se/profiles/swiens-1.184142
Email: sws@psychology.su.se

B. Associate or Co-investigator Contact Information
Name: Rasmus Eklund
Institution: Department of Psychology, Stockholm University, Sweden
Internet: https://www.su.se/profiles/raek2031-1.223133
Email: rasmus.eklund@psychology.su.se

C. Associate or Co-investigator Contact Information
Name: Billy Gerdfeldter
Institution: Department of Psychology, Stockholm University, Sweden
Internet: https://www.su.se/profiles/bige1544-1.403208
Email: billy.gerdfeldter@psychology.su.se

3. Date of data collection: Subjects (N = 28) were tested between 2018-12-03 and 2019-01-18.

4. Geographic location of data collection: Department of Psychology, Stockholm, Sweden

5. Information about funding sources that supported the collection of the data: Swedish Research Council / Vetenskapsrådet (Grant 2015-01181); Marianne and Marcus Wallenberg (Grant 2019-0102)

    SHARING/ACCESS INFORMATION

    1. Licenses/restrictions placed on the data: CC BY 4.0

    2. Links to publications that cite or use the data: Eklund R., Gerdfeldter B., & Wiens S. (2020). Is auditory awareness negativity confounded by performance? Consciousness and Cognition. https://doi.org/10.1016/j.concog.2020.102954

    The study was preregistered: https://doi.org/10.17605/OSF.IO/W4U7V

3. Links to other publicly accessible locations of the data: N/A

4. Links/relationships to ancillary data sets: N/A

5. Was data derived from another source? No

6. Recommended citation for this dataset: Eklund R., Gerdfeldter B., & Wiens S. (2020). Open data: Is auditory awareness negativity confounded by performance? Stockholm: Stockholm University. https://doi.org/10.17045/sthlmuni.9724280

    DATA & FILE OVERVIEW

    File List: The files contain the raw data, scripts, and results of main and supplementary analyses of the electroencephalography (EEG) study. Links to the hardware and software are provided under methodological information.

    AAN3_experiment_scripts.zip: contains the Python files to run the experiment

    AAN3_rawdata_EEG.zip: contains raw EEG data files for each subject in .bdf format (generated by Biosemi)

    AAN3_rawdata_log.zip: contains log files of the EEG session (generated by Python)

    AAN3_EEG_scripts.zip: Python-MNE scripts to process and to analyze the EEG data

AAN3_EEG_source_localization_scripts.zip: Python-MNE files needed for source localization. The template MRI is provided in this zip; the files are obtained from the MNE tutorial (https://mne.tools/stable/auto_tutorials/source-modeling/plot_eeg_no_mri.html?highlight=template). Note that the stc folder is empty: the source time course files, which are needed for the source localization, are not provided because of their large size, but they can quickly be regenerated from the analysis script.

    AAN3_analysis_scripts.zip: R scripts to analyze the data. The main file is performance_correction.html. It contains the results of the main analyses.

    AAN3_results.zip: contains summary data files, figures, and tables that are created by Python-MNE and R.

    METHODOLOGICAL INFORMATION

1. Description of methods used for collection/generation of data: The auditory stimuli were two 100-ms tones (f = 900 Hz and 1400 Hz, with 5 ms fade-in and fade-out); a sketch of these parameters follows below. The experiment was programmed in Python (https://www.python.org/) and used extra functions from https://github.com/stamnosslin/mn. The EEG data were recorded with an Active Two BioSemi system (BioSemi, Amsterdam, Netherlands; www.biosemi.com) and saved in .bdf format. For more information, see the linked publication.
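A minimal sketch of one of these stimuli, assuming NumPy and an illustrative 44.1 kHz sampling rate (the actual rate is reported in the linked publication):

    import numpy as np

    FS = 44100                             # assumed sampling rate, in Hz
    DUR, FADE, FREQ = 0.100, 0.005, 900.0  # 100 ms tone, 5 ms fades, 900 Hz

    # Pure tone.
    t = np.arange(int(FS * DUR)) / FS
    tone = np.sin(2 * np.pi * FREQ * t)

    # Linear 5 ms fade-in and fade-out.
    n_fade = int(FS * FADE)
    envelope = np.ones_like(tone)
    envelope[:n_fade] = np.linspace(0.0, 1.0, n_fade)
    envelope[-n_fade:] = np.linspace(1.0, 0.0, n_fade)
    tone *= envelope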

2. Methods for processing the data: We computed event-related potentials and source localization. See the linked publication.

3. Instrument- or software-specific information needed to interpret the data: MNE-Python (Gramfort A., et al., 2013): https://mne.tools/stable/index.html#; RStudio used with R (R Core Team, 2016): https://rstudio.com/products/rstudio/; Wiens, S. (2017). Aladins Bayes Factor in R (Version 3). https://www.doi.org/10.17045/sthlmuni.4981154.v3

    4. Standards and calibration information, if appropriate: For information, see linked publication.

    5. Environmental/experimental conditions: For information, see linked publication.

    6. Describe any quality-assurance procedures performed on the data: For information, see linked publication.

    7. People involved with sample collection, processing, analysis and/or submission:

    • Data collection: R
  16. Data from: Studying Characteristic and Identity through Oral Literature in Malaynese

    • osf.io
    Updated Feb 19, 2019
    Tasnim Lubis (2019). Studying Characteristic and Identity through Oral Literature in Malaynese [Dataset]. http://doi.org/10.17605/OSF.IO/GJUD8
    Explore at:
    Dataset updated
    Feb 19, 2019
    Dataset provided by
    Center For Open Science
    Authors
    Tasnim Lubis
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

In oral literature, what people remember most may be the performer's way of performing, the intonation, the history behind the piece, the particular sayings, or the performer themselves; it depends on the listeners' backgrounds how they take it in. Among these, oral literature has an important role in sharing information within a speech community because listeners can receive the message directly, without any interpretation. Consequently, the study of oral literature is not merely the study of language itself but also of language use, because it is related to character and identity. In addition, such study tends to take the native point of view, because it concerns the concepts in speakers' minds. This study discusses the concept of oral literature, the role of Malaynese oral literature in building character and identity, and the role of anthropolinguistics as an interdisciplinary approach to analyzing Malaynese oral literature.

  17. Data from: The search strategy.

    • plos.figshare.com
    xls
    Updated Dec 31, 2024
    Charles Maibvise; Takaedza Munangatire; Nestor Tomas; Daniel O. Ashipala; Priscilla S. Dlamini (2024). The search strategy. [Dataset]. http://doi.org/10.1371/journal.pone.0316106.t001
    Explore at:
xls
Available download formats
    Dataset updated
    Dec 31, 2024
    Dataset provided by
PLOS (http://plos.org/)
    Authors
    Charles Maibvise; Takaedza Munangatire; Nestor Tomas; Daniel O. Ashipala; Priscilla S. Dlamini
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Campaigns to scale up Voluntary Medical Male Circumcision (VMMC) for the prevention of HIV transmission have been going on for years in selected Southern African countries, following recommendations from the World Health Organization. Despite the significant strides made in the initiative and its proven benefits, controversies surrounding the strategy have never ceased, and its future remains uncertain, especially as some countries near their initial targets. Over the years, as the campaigns unfolded, many insights have been generated in favour of continuing the VMMC campaigns, although some give the impression that the strategy is not worth the risks and effort required, or that enough has been done now that the targets have been achieved. This article proposes a scoping review that aims at synthesizing and consolidating that evidence into a baseline for a further systematic review aimed at developing sound recommendations for the future of the VMMC strategy for HIV prevention. The scoping review will target all scientific literature published on the Web of Science, Cochrane Library, Scopus, Science Direct, and PubMed, as well as grey literature from Google Scholar and the WHO Institutional Repository for Information Sharing (IRIS), from the inception of the campaigns. The review shall be guided by Arksey and O'Malley's (2005) framework for scoping reviews, and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) checklist shall be followed. The discussion of the findings is envisioned to yield evidence that can be further analysed to give insights about the risk/cost-benefit ratios of the strategy at this point in time, as well as best clinical practices for the VMMC procedure, to inform the future of the strategy. This protocol is registered with the Open Science Framework, registration ID https://doi.org/10.17605/OSF.IO/SFZC9.

  18. Central bank foreign exchange reserves during the Bretton Woods period - new data from France, the UK and Switzerland

    • search.dataone.org
    Updated Nov 8, 2023
    Naef, Alain (2023). Central bank foreign exchange reserves during the Bretton Woods period - new data from France, the UK and Switzerland [Dataset]. http://doi.org/10.7910/DVN/ODR2LG
    Explore at:
    Dataset updated
    Nov 8, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Naef, Alain
    Area covered
    France, United Kingdom
    Description

Data on reserves for France, the UK, and Switzerland. British and Swiss data are at daily frequency; French data are at bi-weekly frequency. Data were copied directly from archives and cover the Bretton Woods period (1945-1971). More details are in the data paper available here: https://doi.org/10.31235/osf.io/he7gx.

  19. Inclusion and exclusion criteria.

    • plos.figshare.com
    xls
    Updated Dec 31, 2024
    Charles Maibvise; Takaedza Munangatire; Nestor Tomas; Daniel O. Ashipala; Priscilla S. Dlamini (2024). Inclusion and exclusion criteria. [Dataset]. http://doi.org/10.1371/journal.pone.0316106.t002
    Explore at:
xls
Available download formats
    Dataset updated
    Dec 31, 2024
    Dataset provided by
PLOS (http://plos.org/)
    Authors
    Charles Maibvise; Takaedza Munangatire; Nestor Tomas; Daniel O. Ashipala; Priscilla S. Dlamini
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Campaigns to scale up Voluntary Medical Male Circumcision (VMMC) for the prevention of HIV transmission have been going on for years in selected Southern African countries, following recommendations from the World Health Organization. Despite the significant strides made in the initiative and its proven benefits, controversies surrounding the strategy have never ceased, and its future remains uncertain, especially as some countries near their initial targets. Over the years, as the campaigns unfolded, many insights have been generated in favour of continuing the VMMC campaigns, although some give the impression that the strategy is not worth the risks and effort required, or that enough has been done now that the targets have been achieved. This article proposes a scoping review that aims at synthesizing and consolidating that evidence into a baseline for a further systematic review aimed at developing sound recommendations for the future of the VMMC strategy for HIV prevention. The scoping review will target all scientific literature published on the Web of Science, Cochrane Library, Scopus, Science Direct, and PubMed, as well as grey literature from Google Scholar and the WHO Institutional Repository for Information Sharing (IRIS), from the inception of the campaigns. The review shall be guided by Arksey and O'Malley's (2005) framework for scoping reviews, and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) checklist shall be followed. The discussion of the findings is envisioned to yield evidence that can be further analysed to give insights about the risk/cost-benefit ratios of the strategy at this point in time, as well as best clinical practices for the VMMC procedure, to inform the future of the strategy. This protocol is registered with the Open Science Framework, registration ID https://doi.org/10.17605/OSF.IO/SFZC9.
