Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Responsible data sharing in clinical research can enhance the transparency and reproducibility of research evidence, thereby increasing the overall value of research. Since 2024, more than 5,000 journals have adhered to the International Committee of Medical Journal Editors (ICMJE) Data Sharing Statement (DSS) to promote data sharing. However, due to the significant effort required for data sharing and the scarcity of academic rewards, data availability in clinical research remains suboptimal. This study aims to explore the impact of biomedical journal policies and available supporting information on the implementation of data availability in clinical research publications. This cross-sectional study will select 303 journals and their latest publications as samples from the biomedical journals listed in the Web of Science Journal Citation Reports, based on stratified random sampling according to the 2023 Journal Impact Factor (JIF). Two researchers will independently extract journal data-sharing policies from the submission guidelines of eligible journals and data-sharing details from publications using a pre-designed form from April 2025 to December 2025. The data-sharing level of each publication will be graded based on the openness of the data-sharing mechanism. Binomial logistic regression analyses will be used to identify potential journal factors that affect publication data-sharing levels. This protocol has been registered in the Open Science Framework (OSF) Registries: https://doi.org/10.17605/OSF.IO/EX6DV.
Platform to support research and enable collaboration. Used to discover projects, data, materials, and collaborators helpful to your own research.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In May-June 2020 PLOS surveyed researchers from Europe and North America to rate tasks associated with data sharing on (i) their importance to researchers and (ii) researchers' satisfaction with their ability to complete those tasks. Researchers were recruited via direct email campaigns, promoted Facebook and Twitter posts, a post on the PLOS Blog, and emails to industry contacts who distributed the survey on our behalf. Participation was incentivized with 3 random prize draws, which were managed separately to maintain anonymity.
This dataset consists of:
1) The survey sent to researchers (pdf).
2) The anonymised data export of survey results (xlsx).
The data export has been processed to retain the anonymity of participants. The comments left in the final question of the survey (question 17) have been removed. Answers to questions 12 to 16 have been recoded to give each answer a numerical value (see 'Scores' tab of spreadsheet). The counts, means, standard deviations and confidence intervals used in the associated manuscript for each factor are given in rows 619-622.
Version 2 contains only the completed responses. Completed responses in the version 2 dataset refer to those who answered all the questions in the survey. The version 1 dataset contains a higher number of responses categorised as 'completed' but this has been reviewed for version 2. Version 1 data was used for the preprint: https://doi.org/10.31219/osf.io/njr5u.
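A minimal sketch of loading the data export with pandas (the file name is hypothetical; only the 'Scores' sheet name comes from the description above):

```python
import pandas as pd

# Hypothetical file name; substitute the xlsx export included in this dataset.
xlsx = "PLOS_data_sharing_survey_results.xlsx"
responses = pd.read_excel(xlsx, sheet_name=0)        # anonymised survey responses
scores = pd.read_excel(xlsx, sheet_name="Scores")    # numerical recoding of questions 12-16
print(responses.shape)
print(scores.head())
```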
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset comprises electroencephalogram (EEG) data collected from 127 young adults (18-30 years), along with retrospective objective and subjective indicators of childhood family socioeconomic status (SES), as well as SES indicators in adulthood, such as educational attainment, individual and household income, food security, and home and neighborhood characteristics. The EEG data were recorded with tasks acquired directly from the Event-Related Potentials Compendium of Open Resources and Experiments (ERP CORE; Kappenman et al., 2021) or adapted from these tasks (Isbell et al., 2024). These tasks were optimized to capture neural activity manifest in perception, cognition, and action in neurotypical young adults. Furthermore, the dataset includes a symptoms checklist consisting of questions that were found to be predictive of symptoms consistent with attention-deficit/hyperactivity disorder (ADHD) in adulthood, which can be used to investigate the links between ADHD symptoms and neural activity in a socioeconomically diverse young adult sample. A detailed description of the dataset has been accepted for publication in Scientific Data under the title "Cognitive Electrophysiology in Socioeconomic Context in Adulthood."
EEG data were recorded using the Brain Products actiCHamp Plus system, in combination with BrainVision Recorder (Version 1.25.0101). We used a 32-channel actiCAP slim active electrode system, with electrodes mounted on elastic snap caps (Brain Products GmbH, Gilching, Germany). The ground electrode was placed at FPz. From the electrode bundle, we repurposed 2 electrodes by placing them on the mastoid bones behind the left and right ears to be used for re-referencing after data collection. We also repurposed 3 additional electrodes to record electrooculogram (EOG). To capture eye artifacts, we placed the horizontal EOG (HEOG) electrodes lateral to the external canthus of each eye. We also placed one vertical EOG (VEOG) electrode below the right eye. The remaining 27 electrodes were used as scalp electrodes, which were mounted per the international 10/20 system. EEG data were recorded at a sampling rate of 500 Hz and referenced to the Cz electrode. StimTrak was used to assess stimulus presentation delays for both the monitor and headphones. The results indicated that both the visual and auditory stimuli had a delay of approximately 20 ms. Therefore, users should shift the event codes by 20 ms when conducting stimulus-locked analyses.
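A rough sketch of applying that 20 ms correction in MNE-Python (not part of the shared materials; the file name and the use of annotation-based events are assumptions):

```python
import mne

# Hypothetical file name; any of the shared BrainVision recordings would be read the same way.
raw = mne.io.read_raw_brainvision("sub-001_task-MMN_eeg.vhdr", preload=True)
events, event_id = mne.events_from_annotations(raw)

# Shift event onsets ~20 ms later to compensate for the measured presentation delay
# (in practice, restrict this to stimulus events rather than response events).
delay_samples = int(round(0.020 * raw.info["sfreq"]))  # 20 ms at 500 Hz = 10 samples
events[:, 0] += delay_samples

epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=0.8, preload=True)
```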
Before the data were publicly shared, all identifiable information was removed, including date of birth, date of session, race/ethnicity, zip code, occupation (self and parent), and names of the languages the participants reported speaking and understanding fluently. Date of birth and date of session were used to compute age in years, which is included in the dataset. Furthermore, several variables were recoded based on re-identification risk assessments. Users who would like to establish secure access to components of the dataset that we could not publicly share due to re-identification risks should contact the corresponding researcher as described below. The dataset consists of participants recruited for studies on adult cognition in context. To provide the largest sample size, we included all participants who completed at least one of the EEG tasks of interest. Each participant completed each EEG task only once. The original participant IDs with which the EEG data were saved were recoded, and the raw EEG files were renamed to make the dataset BIDS compatible.
The ERP CORE experimental tasks can be found on OSF, under Experiment Control Files: https://osf.io/thsqg/
Examples of EEGLAB/ERPLAB data processing scripts that can be used with the EEG data shared here can be found on OSF:
osf.io/thsqg osf.io/43H75
Contact: If you have any questions, comments, or requests, please contact Elif Isbell: eisbell@ucmerced.edu
This dataset is licensed under CC0.
Isbell, E., De León, N. E. R., & Richardson, D. M. (2024). Childhood family socioeconomic status is linked to adult brain electrophysiology. PLoS One, 19(8), e0307406.
Isbell, E., De León, N. E. R., & Richardson, D. M. (2024). Childhood family socioeconomic status is linked to adult brain electrophysiology - accompanying analytic data and code. OSF. https://doi.org/10.17605/osf.io/43H75
Kappenman, E. S., Farrens, J. L., Zhang, W., Stewart, A. X., & Luck, S. J. (2021). ERP CORE: An open resource for human event-related potential research. NeuroImage, 225, 117465.
Kappenman, E. S., Farrens, J., Zhang, W., Stewart, A. X., & Luck, S. J. (2020). ERP CORE. OSF. https://osf.io/thsqg
Kappenman, E., Farrens, J., Zhang, W., Stewart, A., & Luck, S. (2020). Experiment control files. OSF. https://osf.io/47uf2
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Collaboratory is a software product developed and maintained by HandsOn Connect Cloud Solutions. It is intended to help higher education institutions accurately and comprehensively track their relationships with the community through engagement and service activities. Institutions that use Collaboratory are given the option to opt in to a data sharing initiative at the time of onboarding, which grants us permission to de-identify their data and make it publicly available for research purposes. HandsOn Connect is committed to making Collaboratory data accessible to scholars for research, toward the goal of advancing the field of community engagement and social impact.
Collaboratory is not a survey, but is instead a dynamic software tool designed to facilitate comprehensive, longitudinal data collection on community engagement and public service activities conducted by faculty, staff, and students in higher education. We provide a standard questionnaire that was developed by Collaboratory’s co-founders (Janke, Medlin, and Holland) in the Institute for Community and Economic Engagement at UNC Greensboro, which continues to be closely monitored and adapted by staff at HandsOn Connect and academic colleagues. It includes descriptive characteristics (what, where, when, with whom, to what end) of activities and invites participants to periodically update their information in accordance with activity progress over time. Examples of individual questions include the focus areas addressed, populations served, on- and off-campus collaborators, connections to teaching and research, and location information, among others.
The Collaboratory dataset contains data from 45 institutions beginning in March 2016 and continues to grow as more institutions adopt Collaboratory and continue to expand its use. The data represent over 6,200 published activities (and additional associated content) across our user base.
Please cite this data as: Medlin, Kristin and Singh, Manmeet. Dataset on Higher Education Community Engagement and Public Service Activities, 2016-2023. Collaboratory [producer], 2021. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2023-07-07. https://doi.org/10.3886/E136322V1
When you cite this data, please also include: Janke, E., Medlin, K., & Holland, B. (2021, November 9). To What End? Ten Years of Collaboratory. https://doi.org/10.31219/osf.io/a27nb
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Experiment
20 adult participants (18 participants consented to open data sharing and are included here) watched video clips from Sesame Street, in which the audio was played either forward or reversed. Code and stimuli descriptions shared here: https://osf.io/whsb7/. We also scanned participants on two localizer tasks.
SS-BlockedLang Language Task (litshort) 2x2 block task design with four conditions: Forward Dialogue, Forward Monologue, Backward Dialogue, and Backward Monologue. Participants were asked to watch the 20-second videos and press a button on an in-scanner button box when they saw a still image of Elmo appear on the screen after each 20-second block. Participants completed 4 runs, each 6 min 18 sec long. Each run contained unique clips, and participants never saw a Forward and Backward version of the same clip. Each run contained 3 sets of 4 blocks, one of each condition (total of 12 blocks), with 22-second rest blocks after each set of 4 blocks. Forward and Backward versions of each clip were counterbalanced between participants (randomly assigned Set A or Set B). Run order was randomized for each participant.
SS-IntDialog Language Task (litlong) 1–3-minute dialogue clips of Sesame Street in which one character’s audio stream was played Forward and the other was played Backward. Additional sounds in the video (e.g., blowing bubbles, a crash from something falling) were played forwards. Participants watched the videos and pressed a button on an in-scanner button box when they saw a still image of Elmo appear on the screen immediately after each block. Participants completed 2 runs, each approximately 8 min 52 sec long. Each run contained unique clips, and participants never saw a version of the same clip with the Forward/Backward streams reversed. Each run contained 3 clips, 1-3 minutes each, presented in the same order. Between each video, as well as at the beginning and end of the run, there was a 22-second fixation block. Versions of each clip with the opposite character Forward and Backward were counterbalanced between participants (randomly assigned Set A or Set B). 11 participants saw version A, and 9 participants saw version B (1 run from group A was excluded due to participant falling asleep, and one run from group B was excluded due to motion). Run order was randomized for each participant (random sequence 1-2).
Auditory Language Localizer (langloc) We used a localizer task previously validated for identifying high-level language processing regions (Scott et al., 2017). Participants listened to Intact and Degraded 18-second blocks of speech. The Intact condition consisted of audio clips of spoken English (e.g., clips from interviews in which one person is speaking), and the Degraded condition consisted of acoustically degraded versions of these clips. Participants viewed a black dot on a white background during the task while passively listening to the auditory stimuli. 14-second fixation blocks (no sound) were present after every 4 speech blocks, as well as at the beginning and end of each run (5 fixation blocks per run). Participants completed two runs, each approximately 6 min 6 sec long. Each run contained 16 blocks of speech (8 intact, 8 degraded).
Theory of Mind Localizer (tomloc) We used a task previously validated for identifying regions that are involved in ToM and social cognition (Dodell-Feder et al., 2011). Participants read short stories in two conditions: False Beliefs and False Photos. Stories in the False Beliefs condition described scenarios in which a character holds a false belief. Stories in the False Photos condition described outdated photographs and maps. Each story was displayed in white text on a black screen for 10 seconds, followed by a 4-second true/false question based on the story (which participants responded to via the in-scanner button box), followed by 12 seconds of a blank screen (rest). Each run contained 10 blocks. Participants completed two runs, each approximately 4 min 40 sec long.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Code and data to reproduce the results in Siqueira et al. (submitted) published as a Preprint (https://doi.org/10.32942/osf.io/mpf5x)
The full set of results, including those made available as supplementary material, can be reproduced by running the five scripts (01 to 05) in the R_codes folder in sequence, using the data available in the Input_data folder.
The original raw data made available include the abundance (individual counts, biomass, coverage area) of a given taxon, at a given site, in a given year. See details here https://doi.org/10.32942/osf.io/mpf5x
However, this is a collaborative effort and not all authors are allowed to share their raw data. One data set (LEPAS), out of 30, was not made available due to the data-sharing policies of the Ohio Division of Wildlife (ODOW). So, in the script "01_Dataprep_stability_metrics.R", all data made available are imported except the LEPAS data set. For this specific data set, "01_Dataprep_stability_metrics.R" instead imports variability and synchrony components estimated using the methods described in Wang et al. (2019, Ecography; doi: 10.1111/ecog.04290), diversity metrics (alpha and gamma diversity), and some variables describing the data set.
A protocol for requesting access to the LEPAS data sets can be found here:
https://ael.osu.edu/researchprojects/lake-erie-plankton-abundance-study-lepas
Dataset owner: Ohio Department of Natural Resources – Division of Wildlife, managed by Jim Hood, Dept. of Evolution, Ecology, and Organismal Biology, The Ohio State University. Email: hood.211@osu.edu
Anyone who wants to reproduce the results described in the preprint can just download the whole R project (that includes code and data) and run codes from 01 to 05.
I am making the whole R project folder (with everything needed to reproduce the results) available as a compressed file.
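For readers who prefer to batch the reproduction from the command line, a minimal sketch is shown below (it assumes R and the Rscript executable are installed and that the five scripts carry the numeric prefixes mentioned above; the rest of the file names are not taken from the repository):

```python
# Hypothetical helper: run the five numbered R scripts in order by calling Rscript.
import glob
import subprocess

for script in sorted(glob.glob("R_codes/0[1-5]_*.R")):
    print("Running", script)
    subprocess.run(["Rscript", script], check=True)  # stop if any script errors
```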
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The main results file is saved separately.
GENERAL INFORMATION
Title of Dataset: Open data: Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones
Author Information
A. Principal Investigator Contact Information
Name: Stefan Wiens
Institution: Department of Psychology, Stockholm University, Sweden
Internet: https://www.su.se/profiles/swiens-1.184142
Email: sws@psychology.su.se

B. Associate or Co-investigator Contact Information
Name: Malina Szychowska
Institution: Department of Psychology, Stockholm University, Sweden
Internet: https://www.researchgate.net/profile/Malina_Szychowska
Email: malina.szychowska@psychology.su.se
Date of data collection: Subjects (N = 33) were tested between 2019-11-15 and 2020-03-12.
Geographic location of data collection: Department of Psychology, Stockholm, Sweden
Information about funding sources that supported the collection of the data: Swedish Research Council (Vetenskapsrådet) 2015-01181
SHARING/ACCESS INFORMATION
Licenses/restrictions placed on the data: CC BY 4.0
Links to publications that cite or use the data: Szychowska M., & Wiens S. (2020). Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones. Submitted manuscript.
The study was preregistered: https://doi.org/10.17605/OSF.IO/6FHR8
Links to other publicly accessible locations of the data: N/A
Links/relationships to ancillary data sets: N/A
Was data derived from another source? No
Recommended citation for this dataset: Wiens, S., & Szychowska M. (2020). Open data: Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones. Stockholm: Stockholm University. https://doi.org/10.17045/sthlmuni.12582002
DATA & FILE OVERVIEW
File List: The files contain the raw data, scripts, and results of main and supplementary analyses of an electroencephalography (EEG) study. Links to the hardware and software are provided under methodological information.
ASSR2_experiment_scripts.zip: contains the Python files to run the experiment.
ASSR2_rawdata.zip: contains raw datafiles for each subject
ASSR2_EEG_scripts.zip: Python-MNE scripts to process the EEG data
ASSR2_EEG_preprocessed_data.zip: EEG data in fif format after preprocessing with Python-MNE scripts
ASSR2_R_scripts.zip: R scripts to analyze the data together with the main datafiles. The main files in the folder are:
ASSR2_results.zip: contains all figures and tables that are created by Python-MNE and R.
METHODOLOGICAL INFORMATION
The EEG data were recorded with an Active Two BioSemi system (BioSemi, Amsterdam, Netherlands; www.biosemi.com) and saved in .bdf format. For more information, see linked publication.
Methods for processing the data: We conducted frequency analyses and computed event-related potentials. See linked publication
Instrument- or software-specific information needed to interpret the data:
MNE-Python (Gramfort A., et al., 2013): https://mne.tools/stable/index.html#
RStudio used with R (R Core Team, 2020): https://rstudio.com/products/rstudio/
Wiens, S. (2017). Aladins Bayes Factor in R (Version 3). https://www.doi.org/10.17045/sthlmuni.4981154.v3
Standards and calibration information, if appropriate: For information, see linked publication.
Environmental/experimental conditions: For information, see linked publication.
Describe any quality-assurance procedures performed on the data: For information, see linked publication.
People involved with sample collection, processing, analysis and/or submission:
DATA-SPECIFIC INFORMATION: All relevant information can be found in the MNE-Python and R scripts (in EEG_scripts and analysis_scripts folders) that process the raw data. For example, we added notes to explain what different variables mean.
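As a rough illustration of working with the shared preprocessed files (the file name is hypothetical and a recent MNE-Python release is assumed; this is not one of the shared scripts), the steady-state response at the three modulation frequencies could be inspected like this:

```python
import mne

# Hypothetical file name; use any subject's file from ASSR2_EEG_preprocessed_data.zip.
raw = mne.io.read_raw_fif("sub-01_preprocessed_raw.fif", preload=True)
spectrum = raw.compute_psd(fmin=10.0, fmax=90.0, picks="eeg")  # requires MNE >= 1.2
psds, freqs = spectrum.get_data(return_freqs=True)

for target in (20.0, 40.0, 80.0):  # the three amplitude-modulation frequencies
    idx = abs(freqs - target).argmin()
    print(f"{target:.0f} Hz: mean power across channels = {psds[:, idx].mean():.3e}")
```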
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset is Study2 from:
Chang, L. J., Jolly, E., Cheong, J. H., Rapuano, K. M., Greenstein, N., Chen, P. H. A., & Manning, J. R. (2021). Endogenous variation in ventromedial prefrontal cortex state dynamics during naturalistic viewing reflects affective experience. Science Advances, 7(17), eabf7129.
Participants (n=35) watched the first episode of Friday Night Lights while undergoing fMRI.
We are also sharing additional data from this paper:
Study 1 (n=13) fMRI Data - Available on OpenNeuro. Participants watched FNL episode 1 & 2.
Study 3 (n=20) Face Expression Data - Available on OSF. We are only sharing extracted Action Unit Values. We do not have permission to share raw video data.
Study 4 (n=192) Emotion Ratings - Available on OSF. Rating data was collected on Amazon Mechanical Turk using a custom Flask web application built by Nathan Greenstein.
Results included in this OpenNeuro repository come from preprocessing performed using fMRIPrep 20.2.1 (Esteban, Markiewicz, et al. (2018); Esteban, Blair, et al. (2018); RRID:SCR_016216), which is based on Nipype 1.5.1 (Gorgolewski et al. (2011); Gorgolewski et al. (2018); RRID:SCR_002502). See derivatives/code/fmriprep.sh for the script used to run preprocessing.
A total of 1 T1-weighted (T1w) image was found within the input BIDS dataset. The T1-weighted (T1w) image was corrected for intensity non-uniformity (INU) with N4BiasFieldCorrection (Tustison et al. 2010), distributed with ANTs 2.3.3 (Avants et al. 2008, RRID:SCR_004757), and used as T1w-reference throughout the workflow. The T1w-reference was then skull-stripped with a Nipype implementation of the antsBrainExtraction.sh workflow (from ANTs), using OASIS30ANTs as target template. Brain tissue segmentation of cerebrospinal fluid (CSF), white-matter (WM) and gray-matter (GM) was performed on the brain-extracted T1w using fast (FSL 5.0.9, RRID:SCR_002823, Zhang, Brady, and Smith 2001). Volume-based spatial normalization to one standard space (MNI152NLin2009cAsym) was performed through nonlinear registration with antsRegistration (ANTs 2.3.3), using brain-extracted versions of both T1w reference and the T1w template. The following template was selected for spatial normalization: ICBM 152 Nonlinear Asymmetrical template version 2009c [Fonov et al. (2009), RRID:SCR_008796; TemplateFlow ID: MNI152NLin2009cAsym].
For each of the 1 BOLD runs found per subject (across all tasks and sessions), the following preprocessing was performed. First, a reference volume and its skull-stripped version were generated using a custom methodology of fMRIPrep. Susceptibility distortion correction (SDC) was omitted. The BOLD reference was then co-registered to the T1w reference using flirt (FSL 5.0.9, Jenkinson and Smith 2001) with the boundary-based registration (Greve and Fischl 2009) cost-function. Co-registration was configured with nine degrees of freedom to account for distortions remaining in the BOLD reference. Head-motion parameters with respect to the BOLD reference (transformation matrices, and six corresponding rotation and translation parameters) are estimated before any spatiotemporal filtering using mcflirt (FSL 5.0.9, Jenkinson et al. 2002). The BOLD time-series (including slice-timing correction when applied) were resampled onto their original, native space by applying the transforms to correct for head-motion. These resampled BOLD time-series will be referred to as preprocessed BOLD in original space, or just preprocessed BOLD. The BOLD time-series were resampled into standard space, generating a preprocessed BOLD run in MNI152NLin2009cAsym space. First, a reference volume and its skull-stripped version were generated using a custom methodology of fMRIPrep. Several confounding time-series were calculated based on the preprocessed BOLD: framewise displacement (FD), DVARS and three region-wise global signals. FD was computed using two formulations following Power (absolute sum of relative motions, Power et al. (2014)) and Jenkinson (relative root mean square displacement between affines, Jenkinson et al. (2002)). FD and DVARS are calculated for each functional run, both using their implementations in Nipype (following the definitions by Power et al. 2014). The three global signals are extracted within the CSF, the WM, and the whole-brain masks. Additionally, a set of physiological regressors were extracted to allow for component-based noise correction (CompCor, Behzadi et al. 2007). Principal components are estimated after high-pass filtering the preprocessed BOLD time-series (using a discrete cosine filter with 128s cut-off) for the two CompCor variants: temporal (tCompCor) and anatomical (aCompCor). tCompCor components are then calculated from the top 2% variable voxels within the brain mask. For aCompCor, three probabilistic masks (CSF, WM and combined CSF+WM) are generated in anatomical space. The implementation differs from that of Behzadi et al. in that instead of eroding the masks by 2 pixels on BOLD space, the aCompCor masks are subtracted a mask of pixels that likely contain a volume fraction of GM. This mask is obtained by thresholding the corresponding partial volume map at 0.05, and it ensures components are not extracted from voxels containing a minimal fraction of GM. Finally, these masks are resampled into BOLD space and binarized by thresholding at 0.99 (as in the original implementation). Components are also calculated separately within the WM and CSF masks. For each CompCor decomposition, the k components with the largest singular values are retained, such that the retained components’ time series are sufficient to explain 50 percent of variance across the nuisance mask (CSF, WM, combined, or temporal). The remaining components are dropped from consideration. The head-motion estimates calculated in the correction step were also placed within the corresponding confounds file. 
The confound time series derived from head motion estimates and global signals were expanded with the inclusion of temporal derivatives and quadratic terms for each (Satterthwaite et al. 2013). Frames that exceeded a threshold of 0.5 mm FD or 1.5 standardised DVARS were annotated as motion outliers. All resamplings can be performed with a single interpolation step by composing all the pertinent transformations (i.e. head-motion transform matrices, susceptibility distortion correction when available, and co-registrations to anatomical and output spaces). Gridded (volumetric) resamplings were performed using antsApplyTransforms (ANTs), configured with Lanczos interpolation to minimize the smoothing effects of other kernels (Lanczos 1964). Non-gridded (surface) resamplings were performed using mri_vol2surf (FreeSurfer).
Many internal operations of fMRIPrep use Nilearn 0.6.2 (Abraham et al. 2014, RRID:SCR_001362), mostly within the functional processing workflow. For more details of the pipeline, see the section corresponding to workflows in fMRIPrep’s documentation.
We have also performed denoising on the preprocessed data by running a univariate GLM, which included the following regressors:
See derivatives/code/Denoise_Preprocessed_Data.ipynb file for full details.
We have included denoised data that is unsmoothed and also has been smoothed with a 6mm FWHM Gaussian kernel.
We have included nifti and hdf5 versions of the data. HDF5 files are much faster to load if you are using our nltools toolbox for your data analyses.
subject sub-sid000496 had bad normalization using this preprocessing pipeline, so we have not included this participant in the denoised data. We note that we did see this issue using the original preprocessing pipeline reported in the original paper.
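As a rough illustration of loading the denoised derivatives (the file names below are hypothetical; substitute the files you download), either nibabel or nltools can be used:

```python
import nibabel as nib
from nltools.data import Brain_Data

# Hypothetical file names; adjust to the actual denoised derivatives in this repository.
nii = nib.load("sub-sid000001_task-movie_denoised_smoothed6mm.nii.gz")
print(nii.shape)

dat = Brain_Data("sub-sid000001_task-movie_denoised_smoothed6mm.nii.gz")
print(dat.shape())

# The HDF5 copies are reported above to load faster with nltools; on recent releases
# the same constructor accepts the HDF5 path (assumption, check your nltools version):
# dat = Brain_Data("sub-sid000001_task-movie_denoised_smoothed6mm.hdf5")
```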
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This survey was run at the University of Bordeaux in January 2019 using the questionnaire "Quantitative assessment of research data management practice":
Teperek, M., Krause, J., Lambeng, N., Blumer, E., van Dijck, J., Eggermont, R., … der Velden, Y. T. (2019). Quantitative assessment of research data management practice. Retrieved from : https://osf.io/mz3fx/
The questionnaire included all the primary and secondary common questions, institution-specific questions regarding services and file sharing (EPFL questions), and institution-specific questions for profile information.
Data from the 425 responses collected are published here.
Details regarding data collection and curation are included in the README file.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
17 adult participants (17 participants consented to open data sharing and are included here) watched short, ~3s video clips of the first-person perspective of walking through rooms. Rooms had a door on either the left, right, or both, and ego-motion was forward, backward, left or right. Code and stimuli descriptions shared here: https://osf.io/6yehp. We also scanned participants on a dynamic scene, face, and object localizer task.
Main Experiment (EXP) Experimental stimuli consisted of 14 conditions (Conditions 1-10 are shown in Figure 2; Example stimuli and results from conditions 11-14 are shown in Supplemental Figure 3). All stimuli were created using Unity software and depicted 3 second clips of the first-person experience of walking through scenes. Navigational affordances were manipulated by including an open doorway to either the left side, right side, or both sides. To help control for low-level visual confounds, the non-doorway side always included a distractor object, either a painting (conditions 1-10) or an inverted doorway (conditions 11-14). Furthermore, the textures applied to the painting and the walls through the doorways were counterbalanced, such that each texture appeared equally on either side across the full stimulus set. Ego-motion was manipulated by changing the direction of ego-motion through scene, which could either be forward (conditions 1-3, 11-14), backward (conditions 4-6), a left turn (conditions 7-8) or a right turn (conditions 9-10). To help prevent visual adaptation over the course of the experiment, the 14 experimental conditions were counterbalanced across 8 room types, which differed from one another based on the textures applied to the walls, floor and ceiling, and to a lesser extent, by the size and shape of the doorways and corresponding distractor (Figure 5). Stimuli were presented at 13.1 x 18.6 DVA in an event-related paradigm. Each stimulus was presented for 2.5s, followed by a minimum inter-stimulus-interval (ISI) of 3.5s and a maximum ISI of 9.5s, optimized separately for each run using OptSeq2. Participants viewed 4 repetitions of each condition per run, and completed 8 experimental runs, yielding 32 total repetitions per condition across the experiment. To help ensure participants paid attention throughout the experiment, participants performed a one-back task, responding via button press whenever the exact same video stimulus repeated on back-to-back trials. Participants were also instructed to lie still, keep their eyes open, and try to pay attention to and immerse themselves in the stimuli.
Localizer (LOC) Localizer stimuli consisted of 3s videos of dynamic Scenes, Objects, Faces, and Scrambled Objects, as described previously in Kamps et al., 2016 and 2020. Stimuli were presented using a block design at 13.7 x 18.1 degrees of visual angle. Each run was 315s long and contained 4 blocks per stimulus category. The order of the first set of blocks was pseudorandomized across runs (e.g., Faces, Objects, Scenes, Scrambled) and the order of the second set of blocks was the palindrome of the first (e.g., Scrambled, Scenes, Objects, Faces). Each block consisted of 5 2.8s video clips from a single condition, with an ISI of 0.2s, resulting in 15s blocks. Each run also included 5 fixation blocks: one at the beginning, three evenly spaced throughout the run, and one at the end. Participants completed 3 Localizer runs, interleaved between every 2 Experimental Runs.
TESS conducts general population experiments on behalf of investigators throughout the social sciences. General population experiments allow investigators to assign representative subject populations to experimental conditions of their choosing. Faculty and graduate students from the social sciences and related fields (such as law and public health) propose experiments. A comprehensive, on-line submission and peer review process screens proposals for the importance of their contribution to science and society.
The study focuses on the effect religious attributes may have on messages about global warming. Respondents will receive information about 1) the religious affiliation of a public official and 2) the way he made his decision to take a stance on global warming. This is a 2x2 between-subjects design, where the first factor is the source cue (Present/Absent) and the second factor is the decision process (Present/Absent). In total, there are four conditions, and respondents are assigned with equal probabilities.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
GENERAL INFORMATION
Title of Dataset: Open data: The early but not the late neural correlate of auditory awareness reflects lateralized experiences.
Author Information
A. Principal Investigator Contact Information
Name: Stefan Wiens
Institution: Department of Psychology, Stockholm University, Sweden
Internet: https://www.su.se/profiles/swiens-1.184142
Email: sws@psychology.su.se

B. Associate or Co-investigator Contact Information
Name: Rasmus Eklund
Institution: Department of Psychology, Stockholm University, Sweden
Internet: https://www.su.se/profiles/raek2031-1.223133
Email: rasmus.eklund@psychology.su.se

C. Associate or Co-investigator Contact Information
Name: Billy Gerdfeldter
Institution: Department of Psychology, Stockholm University, Sweden
Internet: https://www.su.se/profiles/bige1544-1.403208
Email: billy.gerdfeldter@psychology.su.se
Date of data collection: Subjects (N = 28) were tested between 2020-03-04 and 2020-09-18.
Geographic location of data collection: Department of Psychology, Stockholm, Sweden
Information about funding sources that supported the collection of the data: Marianne and Marcus Wallenberg (Grant 2019-0102)
SHARING/ACCESS INFORMATION
Licenses/restrictions placed on the data: CC BY 4.0
Links to publications that cite or use the data: Eklund R., Gerdfeldter B., & Wiens S. (2021). The early but not the late neural correlate of auditory awareness reflects lateralized experiences. Neuropsychologia. https://doi.org/
The study was preregistered: https://doi.org/10.17605/OSF.IO/PSRJF
Links to other publicly accessible locations of the data: N/A
Links/relationships to ancillary data sets: N/A
Was data derived from another source? No
Recommended citation for this dataset: Eklund R., Gerdfeldter B., & Wiens S. (2020). Open data: The early but not the late neural correlate of auditory awareness reflects lateralized experiences. Stockholm: Stockholm University. https://doi.org/10.17045/sthlmuni.13067018
DATA & FILE OVERVIEW
File List: The files contain the downsampled data in bids format, scripts, and results of main and supplementary analyses of the electroencephalography (EEG) study. Links to the hardware and software are provided under methodological information.
AAN_LRclick_experiment_scripts.zip: contains the Python files to run the experiment
AAN_LRclick_bids_EEG.zip: contains EEG data files for each subject in .eeg format.
AAN_LRclick_behavior_log.zip: contains log files of the EEG session (generated by Python)
AAN_LRclick_EEG_scripts.zip: Python-MNE scripts to process and to analyze the EEG data
AAN_LRclick_results.zip: contains summary data files, figures, and tables that are created by Python-MNE.
METHODOLOGICAL INFORMATION
Description of methods used for collection/generation of data: The auditory stimuli were 4-ms clicks. The experiment was programmed in Python: https://www.python.org/ and used extra functions from here: https://github.com/stamnosslin/mn The EEG data were recorded with an Active Two BioSemi system (BioSemi, Amsterdam, Netherlands; www.biosemi.com) and converted to .eeg format. For more information, see linked publication.
Methods for processing the data: We computed event-related potentials. See linked publication
Instrument- or software-specific information needed to interpret the data: MNE-Python (Gramfort A., et al., 2013): https://mne.tools/stable/index.html#
Standards and calibration information, if appropriate: For information, see linked publication.
Environmental/experimental conditions: For information, see linked publication.
Describe any quality-assurance procedures performed on the data: For information, see linked publication.
People involved with sample collection, processing, analysis and/or submission:
DATA-SPECIFIC INFORMATION: All relevant information can be found in the MNE-Python scripts (in EEG_scripts folder) that process the EEG data. For example, we added notes to explain what different variables mean.
The folder structure needs to be as follows:
AAN_LRclick (main folder)
    data
        bids (AAN_LRclick_bids_EEG)
        log (AAN_LRclick_behavior_log)
    MNE (AAN_LRclick_EEG_scripts)
    results (AAN_LRclick_results)
To run the MNE-Python scripts: Anaconda was used with MNE-Python 0.22 (see installation at https://mne.tools/stable/index.html# ). For preprocess.py and analysis.py, the complete scripts should be run (from anaconda prompt).
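As a quick sanity check before running the provided scripts, one subject's recording could be read from the bids folder, for example with MNE-BIDS (the subject ID and task label below are assumptions, not taken from the dataset):

```python
# Hypothetical sketch using MNE-BIDS; adjust subject and task to the dataset's labels.
from mne_bids import BIDSPath, read_raw_bids

bids_path = BIDSPath(subject="01", task="LRclick", datatype="eeg",
                     root="AAN_LRclick/data/bids")
raw = read_raw_bids(bids_path)
print(raw.info)
```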
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset contains fMRI data from adults from one paper, with two experiments in it:
Liu, S., Lydic, K., Mei, L., & Saxe, R. (in press, Imaging Neuroscience). Violations of physical and psychological expectations in the human adult brain. Preprint: https://doi.org/10.31234/osf.io/54x6b
All subjects who contributed data to this repository consented explicitly to share their de-faced brain images publicly on OpenNeuro. Experiment 1 has 16 subjects who gave consent to share (17 total), and Experiment 2 has 29 subjects who gave consent to share (32 total). Experiment 1 subjects have subject IDs starting with "SAXNES*", and Experiment 2 subjects have subject IDs starting with "SAXNES2*".
There are (anonymized) event files associated with each run, subject and task, and contrast files.
All event files, for all tasks, have the following columns: onset_time, duration, trial_type, and response_time. Below are notes about subject-specific event files.
For the DOTS and VOE event files from Experiment 1, we have the additional columns:
- experimentName ('DotsSocPhys' or 'VOESocPhys')
- correct: at the end of the trial, subs made a response. In DOTS, they indicated whether the dot that disappeared reappeared at a plausible location. In VOE, they pressed a button when the fixation appeared as a cross rather than a plus sign. This column indicates whether the sub responded correctly (1/0)
- stim_path: path to the stimuli, relative to the root BIDS directory, i.e. BIDS/stimuli/DOTS/xxxx
For the DOTS event files from Experiment 2, we have the additional columns:
- participant: redundant with the file name
- experiment_name: name of the task, redundant with file name
- block_order: which order the dots trials happened in (1 or 2)
- prop_correct: the proportion of correct responses over the entire run
For the Motion event files from Experiment 2, we have the additional columns:
- experiment_name: name of the task, redundant with file name
- block_order: which order the dots trials happened in (1 or 2)
- event: the index of the current event (0-22)
For the spWM event files from Experiment 2, we have the additional columns:
- experiment_name: name of the task, redundant with file name
- participant: redundant with the file name
- block_order: which order the dots trials happened in (1 or 2)
- run_accuracy_hard: the proportion of correct responses for the hard trials in this run
- run_accuracy_easy: the proportion of correct responses for the easy trials in this run
For the VOE event files from Experiment 2, we have the additional columns:
- trial_type_specific: identifies trials at one more level of granularity, with respect to domain, task, and event (e.g. psychology_efficiency_unexp)
- trial_type_morespecific: similar to trial_type_specific but includes information about domain, task, scenario, and event (e.g. psychology_efficiency_trial-15-over_unexp)
- experiment_name: name of the task, redundant with file name
- participant: redundant with the file name
- correct: whether the response for this trial was correct (1 or 0)
- time_elapsed: how much time has elapsed by the end of this trial, in ms
- trial_n: the index of the current event
- correct_answer: what the correct answer was for the attention check (yes or no)
- subject_correct: whether the subject in fact was correct in their response
- event: fam, expected, or unexpected
- identical_tests: were the test events identical, for this trial?
- stim_ID: numerical string picking out each unique stimulus
- scenario_string: string identifying each scenario within each task
- domain: physics, psychology (psychology-action), both (psychology-environment)
- task: solidity, permanence, goal, efficiency, infer-constraints, or agent-solidity
- prop_correct: the proportion of correct responses over the entire run
- stim_path: path to the stimuli, relative to the root BIDS directory, i.e. BIDS/stimuli/VOE/xxxx
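As a small illustration (the file path is hypothetical), any run's events.tsv can be loaded with pandas to inspect the shared columns listed above:

```python
import pandas as pd

# Hypothetical path; substitute any subject/task/run events.tsv from the BIDS directory.
events = pd.read_csv("sub-SAXNES2s001/func/sub-SAXNES2s001_task-VOE_run-01_events.tsv", sep="\t")
print(events[["onset_time", "duration", "trial_type", "response_time"]].head())
```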
The main file is performance_correction.html in AAN3_analysis_scripts.zip. It contains the results of the main analyses.
See AAN3_readme_figshare.txt: 1. Title of Dataset: Open data: Is auditory awareness negativity confounded by performance?
Author Information
A. Principal Investigator Contact Information
Name: Stefan Wiens
Institution: Department of Psychology, Stockholm University, Sweden
Internet: https://www.su.se/profiles/swiens-1.184142
Email: sws@psychology.su.se

B. Associate or Co-investigator Contact Information
Name: Rasmus Eklund
Institution: Department of Psychology, Stockholm University, Sweden
Internet: https://www.su.se/profiles/raek2031-1.223133
Email: rasmus.eklund@psychology.su.se

C. Associate or Co-investigator Contact Information
Name: Billy Gerdfeldter
Institution: Department of Psychology, Stockholm University, Sweden
Internet: https://www.su.se/profiles/bige1544-1.403208
Email: billy.gerdfeldter@psychology.su.se
Date of data collection: Subjects (N = 28) were tested between 2018-12-03 and 2019-01-18.
Geographic location of data collection: Department of Psychology, Stockholm, Sweden
Information about funding sources that supported the collection of the data: Swedish Research Council / Vetenskapsrådet (Grant 2015-01181) Marianne and Marcus Wallenberg (Grant 2019-0102)
SHARING/ACCESS INFORMATION
Licenses/restrictions placed on the data: CC BY 4.0
Links to publications that cite or use the data: Eklund R., Gerdfeldter B., & Wiens S. (2020). Is auditory awareness negativity confounded by performance? Consciousness and Cognition. https://doi.org/10.1016/j.concog.2020.102954
The study was preregistered: https://doi.org/10.17605/OSF.IO/W4U7V
Links to other publicly accessible locations of the data: N/A
Links/relationships to ancillary data sets: N/A
Was data derived from another source? No
Recommended citation for this dataset: Eklund R., Gerdfeldter B., & Wiens S. (2020). Open data: Is auditory awareness negativity confounded by performance? Stockholm: Stockholm University. https://doi.org/10.17045/sthlmuni.9724280
DATA & FILE OVERVIEW
File List: The files contain the raw data, scripts, and results of main and supplementary analyses of the electroencephalography (EEG) study. Links to the hardware and software are provided under methodological information.
AAN3_experiment_scripts.zip: contains the Python files to run the experiment
AAN3_rawdata_EEG.zip: contains raw EEG data files for each subject in .bdf format (generated by Biosemi)
AAN3_rawdata_log.zip: contains log files of the EEG session (generated by Python)
AAN3_EEG_scripts.zip: Python-MNE scripts to process and to analyze the EEG data
AAN3_EEG_source_localization_scripts.zip: Python-MNE files needed for source localization. The template MRI is provided in this zip. The files are obtained from the MNE tutorial (https://mne.tools/stable/auto_tutorials/source-modeling/plot_eeg_no_mri.html?highlight=template). Note that the stc folder is empty. The source time course files are not provided because of their large size. They can quickly be generated from the analysis script. They are needed for the source localization.
AAN3_analysis_scripts.zip: R scripts to analyze the data. The main file is performance_correction.html. It contains the results of the main analyses.
AAN3_results.zip: contains summary data files, figures, and tables that are created by Python-MNE and R.
METHODOLOGICAL INFORMATION
Description of methods used for collection/generation of data: The auditory stimuli were two 100-ms tones (f = 900 Hz and 1400 Hz, 5 ms fade-in and fade-out). The experiment was programmed in Python: https://www.python.org/ and used extra functions from here: https://github.com/stamnosslin/mn The EEG data were recorded with an Active Two BioSemi system (BioSemi, Amsterdam, Netherlands; www.biosemi.com) and saved in .bdf format. For more information, see linked publication.
Methods for processing the data: We computed event-related potentials and source localization. See linked publication
Instrument- or software-specific information needed to interpret the data:
MNE-Python (Gramfort A., et al., 2013): https://mne.tools/stable/index.html#
RStudio used with R (R Core Team, 2016): https://rstudio.com/products/rstudio/
Wiens, S. (2017). Aladins Bayes Factor in R (Version 3). https://www.doi.org/10.17045/sthlmuni.4981154.v3
Standards and calibration information, if appropriate: For information, see linked publication.
Environmental/experimental conditions: For information, see linked publication.
Describe any quality-assurance procedures performed on the data: For information, see linked publication.
People involved with sample collection, processing, analysis and/or submission:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In oral literature, the moment people remember most may be the way the performer delivers it, the intonation, the history behind it, the particular sayings, or the performer themselves; what is retained depends on the listeners' backgrounds. Oral literature plays an important role in sharing information within a speech community because listeners receive the message directly, without any interpretation. Consequently, the study of oral literature is not merely the study of language itself but also of language use, because it is tied to character and identity. In addition, such study tends to draw on the native point of view, because it concerns the concepts speakers hold in mind. This study discusses the concept of oral literature, the role of Malay oral literature in building character and identity, and the role of anthropolinguistics as an interdisciplinary approach to analyzing Malay oral literature.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Campaigns to scale up Voluntary Medical Male Circumcision (VMMC) for the prevention of HIV transmission have been going on for years in selected Southern African countries, following recommendations from the World Health Organisation. Despite significant strides made in the initiative and its proven benefits, controversies surrounding the strategy have never ceased, and its future remains uncertain, especially as some countries near their initial targets. Over the years, as the campaigns unfolded, many insights have been generated in favour of continuing the VMMC campaigns, although some give the impression that the strategy is not worth the risks and effort required, or that enough has been done, as the targets have now been achieved. This article proposes a scoping review that aims at synthesizing and consolidating that evidence into a baseline for a further systematic review aimed at developing sound recommendations for the future of the VMMC strategy for HIV prevention. The scoping review will target all scientific literature published on the Web of Science, Cochrane Library, Scopus, Science Direct, and PubMed, as well as grey literature from Google Scholar and the WHO Institutional Repository for Information Sharing (IRIS), from the inception of the campaigns. The review shall be guided by Arksey and O’Malley’s (2005) framework for scoping reviews, and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) checklist shall be followed. Discussion of the findings is envisioned to yield evidence that can be further analysed to give insights about the risk- and cost-benefit ratios of the strategy at this point in time, as well as best clinical practices for the VMMC procedure, to inform the future of the strategy. This protocol is registered with the Open Science Framework, registration ID https://doi.org/10.17605/OSF.IO/SFZC9.
Data for reserves for France, the UK and Switzerland. British and Swiss data at daily frequency. French data at bi-weekly frequency. Data copied directly from archives. For the Bretton Woods period (1945-1971). More details in the data paper available here: https://doi.org/10.31235/osf.io/he7gx.