Data and variable key for Dunham, Dotsch, Clark, & Stepanova, "The development of White-Asian categorization: Contributions from skin color and other physiognomic cues"
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This data repository includes preprocessed EEG files (.mat format, per subject and per experimental block), preprocessed pupillometry files (.mat format, per subject and per experimental block), both raw and preprocessed behavioral data (.mat and .csv formats), and RSA results (.mat format), collected for the following pre-registered project: https://doi.org/10.17605/OSF.IO/WYSBM
Funding:
MY, BB and JE were supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (819040 - acronym: rid-O). BB is also supported by the EMERGE EIC (Project 101070918). OD is supported by a Volkswagen Stiftung grant (Co-Sense) as well as the EMERGE EIC (Project 101070918).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
The DIRECCT study is a multi-phase examination of clinical trial results dissemination during the COVID-19 pandemic.
Interim data for trials completed during the first six months of the pandemic (i.e., 1 January 2020 – 30 June 2020) were previously deposited at https://doi.org/10.5281/zenodo.4669936.
This data deposit comprises the results of searches for trials completed during the first 18 months of the pandemic (i.e., 1 January 2020 – 30 June 2021).
The data structure for the final phase of the project is not identical to that of the interim data, as the final phase was substantially more complex.
The data include datatables (CSVs) that can be treated as relational and joined on the id or trn columns. See datamodel.png for an overview of the data.
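For example, the tables can be combined with any relational tool. A minimal pandas sketch (the file names here are hypothetical; datamodel.png in the deposit documents the actual tables and their relationships):

    import pandas as pd

    # Hypothetical file names; see datamodel.png for the real tables.
    registrations = pd.read_csv("registrations.csv")
    results = pd.read_csv("results.csv")

    # The tables share the id / trn key columns and can be joined relationally.
    merged = registrations.merge(results, on="id", how="left")
    print(merged.head())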
Details on data sources and methods for the creation and analysis of this dataset are available in a detailed protocol (Version 3.1, 19 July 2023): https://osf.io/w8t7r
Note: This repository will be updated with additional information, including a codebook and archives of raw data.
Additional information on the project is available at the project's OSF page: https://doi.org/10.17605/osf.io/5f8j2.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This repository provides the data for the manuscript entitled "Test-Retest Reliability Analysis of Resting-state EEG Measures and Their Association with Long-Term Memory in Children and Adults".
The relevant metadata and descriptors for the data, as well as the code for analyzing the data, are provided in the accompanying OSF repository:
EEG_Files.zip:
OutputPreprocPRE.zip / OutputPreprocPOST.zip
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Science experiments on the International Space Station (ISS) - and in future space habitats - require what the space agencies call “facilities” as the core building blocks of the research capabilities they provide. Facilities are pieces of equipment that enable research, such as centrifuges, furnaces, gloveboxes, freezers, and more. The ISS has been home to 191 such facilities in its 24 years of continuous operation, but until now, there has been no public accounting or modeling of how they are used. Here we present a statistical analysis of ISS facility usage, based on data scraped from more than 4,000 ISS daily reports dating from 2009 to 2024. The results are presented by facility and by facility category (as designated by NASA and as designated by us), and for both individual and pairwise use. By drawing this picture of research activity on the first permanently crewed space habitat in low Earth orbit, we provide insights that are useful for the design and planning of the next generation of space stations, including the co-location of facilities that tend to be used together as part of a broad space-based infrastructure. Raw data and code for replication and further analysis are available in this paper’s GitHub repository.
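To illustrate the individual and pairwise usage tallies described above, a hypothetical sketch (not the paper's code; the facility names and report contents are invented examples):

    from collections import Counter
    from itertools import combinations

    # Each daily report is represented as the set of facilities it mentions.
    reports = [{"MSG", "MELFI"}, {"MSG", "CIR"}, {"MELFI", "MSG"}]

    # Individual use: how often each facility appears across reports.
    individual = Counter(f for r in reports for f in r)
    # Pairwise use: how often two facilities appear in the same report.
    pairwise = Counter(p for r in reports for p in combinations(sorted(r), 2))

    print(individual.most_common(3), pairwise.most_common(3))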
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
Title: Category-biased neural representations form spontaneously during learning that emphasizes memory for specific instances.
fMRI task with four runs of a paired-associates style learning task. Participants were presented with blended face stimuli (nine training faces total) and were instructed to learn the first and last name of the faces. First names were unique to each face but last names were shared across three family categories (Miller, Wilson, and Davis) that were determined via the blending procedure.
Task order: train_run-1, train_run-2, train_run-3, train_run-4
Information related to the task for each trial (onsets, duration of stimulus on the screen, and the category-relevant and category-irrelevant faces that constitute each blend) is included in an _events.tsv file for each subject and each run of the training task.
Stimuli and other behavioral data can be accessed via OSF repository Blended-face Similarity Ratings and Categorization Tasks: https://osf.io/e8htb/
This repository contains the raw EEG data sets associated with the following publication:
Klatt, L. I., Begau, A., Schneider, D., Wascher, E., & Getzmann, S. (2023). Cross-modal interactions at the audiovisual cocktail-party revealed by behavior, ERPs, and neural oscillations. NeuroImage, 271, 120022. https://doi.org/10.1016/j.neuroimage.2023.120022
The associated OSF repository, containing the aggregated behavioral and EEG data files (e.g., mean RTs, mean N2ac amplitudes, mean N2pc amplitudes) as well as analysis scripts and JASP files, can be found here: https://osf.io/asx5v/
Please see the README file in the OSF repository for further information regarding the shared datasets (e.g., the difference between "coded" and "coded_wo_doubleAK" datasets). The data are shared under the following licence: CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/).
To obtain access to this Zenodo repository, a data user agreement must be signed.
This collection contains the analytical datasets as reported in the manuscript “Clustering of Adolescents’ Health Behaviors Before and During the COVID-19 Pandemic: Examining Transitions and The Role of Demographics and Parental Health Behaviors”. In total, four different datasets were used in this manuscript for different steps in the analyses, which were performed in R and Mplus. The corresponding analytical code in R and Mplus is publicly available through the Open Science Framework (https://osf.io/8yznm/). For more information on all datasets included in this collection, please see the readme file and the Excel codebook. Data were used from the "G(F)OOD together" research project (https://osf.io/bysgq/), a longitudinal multi-wave study on adolescents' and their parents' (mental) health behaviors collected before and during the COVID-19 pandemic.
These datasets were generated from a choice experiment, a survey questionnaire, and output from computational modelling (in R) of these data. The OSF repository also includes all R code and study materials.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This repository contains data and analysis code underlying the paper "User experiences with digital future-self interventions in the contexts of smoking and physical activity: A mixed methods multi-study exploration" by Kristell M. Penfornis, Nele Albers, Willem-Paul Brinkman, Mark A. Neerincx, Andrea W.M. Evers, Winifred A. Gebhardt, and Eline Meijer. This includes 1) the data underlying the analysis for study 2, 2) scripts for data preprocessing and for computing the mean effort per activity for study 2, and 3) the SPSS syntaxes used for the analyses for all three studies.
Data underlying analysis for study 2
The folder "Study_2/Raw_Data" contains the data underlying the analysis for study 2. This includes:
This data is derived from the data published in this repository: https://doi.org/10.4121/1d9aa8eb-9e63-4bf5-98a3-f359dbc932a4.
Scripts for data preprocessing and mean effort computation for study 2
The folder "Study_2" further contains scripts for data preprocessing and for computing the mean effort per activity for study 2.
SPSS syntaxes
The folder "Analysis_Syntax_SPSS" contains the SPSS syntaxes used for the analyses for all three studies.
Additional resources
Office Star Five, spol sro (OSF), a member of PPF Group, is planning to build an office complex in Prague, Czech Republic. The project involves the construction of a 71,980 m² office complex comprising two towers: a 139 m, 32-story tower and a 128 m, 29-story tower. It includes the construction of meeting rooms, conference rooms, cafeterias, a 900-car parking facility, and related facilities, and the installation of safety and security systems. CMC Architects, AS has been appointed as architect. On October 20, 2011, the City Council of Prague commenced a screening procedure in the assessment of the environmental impact statement for the project. On April 20, 2012, the Department of Environmental Protection presented the conclusion of the environmental impact report.
Stakeholder information:
Planning authority: City Council of Prague
Architect: CMC Architects, AS
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Dataset from an unpublished study examining memory and perception metacognitive processing in young and older adults. This dataset includes trial-by-trial behavioural measures (reaction time, accuracy, confidence ratings, metacognitive adequacy) and EEG data (32-channel recordings) from 39 participants performing memory and perception tasks. Data are organised in two MATLAB tables: behav_data_Memory and behav_data_Percept. Reviewers can access these data files to verify analyses described in our manuscript, with analysis scripts available in our OSF repository (https://osf.io/djk5e/?view_only=c56779caeece4279872318788fddb57a). The trial-wise EEG data are stored as 32 channels × 45 frequencies × 151 sample points arrays within each table row.
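A minimal Python loading sketch (the file name is hypothetical, and MATLAB tables are not directly readable by scipy, so this assumes the tables were exported to structs or plain arrays in MATLAB first; v7.3 files would instead need h5py or mat73):

    from scipy.io import loadmat  # reads .mat files up to format v7

    # Hypothetical file name; the dataset comprises two MATLAB tables,
    # behav_data_Memory and behav_data_Percept.
    contents = loadmat("behav_data_Memory.mat", squeeze_me=True)
    print(contents.keys())

    # Per the description above, each row's EEG entry is a 32 x 45 x 151
    # array (channels x frequencies x sample points).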
The human brain is able to quickly and accurately identify objects in a dynamic visual world. Objects evoke different patterns of neural activity in the visual system, which reflect object category memberships. However, the underlying dimensions of object representations in the brain remain unclear. Recent research suggests that objects' similarity to humans is one of the main dimensions used by the brain to organise objects, but the nature of the human-similarity features driving this organisation is still unknown. Here, we investigate the relative contributions of perceptual and conceptual features of humanness to the representational organisation of objects in the human visual system. We collected behavioural judgements of human-similarity for various objects, which were compared with time-resolved neuroimaging responses to the same objects. The behavioural judgement tasks targeted either perceptual or conceptual humanness features to determine their respective contributions to perceived human-similarity. Behavioural and neuroimaging data revealed significant and unique contributions of both perceptual and conceptual features of humanness, each explaining unique variance in the neuroimaging data. Furthermore, our results showed distinct spatio-temporal dynamics in the processing of conceptual and perceptual humanness features, with later and more lateralised brain responses to conceptual features. This study highlights the critical importance of social requirements in information processing and organisation in the human brain.
To recreate the result figures from the paper, see the README in the code directory.
License: Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/
The data that support the findings of this study are available from the corresponding author upon reasonable request and approval of the HR consultancy firm the data were obtained from. The Mplus code for the CFA and multilevel analyses is available at: https://osf.io/6f47s/
This study draws from brand positioning research to introduce the notions of points-of-relevance and points-of-difference to employer image research. Similar to prior research, this means that we start by investigating the relevant image attributes (points-of-relevance) that potential applicants use for judging organizations' attractiveness as an employer. However, we go beyond past research by examining whether the same points-of-relevance are used within and across industries. Next, we further extend current research by identifying which of the relevant image attributes also serve as points-of-difference for distinguishing between organizations and industries. The sample consisted of 24 organizations from 6 industries (total N = 7171). As a first key result, across industries and organizations, individuals attached similar importance to the same instrumental (job content, working conditions, and compensation) and symbolic (innovativeness, gentleness, and competence) image attributes in judging organizational attractiveness. Second, organizations and industries varied significantly on both instrumental and symbolic image attributes, with job content and innovativeness emerging as the strongest points-of-difference. Third, most image attributes showed greater variation between industries than between organizations, pointing to the importance of studying employer image at the industry level. Implications for recruitment research, employer branding, and best employer competitions are discussed.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This repository contains the data, analysis code, and appendix for the paper "Reinforcement learning for proposing smoking cessation activities that build competencies: Combining two worldviews in a virtual coach" by Nele Albers, Mark A. Neerincx, and Willem-Paul Brinkman.
The data and large parts of the analysis code have previously been published in the repository for the PhD thesis chapter by Nele Albers, which can be found via this DOI: https://doi.org/10.4121/9c4d9c35-3330-4536-ab8d-d5bb237c277d. The data are unchanged, but this repository adds some analyses (e.g., reporting the characteristics of the participants of the repertory grid study with smokers, and repeating the analysis for AQ1 for different discount factors) compared to the one for the PhD thesis chapter.
Studies
The paper is based on data collected in three studies.
Study 1
We conducted this study on the online crowdsourcing platform Prolific between 6 September and 16 November 2022. The Human Research Ethics Committee of Delft University of Technology granted ethical approval for the research (Letter of Approval number: 2338).
In this study, daily smokers who were contemplating or preparing to quit smoking first filled in a prescreening questionnaire and were then invited to a repertory grid study if they passed the prescreening. In the repertory grid study, participants were asked to divide sets of 3 preparatory activities for quitting smoking into two subgroups. Afterward, they rated all preparatory activities on the labels given to the subgroups.
Participants also rated all preparatory activities on the perceived ease of doing them and the perceived required time to do them. This data can be found in this repository: https://doi.org/10.4121/5198f299-9c7a-40f8-8206-c18df93ee2a0.
The study was pre-registered in the Open Science Framework (OSF): https://osf.io/cax6f.
Study 2
We performed a second repertory grid study with smoking cessation experts between September and October 2022. These smoking cessation experts were also asked to divide sets of 3 preparatory activities for quitting smoking into two subgroups based on the question “When it comes to competencies for quitting smoking that smokers build by doing the activities, how are two activities alike in some way but different from the third activity?”
The study was pre-registered in OSF together with the repertory grid study with smokers: https://osf.io/cax6f. The same ethical approval also applies.
Study 3
We conducted a third study on the online crowdsourcing platform Prolific. In this study, daily smokers interacted with the conversational agent Mel in up to five conversational sessions between 21 July and 27 August 2023. The Human Research Ethics Committee of Delft University of Technology granted ethical approval for the research (Letter of Approval number: 2939) on 31 March 2023.
In each session, participants were assigned a new activity for quitting smoking: one of 44 preparatory activities or one of 9 persuasive activities. A total of 682 people started the first session, and 349 people completed session 5.
The study was pre-registered in OSF: https://doi.org/10.17605/OSF.IO/NUY4W.
The implementation of the conversational agent Mel is available online: https://doi.org/10.5281/zenodo.8302492.
Data
We provide data on the three studies:
-Data on study 1 (e.g., the subgroup labels and activity ratings provided by smokers). Additional data from study 1 not used in this paper can be found here: https://doi.org/10.4121/5198f299-9c7a-40f8-8206-c18df93ee2a0. To reproduce our reporting of the characteristics of the participants of this study, you will also need to download data from this other repository. More information on this is provided in README files in this repository.
-Data on study 2 (e.g., the subgroup labels and activity ratings provided by experts, as well as the self-reported expertise of the experts).
-Data on study 3:
Analysis code
All our analyses were conducted in either R or Python. We provide code to reproduce them.
Appendix
We also provide the paper's Appendix, which includes, for example, the formulations of the 44 preparatory and 9 persuasive activities.
If you have questions, please contact Nele Albers (n.albers@tudelft.nl) or Willem-Paul Brinkman (w.p.brinkman@tudelft.nl).
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains fMRI data from adults from one paper that includes two experiments:
Liu, S., Lydic, K., Mei, L., & Saxe, R. (in press, Imaging Neuroscience). Violations of physical and psychological expectations in the human adult brain. Preprint: https://doi.org/10.31234/osf.io/54x6b
All subjects who contributed data to this repository consented explicitly to share their de-faced brain images publicly on OpenNeuro. Experiment 1 has 16 subjects who gave consent to share (17 total), and Experiment 2 has 29 subjects who gave consent to share (32 total). Experiment 1 subjects have subject IDs starting with "SAXNES*", and Experiment 2 subjects have subject IDs starting with "SAXNES2*".
There are (anonymized) event files associated with each run, subject, and task, as well as contrast files.
All event files, for all tasks, have the following columns: onset_time, duration, trial_type, and response_time. Below are notes about task-specific event files.
For the DOTS and VOE event files from Experiment 1, we have the additional columns:
- experimentName: 'DotsSocPhys' or 'VOESocPhys'
- correct: at the end of the trial, subjects made a response. In DOTS, they indicated whether the dot that disappeared reappeared at a plausible location. In VOE, they pressed a button when the fixation appeared as a cross rather than a plus sign. This column indicates whether the subject responded correctly (1/0)
- stim_path: path to the stimuli, relative to the root BIDS directory, i.e. BIDS/stimuli/DOTS/xxxx

For the DOTS event files from Experiment 2, we have the additional columns:
- participant: redundant with the file name
- experiment_name: name of the task, redundant with the file name
- block_order: which order the dots trials happened in (1 or 2)
- prop_correct: the proportion of correct responses over the entire run

For the Motion event files from Experiment 2, we have the additional columns:
- experiment_name: name of the task, redundant with the file name
- block_order: which order the dots trials happened in (1 or 2)
- event: the index of the current event (0-22)

For the spWM event files from Experiment 2, we have the additional columns:
- experiment_name: name of the task, redundant with the file name
- participant: redundant with the file name
- block_order: which order the dots trials happened in (1 or 2)
- run_accuracy_hard: the proportion of correct responses for the hard trials in this run
- run_accuracy_easy: the proportion of correct responses for the easy trials in this run

For the VOE event files from Experiment 2, we have the additional columns:
- trial_type_specific: identifies trials at one more level of granularity, with respect to domain, task, and event (e.g. psychology_efficiency_unexp)
- trial_type_morespecific: similar to trial_type_specific but includes information about domain, task, scenario, and event (e.g. psychology_efficiency_trial-15-over_unexp)
- experiment_name: name of the task, redundant with the file name
- participant: redundant with the file name
- correct: whether the response for this trial was correct (1 or 0)
- time_elapsed: how much time has elapsed by the end of this trial, in ms
- trial_n: the index of the current event
- correct_answer: what the correct answer was for the attention check (yes or no)
- subject_correct: whether the subject in fact was correct in their response
- event: fam, expected, or unexpected
- identical_tests: were the test events identical, for this trial?
- stim_ID: numerical string picking out each unique stimulus
- scenario_string: string identifying each scenario within each task
- domain: physics, psychology (psychology-action), or both (psychology-environment)
- task: solidity, permanence, goal, efficiency, infer-constraints, or agent-solidity
- prop_correct: the proportion of correct responses over the entire run
- stim_path: path to the stimuli, relative to the root BIDS directory, i.e. BIDS/stimuli/VOE/xxxx
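For illustration, a hypothetical sketch of inspecting one such events file with pandas (the subject/run path is invented, following the BIDS layout and the "SAXNES2*" subject-ID scheme described above):

    import pandas as pd

    # Hypothetical path to a VOE events file from Experiment 2.
    events = pd.read_csv(
        "sub-SAXNES201/func/sub-SAXNES201_task-VOE_run-01_events.tsv",
        sep="\t",
    )

    # All tasks share onset_time, duration, trial_type, and response_time;
    # VOE runs add the task-specific columns listed above.
    unexpected = events[events["event"] == "unexpected"]
    print(unexpected[["onset_time", "duration", "trial_type_specific"]])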
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
This contains the raw and pre-processed fMRI data and structural images (T1) used in the article "Reading reshapes stimulus selectivity in the visual word form area". The preprint is available here, and the article is in press at eNeuro.
Additional processed data and analysis code are available in an OSF repository.
Details about the study are included below.
We recruited 17 participants (age range 19 to 38 years, mean 21.12 ± 4.44; 4 self-identified as male, 1 left-handed) from the Barnard College and Columbia University student body. The study was approved by the Institutional Review Board at Barnard College, Columbia University. All participants provided written informed consent, acquired digitally, and were monetarily compensated for their participation. All participants had learned English before the age of five.
To ensure high data quality, we used the following criteria for excluding functional runs and participants. If the participant moved by a distance greater than 2 voxels (4 mm) within a single run, that run was excluded from analysis. Additionally, if the participant responded in less than 50% of the trials in the main experiment, that run was removed. Finally, if half or more of the runs met any of these criteria for a single participant, that participant was dropped from the dataset. Using these constraints, the analysis reported here is based on data from 16 participants. They ranged in age from 19 to 38 years (mean = 21.12 ± 4.58). Four participants self-identified as male, and one was left-handed. A total of 6 runs were removed from three of the remaining participants due to excessive head motion.
We collected MRI data at the Zuckerman Institute, Columbia University, using a 3T Siemens Prisma scanner and a 64-channel head coil. In each MR session, we acquired a T1-weighted structural scan with 1 mm isotropic voxels. We acquired functional data with a T2* echo-planar imaging sequence with multiband echo sequencing (SMS3) for whole-brain coverage. The TR was 1.5 s, the TE was 30 ms, and the flip angle was 62°. The voxel size was 2 mm isotropic.
Stimuli were presented on an LCD screen that the participants viewed through a mirror with a viewing distance of 142 cm. The display had a resolution of 1920 by 1080 pixels, and a refresh rate of 60 Hz. We presented the stimuli using custom code written in MATLAB and the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). Throughout the scan, we recorded monocular gaze position using an SR Research Eyelink 1000 tracker. Participants responded with their right hand via three buttons on an MR-safe response pad.
Participants performed three different tasks during different runs, two of which required attending to the character strings, and one of which encouraged participants to ignore them. In the lexical decision task, participants reported whether the character string on each trial was a real word or not. In the stimulus color task, participants reported whether the color of the character string was red or gray. In the fixation color task, participants reported whether or not the fixation dot turned red.
On each trial, a single character string flashed for 150 ms at one of three locations: centered at fixation, 3 dva left, or 3 dva right. The stimulus was followed by a blank with only the fixation mark present for 3850 ms, during which the participant had the opportunity to respond with a button press. After every five trials, there was a rest period (no task except to fixate on the dot). The rest period was 4, 6, or 8 s in duration (randomly and uniformly selected).
Participants viewed sequences of images, each of which contained 3 items of one category: words, pseudowords, false fonts, faces, and limbs. Participants performed a one-back repetition detection task. On 33% of the trials, the exact same images flashed twice in a row. The participant’s task was to push a button with their right index finger whenever they detected such a repetition. Each participant performed 4 runs of the localizer task. Each run consisted of 77 four-second trials, lasting for approximately 6 minutes. Each category was presented 56 times through the course of the experiment.
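A minimal sketch of the one-back repetition rule described above (illustrative only; the stimulus labels are invented):

    # A trial is a target when its image is identical to the previous one.
    def one_back_targets(image_sequence):
        return [
            i for i in range(1, len(image_sequence))
            if image_sequence[i] == image_sequence[i - 1]
        ]

    print(one_back_targets(["face1", "face1", "word3", "word3", "limb2"]))  # [1, 3]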
The stimuli on each trial were a sequence of 12 written words or pronounceable pseudowords, presented one at a time. The words were presented as meaningful sentences, while pseudowords formed “Jabberwocky” phrases that served as a control condition. Participants were instructed to read the stimuli silently to themselves, and also to push a button upon seeing the icon of a hand that appeared between trials. Participants performed three runs of the language localizer. Each run included 16 trials and lasted for 6 minutes. Each trial lasted for 6 s, beginning with a blank screen for 100 ms, followed by the presentation of 12 words or pseudowords for 450 ms each (5400 ms total), followed by a response prompt for 400 ms and a final blank screen for 100 ms. Each run also included 5 blank trials (6 seconds each).
This repository contains three main folders, complying with BIDS specifications.
- Inputs contains BIDS-compliant raw data, with the only change being defacing of the anatomicals using pydeface. Data were converted to BIDS format using heudiconv.
- Outputs contains preprocessed data obtained using fMRIPrep. In addition to subject-specific folders, we also provide the FreeSurfer reconstructions obtained using fMRIPrep, with defaced anatomicals. Subject-specific ROIs are also included in the label folder for each subject in the freesurfer directory.
- Derivatives contains all additional whole-brain analyses performed on this dataset.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
The Upworthy Research Archive is an open dataset of thousands of A/B tests of headlines conducted by Upworthy from January 2013 to April 2015. This repository includes the full data from the archive.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This project includes three datasets: the first dataset compiles dataset metadata commonalities that were identified from 48 Canadian restricted health data sources. The second dataset compiles access process metadata commonalities extracted from the same 48 data sources. The third dataset maps metadata commonalities of the first dataset to existing metadata standards including DataCite, DDI, DCAT, and DATS. This mapping exercise was completed to determine whether metadata used by restricted data sources aligned with existing standards for research data.
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
Experiment details
Human electroencephalography recordings from 50 subjects for 1,854 concepts and 22,248 images in the THINGS stimulus database. Images were presented in rapid serial visual presentation streams at a 10 Hz rate. Participants performed an orthogonal fixation colour change detection task.
Experiment length: 1 hour
More information:
https://osf.io/hd6zk/ (OSF repository)
https://doi.org/10.1101/2021.06.03.447008 (preprint)