40 datasets found
  1. Data repository

    • osf.io
    Updated Aug 9, 2015
    Cite
    Yarrow Dunham; Amy Rakei; Chen Fang; Abhishek Giri; Filip Verroens (2015). Data repository [Dataset]. https://osf.io/q5j8g
    Dataset updated
    Aug 9, 2015
    Dataset provided by
    Center for Open Science (https://cos.io/)
    Authors
    Yarrow Dunham; Amy Rakei; Chen Fang; Abhishek Giri; Filip Verroens
    Description

    Data and variable key for Dunham, Dotsch, Clark, & Stepanova, "The development of White-Asian categorization: Contributions from skin color and other physiognomic cues"

  2. Social Vigilance Project Data Repository

    • zenodo.org
    bin, csv
    Updated Sep 22, 2025
    Cite
    Mustafa Yavuz (2025). Social Vigilance Project Data Repository [Dataset]. http://doi.org/10.5281/zenodo.17177034
    Available download formats: bin, csv
    Dataset updated
    Sep 22, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Mustafa Yavuz
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data repository includes preprocessed EEG files (.mat format, per subject and per experimental block), preprocessed pupillometry files (.mat format, per subject and per experimental block), both raw and preprocessed behavioral data (.mat and .csv formats), and RSA results (.mat) collected for the following pre-registered project: https://doi.org/10.17605/OSF.IO/WYSBM

    Funding:

    MY, BB and JE were supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (819040 - acronym: rid-O). BB is also supported by the EMERGE EIC (Project 101070918). OD is supported by a Volkswagen Stiftung grant (Co-Sense) as well as the EMERGE EIC (Project 101070918).

  3. Final Dataset for the DIssemination of REgistered COVID-19 Clinical Trials (DIRECCT) Study

    • data.niaid.nih.gov
    Updated Jul 11, 2024
    + more versions
    Cite
    Salholz-Hillel, Maia; Pugh-Jones, Molly; Hildebrand, Nicole; Schult, Tjada A.; Schwietering, Johannes; Grabitz, Peter; Carlisle, Benjamin Gregory; Goldacre, Ben; Strech, Daniel; DeVito, Nicholas J. (2024). Final Dataset for the DIssemination of REgistered COVID-19 Clinical Trials (DIRECCT) Study [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8181414
    Dataset updated
    Jul 11, 2024
    Dataset provided by
    Bennett Institute for Applied Data Science, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
    QUEST Center for Responsible Research, Berlin Institute of Health (BIH), Charité Universitätsmedizin Berlin, Berlin, Germany
    Authors
    Salholz-Hillel, Maia; Pugh-Jones, Molly; Hildebrand, Nicole; Schult, Tjada A.; Schwietering, Johannes; Grabitz, Peter; Carlisle, Benjamin Gregory; Goldacre, Ben; Strech, Daniel; DeVito, Nicholas J.
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The DIRECCT study is a multi-phase examination of clinical trial results dissemination during the COVID-19 pandemic.

    Interim data for trials completed during the first six months of the pandemic (i.e., 1 January 2020 – 30 June 2020) was previously deposited at https://doi.org/10.5281/zenodo.4669936. This data deposit comprises the results of searches for trials completed during the first 18 months of the pandemic (i.e., 1 January 2020 – 30 June 2021). The data structure for the final phase of the project is not identical to that of the interim data, as the final phase was substantially more complex. The data include datatables (CSVs) that can be treated as relational and joined on the id or trn columns. See datamodel.png for an overview of the data.
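
    As an illustration, the relational join might look like the following minimal pandas sketch; the file names trials.csv and results.csv are hypothetical stand-ins for the actual datatables listed in datamodel.png:

      import pandas as pd

      # Hypothetical file names; see datamodel.png for the real table names.
      trials = pd.read_csv("trials.csv")    # one row per trial, keyed by "id"/"trn"
      results = pd.read_csv("results.csv")  # one row per result, keyed by "id"/"trn"

      # Treat the CSVs as relational tables and join on the shared key column.
      merged = trials.merge(results, on="id", how="left")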

    Details on data sources and methods for the creation and analysis of this dataset are available in a detailed protocol (Version 3.1, 19 July 2023): https://osf.io/w8t7r

    Note: This repository will be updated with additional information including a codebook and archives of raw data.

    Additional information on the project is available at the project's OSF page: https://doi.org/10.17605/osf.io/5f8j2.

  4. Data from: Test-Retest Reliability Analysis of Resting-state EEG Measures and Their Association with Long-Term Memory in Children and Adults

    • zenodo.org
    zip
    Updated Jul 8, 2025
    Cite
    Anastasios Ziogas; Simon Ruch; Nicole Skieresz; Sandy, Carla Marca; Nicolas Rothen; Thomas Reber (2025). Test-Retest Reliability Analysis of Resting-state EEG Measures and Their Association with Long-Term Memory in Children and Adults [Dataset]. http://doi.org/10.5281/zenodo.15830022
    Available download formats: zip
    Dataset updated
    Jul 8, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Anastasios Ziogas; Simon Ruch; Nicole Skieresz; Sandy, Carla Marca; Nicolas Rothen; Thomas Reber
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jul 4, 2025
    Description

    This repository provides the data for the manuscript entitled "Test-Retest Reliability Analysis of Resting-state EEG Measures and Their Association with Long-Term Memory in Children and Adults".

    The relevant metadata and descriptors for the data as well as the code for analyzing the data are provided in the accompanying OSF repository:

    • Ziogas, A., Ruch, S., Skieresz, N., Marca, S. C., Rothen, N., & Reber, T. P. (2025). Test-Retest Reliability Analysis of Resting-state EEG Measures and Their Association with Long-Term Memory in Children and Adults. https://doi.org/10.17605/OSF.IO/5ZNYK

    Description of files

    EEG_Files.zip:

    • Content: Raw resting-state EEG of 36 children and 90 adults. Two recordings ("pre" and "post") per participant.
    • Data format: BrainVision Core Data Format (*.vhdr, *.vmrk, and *.eeg)
    • Folder structure: The data of each subject is stored in a separate folder. The folder number (.../1/ to .../126/) serves as the participant identifier. Each folder contains two resting-state recordings, the pre- and post-recording (..._pre.eeg, ..._post.eeg, ...); a loading sketch follows this list.
      • Children: folders /1/ - /36/
      • Adults: folders /37/ - /126/
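
    For orientation, a minimal loading sketch assuming MNE-Python; the file name sub1_pre.vhdr is a hypothetical stand-in for the actual *_pre header file in folder /1/:

      import mne  # assumption: MNE-Python is used to read BrainVision files

      # Hypothetical file name inside subject folder /1/.
      raw = mne.io.read_raw_brainvision("EEG_Files/1/sub1_pre.vhdr", preload=True)
      print(raw.info)

      # Per the folder scheme above: /1/-/36/ are children, /37/-/126/ are adults.
      subject_id = 1
      group = "children" if subject_id <= 36 else "adults"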

    OutputPreprocPRE.zip / OutputPreprocPOST.zip

    • These folders contain the preprocessed EEG data in the data format of the MATLAB toolbox "eeglab" (https://eeglab.org/; file types: *.set and *.fdt). The preprocessing pipeline is available in the accompanying OSF repository (see above).
    • The folders further contain the key EEG parameters that were extracted for the manuscript. The code for reproducing these parameters is available in the accompanying OSF repository (see above).

  5. Facilities Usage on the International Space Station

    • doi.org
    Updated Dec 2, 2024
    Cite
    Justin Walsh; Rao Ali; Ryan Shihabi; Erik Linstead (2024). Facilities Usage on the International Space Station [Dataset]. http://doi.org/10.31219/osf.io/d9k4w
    Dataset updated
    Dec 2, 2024
    Dataset provided by
    Center for Open Science
    Authors
    Justin Walsh; Rao Ali; Ryan Shihabi; Erik Linstead
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Science experiments on the International Space Station (ISS) - and in future space habitats - require “facilities,” as they are known to the space agencies, as the core building blocks of the research capabilities they provide. Facilities are pieces of equipment that enable research, such as centrifuges, furnaces, gloveboxes, freezers, and more. The ISS has been home to 191 such facilities in its 24 years of continuous operation, but until now, there has been no public accounting or modeling of how they are used. Here we present a statistical analysis of ISS facility usage, based on data scraped from more than 4,000 ISS daily reports dating from 2009 to 2024. The results are presented by facility and by facility category (as designated by NASA and as designated by us), and for both individual and pairwise use. By drawing this picture of research activity on the first permanently crewed space habitat in low Earth orbit, we provide insights that are useful for the design and planning of the next generation of space stations, including the co-location of facilities that tend to be used together as part of a broad space-based infrastructure. Raw data and code for replication and further analysis are available in this paper’s GitHub repository.

  6. Data from: Category-biased neural representations form spontaneously during learning that emphasizes memory for specific instances

    • openneuro.org
    Updated Oct 20, 2021
    Cite
    S.R. Ashby; D. Zeithamova (2021). Category-biased neural representations form spontaneously during learning that emphasizes memory for specific instances [Dataset]. http://doi.org/10.18112/openneuro.ds003851.v1.0.0
    Dataset updated
    Oct 20, 2021
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    S.R. Ashby; D. Zeithamova
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Title: Category-biased neural representations form spontaneously during learning that emphasizes memory for specific instances.

    fMRI task with four runs of a paired-associates style learning task. Participants were presented with blended face stimuli (nine training faces total) and were instructed to learn the first and last name of the faces. First names were unique to each face but last names were shared across three family categories (Miller, Wilson, and Davis) that were determined via the blending procedure.

    Task order: train_run-1, train_run-2, train_run-3, train_run-4

    Information related to the task for each trial (onset, duration of stimulus on the screen, category-relevant and category-irrelevant faces that constitute each blend) is included in a _events.tsv file for each subject and each run of the training task.

    Stimuli and other behavioral data can be accessed via OSF repository Blended-face Similarity Ratings and Categorization Tasks: https://osf.io/e8htb/

  7. Data from: Cross-modal interactions at the audiovisual cocktail-party revealed by behavior, ERPs, and neural oscillations

    • data.niaid.nih.gov
    Updated May 9, 2023
    Cite
    Klatt, Laura-Isabelle; Begau, Alexandra; Schneider, Daniel; Wascher, Edmund; Getzmann, Stephan (2023). Cross-modal interactions at the audiovisual cocktail-party revealed by behavior, ERPs, and neural oscillations [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7885589
    Dataset updated
    May 9, 2023
    Authors
    Klatt, Laura-Isabelle; Begau, Alexandra; Schneider, Daniel; Wascher, Edmund; Getzmann, Stephan
    Description

    This repository contains the raw EEG data sets associated with the following publication:

    Klatt, L. I., Begau, A., Schneider, D., Wascher, E., & Getzmann, S. (2023). Cross-modal interactions at the audiovisual cocktail-party revealed by behavior, ERPs, and neural oscillations. NeuroImage, 271, 120022. https://doi.org/10.1016/j.neuroimage.2023.120022

    The associated OSF repository, containing the aggregated behavioral and EEG data files (e.g., mean RTs, mean N2ac amplitudes, mean N2pc amplitudes) as well as analysis scripts and JASP files can be found here: https://osf.io/asx5v/

    Please see the README file in the OSF repository for further information regarding the shared datasets (e.g., the difference between "coded" and "coded_wo_doubleAK" datasets). The data are shared under the following licence: CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/).

    To obtain access to this Zenodo repository, a data user agreement must be signed.

  8. Data from: Clustering of Adolescents’ Health Behaviors Before and During the COVID-19 Pandemic: Examining Transitions and The Role of Demographics and Parental Health Behaviors

    • data.ru.nl
    Updated Nov 14, 2025
    Cite
    Nina van den Broek; Junilla Larsen; Jacqueline Vink (2025). Clustering of Adolescents’ Health Behaviors Before and During the COVID-19 Pandemic: Examining Transitions and The Role of Demographics and Parental Health Behaviors [Dataset]. http://doi.org/10.34973/eekd-9v39
    Available download formats: 557862 bytes
    Dataset updated
    Nov 14, 2025
    Dataset provided by
    Radboud University
    Authors
    Nina van den Broek; Junilla Larsen; Jacqueline Vink
    Description

    This collection contains the analytical datasets as reported in the manuscript “Clustering of Adolescents’ Health Behaviors Before and During the COVID-19 Pandemic: Examining Transitions and The Role of Demographics and Parental Health Behaviors”. In total, four different datasets were used in this manuscript for different steps in the analyses, which were performed in R and Mplus. The corresponding analytical code in R and Mplus is publicly available through the Open Science Framework (https://osf.io/8yznm/). For more information on all datasets included in this collection, please see the readme file and the Excel codebook. The data come from the "G(F)OOD together" research project (https://osf.io/bysgq/), a longitudinal multi-wave study of adolescents' and their parents' (mental) health behaviors collected before and during the COVID-19 pandemic.

  9. Datasets and R code for paper entitled "Quantifying the Value of Carbon Label Information in Food Choice using Drift Diffusion Modelling"

    • researchdata.bath.ac.uk
    Updated Apr 21, 2024
    Cite
    Neal Hinvest; Yu Shuang Gan (2024). Datasets and R code for paper entitled "Quantifying the Value of Carbon Label Information in Food Choice using Drift Diffusion Modelling" [Dataset]. http://doi.org/10.17605/OSF.IO/7QHBW
    Dataset updated
    Apr 21, 2024
    Dataset provided by
    University of Bath
    Authors
    Neal Hinvest; Yu Shuang Gan
    Dataset funded by
    University of Bath
    Description

    These datasets were generated from a choice experiment, a survey questionnaire, and the output of computational modelling (in R) of these data. The OSF repository also includes all R code and study materials.

  10. User experiences with digital future-self interventions in the contexts of smoking and physical activity: A mixed methods multi-study exploration - Data and analysis code

    • data.4tu.nl
    zip
    Updated Apr 15, 2025
    Cite
    Kristell M. Penfornis; Nele Albers (2025). User experiences with digital future-self interventions in the contexts of smoking and physical activity: A mixed methods multi-study exploration - Data and analysis code [Dataset]. http://doi.org/10.4121/951b2dc4-a59e-48ed-9856-af484b125393.v1
    Available download formats: zip
    Dataset updated
    Apr 15, 2025
    Dataset provided by
    4TU.ResearchData
    Authors
    Kristell M. Penfornis; Nele Albers
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    2017 - 2024
    Description

    This repository contains data and analysis code underlying the paper "User experiences with digital future-self interventions in the contexts of smoking and physical activity: A mixed methods multi-study exploration" by Kristell M. Penfornis, Nele Albers, Willem-Paul Brinkman, Mark A. Neerincx, Andrea W.M. Evers, Winifred A. Gebhardt, and Eline Meijer. This includes 1) the data underlying the analysis for study 2, 2) scripts for data preprocessing and for computing the mean effort per activity for study 2, and 3) the SPSS syntaxes used for the analyses for all three studies.


    Data underlying analysis for study 2

    The folder "Study_2/Raw_Data" contains the data underlying the analysis for study 2. This includes:

    1. Data from participants' Prolific profiles (e.g., age, gender).
    2. Data from the prescreening questionnaire (e.g., smoking frequency).
    3. Data from the conversational sessions with the text-based virtual coach Kai (e.g., effort spent on activities).

    This data is derived from the data published in this repository: https://doi.org/10.4121/1d9aa8eb-9e63-4bf5-98a3-f359dbc932a4.


    Scripts for data preprocessing and mean effort computation for study 2

    The folder "Study_2" further contains scripts for data preprocessing and for computing the mean effort per activity for study 2.


    SPSS syntaxes

    The folder "Analysis_Syntax_SPSS" contains the SPSS syntaxes used for the analyses for all three studies.


    Additional resources

    1. The implementation of the text-based virtual coach Kai used in study 2 can be found here: https://doi.org/10.5281/zenodo.11102861.
    2. The preregistration of the study used to collect the data for study 2 in the Open Science Framework (OSF) can be found here: https://doi.org/10.17605/OSF.IO/78CNR.

  11. OSF – Prague Eye Towers – Praha

    • store.globaldata.com
    Updated May 22, 2017
    Cite
    GlobalData UK Ltd. (2017). OSF – Prague Eye Towers – Praha [Dataset]. https://store.globaldata.com/report/osf-prague-eye-towers-praha/
    Dataset updated
    May 22, 2017
    Dataset provided by
    GlobalData (https://www.globaldata.com/)
    Authors
    GlobalData UK Ltd.
    License

    https://www.globaldata.com/privacy-policy/

    Time period covered
    2017 - 2021
    Area covered
    Czech Republic, Prague
    Description

    Office Star Five, spol sro (OSF), a member of PPF Group, is planning to build an office complex in Prague, Czech Republic. The project involves the construction of a 71,980 m2 two-tower office complex comprising a 139 m, 32-story tower and a 128 m, 29-story tower. It includes the construction of meeting rooms, conference rooms, cafeterias, a 900-car parking facility and related facilities, and the installation of safety and security systems. CMC Architects, AS has been appointed as architect. On October 20, 2011, the City Council of Prague commenced a screening procedure for the assessment of the environmental impact statement for the project. On April 20, 2012, the Department of Environmental Protection presented the conclusion of the environmental impact report. Stakeholder information: Planning Authority: City Council of Prague; Architect: CMC Architects, AS.

  12. Behavioural and EEG data tables for Perception and Memory conditions

    • figshare.com
    bin
    Updated Apr 2, 2025
    Cite
    Thomas Pace (2025). Behavioural and EEG data tables for Perception and Memory conditions [Dataset]. http://doi.org/10.6084/m9.figshare.28682165.v1
    Available download formats: bin
    Dataset updated
    Apr 2, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Thomas Pace
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset from an unpublished study examining memory and perception metacognitive processing in young and older adults. This dataset includes trial-by-trial behavioural measures (reaction time, accuracy, confidence ratings, metacognitive adequacy) and EEG data (32-channel recordings) from 39 participants performing memory and perception tasks. Data are organised in two MATLAB tables: behav_data_Memory and behav_data_Percept. Reviewers can access these data files to verify analyses described in our manuscript, with analysis scripts available in our OSF repository (https://osf.io/djk5e/?view_only=c56779caeece4279872318788fddb57a). The trial-wise EEG data are stored as 32 channels × 45 frequencies × 151 sample points arrays within each table row.
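
    For illustration, a minimal numpy sketch of indexing one trial's array, assuming the 32 × 45 × 151 array has been exported from a MATLAB table row:

      import numpy as np

      # Hypothetical stand-in for one table row's EEG array:
      # 32 channels x 45 frequencies x 151 sample points.
      trial_eeg = np.zeros((32, 45, 151))

      # Time course for one channel at one frequency bin.
      time_course = trial_eeg[0, 10, :]
      print(time_course.shape)  # (151,)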

  13. Data from: Unique Contributions of Perceptual and Conceptual Humanness to Object Representations in the Human Brain

    • researchdata.edu.au
    Updated Jun 29, 2023
    Cite
    Manuel Varlet; Tijl Grootswagers (2023). Unique Contributions of Perceptual and Conceptual Humanness to Object Representations in the Human Brain dataset [Dataset]. http://doi.org/10.17605/OSF.IO/3ED8F
    Dataset updated
    Jun 29, 2023
    Dataset provided by
    Western Sydney University (http://www.uws.edu.au/)
    OSF
    Authors
    Manuel Varlet; Tijl Grootswagers
    Description

    The human brain is able to quickly and accurately identify objects in a dynamic visual world. Objects evoke different patterns of neural activity in the visual system, which reflect object category memberships. However, the underlying dimensions of object representations in the brain remain unclear. Recent research suggests that objects’ similarity to humans is one of the main dimensions used by the brain to organise objects, but the nature of the human-similarity features driving this organisation is still unknown. Here, we investigate the relative contributions of perceptual and conceptual features of humanness to the representational organisation of objects in the human visual system. We collected behavioural judgements of human-similarity of various objects, which were compared with time-resolved neuroimaging responses to the same objects. The behavioural judgement tasks targeted either perceptual or conceptual humanness features to determine their respective contribution to perceived human-similarity. Behavioural and neuroimaging data revealed significant and unique contributions of both perceptual and conceptual features of humanness, each explaining unique variance in neuroimaging data. Furthermore, our results showed distinct spatio-temporal dynamics in the processing of conceptual and perceptual humanness features, with later and more lateralised brain responses to conceptual features. This study highlights the critical importance of social requirements in information processing and organisation in the human brain.

    • We used a previously published stimulus set and corresponding EEG data, obtainable from https://osf.io/a7knv/.
    • This repository contains the additional data and analysis code for the current study.

    To recreate the result figures from the paper, see the README in the code directory.

  14. Data from: Employer image within and across industries: Moving beyond assessing points-of-relevance to identifying points-of-difference

    • researchdata.smu.edu.sg
    • datasetcatalog.nlm.nih.gov
    txt
    Updated Jun 1, 2023
    Cite
    Filip Rene O LIEVENS; Greet VAN HOYE; Saartje CROMHEECKE; Bert WEIJTERS (2023). Data from: Employer image within and across industries: Moving beyond assessing points-of-relevance to identifying points-of-difference [Dataset]. http://doi.org/10.25440/smu.21731504.v1
    Available download formats: txt
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    SMU Research Data Repository (RDR)
    Authors
    Filip Rene O LIEVENS; Greet VAN HOYE; Saartje CROMHEECKE; Bert WEIJTERS
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    The data that support the findings of this study are available from the corresponding author upon reasonable request and approval of the HR consultancy firm the data were obtained from. The Mplus code for the CFA and multilevel analyses is available at: https://osf.io/6f47s/

    This study draws from brand positioning research to introduce the notions of points-of-relevance and points-of-difference to employer image research. Similar to prior research, this means that we start by investigating the relevant image attributes (points-of-relevance) that potential applicants use for judging organizations' attractiveness as an employer. However, we go beyond past research by examining whether the same points-of-relevance are used within and across industries. Next, we further extend current research by identifying which of the relevant image attributes also serve as points-of-difference for distinguishing between organizations and industries. The sample consisted of 24 organizations from 6 industries (total N = 7171). As a first key result, across industries and organizations, individuals attached similar importance to the same instrumental (job content, working conditions, and compensation) and symbolic (innovativeness, gentleness, and competence) image attributes in judging organizational attractiveness. Second, organizations and industries varied significantly on both instrumental and symbolic image attributes, with job content and innovativeness emerging as the strongest points-of-difference. Third, most image attributes showed greater variation between industries than between organizations, pointing at the importance of studying employer image at the industry level. Implications for recruitment research, employer branding, and best employer competitions are discussed.

  15. Reinforcement learning for proposing smoking cessation activities that build competencies: Combining two worldviews in a virtual coach - Data, analysis code, and appendix

    • data.4tu.nl
    zip
    Updated Aug 29, 2025
    Cite
    Nele Albers; Mark Neerincx; Willem-Paul Brinkman (2025). Reinforcement learning for proposing smoking cessation activities that build competencies: Combining two worldviews in a virtual coach - Data, analysis code, and appendix [Dataset]. http://doi.org/10.4121/261e07af-676b-4121-af6d-093954bf639b.v1
    Available download formats: zip
    Dataset updated
    Aug 29, 2025
    Dataset provided by
    4TU.ResearchData
    Authors
    Nele Albers; Mark Neerincx; Willem-Paul Brinkman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    2022 - 2023
    Description

    This repository contains the data, analysis code, and appendix for the paper "Reinforcement learning for proposing smoking cessation activities that build competencies: Combining two worldviews in a virtual coach" by Nele Albers, Mark A. Neerincx, and Willem-Paul Brinkman.


    The data and large parts of the analysis code were previously published in the repository for the PhD thesis chapter by Nele Albers, which can be found via this DOI: https://doi.org/10.4121/9c4d9c35-3330-4536-ab8d-d5bb237c277d. The data is unchanged, but this repository adds some analyses compared to the one for the PhD thesis chapter (e.g., reporting the characteristics of the participants of the repertory grid study with smokers, and repeating the analysis for AQ1 for different discount factors).


    Study

    The paper is based on data collected in three studies.


    Study 1

    We conducted this study on the online crowdsourcing platform Prolific between 6 September and 16 November 2022. The Human Research Ethics Committee of Delft University of Technology granted ethical approval for the research (Letter of Approval number: 2338).


    In this study, daily smokers who were contemplating or preparing to quit smoking first filled in a prescreening questionnaire and were then invited to a repertory grid study if they passed the prescreening. In the repertory grid study, participants were asked to divide sets of 3 preparatory activities for quitting smoking into two subgroups. Afterward, they rated all preparatory activities on the labels given to the subgroups.


    Participants also rated all preparatory activities on the perceived ease of doing them and the perceived required time to do them. This data can be found in this repository: https://doi.org/10.4121/5198f299-9c7a-40f8-8206-c18df93ee2a0.


    The study was pre-registered in the Open Science Framework (OSF): https://osf.io/cax6f.


    Study 2

    We performed a second repertory grid study with smoking cessation experts between September and October 2022. These smoking cessation experts were also asked to divide sets of 3 preparatory activities for quitting smoking into two subgroups based on the question “When it comes to competencies for quitting smoking that smokers build by doing the activities, how are two activities alike in some way but different from the third activity?”


    The study was pre-registered in OSF together with the repertory grid study with smokers: https://osf.io/cax6f. The same ethical approval also applies.


    Study 3

    We conducted a third study on the online crowdsourcing platform Prolific. In this study, daily smokers interacted with the conversational agent Mel in up to five conversational sessions between 21 July and 27 August 2023. The Human Research Ethics Committee of Delft University of Technology granted ethical approval for the research (Letter of Approval number: 2939) on 31 March 2023.


    In each session, participants were assigned a new activity for quitting smoking: one of 44 preparatory activities or one of 9 persuasive activities. 682 people started the first session and 349 people completed session 5.


    The study was pre-registered in OSF: https://doi.org/10.17605/OSF.IO/NUY4W.


    The implementation of the conversational agent Mel is available online: https://doi.org/10.5281/zenodo.8302492.


    Data

    We provide data on the 3 studies:

    -Data on study 1 (e.g., the subgroup labels and activity ratings provided by smokers). Additional data from study 1 not used in this paper can be found here: https://doi.org/10.4121/5198f299-9c7a-40f8-8206-c18df93ee2a0. To reproduce our reporting of the characteristics of the participants of this study, you will also need to download data from this other repository. More information on this is provided in Readme-files in this repository.

    -Data on study 2 (e.g., the subgroup labels and activity ratings provided by experts, as well as the self-reported expertise of the experts).

    -Data on study 3:

    1. Data from participants' Prolific profiles (e.g., age, gender)
    2. Data from the prescreening questionnaire (e.g., smoking frequency, quitter self-identity)
    3. Data from the conversational sessions with Mel (e.g., effort spent on activities)
    4. Data from the post-questionnaire (e.g., smoking frequency, quitter self-identity)
    5. Data from the follow-up questionnaire (e.g., smoking frequency, quitter self-identity, weekly exercise amount)
    6. The variable "rand_id" is a random participant identifier and can be used to link data from different data files (see the sketch below).
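
    As an illustration, linking two of the data files on rand_id might look like this minimal pandas sketch (the file names are hypothetical):

      import pandas as pd

      # Hypothetical file names; the repository's actual files may differ.
      prescreening = pd.read_csv("prescreening.csv")
      sessions = pd.read_csv("sessions.csv")

      # "rand_id" is the shared random participant identifier.
      linked = prescreening.merge(sessions, on="rand_id", how="inner")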


    Analysis code

    All our analyses are based on either R or Python. We provide code to allow them to be reproduced.


    Appendix

    We also provide the paper's Appendix, which includes, for example, the formulations of the 44 preparatory and 9 persuasive activities.



    In the case of questions, please contact Nele Albers (n.albers@tudelft.nl) or Willem-Paul Brinkman (w.p.brinkman@tudelft.nl).

  16. fMRI dataset: Violations of psychological and physical expectations in human adult brains

    • openneuro.org
    Updated Jan 17, 2024
    Cite
    Shari Liu; Kirsten Lydic; Lingjie Mei; Rebecca Saxe (2024). fMRI dataset: Violations of psychological and physical expectations in human adult brains [Dataset]. http://doi.org/10.18112/openneuro.ds004934.v1.0.0
    Dataset updated
    Jan 17, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Shari Liu; Kirsten Lydic; Lingjie Mei; Rebecca Saxe
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Dataset description

    This dataset contains fMRI data from adults from one paper, with two experiments in it:

    Liu, S., Lydic, K., Mei, L., & Saxe, R. (in press, Imaging Neuroscience). Violations of physical and psychological expectations in the human adult brain. Preprint: https://doi.org/10.31234/osf.io/54x6b

    All subjects who contributed data to this repository consented explicitly to share their de-faced brain images publicly on OpenNeuro. Experiment 1 has 16 subjects who gave consent to share (17 total), and Experiment 2 has 29 subjects who gave consent to share (32 total). Experiment 1 subjects have subject IDs starting with "SAXNES*", and Experiment 2 subjects have subject IDs starting with "SAXNES2*".

    • code/ contains contrast files used in published work
    • sub-SAXNES*/ contains anatomical and functional images, and event files for each functional image. Event files contain the onset, duration, and condition labels
    • CHANGES will be logged in this file

    Tasks

    • VOE (Experiment 1 version): Novel task using hand-crafted stimuli from developmental psychology, showing violations of object solidity and support, and violations of goal-directed and efficient action. There were only 4 sets of stimuli in this experiment, which repeated across runs. Shown in mini-blocks of familiarization + two test events.
    • VOE (Experiment 2 version): Novel task including all stimuli from Experiment 1 except for support, showing violations of object permanence and continuity (from ADEPT dataset; Smith et al. 2019) and violations of goal-directed and efficient action (from AGENT dataset; Shu et al. 2021). Shown in pairs of familiarization + one test event (either expected or unexpected). All subjects saw one set of stimuli in runs 1-2, and a second set of stimuli in runs 3-4. If someone saw an expected outcome from a scenario in one run, they saw the unexpected outcome from the same scenario in the other run.
    • DOTS (2 runs, both Exp 1-2): Task contrasting social and physical interaction (Fischer et al. 2016, PNAS). Designed to localize regions like the STS and SMG.
    • Motion: Task contrasting coherent and incoherent motion (Robertson et al. 2014, Brain). Designed to localize area MT.
    • spWM: Task contrasting a hard vs easy spatial working memory task (Fedorenko et al., 2013, PNAS). Designed to localize multiple demand regions.

    There are (anonymized) event files associated with each run, subject and task, and contrast files.

    Event files

    All event files, for all tasks, have the following cols: onset_time, duration, trial_type and response_time. Below are notes about subject-specific event files, followed by the task-specific columns and a reading sketch.

    • sub-SAXNES2s001: The original MotionLoc outputs list the first block, 10s into the experiment, as the first event. This was preceded by a 10s fixation. For s001, prior to updating the script to reflect this 10s lag, we had to do some estimation: we saw that, on average, each block was 11.8s but there was usually a 0.05s delay, such that each block started ~11.85s after the previous one. Thus we calculated start times as 11.85s after the previous block. For the rest of the subjects, the outputs were not manipulated; we just added an event to the start of the run.
    • sub-SAXNES2s013: no original event files for DOTS run 2; the provided event files use approximate timings based on inferred information about block order
    • sub-SAXNES2s018 (excluded from sample): no event files, because this subject stopped participating without having contributed a complete, low-motion run in which it was clear they were following the instructions for the task
    • sub-SAXNES2s019: no time to do run2 of DOTS or Motion, so only 1 run for those two
    • sub-SAXNES2s023, the event files from spWM run 1 did not save during scanning. We use timings from the default settings of condition 1 but we do not have trial-level data from this person.

    For the DOTS and VOE event files from Experiment 1, we have the additional columns:

    • experimentName ('DotsSocPhys' or 'VOESocPhys')
    • correct: at the end of the trial, subs made a response. In DOTS, they indicated whether the dot that disappeared reappeared at a plausible location. In VOE, they pressed a button when the fixation appeared as a cross rather than a plus sign. This col indicates whether the sub responded correctly (1/0)
    • stim_path: path to the stimuli, relative to the root BIDS directory, i.e. BIDS/stimuli/DOTS/xxxx

    For the DOTS event files from Experiment 2, we have the additional columns:

    • participant: redundant with the file name
    • experiment_name: name of the task, redundant with file name
    • block_order: which order the dots trials happened in (1 or 2)
    • prop_correct: the proportion of correct responses over the entire run

    For the Motion event files from Experiment 2, we have the additional columns:

    • experiment_name: name of the task, redundant with file name
    • block_order: which order the dots trials happened in (1 or 2)
    • event: the index of the current event (0-22)

    For the spWM event files from Experiment 2, we have the additional columns:

    • experiment_name: name of the task, redundant with file name
    • participant: redundant with the file name
    • block_order: which order the dots trials happened in (1 or 2)
    • run_accuracy_hard: the proportion of correct responses for the hard trials in this run
    • run_accuracy_easy: the proportion of correct responses for the easy trials in this run

    For the VOE event files from Experiment 2, we have the additional columns:

    • trial_type_specific: identifies trials at one more level of granularity, with respect to domain task and event (e.g. psychology_efficiency_unexp)
    • trial_type_morespecific: similar to trial_type_specific but includes information about domain task scenario and event (e.g. psychology_efficiency_trial-15-over_unexp)
    • experiment_name: name of the task, redundant with file name
    • participant: redundant with the file name
    • correct: whether the response for this trial was correct (1, or 0)
    • time_elapsed: how much time has elapsed by the end of this trial, in ms
    • trial_n: the index of the current event
    • correct_answer: what the correct answer was for the attention check (yes or no)
    • subject_correct: whether the subject in fact was correct in their response
    • event: fam, expected, or unexpected
    • identical_tests: were the test events identical, for this trial?
    • stim_ID: numerical string picking out each unique stimulus
    • scenario_string: string identifying each scenario within each task
    • domain: physics, psychology (psychology-action), both (psychology-environment)
    • task: solidity, permanence, goal, efficiency, infer-constraints, or agent-solidity
    • prop_correct: the proportion of correct responses over the entire run
    • stim_path: path to the stimuli, relative to the root BIDS directory, i.e. BIDS/stimuli/VOE/xxxx
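
    For orientation, a minimal pandas sketch for reading one of these event files; the file path is a hypothetical BIDS-style example:

      import pandas as pd

      # Hypothetical BIDS-style path for one subject/run of the VOE task.
      events = pd.read_csv(
          "sub-SAXNES2s001/func/sub-SAXNES2s001_task-VOE_run-1_events.tsv",
          sep="\t",
      )

      # Core columns shared by all tasks.
      print(events[["onset_time", "duration", "trial_type", "response_time"]].head())

      # Example use of the VOE-specific columns listed above.
      unexpected = events[events["event"] == "unexpected"]
      accuracy = events["correct"].mean()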


  17. TODO: name of the dataset

    • openneuro.org
    Updated Sep 25, 2024
    Cite
    TODO:; First1 Last1; First2 Last2; ... (2024). TODO: name of the dataset [Dataset]. http://doi.org/10.18112/openneuro.ds005295.v1.0.1
    Dataset updated
    Sep 25, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    TODO:; First1 Last1; First2 Last2; ...
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Data for Reading reshapes stimulus selectivity in the visual word form area

    This contains the raw and pre-processed fMRI data and structural images (T1) used in the article, "Reading reshapes stimulus selectivity in the visual word form area." The preprint is available here, and the article will be in press at eNeuro.

    Additional processed data and analysis code are available in an OSF repository.

    Details about the study are included here.

    Participants

    We recruited 17 participants (age range 19 to 38, mean 21.12 ± 4.44; 4 self-identified as male, 1 left-handed) from the Barnard College and Columbia University student body. The study was approved by the Institutional Review Board at Barnard College, Columbia University. All participants provided written informed consent, acquired digitally, and were monetarily compensated for their participation. All participants had learned English before the age of five.

    To ensure high data quality, we used the following criteria for excluding functional runs and participants. If the participant moved by a distance greater than 2 voxels (4 mm) within a single run, that run was excluded from analysis. Additionally, if the participant responded in less than 50% of the trials in the main experiment, that run was removed. Finally, if half or more of the runs met any of these criteria for a single participant, that participant was dropped from the dataset. Using these constraints, the analysis reported here is based on data from 16 participants. They ranged in age from 19 to 38 years (mean = 21.12 ± 4.58). 4 participants self-identified as male, and 1 was left-handed. A total of 6 runs were removed from three of the remaining participants due to excessive head motion.
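
    To make the exclusion rules concrete, a minimal sketch (not the authors' code) with hypothetical per-run summary values:

      # Hypothetical per-run summaries for one participant.
      runs = [
          {"max_motion_mm": 1.2, "response_rate": 0.92},
          {"max_motion_mm": 5.1, "response_rate": 0.88},  # > 4 mm: excluded
      ]

      VOXEL_MM = 2.0  # 2 mm isotropic voxels, so 2 voxels = 4 mm

      def run_excluded(run):
          # Exclude a run for >2 voxels (4 mm) of motion or <50% responses.
          return run["max_motion_mm"] > 2 * VOXEL_MM or run["response_rate"] < 0.5

      n_excluded = sum(run_excluded(r) for r in runs)
      # Drop the participant when half or more of their runs are excluded.
      drop_participant = n_excluded >= len(runs) / 2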

    Equipment

    We collected MRI data at the Zuckerman Institute, Columbia University, using a 3T Siemens Prisma scanner and a 64-channel head coil. In each MR session, we acquired a T1-weighted structural scan, with voxels measuring 1 mm isometrically. We acquired functional data with a T2* echo planar imaging sequence with multiband echo sequencing (SMS3) for whole-brain coverage. The TR was 1.5 s, the TE was 30 ms, and the flip angle was 62°. The voxel size was 2 mm isotropic.

    Stimuli were presented on an LCD screen that the participants viewed through a mirror with a viewing distance of 142 cm. The display had a resolution of 1920 by 1080 pixels, and a refresh rate of 60 Hz. We presented the stimuli using custom code written in MATLAB and the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). Throughout the scan, we recorded monocular gaze position using an SR Research Eyelink 1000 tracker. Participants responded with their right hand via three buttons on an MR-safe response pad.

    Tasks

    Main Task

    Participants performed three different tasks during different runs, two of which required attending to the character strings, and one that encouraged participants to ignore them. In the lexical decision task, participants reported whether the character string on each trial was a real word or not. In the stimulus color task, participants reported whether the color of the character string was red or gray. In the fixation color task, participants reported whether or not the fixation dot turned red.

    On each trial, a single character string flashed for 150 ms at one of three locations: centered at fixation, 3 dva left, or 3 dva right. The stimulus was followed by a blank with only the fixation mark present for 3850 ms, during which the participant had the opportunity to respond with a button press. After every five trials, there was a rest period (no task except to fixate on the dot). The duration of the rest period was 4, 6, or 8 s (randomly and uniformly selected).

    Localizer for visual category-selective ventral temporal regions

    Participants viewed sequences of images, each of which contained 3 items of one category: words, pseudowords, false fonts, faces, and limbs. Participants performed a one-back repetition detection task. On 33% of the trials, the exact same images flashed twice in a row. The participant’s task was to push a button with their right index finger whenever they detected such a repetition. Each participant performed 4 runs of the localizer task. Each run consisted of 77 four-second trials, lasting for approximately 6 minutes. Each category was presented 56 times through the course of the experiment.

    Localizer for language processing regions

    The stimuli on each trial were a sequence of 12 written words or pronounceable pseudowords, presented one at a time. The words were presented as meaningful sentences, while pseudowords formed “Jabberwocky” phrases that served as a control condition. Participants were instructed to read the stimuli silently to themselves, and also to push a button upon seeing the icon of a hand that appeared between trials. Participants performed three runs of the language localizer. Each run included 16 trials and lasted for 6 minutes. Each trial lasted for 6 s, beginning with a blank screen for 100 ms, followed by the presentation of 12 words or pseudowords for 450 ms each (5400 ms total), followed by a response prompt for 400 ms and a final blank screen for 100 ms. Each run also included 5 blank trials (6 seconds each).

    Data organization

    This repository contains three main folders, complying with BIDS specifications.

    • Inputs contain BIDS-compliant raw data, with the only change being defacing of the anatomicals using pydeface. Data were converted to BIDS format using heudiconv.
    • Outputs contain preprocessed data obtained using fMRIPrep. In addition to subject-specific folders, we also provide the freesurfer reconstructions obtained using fMRIPrep, with defaced anatomicals. Subject-specific ROIs are also included in the label folder for each subject in the freesurfer directory.
    • Derivatives contain all additional whole-brain analyses performed on this dataset.

  18. The Upworthy Research Archive

    • berd-platform.de
    Updated Jul 31, 2025
    + more versions
    Cite
    J. Nathan Matias; Kevin Munger; Marianne Aubin Le Quere; Charles Ebersole (2025). The Upworthy Research Archive [Dataset]. http://doi.org/10.17605/osf.io/jd64p
    Dataset updated
    Jul 31, 2025
    Dataset provided by
    Upworthy (http://upworthy.com/)
    Authors
    J. Nathan Matias; Kevin Munger; Marianne Aubin Le Quere; Charles Ebersole
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Upworthy Research Archive is an open dataset of thousands of A/B tests of headlines conducted by Upworthy from January 2013 to April 2015. This repository includes the full data from the archive.

  19. Datasets exploring metadata commonalities across restricted health data sources in Canada

    • nrc-digital-repository.canada.ca
    • depot-numerique-cnrc.canada.ca
    Updated Nov 21, 2025
    Cite
    Read, Kevin B.; Gibson, Grant; Leahey, Ambery; Peterson, Lynn; Rutley, Sarah; Shi, Julie; Smith, Victoria; Stathis, Kelly (2025). Datasets exploring metadata commonalities across restricted health data sources in Canada [Dataset]. http://doi.org/10.17605/OSF.IO/TXRVE
    Dataset updated
    Nov 21, 2025
    Dataset provided by
    OSF
    Authors
    Read, Kevin B.; Gibson, Grant; Leahey, Ambery; Peterson, Lynn; Rutley, Sarah; Shi, Julie; Smith, Victoria; Stathis, Kelly
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Canada
    Description

    This project includes three datasets: the first dataset compiles dataset metadata commonalities that were identified from 48 Canadian restricted health data sources. The second dataset compiles access process metadata commonalities extracted from the same 48 data sources. The third dataset maps metadata commonalities of the first dataset to existing metadata standards including DataCite, DDI, DCAT, and DATS. This mapping exercise was completed to determine whether metadata used by restricted data sources aligned with existing standards for research data.

  20. Human electroencephalography recordings from 50 subjects for 22,248 images from 1,854 object concepts

    • openneuro.org
    Updated Dec 3, 2021
    Cite
    Tijl Grootswagers; Ivy Zhou; Amanda Robinson; Martin Hebart; Thomas Carlson (2021). Human electroencephalography recordings from 50 subjects for 22,248 images from 1,854 object concepts [Dataset]. http://doi.org/10.18112/openneuro.ds003825.v1.1.1
    Dataset updated
    Dec 3, 2021
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Tijl Grootswagers; Ivy Zhou; Amanda Robinson; Martin Hebart; Thomas Carlson
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Experiment Details

    Human electroencephalography recordings from 50 subjects for 1,854 concepts and 22,248 images in the THINGS stimulus database. Images were presented in rapid serial visual presentation streams at a 10 Hz rate. Participants performed an orthogonal fixation colour change detection task.

    Experiment length: 1 hour

    More information:

    https://osf.io/hd6zk/ (OSF repository)

    https://doi.org/10.1101/2021.06.03.447008 (preprint)
