16 datasets found
  1. Datasets exploring metadata commonalities across restricted health data sources in Canada

    • nrc-digital-repository.canada.ca
    • depot-numerique-cnrc.canada.ca
    Updated Nov 21, 2025
    Cite
    Read, Kevin B.; Gibson, Grant; Leahey, Ambery; Peterson, Lynn; Rutley, Sarah; Shi, Julie; Smith, Victoria; Stathis, Kelly (2025). Datasets exploring metadata commonalities across restricted health data sources in Canada [Dataset]. http://doi.org/10.17605/OSF.IO/TXRVE
    Explore at:
    Dataset updated
    Nov 21, 2025
    Dataset provided by
    OSF
    Authors
    Read, Kevin B.; Gibson, Grant; Leahey, Ambery; Peterson, Lynn; Rutley, Sarah; Shi, Julie; Smith, Victoria; Stathis, Kelly
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Canada
    Description

    This project includes three datasets: the first dataset compiles dataset metadata commonalities that were identified from 48 Canadian restricted health data sources. The second dataset compiles access process metadata commonalities extracted from the same 48 data sources. The third dataset maps metadata commonalities of the first dataset to existing metadata standards including DataCite, DDI, DCAT, and DATS. This mapping exercise was completed to determine whether metadata used by restricted data sources aligned with existing standards for research data.
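    As an illustration of what such a standards crosswalk looks like in practice, here is a minimal, hypothetical Python sketch; every element name below is an illustrative placeholder, not a field taken from the actual dataset:

    ```python
    # Hypothetical crosswalk from a metadata commonality to its closest
    # counterpart in each standard. All element names are illustrative only.
    crosswalk = {
        "access_conditions": {
            "DataCite": "Rights",
            "DDI": "useStmt/restrctn",
            "DCAT": "dct:accessRights",
            "DATS": "access",
        },
        "data_custodian_contact": {
            "DataCite": "Contributor (ContactPerson)",
            "DDI": "contact",
            "DCAT": "dcat:contactPoint",
            "DATS": None,  # a gap, the kind of mismatch a mapping exercise surfaces
        },
    }

    # Report which commonalities lack a counterpart in a given standard.
    for element, targets in crosswalk.items():
        missing = [std for std, field in targets.items() if field is None]
        print(element, "-> unmapped in:", missing or "none")
    ```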

  2. Grid Frequency Data

    • kaggle.com
    zip
    Updated Oct 30, 2022
    Cite
    Harpreet (2022). Grid Frequency Data [Dataset]. https://www.kaggle.com/datasets/harpree/grid-frequency-data
    Explore at:
    zip (228,651 bytes). Available download formats
    Dataset updated
    Oct 30, 2022
    Authors
    Harpreet
    Description

    About this dataset

    The data is made available by OSF (https://osf.io/by5hu/). This dataset contains precisely time-stamped frequency data from several power grids around the world at one-second resolution. The data was originally recorded on different dates and at different times; only the hour from 11 AM to 12 PM is included here, to allow fair comparisons under the assumption of a standard demand curve across countries, days, and months.

    The raw data for each city is available on OSF (https://osf.io/by5hu/).

    Research Ideas

    • Determine how large frequency deviations are in different parts of the world, as an indicator of grid stability
    • Identify the characteristics an energy storage system would need in order to provide primary frequency control and stabilize the grid

    Columns

    This dataset provides frequency deviation data (actual frequency minus nominal frequency, 50 or 60 Hz), expressed in mHz. Only one hour of data (11 AM - 12 PM) is provided, for the following cities:

    Column Name | Description
    Time Stamp  | For one hour (11 AM - 12 PM)
    CAP         | Frequency deviation of Cape Town
    TEX         | Frequency deviation of College Station, Texas
    CAN         | Frequency deviation of Las Palmas de Gran Canaria, Canary Islands
    LIS         | Frequency deviation of Lisbon
    SAL         | Frequency deviation of Salt Lake City, Utah
    STO         | Frequency deviation of Stockholm
    TAL         | Frequency deviation of Tallinn
    VES         | Frequency deviation of Vestmanna
    REY         | Frequency deviation of Reykjavík
    LON         | Frequency deviation of London
    SIC         | Frequency deviation of Sicily
    KK          | Frequency deviation of Kraków
    LAU         | Frequency deviation of Lauris
    SPL         | Frequency deviation of Split
    PET         | Frequency deviation of St. Petersburg
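    As a quick illustration of working with these columns, a minimal pandas sketch (the file name is an assumption; adjust to the actual Kaggle download):

    ```python
    import pandas as pd

    # Assumed file name from the Kaggle download; adjust to the actual file.
    df = pd.read_csv("grid_frequency_data.csv")

    # All columns except the timestamp hold deviations in mHz from the
    # nominal frequency (50 or 60 Hz depending on the grid).
    cities = [c for c in df.columns if c != "Time Stamp"]
    stats = df[cities].agg(["mean", "std", "min", "max"]).T

    # A larger standard deviation suggests a less stable grid.
    print(stats.sort_values("std", ascending=False))
    ```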

    Terms of Use

    https://github.com/CenterForOpenScience/cos.io/blob/master/TERMS_OF_USE.md

  3. fMRI dataset: Violations of psychological and physical expectations in human...

    • openneuro.org
    Updated Jan 17, 2024
    Cite
    Shari Liu; Kirsten Lydic; Lingjie Mei; Rebecca Saxe (2024). fMRI dataset: Violations of psychological and physical expectations in human adult brains [Dataset]. http://doi.org/10.18112/openneuro.ds004934.v1.0.0
    Explore at:
    Dataset updated
    Jan 17, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Shari Liu; Kirsten Lydic; Lingjie Mei; Rebecca Saxe
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Dataset description

    This dataset contains fMRI data from adults, from one paper comprising two experiments:

    Liu, S., Lydic, K., Mei, L., & Saxe, R. (in press, Imaging Neuroscience). Violations of physical and psychological expectations in the human adult brain. Preprint: https://doi.org/10.31234/osf.io/54x6b

    All subjects who contributed data to this repository consented explicitly to share their de-faced brain images publicly on OpenNeuro. Experiment 1 has 16 subjects who gave consent to share (17 total), and Experiment 2 has 29 subjects who gave consent to share (32 total). Experiment 1 subjects have subject IDs starting with "SAXNES*", and Experiment 2 subjects have subject IDs starting with "SAXNES2*".

    • code/ contains contrast files used in published work
    • sub-SAXNES*/ contains anatomical and functional images, and event files for each functional image. Event files contain the onset, duration, and condition labels
    • CHANGES will be logged in this file

    Tasks

    • VOE (Experiment 1 version): Novel task using hand-crafted stimuli from developmental psychology, showing violations of object solidity and support, and violations of goal-directed and efficient action. There were only 4 sets of stimuli in this experiment, which repeated across runs. Shown in mini-blocks of familiarization + two test events.
    • VOE (Experiment 2 version): Novel task including all stimuli from Experiment 1 except for support, showing violations of object permanence and continuity (from ADEPT dataset; Smith et al. 2019) and violations of goal-directed and efficient action (from AGENT dataset; Shu et al. 2021). Shown in pairs of familiarization + one test event (either expected or unexpected). All subjects saw one set of stimuli in runs 1-2, and a second set of stimuli in runs 3-4. If someone saw an expected outcome from a scenario in one run, they saw the unexpected outcome from the same scenario in the other run.
    • DOTS (2 runs, both Exp 1-2): Task contrasting social and physical interaction (Fischer et al. 2016, PNAS). Designed to localize regions like the STS and SMG.
    • Motion: Task contrasting coherent and incoherent motion (Robertson et al. 2014, Brain). Designed to localize area MT.
    • spWM: Task contrasting a hard vs easy spatial working memory task (Fedorenko et al., 2013, PNAS). Designed to localize multiple demand regions.

    There are (anonymized) event files associated with each run, subject and task, and contrast files.

    Event files

    All event files, for all tasks, have the following columns: onset_time, duration, trial_type, and response_time. Below are notes about subject-specific event files.

    • sub-SAXNES2s001: The original MotionLoc outputs list the first block, 10 s into the experiment, as the first event; it was preceded by a 10 s fixation. For s001, before updating the script to reflect this 10 s lag, we had to do some estimation: we saw that, on average, each block lasted 11.8 s with a typical 0.05 s delay, such that each block started ~11.85 s after the previous one. We therefore calculated each start time as 11.85 s after the previous block (see the sketch after this list). For the rest of the subjects, the outputs were not manipulated; we just added an event to the start of the run.
    • sub-SAXNES2s013: no event files for DOTS run 2; the provided event files instead use approximate timings based on inferred block order
    • sub-SAXNES2s018 (excluded from sample): no event files, because this subject stopped participating without contributing a complete, low-motion run in which they were clearly following the task instructions
    • sub-SAXNES2s019: no time to do run 2 of DOTS or Motion, so there is only 1 run for those two tasks
    • sub-SAXNES2s023: the event files from spWM run 1 did not save during scanning. We use timings from the default settings of condition 1, but we do not have trial-level data from this participant.
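    The onset reconstruction described for sub-SAXNES2s001 is just a cumulative spacing calculation; a minimal sketch (the block count is illustrative, not the actual run length):

    ```python
    # Estimated MotionLoc block onsets for sub-SAXNES2s001: the first block
    # starts after a 10 s fixation, and each subsequent block is assumed to
    # start 11.85 s (11.8 s block + 0.05 s delay) after the previous one.
    N_BLOCKS = 21  # illustrative count
    onsets = [10.0 + i * 11.85 for i in range(N_BLOCKS)]
    print(onsets[:3])  # [10.0, 21.85, 33.7]
    ```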

    For the DOTS and VOE event files from Experiment 1, we have the additional columns:

    • experimentName ('DotsSocPhys' or 'VOESocPhys')
    • correct: at the end of the trial, subjects made a response. In DOTS, they indicated whether the dot that disappeared reappeared at a plausible location. In VOE, they pressed a button when the fixation appeared as a cross rather than a plus sign. This column indicates whether the subject responded correctly (1/0)
    • stim_path: path to the stimuli, relative to the root BIDS directory, i.e. BIDS/stimuli/DOTS/xxxx

    For the DOTS event files from Experiment 2, we have the additional columns:

    • participant: redundant with the file name
    • experiment_name: name of the task, redundant with file name
    • block_order: which order the dots trials happened in (1 or 2)
    • prop_correct: the proportion of correct responses over the entire run

    For the Motion event files from Experiment 2, we have the additional columns:

    • experiment_name: name of the task, redundant with file name
    • block_order: which order the dots trials happened in (1 or 2)
    • event: the index of the current event (0-22)

    For the spWM event files from Experiment 2, we have the additional columns:

    • experiment_name: name of the task, redundant with file name
    • participant: redundant with the file name
    • block_order: which order the dots trials happened in (1 or 2)
    • run_accuracy_hard: the proportion of correct responses for the hard trials in this run
    • run_accuracy_easy: the proportion of correct responses for the easy trials in this run

    For the VOE event files from Experiment 2, we have the additional columns:

    • trial_type_specific: identifies trials at one more level of granularity, with respect to domain task and event (e.g. psychology_efficiency_unexp)
    • trial_type_morespecific: similar to trial_type_specific but includes information about domain task scenario and event (e.g. psychology_efficiency_trial-15-over_unexp)
    • experiment_name: name of the task, redundant with file name
    • participant: redundant with the file name
    • correct: whether the response for this trial was correct (1, or 0)
    • time_elapsed: how much time has elapsed by the end of this trial, in ms
    • trial_n: the index of the current event
    • correct_answer: what the correct answer was for the attention check (yes or no)
    • subject_correct: whether the subject in fact was correct in their response
    • event: fam, expected, or unexpected
    • identical_tests: were the test events identical, for this trial?
    • stim_ID: numerical string picking out each unique stimulus
    • scenario_string: string identifying each scenario within each task
    • domain: physics, psychology (psychology-action), both (psychology-environment)
    • task: solidity, permanence, goal, efficiency, infer-constraints, or agent-solidity
    • prop_correct: the proportion of correct responses over the entire run
    • stim_path: path to the stimuli, relative to the root BIDS directory, i.e. BIDS/stimuli/VOE/xxxx
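    Since these are tab-separated BIDS-style event files, they can be inspected with ordinary tooling; a minimal pandas sketch (the file path is hypothetical):

    ```python
    import pandas as pd

    # Hypothetical path to one Experiment 2 VOE run's events file.
    ev = pd.read_csv(
        "sub-SAXNES2s001/func/sub-SAXNES2s001_task-VOE_run-1_events.tsv",
        sep="\t",
    )

    # The four core columns present in every task's event files:
    core = ev[["onset_time", "duration", "trial_type", "response_time"]]

    # Experiment 2 VOE files add finer-grained labels, e.g. unexpected
    # psychology-efficiency events:
    unexp = ev[ev["trial_type_specific"] == "psychology_efficiency_unexp"]
    print(len(unexp))
    ```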


  4. Online Appendix and Cetacean Datasets for: The Occurrence Birth-Death Process for combined-evidence analysis in macroevolution and epidemiology

    • data-staging.niaid.nih.gov
    • datadryad.org
    • +1 more
    zip
    Updated Apr 25, 2022
    Cite
    Jérémy Andréoletti; Antoine Zwaans; Rachel Warnock; Gabriel Aguirre-Fernández; Joëlle Barido-Sottani; Ankit Gupta; Tanja Stadler; Marc Manceau (2022). Online Appendix and Cetacean Datasets for: The Occurrence Birth-Death Process for combined-evidence analysis in macroevolution and epidemiology [Dataset]. http://doi.org/10.5061/dryad.p8cz8w9rq
    Explore at:
    zip. Available download formats
    Dataset updated
    Apr 25, 2022
    Dataset provided by
    Iowa State University
    ETH Zurich
    Friedrich-Alexander-Universität Erlangen-Nürnberg
    University of Zurich
    Authors
    Jérémy Andréoletti; Antoine Zwaans; Rachel Warnock; Gabriel Aguirre-Fernández; Joëlle Barido-Sottani; Ankit Gupta; Tanja Stadler; Marc Manceau
    License

    CC0 1.0: https://spdx.org/licenses/CC0-1.0.html

    Description

    Phylodynamic models generally aim at jointly inferring phylogenetic relationships, model parameters, and more recently, the number of lineages through time, based on molecular sequence data. In the fields of epidemiology and macroevolution these models can be used to estimate, respectively, the past number of infected individuals (prevalence) or the past number of species (paleodiversity) through time. Recent years have seen the development of “total-evidence” analyses, which combine molecular and morphological data from extant and past sampled individuals in a unified Bayesian inference framework. Even sampled individuals characterized only by their sampling time, i.e. lacking morphological and molecular data, which we call occurrences, provide invaluable information to reconstruct the past number of lineages.

    Here, we present new methodological developments around the Fossilized Birth-Death Process enabling us to (i) incorporate occurrence data in the likelihood function; (ii) consider piecewise-constant birth, death and sampling rates; and (iii) reconstruct the past number of lineages, with or without knowledge of the underlying tree. We implement our method in the RevBayes software environment, enabling its use along with a large set of models of molecular and morphological evolution, and validate the inference workflow using simulations under a wide range of conditions.

    We finally illustrate our new implementation using two empirical datasets stemming from the fields of epidemiology and macroevolution. In epidemiology, we infer the prevalence of the COVID-19 outbreak on the Diamond Princess ship, jointly taking into account the case count record (occurrences) and viral sequences for a fraction of infected individuals. In macroevolution, we infer the diversity trajectory of cetaceans using molecular and morphological data from extant taxa, morphological data from fossils, as well as numerous fossil occurrences. The joint modeling of occurrences and trees holds the promise to further bridge the gap between traditional epidemiology and pathogen genomics, as well as paleontology and molecular phylogenetics.

    Methods

    Online Appendix: available in the Related Works section.

    Cetacean Datasets [copied from the subsection Material and methods > Cetacean data analysis > Molecular, morphological and occurrence datasets of the main paper]: The data can be subdivided into three parts: molecular, morphological, and occurrences. Datasets were collected and analysed separately and are stored on the Open Science Framework (https://osf.io) ([dataset] Aguirre-Fernández et al., 2020).

    Molecular data come from Steeman et al. (2009) and comprise 6 mitochondrial and 9 nuclear genes for 87 of the 89 accepted extant cetacean species.

    Morphological data were obtained from Churchill et al. (2018), the most recent version of a widely-used dataset first produced by Geisler and Sanders (2003). After merging 2 taxa that are now considered synonyms on the Paleobiology Database (PBDB) and removing 3 outgroups that would have violated our model's assumptions, it contains 327 variable morphological characters for 27 extant and 90 fossil taxa (mostly identified at the species level, but 21 remain undescribed). To speed up the analysis, we further excluded the undescribed specimens and reduced this dataset to the generic level by selecting the most complete specimen in each genus. Indeed, the computing cost increases quadratically with the maximum number of hidden lineages N, to the point of becoming the bottleneck in our MCMC when N > 100. Given that a mid-Miocene peak diversity between 100 and 220 species is expected (Quental and Marshall, 2010), with fewer than 100 observed lineages in our inferred tree at that time, N would need to be about 150. Inferring instead the tree of cetacean genera allows us to reduce N to 70 hidden lineages. The final dataset thus contains 41 extant and 62 extinct genera.

    Occurrences come from the PBDB (data archive 9, M. D. Uhen), downloaded on May 11, 2020. The dataset initially consisted of all 4678 cetacean occurrences, but the cetacean fossil record is known to be subject to several biases (Uhen and Pyenson, 2007; Marx et al., 2016; Dominici et al., 2020). A detailed exploration of this occurrence dataset (see Online Appendix E) revealed several notable biases. First, an artefactual cluster of occurrences in very recent times, combined with other expected Pleistocene biases (Dominici et al., 2020), led us to remove all Late Pleistocene and Holocene occurrences. Second, we detected substantial variation in fossil recovery per time unit across lineages (see Fig. S10), resulting from oversampling of some species and localities, possibly due to greater abundance or spatio-temporal biases (Dominici et al., 2020). This observation violates our assumption of identical fossil sampling rates among taxa during a given interval. To reduce this bias, we retained occurrences identified at the genus level and aggregated all occurrences belonging to an identical genus found at the same geological formation. For occurrences whose geological formation was not specified, we used geoplate data combined with the stratigraphic interval as a proxy for geological formation. This resulted in a total of 968 occurrences retained for the analysis.

  5. Table_1_The Chinese customers and service staff interactive affective system (CCSIAS): introduction to a multimodal stimulus dataset.XLSX

    • frontiersin.figshare.com
    • figshare.com
    xlsx
    Updated May 3, 2024
    Cite
    Ping Liu; Yi Zhang; Ziyue Xiong; Ying Gao (2024). Table_1_The Chinese customers and service staff interactive affective system (CCSIAS): introduction to a multimodal stimulus dataset.XLSX [Dataset]. http://doi.org/10.3389/fpsyg.2024.1302253.s001
    Explore at:
    xlsxAvailable download formats
    Dataset updated
    May 3, 2024
    Dataset provided by
    Frontiers
    Authors
    Ping Liu; Yi Zhang; Ziyue Xiong; Ying Gao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In research on the emotional interaction between customers and service staff, single-modal stimuli are typically used to activate subjects' emotions, while multimodal emotion stimuli, despite their better efficiency, are often neglected. This study constructs a multimodal emotion stimulus database (CCSIAS) from video recordings of the real work status of 29 service staff and audio clips of interactions between customers and service staff, collected by setting up wide-angle cameras and searching the company's Ocean Engine for 15 consecutive days. First, in Study 1 we developed a tool to assess the emotional statuses of customers and service staff. Second, in Study 2, 40 Master's and PhD students were invited to assess the audio and video data, using the tool developed in Study 1, to evaluate the emotional states of customers and service staff. Third, 118 participants were recruited to test the results of Study 2 and ensure the stability of the derived data. The results yielded 139 sets of stable emotional audio and video data (26 high, 59 medium, and 54 low). The amount of emotional information matters for effectively activating participants' emotional states, and the degree of emotional activation by video data is significantly higher than by audio data. Overall, the study of emotional interaction phenomena requires a multimodal dataset. The CCSIAS (https://osf.io/muc86/) can extend the depth and breadth of emotional interaction research and can be applied to activating different emotional states between customers and service staff in the fields of organizational behavior and psychology.

  6. Dataset for The Relationship between Number Line Estimation and Mathematical Reasoning: A Quantile Regression Approach

    • figshare.mq.edu.au
    • researchdata.edu.au
    Updated May 23, 2023
    Cite
    Rebecca Bull; Carola Ruiz Hornblas; Saskia Kohnen (2023). Dataset for The Relationship between Number Line Estimation and Mathematical Reasoning: A Quantile Regression Approach [Dataset]. http://doi.org/10.25949/22757006.v1
    Explore at:
    Dataset updated
    May 23, 2023
    Dataset provided by
    Macquarie University
    Authors
    Rebecca Bull; Carola Ruiz Hornblas; Saskia Kohnen
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    The sample included in this dataset represents children who participated in a cross-sectional study, a smaller cohort of which was followed up as part of a longitudinal study reported elsewhere (Bull et al., 2021). In the original study, 347 children were recruited. As data were found to be likely missing completely at random (χ2 = 29.445, df = 24, p = .204; Little, 1998), listwise deletion was used and 23 observations were removed from the original dataset. This dataset includes the 324 participants that composed the final sample of the study (162 boys, Mage = 6.2 years, SDage = 0.3 years). Children in this sample were in their second year of kindergarten (i.e., the year before starting primary school) in Singapore. The dataset includes children's sociodemographic information (i.e., age and sex) and performance on general cognitive and mathematical tasks.

    Mathematical tasks:
    • Computer-based 0-10 and 0-100 number line task
    • Mathematical Reasoning and Numerical Operations subtests from the Wechsler Individual Achievement Test II (WIAT II); the Numerical Operations subtest is not used in this study

    General cognitive tasks:
    • Peabody Picture Vocabulary Test (vocabulary)
    • Raven’s Progressive Matrices Test (non-verbal reasoning)

    The variables included in this dataset are:
    • Age = Child’s age (in months)
    • Sex = Boy/Girl (parent reported; boy = 1, girl = 2)
    • Ravens = Non-verbal reasoning (Raven’s Progressive Matrices test)
    • Ppvt = Vocabulary raw score (Peabody Picture Vocabulary Test)
    • Maths_reason = Mathematical reasoning raw score (Mathematical Reasoning subtest, WIAT II)
    • Num_Ops = Numerical Operations raw score (Numerical Operations subtest, WIAT II; not used in this study)
    • NLE10 = 0-10 number line (percent absolute error)
    • NLE100 = 0-100 number line (percent absolute error)

    This dataset overlaps with the dataset underlying: Ruiz, C., Kohnen, S., & Bull, R. (2023). Number Line Estimation Patterns and Their Relationship with Mathematical Performance. Journal of Numerical Cognition. Advance online publication. https://doi.org/10.23668/psycharchives.12698. That project’s corresponding OSF page can be found at https://osf.io/jat5h/ and the dataset is stored under embargo at https://doi.org/10.25949/22558528.v1.
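    Given the study's quantile regression framing, a minimal sketch of how such an analysis might look with these variable names (an illustration under assumed file naming, not the authors' code):

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical file name; the dataset is actually distributed via
    # Macquarie University's repository.
    df = pd.read_csv("number_line_dataset.csv")

    # Quantile regression of mathematical reasoning on 0-100 number line
    # percent absolute error, estimated at three quantiles.
    for q in (0.25, 0.50, 0.75):
        res = smf.quantreg("Maths_reason ~ NLE100", df).fit(q=q)
        print(f"q={q}: slope={res.params['NLE100']:.3f}")
    ```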

  7. MRI data from 20 adults in response to videos of dialogue and monologue from...

    • openneuro.org
    Updated Feb 7, 2023
    Cite
    Halie Olson; Emily Chen; Kirsten Lydic; Rebecca Saxe (2023). MRI data from 20 adults in response to videos of dialogue and monologue from Sesame Street [Dataset]. http://doi.org/10.18112/openneuro.ds004467.v1.0.0
    Explore at:
    Dataset updated
    Feb 7, 2023
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Halie Olson; Emily Chen; Kirsten Lydic; Rebecca Saxe
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Experiment

    20 adult participants (18 participants consented to open data sharing and are included here) watched video clips from Sesame Street, in which the audio was played either forward or reversed. Code and stimuli descriptions shared here: https://osf.io/whsb7/. We also scanned participants on two localizer tasks.

    SS-BlockedLang Language Task (litshort) 2x2 block task design with four conditions: Forward Dialogue, Forward Monologue, Backward Dialogue, and Backward Monologue. Participants were asked to watch the 20-second videos and press a button on an in-scanner button box when they saw a still image of Elmo appear on the screen after each 20-second block. Participants completed 4 runs, each 6 min 18 sec long. Each run contained unique clips, and participants never saw a Forward and Backward version of the same clip. Each run contained 3 sets of 4 blocks, one of each condition (total of 12 blocks), with 22-second rest blocks after each set of 4 blocks. Forward and Backward versions of each clip were counterbalanced between participants (randomly assigned Set A or Set B). Run order was randomized for each participant.

    SS-IntDialog Language Task (litlong) 1–3-minute dialogue clips of Sesame Street in which one character’s audio stream was played Forward and the other was played Backward. Additional sounds in the video (e.g., blowing bubbles, a crash from something falling) were played forward. Participants watched the videos and pressed a button on an in-scanner button box when they saw a still image of Elmo appear on the screen immediately after each block. Participants completed 2 runs, each approximately 8 min 52 sec long. Each run contained unique clips, and participants never saw a version of the same clip with the Forward/Backward streams reversed. Each run contained 3 clips, 1-3 minutes each, presented in the same order. Between each video, as well as at the beginning and end of the run, there was a 22-second fixation block. Versions of each clip with the opposite character Forward and Backward were counterbalanced between participants (randomly assigned Set A or Set B). 11 participants saw version A and 9 saw version B (one run from group A was excluded because the participant fell asleep, and one run from group B was excluded due to motion). Run order was randomized for each participant (random sequence 1-2).

    Auditory Language Localizer (langloc) We used a localizer task previously validated for identifying high-level language processing regions (Scott et al., 2017). Participants listened to Intact and Degraded 18-second blocks of speech. The Intact condition consisted of audio clips of spoken English (e.g., clips from interviews in which one person is speaking), and the Degraded condition consisted of acoustically degraded versions of these clips. Participants viewed a black dot on a white background during the task while passively listening to the auditory stimuli. 14-second fixation blocks (no sound) were present after every 4 speech blocks, as well as at the beginning and end of each run (5 fixation blocks per run). Participants completed two runs, each approximately 6 min 6 sec long. Each run contained 16 blocks of speech (8 intact, 8 degraded).

    Theory of Mind Localizer (tomloc) We used a task previously validated for identifying regions that are involved in ToM and social cognition (Dodell-Feder et al., 2011). Participants read short stories in two conditions: False Beliefs and False Photos. Stories in the False Beliefs condition described scenarios in which a character holds a false belief. Stories in the False Photos condition described outdated photographs and maps. Each story was displayed in white text on a black screen for 10 seconds, followed by a 4-second true/false question based on the story (which participants responded to via the in-scanner button box), followed by 12 seconds of a blank screen (rest). Each run contained 10 blocks. Participants completed two runs, each approximately 4 min 40 sec long.

  8. Religious Characteristics of States Dataset Project - Demographics v. 2.0...

    • thearda.com
    Cite
    The Association of Religion Data Archives, Religious Characteristics of States Dataset Project - Demographics v. 2.0 (RCS-Dem 2.0), COUNTRIES ONLY [Dataset]. http://doi.org/10.17605/OSF.IO/7SR4M
    Explore at:
    Dataset provided by
    Association of Religion Data Archives
    Dataset funded by
    Association of Religion Data Archives
    Description

    The RCS-Dem dataset reports estimates of religious demographics, both country by country and region by region. RCS was created to fulfill the unmet need for a dataset on the religious dimensions of countries of the world, with the state-year as the unit of observation. It covers 220 independent states, 26 selected substate entities, and 41 geographically separated dependencies, for every year from 2015 back to 1900 and often 1800 (more than 42,000 state-years). It estimates populations and percentages of adherents of 100 religious denominations including second level subdivisions within Christianity and Islam, along with several complex categories such as "Western Christianity." RCS is designed for easy merger with datasets of the Correlates of War and Polity projects, datasets by the United Nations, the Religion And State datasets by Jonathan Fox, and the ARDA national profiles.

  9. Tobacco and electronic cigarette cues for smoking and vaping: an online experimental study

    • data.bris.ac.uk
    Updated Dec 20, 2019
    + more versions
    Cite
    (2019). Tobacco and electronic cigarette cues for smoking and vaping: an online experimental study - Datasets - data.bris [Dataset]. https://data.bris.ac.uk/data/dataset/299889i8ysm0d21dt3rz63nipg
    Explore at:
    Dataset updated
    Dec 20, 2019
    Description

    This online study examined the impact of exposure to smoking (i.e., tobacco cigarette), vaping (i.e., cigalike and tank-system device), or neutral cues on craving to smoke and vape. Participants (n = 1120 recruited, n = 936 for analysis) were UK adult current or former smokers who either vaped or did not vape. They were randomised to view one of four cue videos. The primary outcome was urge to smoke; secondary outcomes were urge to vape, desire to smoke and vape, and intention to quit smoking or remain abstinent from smoking. We found no evidence that exposure to videos of smoking or vaping cued smoking urges, and no evidence of interaction effects between cue exposure and smoking and vaping status. The study highlights the potential limitations of using an online setting for assessing craving. The study protocol was preregistered on the Open Science Framework: https://osf.io/a6jpu/.

    PLEASE NOTE: Any values listed as NULL in the data sheet are either missing or not-applicable values.

  10. Somatosensory evoked potentials in the human spinal cord to mixed and...

    • openneuro.org
    Updated Jul 6, 2023
    + more versions
    Cite
    Birgit Nierula; Tilman Stephani; Merve Kaptan; André Moruaux; Burkhard Maess; Gabriel Curio; Vadim V. Nikulin; Falk Eippert (2023). Somatosensory evoked potentials in the human spinal cord to mixed and sensory nerve stimulation [Dataset]. http://doi.org/10.18112/openneuro.ds004389.v1.0.0
    Explore at:
    Dataset updated
    Jul 6, 2023
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Birgit Nierula; Tilman Stephani; Merve Kaptan; André Moruaux; Burkhard Maess; Gabriel Curio; Vadim V. Nikulin; Falk Eippert
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description


    This is a dataset consisting of simultaneous electroencephalography (EEG), electrospinography (ESG), electroneurography (ENG), and electromyography (EMG) recordings from 26 participants. There were nine recording conditions: i) resting state with eyes open, ii) mixed median nerve stimulation (arm nerve), iii) mixed tibial nerve stimulation (leg nerve), iv) sensory nerve stimulation of the index finger, v) sensory nerve stimulation of the middle finger, vi) simultaneous sensory nerve stimulation of the index and middle finger, vii) sensory nerve stimulation of the first toe, viii) sensory nerve stimulation of the second toe, ix) simultaneous sensory nerve stimulation of the first and second toe. For each participant, there are i) the simultaneous EEG-ESG-ENG-EMG recording, which also includes electrocardiographic and respiratory signals, and ii) ESG electrode positions. For a detailed description please see the following article: XXX. This study was pre-registered on OSF: https://osf.io/mjdha.

    Citing this dataset

    Should you make use of this data set in any publication, please cite the following article: XXXX

    License

    This data set is made available under the Creative Commons CC0 license. For more information, see https://creativecommons.org/share-your-work/public-domain/cc0/

    Data set

    This dataset is organized according to the Brain Imaging Data Structure (BIDS) specification. For more information on this data specification, see https://bids-specification.readthedocs.io/en/stable/. Each participant's data are in one subdirectory (e.g., 'sub-001'), which contains the raw data in EEGLAB format. Please note that the EEG channel Fz was referenced to i) the EEG reference (right mastoid, RM; channel name: Fz) and ii) the ESG reference (6th thoracic vertebra, TH6; channel name: Fz-TH6). Should you have any questions about this dataset, please contact nierula@cbs.mpg.de or eippert@cbs.mpg.de.
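    Since the raw data are distributed in EEGLAB format within a BIDS layout, they can be loaded with standard tools; a minimal sketch using MNE-Python (the file path and task name are hypothetical):

    ```python
    import mne

    # Hypothetical path following the BIDS layout described above.
    raw = mne.io.read_raw_eeglab(
        "sub-001/eeg/sub-001_task-restingstate_eeg.set", preload=True
    )

    # Note the two reference schemes for channel Fz described above:
    # 'Fz' (EEG reference, right mastoid) vs. 'Fz-TH6' (ESG reference,
    # 6th thoracic vertebra).
    print(raw.ch_names)
    ```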

  11. Data from: Employer image within and across industries: Moving beyond assessing points-of-relevance to identifying points-of-difference

    • researchdata.smu.edu.sg
    • datasetcatalog.nlm.nih.gov
    txt
    Updated Jun 1, 2023
    Cite
    Filip Rene O LIEVENS; Greet VAN HOYE; Saartje CROMHEECKE; Bert WEIJTERS (2023). Data from: Employer image within and across industries: Moving beyond assessing points-of-relevance to identifying points-of-difference [Dataset]. http://doi.org/10.25440/smu.21731504.v1
    Explore at:
    txtAvailable download formats
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    SMU Research Data Repository (RDR)
    Authors
    Filip Rene O LIEVENS; Greet VAN HOYE; Saartje CROMHEECKE; Bert WEIJTERS
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    The data that support the findings of this study are available from the corresponding author upon reasonable request and approval of the HR consultancy firm the data were obtained from. The Mplus code for the CFA and multilevel analyses is available at: https://osf.io/6f47s/

    This study draws from brand positioning research to introduce the notions of points-of-relevance and points-of-difference to employer image research. Similar to prior research, this means that we start by investigating the relevant image attributes (points-of-relevance) that potential applicants use for judging organizations' attractiveness as an employer. However, we go beyond past research by examining whether the same points-of-relevance are used within and across industries. Next, we further extend current research by identifying which of the relevant image attributes also serve as points-of-difference for distinguishing between organizations and industries. The sample consisted of 24 organizations from 6 industries (total N = 7171). As a first key result, across industries and organizations, individuals attached similar importance to the same instrumental (job content, working conditions, and compensation) and symbolic (innovativeness, gentleness, and competence) image attributes in judging organizational attractiveness. Second, organizations and industries varied significantly on both instrumental and symbolic image attributes, with job content and innovativeness emerging as the strongest points-of-difference. Third, most image attributes showed greater variation between industries than between organizations, pointing at the importance of studying employer image at the industry level. Implications for recruitment research, employer branding, and best employer competitions are discussed.

  12. Dataset accompanying "Reading Reshapes Stimulus Selectivity in the Visual...

    • openneuro.org
    Updated Oct 18, 2024
    Cite
    Vassiki Chauhan; Krystal McCook; Alex White (2024). Dataset accompanying "Reading Reshapes Stimulus Selectivity in the Visual Word Form Area", Chauhan, McCook and White (2024) [Dataset]. http://doi.org/10.18112/openneuro.ds005295.v1.0.2
    Explore at:
    Dataset updated
    Oct 18, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Vassiki Chauhan; Krystal McCook; Alex White
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Data for Reading reshapes stimulus selectivity in the visual word form area

    This contains the raw and pre-processed fMRI data and structural images (T1) used in the article "Reading reshapes stimulus selectivity in the visual word form area". The preprint is available here, and the article will be in press at eNeuro.

    Additional processed data and analysis code are available in an OSF repository.

    Details about the study are included here.

    Participants

    We recruited 17 participants (age range 19 to 38 years, mean 21.12 ± 4.44; 4 self-identified as male, 1 left-handed) from the Barnard College and Columbia University student body. The study was approved by the Institutional Review Board at Barnard College, Columbia University. All participants provided written informed consent, acquired digitally, and were monetarily compensated for their participation. All participants had learned English before the age of five.

    To ensure high data quality, we used the following criteria for excluding functional runs and participants. If the participant moved by a distance greater than 2 voxels (4 mm) within a single run, that run was excluded from analysis. Additionally, if the participant responded in fewer than 50% of the trials in the main experiment, that run was removed. Finally, if half or more of the runs met any of these criteria for a single participant, that participant was dropped from the dataset. Using these constraints, the analysis reported here is based on data from 16 participants. They ranged in age from 19 to 38 years (mean = 21.12 ± 4.58). 4 participants self-identified as male, and 1 was left-handed. A total of 6 runs were removed from three of the remaining participants due to excessive head motion.
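    These exclusion rules translate directly into code; a minimal sketch under assumed per-run summary statistics (the function and variable names are illustrative):

    ```python
    # Illustrative run- and participant-exclusion logic mirroring the
    # criteria described above. Each run is summarized by its peak head
    # displacement (mm) and its response rate in the main experiment.
    def run_ok(max_motion_mm: float, response_rate: float) -> bool:
        # 2 voxels at 2 mm isotropic = 4 mm; at least 50% of trials answered.
        return max_motion_mm <= 4.0 and response_rate >= 0.5

    def keep_participant(runs: list[tuple[float, float]]) -> bool:
        # Drop the participant if half or more of their runs fail the criteria.
        bad = sum(not run_ok(motion, rate) for motion, rate in runs)
        return bad < len(runs) / 2
    ```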

    Equipment

    We collected MRI data at the Zuckerman Institute, Columbia University, using a 3T Siemens Prisma scanner and a 64-channel head coil. In each MR session we acquired a T1-weighted structural scan with 1 mm isotropic voxels. We acquired functional data with a T2*-weighted echo planar imaging sequence with multiband acceleration (SMS3) for whole-brain coverage. The TR was 1.5 s, the TE 30 ms, and the flip angle 62°. The voxel size was 2 mm isotropic.

    Stimuli were presented on an LCD screen that the participants viewed through a mirror with a viewing distance of 142 cm. The display had a resolution of 1920 by 1080 pixels, and a refresh rate of 60 Hz. We presented the stimuli using custom code written in MATLAB and the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). Throughout the scan, we recorded monocular gaze position using an SR Research Eyelink 1000 tracker. Participants responded with their right hand via three buttons on an MR-safe response pad.

    Tasks

    Main Task

    Participants performed three different tasks during different runs, two of which required attending to the character strings, and one that encouraged participants to ignore them. In the lexical decision task, participants reported whether the character string on each trial was a real word or not. In the stimulus color task, participants reported whether the color of the character string was red or gray. In the fixation color task, participants reported whether or not the fixation dot turned red.

    On each trial, a single character string flashed for 150 ms at one of three locations: centered at fixation, 3 dva left, or 3 dva right. The stimulus was followed by a blank period with only the fixation mark present for 3850 ms, during which the participant could respond with a button press. After every five trials there was a rest period (no task except fixating on the dot) lasting 4, 6, or 8 s (randomly and uniformly selected).

    Localizer for visual category-selective ventral temporal regions

    Participants viewed sequences of images, each of which contained 3 items of one category: words, pseudowords, false fonts, faces, and limbs. Participants performed a one-back repetition detection task. On 33% of the trials, the exact same images flashed twice in a row. The participant’s task was to push a button with their right index finger whenever they detected such a repetition. Each participant performed 4 runs of the localizer task. Each run consisted of 77 four-second trials, lasting for approximately 6 minutes. Each category was presented 56 times through the course of the experiment.

    Localizer for language processing regions

    The stimuli on each trial were a sequence of 12 written words or pronounceable pseudowords, presented one at a time. The words were presented as meaningful sentences, while pseudowords formed “Jabberwocky” phrases that served as a control condition. Participants were instructed to read the stimuli silently to themselves, and to push a button upon seeing the icon of a hand that appeared between trials. Participants performed three runs of the language localizer. Each run included 16 trials and lasted for 6 minutes. Each trial lasted for 6 s: a blank screen for 100 ms, then 12 words or pseudowords for 450 ms each (5400 ms total), then a response prompt for 400 ms and a final blank screen for 100 ms (100 + 5400 + 400 + 100 = 6000 ms). Each run also included 5 blank trials (6 seconds each).

    Data organization

    This repository contains three main folders, complying with BIDS specifications.
    • Inputs contains BIDS-compliant raw data, with the only change being defacing of the anatomicals using pydeface. Data were converted to BIDS format using heudiconv.
    • Outputs contains preprocessed data obtained using fMRIPrep. In addition to subject-specific folders, we also provide the FreeSurfer reconstructions obtained via fMRIPrep, with defaced anatomicals. Subject-specific ROIs are included in the label folder for each subject in the freesurfer directory.
    • Derivatives contains all additional whole-brain analyses performed on this dataset.

  13. TODO: name of the dataset

    • openneuro.org
    Updated Sep 25, 2024
    Cite
    TODO:; First1 Last1; First2 Last2; ... (2024). TODO: name of the dataset [Dataset]. http://doi.org/10.18112/openneuro.ds005295.v1.0.1
    Explore at:
    Dataset updated
    Sep 25, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    TODO:; First1 Last1; First2 Last2; ...
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This record is an earlier release (v1.0.1) of the dataset in entry 12; its description is identical to that entry's.

  14. Data_Sheet_1_art.pics Database: An Open Access Database for Art Stimuli for...

    • frontiersin.figshare.com
    • datasetcatalog.nlm.nih.gov
    txt
    Updated Jun 3, 2023
    + more versions
    Cite
    Ronja Thieleking; Evelyn Medawar; Leonie Disch; A. Veronica Witte (2023). Data_Sheet_1_art.pics Database: An Open Access Database for Art Stimuli for Experimental Research.CSV [Dataset]. http://doi.org/10.3389/fpsyg.2020.576580.s001
    Explore at:
    txtAvailable download formats
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    Frontiers Media (http://www.frontiersin.org/)
    Authors
    Ronja Thieleking; Evelyn Medawar; Leonie Disch; A. Veronica Witte
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    While art is omnipresent in human history, the neural mechanisms of how we perceive, value, and differentiate art have only begun to be explored. Functional magnetic resonance imaging (fMRI) studies suggest that art acts as a secondary reward, involving brain activity in the ventral striatum and prefrontal cortices similar to primary rewards such as food. However, potential similarities or unique characteristics of art-related neuroscience (or neuroesthetics) remain elusive, partly for lack of adequate experimental tools: available collections of art stimuli often lack standard image definitions and normative ratings. We therefore provide a large set of well-characterized, novel art images for use as visual stimuli in psychological and neuroimaging research.

    The stimuli were created using a deep learning algorithm that applied the styles of popular paintings (based on artists such as Klimt or Hundertwasser) to ordinary animal, plant, and object images drawn from established visual-stimulus databases. The novel stimuli represent mundane items with artistic properties, with reduced dimensionality and complexity compared to paintings. In total, 2,332 novel stimuli are available open access as the “art.pics” database at https://osf.io/BTWNQ/, with standard image characteristics comparable to other common visual stimulus material in terms of size, variable color distribution, complexity, intensity, and valence, as measured by image software analysis and by ratings from a human experimental validation study [n = 1,296 (684 f), age 30.2 ± 8.8 y.o.]. The validation study further showed that the art.pics elicit a broad and significantly different variation in subjective value ratings (i.e., liking and wanting), as well as in recognizability, arousal, and valence, across art styles and categories. Researchers are encouraged to study the perception, processing, and valuation of art images based on the art.pics database, which also enables real reward remuneration of the rated stimuli (as art prints) and direct comparison to other rewards, e.g., food or money.

    Key Messages:
    • We provide an open access, validated, and large set of novel stimuli (n = 2,332) of standardized art images, including normative rating data, for experimental research.
    • Reward remuneration in experimental settings can easily be implemented for the art.pics by, e.g., handing the stimuli to participants (as prints on premium paper or in digital format), as done in the presented validation task.
    • Experimental validation showed that the art.pics images elicit a broad and significantly different variation in subjective value ratings (i.e., liking, wanting) across art styles and categories, while size, color, and complexity remain comparable to other visual stimulus databases.

  15. Concert Twitter, Audience Reconstructed Supplimentary Materials

    • figshare.com
    pdf
    Updated Oct 7, 2023
    Cite
    Finn Upham (2023). Concert Twitter, Audience Reconstructed Supplimentary Materials [Dataset]. http://doi.org/10.6084/m9.figshare.24260452.v4
    Explore at:
    pdfAvailable download formats
    Dataset updated
    Oct 7, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Finn Upham
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Supplementary materials for analysis of Twitter activity during BTS livestream concerts in 2021. Compressed set of CSV files for analysis of Twitter activity during the live-streamed BTS concerts in 2021: depersonalized tweet datasets, a subset of tweets with content codes, and setlist timings for the four concerts. Descriptions of data preparation and analysis are on GitHub: https://github.com/finn42/Concert_Twt_Open. Also included: a descriptive PDF of supplementary materials and figures of secondary analysis of Corona Concert Survey data (https://osf.io/skg7h/).

  16. Dataset for study: DOI 10.17605/OSF.IO/5F32X.

    • plos.figshare.com
    xlsx
    Updated Aug 26, 2024
    + more versions
    Cite
    Moses Musah Kabanunye; Benjamin Noble Adjei; Daniel Gyaase; Emmanuel Kweku Nakua; Stephen Opoku Afriyie; Yeetey Enuameh; Michael Owusu (2024). Dataset for study: DOI 10.17605/OSF.IO/5F32X. [Dataset]. http://doi.org/10.1371/journal.pone.0305416.s003
    Explore at:
    xlsxAvailable download formats
    Dataset updated
    Aug 26, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Moses Musah Kabanunye; Benjamin Noble Adjei; Daniel Gyaase; Emmanuel Kweku Nakua; Stephen Opoku Afriyie; Yeetey Enuameh; Michael Owusu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The northern part of Ghana lies within the African meningitis belt and has historically experienced seasonal meningitis outbreaks. Despite the recurring outbreaks in the region, the risk factors contributing to their occurrence have not been clearly identified. This study therefore sought to describe the clinical characteristics and possible risk factors associated with meningitis outbreaks in the Upper West Region (UWR). A 1:2 matched case-control study was conducted in May-December 2021 to retrospectively investigate possible risk factors for the meningitis outbreak in the UWR of Ghana between January and December 2020. Cases were persons with laboratory-confirmed meningitis, and controls were persons of similar age and sex without meningitis living in the same house or neighborhood as a confirmed case. Both primary and secondary data, including clinical, socio-demographic, and laboratory information, were collected and entered on standard questionnaires. Data were analyzed using descriptive statistics and conditional logistic regression.

    Meningitis cases were mostly due to Streptococcus pneumoniae (67/98; 68.37%), followed by Neisseria meningitidis serogroup X (27/98; 27.55%). Fever occurred in 94.03% (63/67) of Streptococcus pneumoniae cases and in 100% of both Neisseria meningitidis serogroup X (27/27) and serogroup W (3/3) cases. CSF white cell count was significantly associated with the causative agents of meningitis. Conditional logistic regression analysis showed that passive exposure to tobacco [AOR = 3.65, 95% CI = 1.03-12.96], bedrooms with 3 or more people [AOR = 4.70, 95% CI = 1.48-14.89], and sore throat infection [AOR = 8.97, 95% CI = 2.73-29.43] were independent risk factors for meningitis infection. Headache, fever, and neck pain continue to be the most common symptoms reported by meningitis patients. Education and other preventive interventions targeting exposure to tobacco smoke and crowded rooms would help reduce meningitis outbreaks in the Upper West Region of Ghana.
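    For a 1:2 matched case-control design like this one, conditional logistic regression can be fit with statsmodels; a minimal sketch with illustrative column names (not the authors' code or data layout):

    ```python
    import pandas as pd
    from statsmodels.discrete.conditional_models import ConditionalLogit

    # Hypothetical file: one row per person, with matched case-control sets
    # identified by 'set_id'; 'case' is 1 for cases, 0 for matched controls.
    df = pd.read_csv("meningitis_case_control.csv")

    # Conditioning on the matched set absorbs the matching factors (age, sex,
    # neighborhood), so only the exposure covariates enter the model.
    model = ConditionalLogit(
        df["case"],
        df[["tobacco_exposure", "bedroom_3plus", "sore_throat"]],
        groups=df["set_id"],
    )
    print(model.fit().summary())
    ```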
