Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This project includes three datasets. The first compiles dataset-metadata commonalities identified across 48 Canadian restricted health data sources. The second compiles access-process metadata commonalities extracted from the same 48 data sources. The third maps the metadata commonalities of the first dataset to existing metadata standards, including DataCite, DDI, DCAT, and DATS. This mapping exercise was completed to determine whether the metadata used by restricted data sources align with existing standards for research data.
This data set, made available via OSF (https://osf.io/by5hu/), contains precisely time-stamped frequency data from several power grids around the world at one-second resolution. The original recordings cover different dates and times; only the hour from 11 AM to 12 PM is provided here, so that comparisons across countries, days, and months can assume a standard demand curve.
The raw data for each city are available on OSF (https://osf.io/by5hu/).
The goals are to (i) determine how large frequency deviations are in different parts of the world, as an indicator of grid stability, and (ii) identify the characteristics an energy storage system needs in order to provide primary frequency control and stabilize the grid.
This dataset provides frequency deviation data (actual frequency minus nominal frequency, where the nominal frequency is 50 or 60 Hz), expressed in mHz. Only one hour of data (11 AM - 12 PM) is provided, for the following cities:
| Column Name | Description |
|---|---|
| Time Stamp | Time stamp within the one-hour window (11 AM - 12 PM) |
| CAP | Frequency deviation of Cape Town |
| TEX | Frequency deviation of College Station, Texas |
| CAN | Frequency deviation of Las Palmas de Gran Canaria, Canary Islands |
| LIS | Frequency deviation of Lisbon |
| SAL | Frequency deviation of Salt Lake City, Utah |
| STO | Frequency deviation of Stockholm |
| TAL | Frequency deviation of Tallinn |
| VES | Frequency deviation of Vestmanna |
| REY | Frequency deviation of Reykjavík |
| LON | Frequency deviation of London |
| SIC | Frequency deviation of Sicily |
| KK | Frequency deviation of Kraków |
| LAU | Frequency deviation of Lauris |
| SPL | Frequency deviation of Split |
| PET | Frequency deviation of St. Petersburg |
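As a minimal sketch of the deviation convention used in this dataset (actual frequency minus nominal frequency, reported in mHz); the function name and the sample reading below are illustrative, not part of the data:

```python
def frequency_deviation_mhz(actual_hz, nominal_hz):
    """Deviation of a measured grid frequency from its nominal value, in mHz."""
    return (actual_hz - nominal_hz) * 1000.0

# College Station, Texas (TEX) is on a 60 Hz grid; Stockholm (STO) is on 50 Hz.
# A hypothetical Texas reading of 59.985 Hz corresponds to a deviation of about -15 mHz.
dev = frequency_deviation_mhz(59.985, nominal_hz=60.0)
```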
https://github.com/CenterForOpenScience/cos.io/blob/master/TERMS_OF_USE.md
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains fMRI data from adults, from two experiments reported in one paper:
Liu, S., Lydic, K., Mei, L., & Saxe, R. (in press, Imaging Neuroscience). Violations of physical and psychological expectations in the human adult brain. Preprint: https://doi.org/10.31234/osf.io/54x6b
All subjects who contributed data to this repository consented explicitly to share their de-faced brain images publicly on OpenNeuro. Experiment 1 has 16 subjects who gave consent to share (17 total), and Experiment 2 has 29 subjects who gave consent to share (32 total). Experiment 1 subjects have subject IDs starting with "SAXNES*", and Experiment 2 subjects have subject IDs starting with "SAXNES2*".
There are (anonymized) event files associated with each run, subject, and task, as well as contrast files.
All event files, for all tasks, have the following columns: onset_time, duration, trial_type, and response_time. Below are notes about task-specific event files.
For the DOTS and VOE event files from Experiment 1, we have the additional columns:

- experimentName: 'DotsSocPhys' or 'VOESocPhys'
- correct: at the end of the trial, subjects made a response. In DOTS, they indicated whether the dot that disappeared reappeared at a plausible location. In VOE, they pressed a button when the fixation appeared as a cross rather than a plus sign. This column indicates whether the subject responded correctly (1/0)
- stim_path: path to the stimuli, relative to the root BIDS directory, i.e. BIDS/stimuli/DOTS/xxxx

For the DOTS event files from Experiment 2, we have the additional columns:

- participant: redundant with the file name
- experiment_name: name of the task, redundant with file name
- block_order: which order the dots trials happened in (1 or 2)
- prop_correct: the proportion of correct responses over the entire run

For the Motion event files from Experiment 2, we have the additional columns:

- experiment_name: name of the task, redundant with file name
- block_order: which order the dots trials happened in (1 or 2)
- event: the index of the current event (0-22)

For the spWM event files from Experiment 2, we have the additional columns:

- experiment_name: name of the task, redundant with file name
- participant: redundant with the file name
- block_order: which order the dots trials happened in (1 or 2)
- run_accuracy_hard: the proportion of correct responses for the hard trials in this run
- run_accuracy_easy: the proportion of correct responses for the easy trials in this run

For the VOE event files from Experiment 2, we have the additional columns:

- trial_type_specific: identifies trials at one more level of granularity, with respect to domain, task, and event (e.g. psychology_efficiency_unexp)
- trial_type_morespecific: similar to trial_type_specific but includes information about domain, task, scenario, and event (e.g. psychology_efficiency_trial-15-over_unexp)
- experiment_name: name of the task, redundant with file name
- participant: redundant with the file name
- correct: whether the response for this trial was correct (1 or 0)
- time_elapsed: how much time has elapsed by the end of this trial, in ms
- trial_n: the index of the current event
- correct_answer: what the correct answer was for the attention check (yes or no)
- subject_correct: whether the subject was in fact correct in their response
- event: fam, expected, or unexpected
- identical_tests: were the test events identical, for this trial?
- stim_ID: numerical string picking out each unique stimulus
- scenario_string: string identifying each scenario within each task
- domain: physics, psychology (psychology-action), both (psychology-environment)
- task: solidity, permanence, goal, efficiency, infer-constraints, or agent-solidity
- prop_correct: the proportion of correct responses over the entire run
- stim_path: path to the stimuli, relative to the root BIDS directory, i.e. BIDS/stimuli/VOE/xxxx
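Since BIDS event files are plain tab-separated text, the shared columns can be read with standard tooling. A minimal sketch — the rows below are illustrative stand-ins, not actual data, and a real file name would follow the BIDS pattern sub-<ID>_task-<task>_run-<n>_events.tsv:

```python
import csv
import io

# Illustrative stand-in for an events.tsv file; on disk one would open the real
# file instead of this in-memory string (values here are invented).
tsv = io.StringIO(
    "onset_time\tduration\ttrial_type\tresponse_time\n"
    "0.0\t10.0\tphysics_solidity\t1.2\n"
    "14.0\t10.0\tpsychology_efficiency\t0.9\n"
    "28.0\t10.0\tphysics_permanence\t1.4\n"
)
events = list(csv.DictReader(tsv, delimiter="\t"))

# e.g., select events of one domain for a contrast
physics = [row for row in events if row["trial_type"].startswith("physics")]
```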
CC0 1.0 Universal: https://spdx.org/licenses/CC0-1.0.html
Phylodynamic models generally aim at jointly inferring phylogenetic relationships, model parameters, and more recently, the number of lineages through time, based on molecular sequence data. In the fields of epidemiology and macroevolution these models can be used to estimate, respectively, the past number of infected individuals (prevalence) or the past number of species (paleodiversity) through time. Recent years have seen the development of “total-evidence” analyses, which combine molecular and morphological data from extant and past sampled individuals in a unified Bayesian inference framework. Even sampled individuals characterized only by their sampling time, i.e. lacking morphological and molecular data, which we call occurrences, provide invaluable information to reconstruct the past number of lineages.
Here, we present new methodological developments around the Fossilized Birth-Death Process enabling us to (i) incorporate occurrence data in the likelihood function; (ii) consider piecewise-constant birth, death and sampling rates; and (iii) reconstruct the past number of lineages, with or without knowledge of the underlying tree. We implement our method in the RevBayes software environment, enabling its use along with a large set of models of molecular and morphological evolution, and validate the inference workflow using simulations under a wide range of conditions.
We finally illustrate our new implementation using two empirical datasets stemming from the fields of epidemiology and macroevolution. In epidemiology, we infer the prevalence of the COVID-19 outbreak on the Diamond Princess ship, by jointly taking into account the case count record (occurrences) along with viral sequences for a fraction of infected individuals. In macroevolution, we infer the diversity trajectory of cetaceans using molecular and morphological data from extant taxa, morphological data from fossils, as well as numerous fossil occurrences. The joint modeling of occurrences and trees holds the promise to further bridge the gap between traditional epidemiology and pathogen genomics, as well as between paleontology and molecular phylogenetics.
Methods
Online Appendix: available in the Related Works section
Cetacean Datasets: [copied from the subsection Material and methods > Cetacean data analysis > Molecular, morphological and occurrence datasets of the main paper]
The data can be subdivided into three parts: molecular, morphological, and occurrences. Datasets were collected and analysed separately and are stored on the Open Science Framework (https://osf.io) ([dataset] Aguirre-Fernández et al., 2020). Molecular data come from Steeman et al. (2009) and comprise 6 mitochondrial and 9 nuclear genes for 87 of the 89 accepted extant cetacean species. Morphological data were obtained from Churchill et al. (2018), the most recent version of a widely-used dataset first produced by Geisler and Sanders (2003). After merging 2 taxa that are now considered synonyms on the Paleobiology Database (PBDB) and removing 3 outgroups that would have violated our model's assumptions, it contains 327 variable morphological characters for 27 extant and 90 fossil taxa (mostly identified at the species level, but 21 remain undescribed). In order to speed up the analysis we further excluded the undescribed specimens and reduced this dataset to the generic level by selecting the most complete specimen in each genus. Indeed, the computing cost increases quadratically with the maximum number of hidden lineages N, to the point of becoming the bottleneck in our MCMC when N > 100. Given that a mid-Miocene peak diversity between 100 and 220 species is expected (Quental and Marshall, 2010), with fewer than 100 observed lineages in our inferred tree at that time, N should therefore be about 150. Inferring instead the tree of cetacean genera allows us to reduce N to 70 hidden lineages. The final dataset thus contains 41 extant and 62 extinct genera.
Occurrences come from the PBDB (data archive 9, M. D. Uhen), downloaded on May 11, 2020. The dataset initially consisted of all 4678 cetacean occurrences, but the cetacean fossil record is known to be subject to several biases (Uhen and Pyenson, 2007; Marx et al., 2016; Dominici et al., 2020). A detailed exploration (see Online Appendix E) of this occurrence dataset revealed several notable biases. First, an artefactual cluster of occurrences in very recent times, combined with other expected Pleistocene biases (Dominici et al., 2020), led us to remove all Late Pleistocene and Holocene occurrences. Second, we detected substantial variations in fossil recovery per time unit across lineages (see Fig. S10), resulting from oversampling of some species and localities, possibly due to greater abundance or spatio-temporal biases (Dominici et al., 2020). This observation violates our assumption of identical fossil sampling rates among taxa during a given interval. In order to reduce this bias, we retained occurrences identified at the genus level and further aggregated all occurrences belonging to an identical genus found at the same geological formation. For occurrences where the geological formation was not specified, we used geoplate data combined with stratigraphic interval as a proxy for geological formation. This resulted in a total of 968 occurrences retained for the analysis.
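The aggregation step described above amounts to de-duplicating occurrences on a (genus, geological formation) key. A schematic sketch with invented records (not drawn from the PBDB download):

```python
# Schematic de-duplication on a (genus, formation) key; the records below are
# invented for illustration and do not come from the PBDB dataset.
occurrences = [
    {"genus": "Squalodon", "formation": "Calvert"},
    {"genus": "Squalodon", "formation": "Calvert"},      # same genus, same formation
    {"genus": "Squalodon", "formation": "Pungo River"},
    {"genus": "Eurhinodelphis", "formation": "Calvert"},
]

seen = set()
retained = []
for occ in occurrences:
    key = (occ["genus"], occ["formation"])
    if key not in seen:
        seen.add(key)
        retained.append(occ)
# retained now holds one occurrence per genus-formation pair
```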
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
To research the emotional interaction between customers and service staff, single-modal stimuli are typically used to activate subjects' emotions, while more efficient multimodal emotion stimuli are often neglected. This study aims to construct a multimodal emotion stimuli database (CCSIAS) from video recordings of the real work status of 29 service staff and audio clips of interactions between customers and service staff, collected by setting up wide-angle cameras and searching the company's Ocean Engine for 15 consecutive days. First, we developed a tool to assess the emotional statuses of customers and service staff (Study 1). Second, 40 Master's and PhD students were invited to assess the audio and video data, using the tool developed in Study 1, to evaluate the emotional states of customers and service staff (Study 2). Third, 118 participants were recruited to test the results from Study 2 to ensure the stability of the derived data. The results showed that 139 sets of stable emotional audio and video data were constructed (26 sets were high, 59 medium, and 54 low). The amount of emotional information is important for the effective activation of participants' emotional states, and the degree of emotional activation for video data is significantly higher than for audio data. Overall, the study of emotional interaction phenomena requires a multimodal dataset. The CCSIAS (https://osf.io/muc86/) can extend the depth and breadth of emotional interaction research and can be applied to activating different emotional states between customers and service staff in the fields of organizational behavior and psychology.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
The sample included in this dataset represents children who participated in a cross-sectional study, a smaller cohort of whom was followed up as part of a longitudinal study reported elsewhere (Bull et al., 2021). In the original study, 347 children were recruited. As data were found to be likely missing completely at random (χ2 = 29.445, df = 24, p = .204; Little, 1998), listwise deletion was used, and 23 observations were deleted from the original dataset. This dataset includes the 324 participants that composed the final sample of this study (162 boys, Mage = 6.2 years, SDage = 0.3 years). Children in this sample were in their second year of kindergarten (i.e., the year before starting primary school) in Singapore. The dataset includes children's sociodemographic information (i.e., age and sex) and performance on different general cognitive and mathematical skills.

Mathematical tasks:
- Computer-based 0-10 and 0-100 number line tasks
- Mathematical Reasoning and Numerical Operations subtests from the Wechsler Individual Achievement Test II (WIAT II); the Numerical Operations subtest is not used in this study

General cognitive tasks:
- Peabody Picture Vocabulary Test (Vocabulary)
- Raven's Progressive Matrices Test (Non-verbal reasoning)

The variables included in this dataset are:
- Age = Child's age (in months)
- Sex = Boy/Girl (parent reported; boy = 1, girl = 2)
- Ravens = Non-verbal reasoning (Raven's Progressive Matrices Test)
- Ppvt = Vocabulary raw score (Peabody Picture Vocabulary Test)
- Maths_reason = Mathematical reasoning raw score (Mathematical Reasoning subtest, WIAT II)
- Num_Ops = Numerical Operations raw score (Numerical Operations subtest, WIAT II; not used in this study)
- NLE10 = 0-10 number line (percent absolute error)
- NLE100 = 0-100 number line (percent absolute error)

This dataset overlaps with the dataset that is the basis for: Ruiz, C., Kohnen, S., & Bull, R. (2023). Number Line Estimation Patterns and Their Relationship with Mathematical Performance. Journal of Numerical Cognition. Advance online publication. https://doi.org/10.23668/psycharchives.12698 That project's corresponding OSF page can be found here: https://osf.io/jat5h/ and the dataset is stored under embargo here: https://doi.org/10.25949/22558528.v1
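For reference, the NLE10 and NLE100 measures score number-line placements by percent absolute error (PAE). A common definition is sketched below; the exact computation used in the original study may differ in detail:

```python
def percent_absolute_error(estimate, target, scale_max):
    """PAE = |estimate - target| / scale_max * 100.
    A common definition of number-line error; the exact computation behind
    NLE10/NLE100 in this dataset may differ in detail."""
    return abs(estimate - target) * 100.0 / scale_max

# e.g., a child asked to mark 25 on a 0-100 line who places the mark at the
# point corresponding to 18 has a PAE of 7%
pae = percent_absolute_error(estimate=18, target=25, scale_max=100)
```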
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
Experiment
20 adult participants (18 participants consented to open data sharing and are included here) watched video clips from Sesame Street, in which the audio was played either forward or reversed. Code and stimuli descriptions shared here: https://osf.io/whsb7/. We also scanned participants on two localizer tasks.
SS-BlockedLang Language Task (litshort) 2x2 block task design with four conditions: Forward Dialogue, Forward Monologue, Backward Dialogue, and Backward Monologue. Participants were asked to watch the 20-second videos and press a button on an in-scanner button box when they saw a still image of Elmo appear on the screen after each 20-second block. Participants completed 4 runs, each 6 min 18 sec long. Each run contained unique clips, and participants never saw a Forward and Backward version of the same clip. Each run contained 3 sets of 4 blocks, one of each condition (total of 12 blocks), with 22-second rest blocks after each set of 4 blocks. Forward and Backward versions of each clip were counterbalanced between participants (randomly assigned Set A or Set B). Run order was randomized for each participant.
SS-IntDialog Language Task (litlong) 1–3-minute dialogue clips of Sesame Street in which one character's audio stream was played Forward and the other was played Backward. Additional sounds in the video (e.g., blowing bubbles, a crash from something falling) were played forwards. Participants watched the videos and pressed a button on an in-scanner button box when they saw a still image of Elmo appear on the screen immediately after each block. Participants completed 2 runs, each approximately 8 min 52 sec long. Each run contained unique clips, and participants never saw a version of the same clip with the Forward/Backward streams reversed. Each run contained 3 clips, 1-3 minutes each, presented in the same order. Between each video, as well as at the beginning and end of the run, there was a 22-second fixation block. Versions of each clip with the opposite character Forward and Backward were counterbalanced between participants (randomly assigned Set A or Set B). 11 participants saw version A, and 9 saw version B (1 run from group A was excluded because the participant fell asleep, and 1 run from group B was excluded due to motion). Run order was randomized for each participant (random sequence 1-2).
Auditory Language Localizer (langloc) We used a localizer task previously validated for identifying high-level language processing regions (Scott et al., 2017). Participants listened to Intact and Degraded 18-second blocks of speech. The Intact condition consisted of audio clips of spoken English (e.g., clips from interviews in which one person is speaking), and the Degraded condition consisted of acoustically degraded versions of these clips. Participants viewed a black dot on a white background during the task while passively listening to the auditory stimuli. 14-second fixation blocks (no sound) were present after every 4 speech blocks, as well as at the beginning and end of each run (5 fixation blocks per run). Participants completed two runs, each approximately 6 min 6 sec long. Each run contained 16 blocks of speech (8 intact, 8 degraded).
Theory of Mind Localizer (tomloc) We used a task previously validated for identifying regions that are involved in ToM and social cognition (Dodell-Feder et al., 2011). Participants read short stories in two conditions: False Beliefs and False Photos. Stories in the False Beliefs condition described scenarios in which a character holds a false belief. Stories in the False Photos condition described outdated photographs and maps. Each story was displayed in white text on a black screen for 10 seconds, followed by a 4-second true/false question based on the story (which participants responded to via the in-scanner button box), followed by 12 seconds of a blank screen (rest). Each run contained 10 blocks. Participants completed two runs, each approximately 4 min 40 sec long.
The RCS-Dem dataset reports estimates of religious demographics, both country by country and region by region. RCS was created to fulfill the unmet need for a dataset on the religious dimensions of countries of the world, with the state-year as the unit of observation. It covers 220 independent states, 26 selected substate entities, and 41 geographically separated dependencies, for every year from 2015 back to 1900 and often 1800 (more than 42,000 state-years). It estimates populations and percentages of adherents of 100 religious denominations including second-level subdivisions within Christianity and Islam, along with several complex categories such as "Western Christianity." RCS is designed for easy merger with datasets of the Correlates of War and Polity projects, datasets by the United Nations, the Religion And State datasets by Jonathan Fox, and the ARDA national profiles.
This online study examined the impact of exposure to smoking (i.e., tobacco cigarette), vaping (i.e., cigalike and tank-system device), or neutral cues on smoking and vaping craving. Participants (n = 1120 recruited, n = 936 for analysis) included UK adult current or former smokers who either vaped or did not vape. They were randomised to view one of four cue videos. The primary outcome was urge to smoke; secondary outcomes were urge to vape, desire to smoke and vape, as well as intention to quit smoking or remain abstinent from smoking. We found no evidence that exposure to videos of smoking or vaping cued smoking urges, and no evidence of interaction effects between cue exposure and smoking and vaping status. The study highlights the potential limitations of using an online setting for assessing craving. The study protocol was preregistered on the Open Science Framework: https://osf.io/a6jpu/.

PLEASE NOTE: Any values that are listed as NULL in the data sheet are either missing values or not applicable values.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
This is a data set consisting of simultaneous electroencephalography (EEG), electrospinography (ESG), electroneurography (ENG), and electromyography (EMG) recordings from 26 participants. There were nine different recording conditions: i) resting state with eyes open, ii) mixed median nerve stimulation (arm nerve), iii) mixed tibial nerve stimulation (leg nerve), iv) sensory nerve stimulation of the index finger, v) sensory nerve stimulation of the middle finger, vi) simultaneous sensory nerve stimulation of the index and middle finger, vii) sensory nerve stimulation of the first toe, viii) sensory nerve stimulation of the second toe, ix) simultaneous sensory nerve stimulation of the first and second toe. For each participant, there is i) the simultaneous EEG-ESG-ENG-EMG recording, which also includes electrocardiographic and respiratory signals, and ii) the ESG electrode positions. For a detailed description please see the following article: XXX. This study was pre-registered on OSF: https://osf.io/mjdha.
Should you make use of this data set in any publication, please cite the following article: XXXX
This data set is made available under the Creative Commons CC0 license. For more information, see https://creativecommons.org/share-your-work/public-domain/cc0/
This data set is organized according to the Brain Imaging Data Structure specification. For more information on this data specification, see https://bids-specification.readthedocs.io/en/stable/ Each participant's data are in one subdirectory (e.g., 'sub-001'), which contains the raw data in eeglab format. Please note that the EEG channel Fz was referenced to i) the EEG reference (right mastoid, RM, channel name: Fz) and ii) the ESG reference (6th thoracic vertebra, TH6, channel name: Fz-TH6). Should you have any questions about this data set, please contact nierula@cbs.mpg.de or eippert@cbs.mpg.de.
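Because Fz is supplied against both references (channels 'Fz' and 'Fz-TH6'), other RM-referenced channels can in principle be converted to the TH6 reference by a simple subtraction, since Fz_RM − Fz_TH6 equals TH6 − RM at every sample. A sketch of this generic re-referencing identity with synthetic numbers (not taken from the recordings, and ignoring noise differences between references):

```python
# Synthetic samples, for illustration only.
fz_rm  = [1.0, 2.0, 3.0]   # Fz recorded against the EEG reference (right mastoid, RM)
fz_th6 = [0.5, 1.0, 1.5]   # Fz recorded against the ESG reference (TH6)
x_rm   = [2.0, 2.0, 2.0]   # some other channel, recorded against RM

# Since fz_rm - fz_th6 equals the potential difference TH6 - RM at each sample,
# x_th6 = x_rm - (fz_rm - fz_th6) re-expresses x against the TH6 reference.
x_th6 = [x - (a - b) for x, a, b in zip(x_rm, fz_rm, fz_th6)]
```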
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
The data that support the findings of this study are available from the corresponding author upon reasonable request and approval of the HR consultancy firm the data were obtained from. The Mplus code for the CFA and multilevel analyses is available at: https://osf.io/6f47s/
This study draws from brand positioning research to introduce the notions of points-of-relevance and points-of-difference to employer image research. Similar to prior research, this means that we start by investigating the relevant image attributes (points-of-relevance) that potential applicants use for judging organizations' attractiveness as an employer. However, we go beyond past research by examining whether the same points-of-relevance are used within and across industries. Next, we further extend current research by identifying which of the relevant image attributes also serve as points-of-difference for distinguishing between organizations and industries. The sample consisted of 24 organizations from 6 industries (total N = 7171). As a first key result, across industries and organizations, individuals attached similar importance to the same instrumental (job content, working conditions, and compensation) and symbolic (innovativeness, gentleness, and competence) image attributes in judging organizational attractiveness. Second, organizations and industries varied significantly on both instrumental and symbolic image attributes, with job content and innovativeness emerging as the strongest points-of-difference. Third, most image attributes showed greater variation between industries than between organizations, pointing at the importance of studying employer image at the industry level. Implications for recruitment research, employer branding, and best employer competitions are discussed.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
This contains the raw and pre-processed fMRI data and structural images (T1) used in the article "Reading reshapes stimulus selectivity in the visual word form area." A preprint is available, and the article is in press at eNeuro.
Additional processed data and analysis code are available in an OSF repository.
Details about the study are included here.
We recruited 17 participants (age range 19 to 38 years, mean 21.12 ± 4.44; 4 self-identified as male, 1 left-handed) from the Barnard College and Columbia University student body. The study was approved by the Institutional Review Board at Barnard College, Columbia University. All participants provided written informed consent, acquired digitally, and were monetarily compensated for their participation. All participants had learned English before the age of five.
To ensure high data quality, we used the following criteria for excluding functional runs and participants. If the participant moved by a distance greater than 2 voxels (4 mm) within a single run, that run was excluded from analysis. Additionally, if the participant responded in less than 50% of the trials in the main experiment, that run was removed. Finally, if half or more of the runs met any of these criteria for a single participant, that participant was dropped from the dataset. Using these constraints, the analysis reported here is based on data from 16 participants. They ranged in age from 19 to 38 years (mean = 21.12 ± 4.58); 4 participants self-identified as male, and 1 was left-handed. A total of 6 runs were removed from three of the remaining participants due to excessive head motion.
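The exclusion rules above can be summarized as a small screening sketch; the thresholds come from the text, while the function names and example values are hypothetical:

```python
def run_excluded(max_motion_mm, response_rate):
    """A run is excluded if head motion exceeds 2 voxels (4 mm) within the run,
    or if the participant responded on fewer than 50% of main-experiment trials."""
    return max_motion_mm > 4.0 or response_rate < 0.5

def participant_excluded(run_flags):
    """A participant is dropped if half or more of their runs are excluded."""
    return sum(run_flags) >= len(run_flags) / 2

# e.g., a hypothetical participant with 2 of 6 runs excluded is kept
flags = [run_excluded(m, r) for m, r in [(1.2, 0.9), (5.0, 0.9), (3.0, 0.4),
                                         (0.8, 1.0), (1.0, 0.8), (2.0, 0.7)]]
```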
We collected MRI data at the Zuckerman Institute, Columbia University, using a 3T Siemens Prisma scanner and a 64-channel head coil. In each MR session, we acquired a T1-weighted structural scan with 1 mm isotropic voxels. We acquired functional data with a T2*-weighted echo planar imaging sequence with multiband acceleration (SMS3) for whole-brain coverage. The TR was 1.5 s, the TE was 30 ms, and the flip angle was 62°. The voxel size was 2 mm isotropic.
Stimuli were presented on an LCD screen that the participants viewed through a mirror with a viewing distance of 142 cm. The display had a resolution of 1920 by 1080 pixels, and a refresh rate of 60 Hz. We presented the stimuli using custom code written in MATLAB and the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). Throughout the scan, we recorded monocular gaze position using an SR Research Eyelink 1000 tracker. Participants responded with their right hand via three buttons on an MR-safe response pad.
Participants performed three different tasks during different runs, two of which required attending to the character strings, and one that encouraged participants to ignore them. In the lexical decision task, participants reported whether the character string on each trial was a real word or not. In the stimulus color task, participants reported whether the color of the character string was red or gray. In the fixation color task, participants reported whether or not the fixation dot turned red.
On each trial, a single character string flashed for 150 ms at one of three locations (centered at fixation, 3 dva left, or 3 dva right). The stimulus was followed by a blank with only the fixation mark present for 3850 ms, during which the participant had the opportunity to respond with a button press. After every five trials, there was a rest period (no task except to fixate on the dot). The rest period lasted 4, 6, or 8 s (randomly and uniformly selected).
Participants viewed sequences of images, each of which contained 3 items of one category: words, pseudowords, false fonts, faces, and limbs. Participants performed a one-back repetition detection task. On 33% of the trials, the exact same images flashed twice in a row. The participant’s task was to push a button with their right index finger whenever they detected such a repetition. Each participant performed 4 runs of the localizer task. Each run consisted of 77 four-second trials, lasting for approximately 6 minutes. Each category was presented 56 times through the course of the experiment.
The stimuli on each trial were a sequence of 12 written words or pronounceable pseudowords, presented one at a time. The words were presented as meaningful sentences, while pseudowords formed "Jabberwocky" phrases that served as a control condition. Participants were instructed to read the stimuli silently to themselves, and also to push a button upon seeing the icon of a hand that appeared between trials. Participants performed three runs of the language localizer. Each run included 16 trials and lasted for 6 minutes. Each trial lasted for 6 s, beginning with a blank screen for 100 ms, followed by the presentation of 12 words or pseudowords for 450 ms each (5400 ms total), followed by a response prompt for 400 ms and a final blank screen for 100 ms. Each run also included 5 blank trials (6 seconds each).
This repository contains three main folders, complying with BIDS specifications.
- Inputs contain BIDS compliant raw data, with the only change being defacing the anatomicals using pydeface. Data was converted to BIDS format using heudiconv.
- Outputs contain preprocessed data obtained using fMRIPrep. In addition to subject specific folders, we also provide the freesurfer reconstructions obtained using fMRIPrep, with defaced anatomicals. Subject specific ROIs are also included in the label folder for each subject in the freesurfer directory.
- Derivatives contain all additional whole brain analyses performed on this dataset.
License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This contains the raw and pre-processed fMRI data and structural (T1) images used in the article "Reading reshapes stimulus selectivity in the visual word form area." A preprint is available, and the article is in press at eNeuro.
Additional processed data and analysis code are available in an OSF repository.
Details about the study are included below.
We recruited 17 participants (age range 19 to 38 years, mean 21.12 ± 4.44; 4 self-identified as male, 1 left-handed) from the Barnard College and Columbia University student body. The study was approved by the Institutional Review Board at Barnard College, Columbia University. All participants provided written informed consent, acquired digitally, and were monetarily compensated for their participation. All participants had learned English before the age of five.
To ensure high data quality, we used the following criteria for excluding functional runs and participants. If the participant moved by a distance greater than 2 voxels (4 mm) within a single run, that run was excluded from analysis. Additionally, if the participant responded in fewer than 50% of the trials in the main experiment, that run was removed. Finally, if half or more of the runs met any of these criteria for a single participant, that participant was dropped from the dataset. Using these constraints, the analysis reported here is based on data from 16 participants. They ranged in age from 19 to 38 years (mean = 21.12 ± 4.58). 4 participants self-identified as male, and 1 was left-handed. A total of 6 runs were removed from three of the remaining participants due to excessive head motion.
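These exclusion rules can be sketched as a small filter. The thresholds below are the ones stated above; the per-run summary fields (`max_motion_mm`, `response_rate`) and the example values are hypothetical inputs, not the study's data.

```python
# Sketch of the run/participant exclusion rules described above.
# Only the thresholds (4 mm motion, 50% response rate, "half or more
# of runs") come from the text; the data structure is assumed.

def exclude_runs(runs, motion_limit_mm=4.0, min_response_rate=0.5):
    """Return (kept_runs, drop_participant).

    runs: list of dicts with 'max_motion_mm' (peak within-run head
    displacement) and 'response_rate' (fraction of trials answered).
    """
    kept = [r for r in runs
            if r["max_motion_mm"] <= motion_limit_mm
            and r["response_rate"] >= min_response_rate]
    # Drop the participant if half or more of their runs were excluded.
    drop_participant = (len(runs) - len(kept)) >= len(runs) / 2
    return kept, drop_participant

# Invented example: 4 runs, two of which violate a criterion.
runs = [
    {"max_motion_mm": 1.2, "response_rate": 0.95},
    {"max_motion_mm": 5.1, "response_rate": 0.90},  # excessive motion
    {"max_motion_mm": 0.8, "response_rate": 0.40},  # too few responses
    {"max_motion_mm": 1.0, "response_rate": 0.88},
]
kept, drop = exclude_runs(runs)
```

With two of four runs excluded, the "half or more" rule also drops this hypothetical participant.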
We collected MRI data at the Zuckerman Institute, Columbia University, on a 3T Siemens Prisma scanner with a 64-channel head coil. In each MR session, we acquired a T1-weighted structural scan with 1 mm isotropic voxels. We acquired functional data with a T2*-weighted echo planar imaging sequence using simultaneous multi-slice (multiband) acceleration (SMS factor 3) for whole-brain coverage. The TR was 1.5 s, the TE was 30 ms, and the flip angle was 62°. The voxel size was 2 mm isotropic.
Stimuli were presented on an LCD screen that the participants viewed through a mirror with a viewing distance of 142 cm. The display had a resolution of 1920 by 1080 pixels, and a refresh rate of 60 Hz. We presented the stimuli using custom code written in MATLAB and the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). Throughout the scan, we recorded monocular gaze position using an SR Research Eyelink 1000 tracker. Participants responded with their right hand via three buttons on an MR-safe response pad.
Participants performed three different tasks during different runs, two of which required attending to the character strings, and one that encouraged participants to ignore them. In the lexical decision task, participants reported whether the character string on each trial was a real word or not. In the stimulus color task, participants reported whether the color of the character string was red or gray. In the fixation color task, participants reported whether or not the fixation dot turned red.
On each trial, a single character string flashed for 150 ms at one of three locations: centered at fixation, 3 dva left, or 3 dva right. The stimulus was followed by a blank interval of 3850 ms with only the fixation mark present, during which the participant could respond with a button press. After every five trials there was a rest period (no task except fixating the dot) lasting 4, 6, or 8 s (randomly and uniformly selected).
Participants viewed sequences of images; each sequence contained 3 items from one of five categories: words, pseudowords, false fonts, faces, and limbs. Participants performed a one-back repetition detection task: on 33% of trials, the exact same image flashed twice in a row, and the participant pushed a button with their right index finger whenever they detected such a repetition. Each participant performed 4 runs of the localizer task. Each run consisted of 77 four-second trials and lasted approximately 6 minutes. Each category was presented 56 times over the course of the experiment.
The stimuli on each trial were a sequence of 12 written words or pronounceable pseudowords, presented one at a time. The words were presented as meaningful sentences, while pseudowords formed "Jabberwocky" phrases that served as a control condition. Participants were instructed to read the stimuli silently to themselves, and also to push a button upon seeing the icon of a hand that appeared between trials. Participants performed three runs of the language localizer. Each run included 16 trials and lasted for 6 minutes. Each trial lasted 6 s, beginning with a blank screen for 100 ms, followed by the presentation of 12 words or pseudowords for 450 ms each (5400 ms total), followed by a response prompt for 400 ms and a final blank screen for 100 ms. Each run also included 5 blank trials (6 seconds each).
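The stated durations can be checked with simple arithmetic (no assumptions beyond the numbers given in the text):

```python
# Language localizer trial: blank + 12 stimuli + prompt + final blank (ms).
blank, n_words, word_dur, prompt, final_blank = 100, 12, 450, 400, 100
trial_ms = blank + n_words * word_dur + prompt + final_blank
# 100 + 12*450 + 400 + 100 = 6000 ms = 6 s, matching the stated trial length.

# Main-experiment trial: 150 ms stimulus + 3850 ms response blank.
main_trial_ms = 150 + 3850
# 4000 ms = 4 s per trial.
```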
This repository contains three main folders, organized in compliance with the BIDS specification.
- Inputs contains BIDS-compliant raw data; the only change from the raw scans is that the anatomicals were defaced using pydeface. Data were converted to BIDS format using heudiconv.
- Outputs contains preprocessed data obtained using fMRIPrep. In addition to subject-specific folders, we also provide the FreeSurfer reconstructions obtained via fMRIPrep, with defaced anatomicals. Subject-specific ROIs are also included in the label folder for each subject in the freesurfer directory.
- Derivatives contain all additional whole brain analyses performed on this dataset.
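As an illustration of how this layout might be navigated, here is a hypothetical path-building sketch. Only the inputs/outputs split and the per-subject freesurfer label folder come from the description above; the subject label, task name, and ROI filename are invented placeholders following common BIDS/fMRIPrep naming.

```python
from pathlib import Path

# Hypothetical repository root; folder names below the top level are
# plausible BIDS/fMRIPrep conventions, used here only as an example.
root = Path("repository")
sub = "sub-01"  # invented subject label

# Raw (defaced) anatomical in the inputs folder.
raw_anat = root / "inputs" / sub / "anat" / f"{sub}_T1w.nii.gz"

# fMRIPrep-preprocessed functional run in the outputs folder.
preproc = (root / "outputs" / sub / "func" /
           f"{sub}_task-main_run-1_desc-preproc_bold.nii.gz")

# Subject-specific ROI in the freesurfer label folder (ROI name invented).
roi_label = (root / "outputs" / "freesurfer" / sub / "label" /
             "lh.VWFA.label")
```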
License: Attribution 4.0 International (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
While art is omnipresent in human history, the neural mechanisms of how we perceive, value and differentiate art have only begun to be explored. Functional magnetic resonance imaging (fMRI) studies have suggested that art acts as a secondary reward, involving brain activity in the ventral striatum and prefrontal cortices similar to primary rewards such as food. However, potential similarities or unique characteristics of art-related neuroscience (or neuroesthetics) remain elusive, partly because of a lack of adequate experimental tools: the available collections of art stimuli often lack standard image definitions and normative ratings. We therefore provide a large set of well-characterized, novel art images for use as visual stimuli in psychological and neuroimaging research. The stimuli were created using a deep learning algorithm that applied the styles of popular paintings (based on artists such as Klimt or Hundertwasser) to ordinary animal, plant and object images drawn from established visual stimuli databases. The novel stimuli represent mundane items with artistic properties and reduced dimensionality and complexity compared to paintings. In total, 2,332 novel stimuli are available open access as the "art.pics" database at https://osf.io/BTWNQ/, with standard image characteristics that are comparable to other common visual stimuli material in terms of size, variable color distribution, complexity, intensity and valence, as measured by image software analysis and by ratings derived from a human experimental validation study [n = 1,296 (684f), age 30.2 ± 8.8 y.o.]. The validation study further showed that the art.pics elicit a broad and significantly different variation in subjective value ratings (i.e., liking and wanting) as well as in recognizability, arousal and valence across different art styles and categories.
Researchers are encouraged to study the perception, processing and valuation of art images based on the art.pics database, which also enables real reward remuneration of the rated stimuli (as art prints) and a direct comparison to other rewards from, e.g., food or money.
Key Messages: We provide an open-access, validated and large set of novel stimuli (n = 2,332) of standardized art images, including normative rating data, to be used for experimental research. Reward remuneration in experimental settings can be easily implemented for the art.pics by, e.g., handing out the stimuli to participants (as prints on premium paper or in a digital format), as done in the presented validation task. Experimental validation showed that the art.pics images elicit a broad and significantly different variation in subjective value ratings (i.e., liking, wanting) across different art styles and categories, while size, color and complexity characteristics remained comparable to other visual stimuli databases.
License: Attribution 4.0 International (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Supplementary materials for analysis of Twitter activity during BTS livestream concerts in 2021. This is a compressed set of csv files: depersonalized tweet datasets, a subset of tweets with content codes, and setlist timing for the four concerts. Descriptions of data preparation and analysis are on GitHub: https://github.com/finn42/Concert_Twt_Open. Also included are a descriptive pdf of supplementary materials and figures of secondary analysis of Corona Concert Survey data (https://osf.io/skg7h/).
License: Attribution 4.0 International (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
The northern part of Ghana lies within the African meningitis belt and has historically experienced seasonal meningitis outbreaks. Despite these recurrent outbreaks, the risk factors contributing to their occurrence have not been clearly identified. This study therefore sought to describe the clinical characteristics and possible risk factors associated with meningitis outbreaks in the Upper West Region (UWR). A 1:2 matched case-control study was conducted in May-December 2021 to retrospectively investigate possible risk factors for the meningitis outbreak in the UWR of Ghana between January and December 2020. Cases were persons with laboratory-confirmed meningitis, and controls were persons of similar age and sex without meningitis living in the same house or neighborhood as a confirmed case. Both primary and secondary data, including clinical, socio-demographic and laboratory information, were collected and entered on standard questionnaires. Data were analyzed using descriptive statistics and conditional logistic regression. Meningitis cases were mostly due to Streptococcus pneumoniae (67/98; 68.37%), followed by Neisseria meningitidis serogroup X (27/98; 27.55%). Fever occurred in 94.03% (63/67) of Streptococcus pneumoniae cases and in 100% of both Neisseria meningitidis serogroup X (27/27) and Neisseria meningitidis serogroup W (3/3) cases. CSF white cell count was significantly associated with the causative agents of meningitis. Conditional logistic regression analysis showed that passive exposure to tobacco [AOR = 3.65, 95%CI = 1.03–12.96], bedrooms with 3 or more occupants [AOR = 4.70, 95%CI = 1.48–14.89] and sore throat infection [AOR = 8.97, 95%CI = 2.73–29.43] were independent risk factors for meningitis infection. Headache, fever and neck pain continue to be the most common symptoms reported by meningitis patients.
Education and other preventive interventions targeting exposure to tobacco smoke and crowded rooms would be helpful in reducing meningitis outbreaks in the Upper West Region of Ghana.
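For readers unfamiliar with how effect estimates like the AORs above are reported: a crude (unadjusted) odds ratio with a Wald 95% CI can be computed from a 2×2 exposure table as sketched below. This is a simplified illustration with invented counts; the study itself fitted a conditional logistic regression on the matched sets, which properly accounts for the 1:2 matching and adjusts for covariates.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases,    b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    SE of log(OR) = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented illustrative counts, not the study's data.
or_, lo, hi = odds_ratio_ci(a=30, b=68, c=40, d=156)
```

A CI whose lower bound stays above 1 is what marks an exposure as a statistically significant risk factor, as for the three exposures reported above.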