62 datasets found
  1. dataset to accompany Managing COVID-19 spread with voluntary public-health...

    • zenodo.org
    • explore.openaire.eu
    bin
    Updated Jun 18, 2020
    Cite
    Peter Kasson; Peter Kasson; S.C.L. Kamerlin; S.C.L. Kamerlin (2020). dataset to accompany Managing COVID-19 spread with voluntary public-health measures: Sweden as a case study for pandemic control [Dataset]. http://doi.org/10.5281/zenodo.3836195
    Explore at:
    bin (available download formats)
    Dataset updated
    Jun 18, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Peter Kasson; Peter Kasson; S.C.L. Kamerlin; S.C.L. Kamerlin
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Area covered
    Sweden
    Description

    Data from individual-based model simulations of COVID-19 spread in Sweden under different public-health measures.

  2. COVID-19 Measures Dataset (All World)

    • kaggle.com
    Updated Jan 23, 2021
    + more versions
    Cite
    Mesum Raza Hemani (2021). COVID-19 Measures Dataset (All World) [Dataset]. https://www.kaggle.com/mesumraza/covid19-measures-dataset-all-world/discussion
    Explore at:
    Croissant (available download format). Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Jan 23, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Mesum Raza Hemani
    Area covered
    World
    Description

    Context


    The COVID-19 Government Measures Dataset puts together all the measures implemented by governments worldwide in response to the Coronavirus pandemic. Data collection includes secondary data review. The researched information available falls into five categories:

    Social distancing, movement restrictions, public health measures, social and economic measures, and lockdowns.

    Content

    Last updated 10/12/2020. The #COVID19 Government Measures Dataset puts together all the measures implemented by governments worldwide in response to the coronavirus pandemic. Data collection includes secondary data review. The researched information falls into five categories: social distancing, movement restrictions, public health measures, social and economic measures, and lockdowns. Each category is broken down into several types of measures.

    Columns: ID, ISO, COUNTRY, REGION, ADMIN_LEVEL_NAME, PCODE, LOG_TYPE, CATEGORY, MEASURE_TYPE, TARGETED_POP_GROUP, COMMENTS, NON_COMPLIANCE, DATE_IMPLEMENTED, SOURCE, SOURCE_TYPE, LINK, ENTRY_DATE, ALTERNATIVE SOURCE


  3. Predicting Epidemic Risk from Past Temporal Contact Data

    • plos.figshare.com
    zip
    Updated Jun 4, 2023
    Cite
    Eugenio Valdano; Chiara Poletto; Armando Giovannini; Diana Palma; Lara Savini; Vittoria Colizza (2023). Predicting Epidemic Risk from Past Temporal Contact Data [Dataset]. http://doi.org/10.1371/journal.pcbi.1004152
    Explore at:
    zip (available download formats)
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Eugenio Valdano; Chiara Poletto; Armando Giovannini; Diana Palma; Lara Savini; Vittoria Colizza
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Understanding how epidemics spread in a system is a crucial step to prevent and control outbreaks, with broad implications for the system's functioning, health, and associated costs. This can be achieved by identifying the elements at higher risk of infection and implementing targeted surveillance and control measures. One important ingredient to consider is the pattern of disease-transmission contacts among the elements; however, a lack of data or delays in providing updated records may hinder its use, especially for time-varying patterns. Here we explore to what extent it is possible to use past temporal data of a system's pattern of contacts to predict the risk of infection of its elements during an emerging outbreak, in the absence of updated data. We focus on two real-world temporal systems: a livestock trade network of displacements among animal holdings, and a network of sexual encounters in high-end prostitution. We define a node's loyalty as a local measure of its tendency to maintain contacts with the same elements over time, and uncover important non-trivial correlations with the node's epidemic risk. We show that a risk assessment analysis incorporating this knowledge and based on past structural and temporal pattern properties provides accurate predictions for both systems. Its generalizability is tested by introducing a theoretical model for generating synthetic temporal networks. High accuracy of our predictions is recovered across different settings, while the amount of possible predictions is system-specific. The proposed method can provide crucial information for the setup of targeted intervention strategies.

  4. Metallicity spread and dispersion among P1 stars - Dataset - B2FIND

    • b2find.eudat.eu
    Updated Aug 17, 2025
    Cite
    (2025). Metallicity spread and dispersion among P1 stars - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/e2d1184b-74b8-5551-9fb8-d0794b4b3f75
    Explore at:
    Dataset updated
    Aug 17, 2025
    Description

    Multiple populations are ubiquitous in the old massive globular clusters (GCs) of the Milky Way. It is still unclear how they arose during the formation of a GC. The topic of iron and metallicity variations has recently attracted attention with the measurement of iron variations among the primordial population (P1) stars of Galactic GCs. We use the spectra of more than 8000 RGB stars in 21 Galactic GCs observed with MUSE to derive individual stellar metallicities [M/H]. For each cluster, we use the HST photometric catalogs to separate the stars into two main populations (P1 and P2). We measure the metallicity spread within the primordial population of each cluster by combining our metallicity measurements with the stars' ΔF275W,F814W pseudo-color. We also derive metallicity dispersions (σ[M/H]) for the P1 and P2 stars of each GC. In all but three GCs, we measure a significant correlation between the metallicity and the ΔF275W,F814W pseudo-color of the P1 stars, such that stars with larger ΔF275W,F814W have higher metallicities. We measure metallicity spreads that range from 0.03 to 0.24 dex and correlate with the GC masses. As for the intrinsic metallicity dispersions, when combining the P1 and P2 stars, we measure values ranging from 0.02 dex to 0.08 dex that correlate very well with the GC masses. We compared the metallicity dispersions among the P1 and P2 stars and found that the P2 stars have metallicity dispersions that are smaller than or equal to those of the P1 stars. We find that both the metallicity spreads of the P1 stars (from the ΔF275W,F814W spread in the chromosome maps) and the metallicity dispersions (σ[M/H]) correlate with the GC masses, as predicted by some theoretical self-enrichment models presented in the literature.

  5. Globular cluster intrinsic iron abundance spreads - Dataset - B2FIND

    • b2find.eudat.eu
    Updated Apr 28, 2023
    Cite
    (2023). Globular cluster intrinsic iron abundance spreads - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/b313307c-7660-505c-81fa-687c03cefee9
    Explore at:
    Dataset updated
    Apr 28, 2023
    Description

    We present an up-to-date catalog of intrinsic iron abundance spreads in the 55 Milky Way globular clusters (GCs) for which sufficiently precise spectroscopic measurements are available. Our method combines multiple data sets when possible to improve the statistics, taking into account the fact that different methods and instruments can lead to systematically offset metallicities. Only high spectral resolution (R>14000) studies that measure the equivalent widths of individual iron lines are found to have uncertainties on the metallicities of the individual stars that can be calibrated sufficiently well for the intrinsic dispersion to be separated cleanly from random measurement error. The median intrinsic iron spread is found to be 0.045 dex, which is small but unambiguously measured to be nonzero in most cases. There is large variation between clusters, but more luminous GCs, above 10^5 L_sun, have increasingly large iron spreads on average; no trend between the iron spread and metallicity is found. Cone search capability is provided for table J/ApJS/245/5/table1 (derived dispersions σ0 and average metallicity [Fe/H] for each cluster).

  6. Lockdown data-V6.0.csv

    • psycharchives.org
    Updated Jun 4, 2020
    Cite
    (2020). Lockdown data-V6.0.csv [Dataset]. https://www.psycharchives.org/en/item/8a0c3db3-d4bf-46dd-8ffc-557430d45ddd
    Explore at:
    Dataset updated
    Jun 4, 2020
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    The outbreak of the COVID-19 pandemic has prompted the German government and the 16 German federal states to announce a variety of public health measures in order to suppress the spread of the coronavirus. These non-pharmaceutical measures were intended to curb transmission rates by increasing social distancing (i.e., diminishing interpersonal contacts), which restricts a range of individual behaviors. The measures span moderate recommendations such as physical distancing up to the closure of shops and bans on gatherings and demonstrations. The implementation of these measures is not only a research goal in itself but also has implications for behavioral research conducted in this time (e.g., in the form of potential confounding biases). Hence, longitudinal data that represent the measures can be a fruitful data source. The presented data set contains data on 14 governmental measures across the 16 German federal states. In comparison to existing datasets, the data set at hand is a fine-grained daily time series tracking the effective calendar date, introduction, extension, or phase-out of each respective measure. Based on self-regulation theory, measures were coded according to whether they did not restrict, partially restricted, or fully restricted the respective behavioral pattern. The time frame comprises March 08, 2020 until May 15, 2020. The project is an open-source, ongoing project with planned continued updates at regular (approximately monthly) intervals. New variables include restrictions on travel and gastronomy. The variable trvl (travel) comprises the following categories: fully restricted (=2), reflecting a potential general ban on travel within Germany (except for sound reasons such as health or business); partially restricted (=1), travel is allowed but may be restricted through prohibition of accommodation or entry bans for certain groups (e.g., people from risk areas); free (=0), no travel or accommodation restrictions in place.
    The variable gastr (gastronomy) comprises: fully restricted (=2), closure of restaurants or bars; partially restricted (=1), only take-away or food delivery services are allowed; free (=0), restaurants are allowed to open without restrictions. Further, the variables msk (recommendations to wear a mask) and zoo (restrictions of zoo visits) have been adjusted.
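    The 0/1/2 coding described above lends itself to a simple decoding step. A minimal pandas sketch: the file name is taken from this record, while the example rows and the "state"/"date" column names are assumptions for illustration.

    ```python
    # Sketch: decode the 0/1/2 restriction levels described above into labels.
    # The CSV file name comes from this record; the example rows and the
    # "state"/"date" column names are hypothetical.
    import pandas as pd

    levels = {0: "free", 1: "partially restricted", 2: "fully restricted"}

    # In practice: df = pd.read_csv("Lockdown data-V6.0.csv")
    df = pd.DataFrame({
        "state": ["Bayern", "Berlin"],
        "date": ["2020-03-22", "2020-03-22"],
        "trvl": [1, 2],   # travel: partially / fully restricted
        "gastr": [2, 2],  # gastronomy: fully restricted
    })

    for col in ("trvl", "gastr"):
        df[col + "_label"] = df[col].map(levels)
    ```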

  7. Spectral dataset of daylights and surface properties of natural objects...

    • zenodo.org
    bin, csv
    Updated Aug 28, 2024
    + more versions
    Cite
    Takuma Morimoto; Takuma Morimoto; Cong Zhang; Kazuho Fukuda; Keiji Uchikawa; Cong Zhang; Kazuho Fukuda; Keiji Uchikawa (2024). Spectral dataset of daylights and surface properties of natural objects measured in Japan [Dataset]. http://doi.org/10.5281/zenodo.5217752
    Explore at:
    csv, bin (available download formats)
    Dataset updated
    Aug 28, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Takuma Morimoto; Takuma Morimoto; Cong Zhang; Kazuho Fukuda; Keiji Uchikawa; Cong Zhang; Kazuho Fukuda; Keiji Uchikawa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Japan
    Description

    This is a spectral dataset of natural objects and daylights collected in Japan.

    We collected 359 natural objects and measured the reflectance of all objects and the transmittance of 75 leaves. We also measured daylight from dawn till dusk on four different days using a white plate placed (i) under the direct sun and (ii) in cast shadow (359 measurements in total). We also separately measured daylight at five different locations (including a sports ground, a space between tall buildings, and a forest) at minimal time intervals to reveal the influence of the surrounding environment on the spectral composition of daylight reaching the ground (118 measurements in total).

    If you use this dataset in your research, please cite the following publication.

    Morimoto, T., Zhang, C., Fukuda, K., & Uchikawa, K. (2022). Spectral measurement of daylights and surface properties of natural objects in Japan. Optics express, 30(3), 3183. https://doi.org/10.1364/OE.441063

    The dataset contains the following Excel spreadsheets and CSV files:

    (A) Surface properties of natural objects

    (A-1) Reflectance_ver1-2.xlsx and .csv

    (A-2) Transmittance_FrontSideUp_ver1-2.xlsx and .csv

    (A-3) Transmittance_BackSideUp_ver1-2.xlsx and .csv

    (B) Daylight measurements

    (B-1) Daylight_TimeLapse_v1-2.xlsx and .csv

    (B-2) Daylight_DifferentLocations_v1-2.xlsx and .csv

    Data description

    (A) Surface properties

    (A-1) Reflectance_ver1-2.xlsx and .csv

    This file contains surface spectral reflectance data (380 - 780 nm, 5 nm step) of 359 natural objects, including 200 flowers, 113 leaves, 23 fruits, 6 vegetables, 8 barks, and 9 stones measured by a spectrophotometer (SR-2A, Topcon, Tokyo, Japan). Photos of all samples are included in the .xlsx file.

    For the analysis presented in the paper, we identified reflectance pairs with a Pearson correlation coefficient across 401 spectral channels of more than 0.999 and removed one reflectance from each pair. The column 'Used in analysis' indicates whether each sample was used in the analysis (TRUE indicates used; FALSE indicates not used).
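    The pair-filtering step described above can be sketched as follows; the three synthetic spectra are stand-ins for the real reflectance data, while the 0.999 threshold is the one stated in the description.

    ```python
    # Sketch of the near-duplicate filtering described above: compute pairwise
    # Pearson correlations between spectra and drop one member of every pair
    # with r > 0.999. The three synthetic spectra below stand in for the real
    # reflectance data (401 spectral channels, as in the paper's analysis).
    import numpy as np

    rng = np.random.default_rng(0)
    base = rng.random(401)
    spectra = np.stack([base, base + 1e-4, rng.random(401)])  # rows = samples

    r = np.corrcoef(spectra)  # pairwise Pearson correlation matrix
    keep = np.ones(len(spectra), dtype=bool)
    for i in range(len(spectra)):
        if not keep[i]:
            continue
        for j in range(i + 1, len(spectra)):
            if keep[j] and r[i, j] > 0.999:
                keep[j] = False  # drop the later member of each pair

    # keep == [True, False, True]: the offset copy of `base` is removed
    ```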

    At the time of collection, we noted the scientific names of flowers, leaves, and barks from name boards provided by the Tokyo Institute of Technology, where the samples were collected. When a name board was not available, we used a smartphone application that automatically identifies the scientific name from an input image (PictureThis - Plant Identifier, developed by Glority Global Group Ltd.). The names of 2 flowers and 9 stones that could not be identified through either method were left blank.

    (A-2) Transmittance_FrontSideUp_v1-2.xlsx and .csv

    This file contains surface spectral transmittance data (380 - 780 nm, 5 nm step) for 75 leaves measured by a spectrophotometer (SR-2A, Topcon, Tokyo, Japan). Photos of all samples are included in the .xlsx file.

    For this data, the transmittance was measured with the front-side of leaves up (the light was transmitted from the back side of the leaves). This is the data presented in the associated article.

    (A-3) Transmittance_BackSideUp_v1-2.xlsx and .csv

    Spectral transmittance data of the same leaves presented in (A-2).

    For this data, the transmittance was measured with the back-side of leaves up (the light was transmitted from the front side of the leaves).

    (B) Daylight measurements

    (B-1) Daylight_TimeLapse_ver1-2.xlsx and .csv

    This file contains daylight spectra from sunrise to sunset on four different days (2013/11/20, 2013/12/24, 2014/07/03 and 2014/10/27) measured by a spectrophotometer (SR-LEDW, Topcon, Tokyo, Japan) with a wavelength range from 380 nm to 780 nm in 1 nm steps. We measured the light reflected from a white calibration plate placed either under direct sunlight or in cast shadow.

    The column 'Cloud cover' provides a visual estimate of the percentage of cloud cover across the sky at the time of each measurement. The column 'Red lamp' indicates whether an aircraft warning lamp at the measurement site was on (circle) or off (blank).

    (B-2) Daylight_DifferentLocations_ver1-2.xlsx and .csv

    This file includes daylight spectra measured at five different sites within the Suzukakedai Campus of Tokyo Institute of Technology with minimal time gaps on 2014/07/08, using a spectroradiometer (IM-1000, Topcon) covering 380 nm to 780 nm in 1 nm steps. The instrument was oriented either towards the sun or towards the zenith sky. When oriented towards the sun, we measured spectra in two ways: (i) with a black cylinder covering the photodetector and (ii) without the cylinder.

    The column 'Cylinder' indicates whether the black cylinder was used (circle) or not (cross). The column 'Cloud cover' shows a visual estimate of the percentage of cloud cover at the time of each measurement. The column 'Sun hidden in clouds' denotes whether the measurement was taken while the sun was covered by clouds (circle) or not (blank).

  8. Data from: Variation in trends of consumption based carbon accounts

    • zenodo.org
    • data.europa.eu
    bin, csv
    Updated Jan 24, 2020
    Cite
    Richard Wood; Richard Wood; Daniel Moran; Daniel Moran; Konstantin Stadler; Konstantin Stadler; João F. D. Rodrigues; João F. D. Rodrigues (2020). Variation in trends of consumption based carbon accounts [Dataset]. http://doi.org/10.5281/zenodo.3187310
    Explore at:
    bin, csv (available download formats)
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Richard Wood; Richard Wood; Daniel Moran; Daniel Moran; Konstantin Stadler; Konstantin Stadler; João F. D. Rodrigues; João F. D. Rodrigues
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In this work we present results of all the major global models and normalise the model results by looking at changes over time relative to a common base year value.
    We give an analysis of the variability across the models, both before and after normalisation, in order to give insights into variance at the national and regional level.
    A dataset of harmonised results (based on means) and measures of dispersion is presented, providing a baseline dataset for CBCA validation and analysis.

    The dataset is intended as a go-to dataset for country and regional results of consumption- and production-based accounts. The normalised mean for each country/region is the principal result, which can be used to assess the magnitude and trend of the emission accounts. An additional key element of the dataset, however, is the set of measures of robustness and spread of the results across the source models. These metrics give insight into how much trust should be placed in the individual country/region results.

    Code at https://doi.org/10.5281/zenodo.3181930

  9. Covid-19 Highest City Population Density

    • kaggle.com
    Updated Mar 25, 2020
    Cite
    lookfwd (2020). Covid-19 Highest City Population Density [Dataset]. https://www.kaggle.com/lookfwd/covid19highestcitypopulationdensity/tasks
    Explore at:
    Croissant (available download format). Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Mar 25, 2020
    Dataset provided by
    Kaggle
    Authors
    lookfwd
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Context

    This is a dataset of the most highly populated city (if applicable) in a form easy to join with the COVID19 Global Forecasting (Week 1) dataset. You can see how to use it in this kernel.

    Content

    There are four columns. The first two correspond to the columns from the original COVID19 Global Forecasting (Week 1) dataset. The other two are the highest population density, at city level, for the given country/state. Note that some countries are very small, and in those cases the population density reflects the entire country. Since the original dataset includes a few cruise ships as well, I've added them there.
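    The join described above can be sketched with pandas. All column names and figures below are illustrative assumptions, not taken from the actual files; only the join-on-two-columns idea reflects the description.

    ```python
    # Hypothetical sketch of joining a density table onto the COVID19 Global
    # Forecasting (Week 1) data on the shared country/state columns.
    # Column names and numbers are made up for illustration.
    import pandas as pd

    forecast = pd.DataFrame({
        "Country/Region": ["Italy", "US", "Monaco"],
        "Province/State": ["", "New York", ""],
        "ConfirmedCases": [9172, 729, 7],
    })

    density = pd.DataFrame({
        "Country/Region": ["Italy", "US", "Monaco"],
        "Province/State": ["", "New York", ""],
        "HighestCityDensity": [7700, 10716, 19009],  # Monaco is so small that
                                                     # the figure covers the
                                                     # whole country
    })

    merged = forecast.merge(density, on=["Country/Region", "Province/State"],
                            how="left")
    ```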

    Acknowledgements

    Thanks a lot to Kaggle for this competition that gave me the opportunity to look closely at some data and understand this problem better.

    Inspiration

    Summary: I believe that the square root of the population density should relate to the logistic growth factor of the SIR model. I think the SEIR model isn't applicable due to any intervention being too late for a fast-spreading virus like this, especially in places with dense populations.
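    The SIR model mentioned above can be sketched with a simple Euler integration; the parameter values are illustrative, not fitted to any data.

    ```python
    # Minimal SIR sketch (Euler steps). In the early phase, infections grow
    # roughly like exp((beta - gamma) * t); the author's conjecture is that
    # the growth factor relates to the square root of population density.
    # beta and gamma below are illustrative values, not estimates.
    def sir(beta, gamma, s0=0.999, i0=0.001, days=160):
        s, i, r = s0, i0, 0.0
        infected = []
        for _ in range(days):
            new_inf = beta * s * i   # new infections this day
            new_rec = gamma * i      # new recoveries this day
            s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
            infected.append(i)
        return infected

    infected = sir(beta=0.3, gamma=0.1)       # basic reproduction number R0 = 3
    peak_day = infected.index(max(infected))  # epidemic peaks, then declines
    ```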

    After playing with the data provided in COVID19 Global Forecasting (Week 1) (and everything else online or media) a bit, one thing becomes clear. They have nothing to do with epidemiology. They reflect sociopolitical characteristics of a country/state and, more specifically, the reactivity and attitude towards testing.

    The testing method used (PCR tests) means that what we measure could potentially be a proxy for the number of people infected during the last 3 weeks, i.e., the growth (with lag). It is not how many people have been infected and recovered. Antibody or serology tests would measure that, and by using them we could go back to normality faster... but those will arrive too late. Well before then, China will have experimentally shown that it is safe to go back to normal as soon as the number of newly infected per day is close to zero.


    My view, as a person living in NYC, is that by the time governments react to media pressure, to lock down or even test, it is too late. In dense areas, everyone susceptible has already had ample opportunity to be infected. Especially for a virus with a 5-14 day lag between infection and symptoms, a period during which hosts spread it all over the subway, the conditions are hopeless. Active populations have already been exposed, mostly asymptomatically, and recovered. Sensitive/older populations are more self-isolated/careful in affluent societies (maybe this isn't the case in northern Italy). As the virus finishes exploring the active population, it starts penetrating the more isolated ones. At this point in time, the first fatalities happen. Then testing starts. Then the media and the lockdown. Lockdown seems overly effective because it coincides with the tail of the disease spread. It helps slow the virus's exploration of the long tail of the sensitive population, and we should all contribute by doing it, but it doesn't cause the end of the disease. If it did, then as soon as people were back in the streets (see China), there would be repeated outbreaks.

    Smart politicians will test a lot because it will make their situation look worse. It helps them demand more resources. At the same time, they will have a low rate of fatalities due to the large denominator. They can take credit for managing a disproportionately major crisis well, in contrast to those who didn't test.

    We were lucky this time. We, Westerners, have woken up to the potential of a pandemic. I'm sure we will give further resources for prevention. Additionally, we will be more open-minded, helping politicians to have more direct responses. We will also require them to be more responsible in their messages and reactions.

  10. Measuring Monographs - Dataset - B2FIND

    • b2find.dkrz.de
    • b2find.eudat.eu
    Updated Aug 10, 2025
    + more versions
    Cite
    (2025). Measuring Monographs - Dataset - B2FIND [Dataset]. https://b2find.dkrz.de/dataset/53812f6e-f141-5c8d-899b-d53289995ff1
    Explore at:
    Dataset updated
    Aug 10, 2025
    Description

    In the Humanities and Social Sciences (HSS), the monograph is an important means of communicating scientific results. As in the STM fields, the value of scholarly output needs to be assessed. This is done with bibliometric measures and qualitative methods. Bibliometric measures based on articles do not function well in HSS, where monographs are the norm. The qualitative methods, which take several stakeholders into account, are labour-intensive, and the results depend on self-assessment by the respondents, which may introduce bias. In the case of the humanities, the picture becomes even less clear due to uncertainties about the stakeholders. This dataset consists of over 25,000 downloads by more than 1,500 providers, spread over 859 monographs.

  11. Learning conditions during COVID-19 Students (SUF edition) - Dataset -...

    • b2find.eudat.eu
    Updated Oct 6, 2024
    + more versions
    Cite
    (2024). Learning conditions during COVID-19 Students (SUF edition) - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/27ac5f95-6804-596d-bdc6-b730897b38b9
    Explore at:
    Dataset updated
    Oct 6, 2024
    Description

    Full edition for scientific use. This dataset consists of five separate datafiles representing three measurement points and two types of methods (cross-sectional and longitudinal) for the project "Lernen unter COVID-19-Bedingungen Studierende" [Learning conditions during COVID-19: Students] of the University of Vienna. Measurement point 1 (cross-sectional and longitudinal) contains 6074 data records of students attending higher education in Austria and was surveyed in March/April 2020. Measurement point 2 (cross-sectional) contains 3732 data records of the same target group, and Measurement point 2 (longitudinal) contains 1819 data records; both were surveyed in April/May 2020. Measurement point 3 (cross-sectional) contains 661 data records and Measurement point 3 (longitudinal) contains 1386 data records; both were surveyed in June 2020. The dataset contains sociodemographic variables as well as items that can be used to operationalize positive emotion, intrinsic learning motivation, competence, autonomy, social relatedness, engagement, perseverance, gender role self-concept, procrastination, and self-regulated learning (SRL) in terms of goal setting and planning, time management, and metacognition. Furthermore, the dataset contains information on changes in these variables over time, variables measuring the degree to which students are informed about COVID-19 measures, as well as items that explore the perception of measures implemented to contain the spread of the coronavirus.

  12. Coronavirus Panoply.io for Database Warehousing and Post Analysis using...

    • data.mendeley.com
    Updated Feb 4, 2020
    + more versions
    Cite
    Pranav Pandya (2020). Coronavirus Panoply.io for Database Warehousing and Post Analysis using Sequal Language (SQL) [Dataset]. http://doi.org/10.17632/4gphfg5tgs.2
    Explore at:
    Dataset updated
    Feb 4, 2020
    Authors
    Pranav Pandya
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    It has never been easier to solve database-related problems with SQL, and the following gives you an opportunity to see how I worked out some of the relationships within the data using the Panoply.io tool.

    I was able to insert the coronavirus dataset and create a submittable, reusable result. I hope it helps you work in a data-warehouse environment.

    The following is a list of SQL commands performed on the dataset attached below, with the final output stored in the Exports folder.

    Query 1
    SELECT "Province/State" AS "Region", Deaths, Recovered, Confirmed FROM "public"."coronavirus_updated" WHERE Recovered>(Deaths/2) AND Deaths>0

    Description: How do we find places where the coronavirus has infiltrated but patients are recovering effectively? We can view the places whose recoveries exceed half the death toll.

    Query 2
    SELECT country, sum(confirmed) as "Confirmed Count", sum(Recovered) as "Recovered Count", sum(Deaths) as "Death Toll" FROM "public"."coronavirus_updated" WHERE Recovered>(Deaths/2) AND Confirmed>0 GROUP BY country

    Description: This aggregates the confirmed, recovered, and death counts per country, restricted to rows with confirmed cases where recoveries exceed half the deaths.

    Query 3
    SELECT country as "Countries where Coronavirus has reached" FROM "public"."coronavirus_updated" WHERE confirmed>0 GROUP BY country

    Description: The coronavirus epidemic has infiltrated multiple countries, and the only way to be safe is to know which countries have confirmed coronavirus cases. Here is a list of those countries.

    Query 4
    SELECT country, sum(suspected) as "Suspected Cases under potential CoronaVirus outbreak" FROM "public"."coronavirus_updated" WHERE suspected>0 AND deaths=0 AND confirmed=0 GROUP BY country ORDER BY sum(suspected) DESC

    Description: The coronavirus is spreading at an alarming rate. Knowing which countries are newly encountering the virus is important, because if timely measures are taken in those countries, casualties can be prevented. Here is a list of suspected cases with no deaths from the virus.

    Query 5
    SELECT country, sum(suspected) as "Coronavirus uncontrolled spread count and human life loss", 100*sum(suspected)/(SELECT sum((suspected)) FROM "public"."coronavirus_updated") as "Global suspected Exposure of Coronavirus in percentage" FROM "public"."coronavirus_updated" WHERE suspected>0 AND deaths=0 GROUP BY country ORDER BY sum(suspected) DESC

    Description: The coronavirus is getting stronger in particular countries, but how do we measure that? We can measure it as the percentage of suspected patients in countries that still do not have any coronavirus-related deaths. The following is a list.
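    Queries of this shape can be reproduced locally with SQLite instead of Panoply.io. A sketch for Query 1: the table and column names follow the queries above (the "public" schema prefix is dropped, since SQLite has no schemas), while the sample rows are invented for illustration.

    ```python
    # Sketch: run Query 1 from above against an in-memory SQLite database.
    # Table and column names follow the queries; the rows are made-up examples.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE coronavirus_updated
                   (region TEXT, country TEXT, confirmed INTEGER,
                    deaths INTEGER, recovered INTEGER, suspected INTEGER)""")
    con.executemany("INSERT INTO coronavirus_updated VALUES (?,?,?,?,?,?)", [
        ("Hubei", "China", 22112, 618, 817, 0),
        ("Guangdong", "China", 970, 0, 69, 0),
        ("", "Thailand", 25, 0, 5, 2),
    ])

    # Query 1: regions with deaths where recoveries exceed half the death toll
    rows = con.execute("""SELECT region, deaths, recovered, confirmed
                          FROM coronavirus_updated
                          WHERE recovered > (deaths / 2) AND deaths > 0""").fetchall()
    # rows == [('Hubei', 618, 817, 22112)]
    ```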

    Data Provided by: SRK, Data Scientist at H2O.ai, Chennai, India

  13. Measures to mitigate the spread of COVID-19 in Switzerland

    • data.niaid.nih.gov
    Updated Apr 14, 2020
    Maria Bekker-Nielsen Dunbar (2020). Measures to mitigate the spread of COVID-19 in Switzerland [Dataset]. https://data.niaid.nih.gov/resources?id=ZENODO_3749746
    Explore at:
    Dataset updated
    Apr 14, 2020
    Dataset provided by
    Nicolo Lardelli
    Simone Baffelli
    Fabienne Krauer
    Muriel Buri
    Johannes Bracher
    Jonas Oesch
    Maria Bekker-Nielsen Dunbar
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Switzerland
    Description

    Since February 25, 2020, Switzerland has been affected by COVID-19. Modelling predictions show that this pandemic will not stop on its own and that stringent mitigation strategies are needed. Switzerland has implemented a series of measures at both the cantonal and federal level. On March 16, 2020 the Federal Council of Switzerland declared an “extraordinary situation” and introduced a series of stringent measures. These include the closure of schools, restaurants, bars, businesses involving close contact (e.g. hairdressers), and entertainment or leisure facilities. Incoming cross-border mobility from specific countries is also restricted to Swiss citizens, residency holders, or work commuters. As of March 20, 2020, mass gatherings of more than five people are also banned. Already in early March, various cantons had started to ban events of various sizes and had restricted or banned access to short- and long-term care facilities and day care centers.

    The aim of this project is to collect and categorize the control measures implemented and to provide a continuously updated dataset that can be used for modelling or visualization purposes. Please use the newest version available.

    We collect the date/duration and level of the most important measures taken in response to COVID-19 from official cantonal and federal press releases. A description of the measures and levels, as well as the newest version of the dataset, can be found here.

  14. TROPESS Chemical Reanalysis NO2 Spread 6-Hourly 3-dimensional Product V1...

    • catalog.data.gov
    • datasets.ai
    • +2more
    Updated Jul 3, 2025
    + more versions
    NASA/GSFC/SED/ESD/TISL/GESDISC (2025). TROPESS Chemical Reanalysis NO2 Spread 6-Hourly 3-dimensional Product V1 (TRPSCRNO2S6H3D) at GES DISC [Dataset]. https://catalog.data.gov/dataset/tropess-chemical-reanalysis-no2-spread-6-hourly-3-dimensional-product-v1-trpscrno2s6h3d-at-85c38
    Explore at:
    Dataset updated
    Jul 3, 2025
    Dataset provided by
    NASAhttp://nasa.gov/
    Description

    The TROPESS Chemical Reanalysis NO2 Spread 6-Hourly 3-dimensional Product contains the nitrogen dioxide ensemble spread, a measure of data assimilation analysis uncertainty. The data are part of the Tropospheric Chemical Reanalysis v2 (TCR-2) for the period 2005-2021. TCR-2 uses JPL's Multi-mOdel Multi-cOnstituent Chemical (MOMO-Chem) data assimilation framework that simultaneously optimizes both concentrations and emissions of multiple species from multiple satellite sensors. The data files are written in the netCDF version 4 file format, and each file contains a year of data at 6-hourly resolution and a spatial resolution of 1.125 x 1.125 degrees at 27 pressure levels between 1000 and 60 hPa. The principal investigator for the TCR-2 data is Kazuyuki Miyazaki.
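    For orientation, "ensemble spread" is a dispersion statistic across the assimilation ensemble. A minimal sketch, assuming the spread at a grid point is the standard deviation of the member values there (the member values are hypothetical and the exact TCR-2 convention is not specified in this listing):

    ```python
    # Sketch: ensemble spread at one grid point as the population standard
    # deviation across ensemble members (values are hypothetical).
    from statistics import pstdev

    members = [12.1, 11.8, 12.4, 12.9]   # NO2 from 4 ensemble members, e.g. in ppbv
    spread = pstdev(members)             # larger spread = more analysis uncertainty
    print(round(spread, 3))              # → 0.406
    ```

    In the released files this statistic would be stored per grid cell, per pressure level, per 6-hour time step.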

  15. TROPESS Chemical Reanalysis Ozone Spread Monthly 3-dimensional Product V1...

    • catalog.data.gov
    • s.cnmilf.com
    • +1more
    Updated Apr 11, 2025
    + more versions
    NASA/GSFC/SED/ESD/GCDC/GESDISC (2025). TROPESS Chemical Reanalysis Ozone Spread Monthly 3-dimensional Product V1 (TRPSCRO3SM3D) at GES DISC [Dataset]. https://catalog.data.gov/dataset/tropess-chemical-reanalysis-ozone-spread-monthly-3-dimensional-product-v1-trpscro3sm3d-at-
    Explore at:
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    NASAhttp://nasa.gov/
    Description

    The TROPESS Chemical Reanalysis O3 Spread Monthly 3-dimensional Product contains the ozone ensemble spread, a measure of data assimilation analysis uncertainty. The data are part of the Tropospheric Chemical Reanalysis v2 (TCR-2) for the period 2005-2021. TCR-2 uses JPL's Multi-mOdel Multi-cOnstituent Chemical (MOMO-Chem) data assimilation framework that simultaneously optimizes both concentrations and emissions of multiple species from multiple satellite sensors. The data files are written in the netCDF version 4 file format, and each file contains a year of data at monthly resolution and a spatial resolution of 1.125 x 1.125 degrees at 27 pressure levels between 1000 and 60 hPa. The principal investigator for the TCR-2 data is Kazuyuki Miyazaki.

  16. Datasets for the paper: Lost in Translation: Using Global Fact-Checks to...

    • zenodo.org
    application/gzip, bin
    Updated May 1, 2024
    Dorian Quelle; Dorian Quelle (2024). Datasets for the paper: Lost in Translation: Using Global Fact-Checks to Measure Multilingual Misinformation Prevalence, Spread, and Evolution [Dataset]. http://doi.org/10.5281/zenodo.11098780
    Explore at:
    application/gzip, binAvailable download formats
    Dataset updated
    May 1, 2024
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Dorian Quelle; Dorian Quelle
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    FullData.csv.gz: Contains links to all claims in the dataset.

    • publishing_date: Date on which the fact-check was published.
    • claim_date: Date that claim was made.
    • verdict: Rating given by the fact-checking organisation.
    • language: Language of the claim.
    • cluster_{threshold}: ID of the cluster the claim belongs to at the given clustering threshold. An entry of "0" means the claim is a singleton, not clustered with any other claims.

    Embeddings.npy: Contains a dictionary linking each claim to its embedding calculated with LaBSE.
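    A minimal sketch of consuming FullData.csv.gz with the columns listed above (the real file is a gzip-compressed CSV; the threshold suffix "0.9" in the cluster column and all sample rows here are hypothetical):

    ```python
    # Sketch: splitting claims into clusters vs. singletons, where a cluster
    # ID of "0" marks a singleton per the description above. Sample data is
    # in-memory; for the real file use gzip.open("FullData.csv.gz", "rt").
    import csv
    import io
    from collections import defaultdict

    sample = io.StringIO(
        "publishing_date,claim_date,verdict,language,cluster_0.9\n"
        "2021-03-01,2021-02-20,false,en,17\n"
        "2021-03-02,2021-02-21,misleading,de,17\n"
        "2021-03-05,2021-03-01,false,es,0\n"
    )

    clusters = defaultdict(list)  # cluster ID -> claims in that cluster
    singletons = []               # claims not clustered with any other claim
    for row in csv.DictReader(sample):
        if row["cluster_0.9"] == "0":
            singletons.append(row)
        else:
            clusters[row["cluster_0.9"]].append(row)

    print(len(clusters), len(singletons))  # → 1 1
    ```

    Multilingual clusters like the "17" group above (one claim in English, one in German) are the cross-language spread the paper's title refers to.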

  17. DC3 In-Situ DC-8 Aircraft Trace Gas Data - Dataset - NASA Open Data Portal

    • data.staging.idas-ds1.appdat.jsc.nasa.gov
    • data.nasa.gov
    Updated Aug 4, 2025
    + more versions
    nasa.gov (2025). DC3 In-Situ DC-8 Aircraft Trace Gas Data - Dataset - NASA Open Data Portal [Dataset]. https://data.staging.idas-ds1.appdat.jsc.nasa.gov/dataset/dc3-in-situ-dc-8-aircraft-trace-gas-data-9d23f
    Explore at:
    Dataset updated
    Aug 4, 2025
    Dataset provided by
    NASAhttp://nasa.gov/
    Description

    DC3_TraceGas_AircraftInSitu_DC8_Data are in-situ trace gas data collected onboard the DC-8 aircraft during the Deep Convective Clouds and Chemistry (DC3) field campaign. Data collection for this product is complete. The DC3 field campaign sought to understand the dynamical, physical, and lightning processes of deep, mid-latitude continental convective clouds and to define the impact of these clouds on upper tropospheric composition and chemistry. DC3 was conducted from May to June 2012 with a base location of Salina, Kansas. Observations were conducted in northeastern Colorado, west Texas to central Oklahoma, and northern Alabama in order to provide a wide geographic sample of storm types and boundary layer compositions, as well as to sample convection.

    DC3 had two primary science objectives. The first was to investigate storm dynamics and physics, lightning and its production of nitrogen oxides, cloud hydrometeor effects on wet deposition of species, surface emission variability, and chemistry in anvil clouds. Observations related to this objective focused on the early stages of active convection. The second objective was to investigate changes in upper tropospheric chemistry and composition after active convection. Observations related to this objective focused on the 12-48 hours following convection. This objective also served to explore seasonal change of upper tropospheric chemistry.

    In addition to using the NSF/NCAR Gulfstream-V (GV) aircraft, the NASA DC-8 was used during DC3 to provide in-situ measurements of the convective storm inflow and remotely-sensed measurements used for flight planning and column characterization. DC3 utilized ground-based radar networks spread across its observation area to measure the physical and kinematic characteristics of storms. Additional sampling strategies relied on lightning mapping arrays, radiosondes, and precipitation collection. Lastly, DC3 used data collected from various satellite instruments to achieve its goals, focusing on measurements from CALIOP onboard CALIPSO and CPR onboard CloudSat. In addition to providing an extensive set of data related to deep, mid-latitude continental convective clouds and analyzing their impacts on upper tropospheric composition and chemistry, DC3 improved models used to predict convective transport. DC3 improved knowledge of convection and chemistry and provided information necessary for understanding the processes relating to ozone in the upper troposphere.

  18. Community-based measures to mitigate the spread of coronavirus disease...

    • beta.data.urbandatacentre.ca
    • data.urbandatacentre.ca
    • +1more
    Updated Sep 13, 2024
    (2024). Community-based measures to mitigate the spread of coronavirus disease (COVID-19) in Canada [Dataset]. https://beta.data.urbandatacentre.ca/dataset/gov-canada-ca81fcd4-8da8-4816-9a6e-d233e491f71d
    Explore at:
    Dataset updated
    Sep 13, 2024
    License

    Open Government Licence - Canada 2.0https://open.canada.ca/en/open-government-licence-canada
    License information was derived automatically

    Area covered
    Canada
    Description

    The guidance identifies core personal and community-based public health measures to mitigate the transmission of coronavirus disease (COVID-19).

  19. DC3 Miscellaneous NSF/NCAR GV-HIAPER Data - Dataset - NASA Open Data Portal

    • data.nasa.gov
    Updated Apr 1, 2025
    + more versions
    nasa.gov (2025). DC3 Miscellaneous NSF/NCAR GV-HIAPER Data - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/dc3-miscellaneous-nsf-ncar-gv-hiaper-data-270ca
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASAhttp://nasa.gov/
    Description

    DC3_Miscellaneous_NSF-GV-HIAPER_Data are miscellaneous data collected onboard the NSF/NCAR GV-HIAPER aircraft during the Deep Convective Clouds and Chemistry (DC3) field campaign. This product features data from the Global Forecast System (GFS) model. Data collection for this product is complete. The DC3 field campaign sought to understand the dynamical, physical, and lightning processes of deep, mid-latitude continental convective clouds and to define the impact of these clouds on upper tropospheric composition and chemistry. DC3 was conducted from May to June 2012 with a base location of Salina, Kansas. Observations were conducted in northeastern Colorado, west Texas to central Oklahoma, and northern Alabama in order to provide a wide geographic sample of storm types and boundary layer compositions, as well as to sample convection.

    DC3 had two primary science objectives. The first was to investigate storm dynamics and physics, lightning and its production of nitrogen oxides, cloud hydrometeor effects on wet deposition of species, surface emission variability, and chemistry in anvil clouds. Observations related to this objective focused on the early stages of active convection. The second objective was to investigate changes in upper tropospheric chemistry and composition after active convection. Observations related to this objective focused on the 12-48 hours following convection. This objective also served to explore seasonal change of upper tropospheric chemistry.

    In addition to using the NSF/NCAR Gulfstream-V (GV) aircraft, the NASA DC-8 was used during DC3 to provide in-situ measurements of the convective storm inflow and remotely-sensed measurements used for flight planning and column characterization. DC3 utilized ground-based radar networks spread across its observation area to measure the physical and kinematic characteristics of storms. Additional sampling strategies relied on lightning mapping arrays, radiosondes, and precipitation collection. Lastly, DC3 used data collected from various satellite instruments to achieve its goals, focusing on measurements from CALIOP onboard CALIPSO and CPR onboard CloudSat. In addition to providing an extensive set of data related to deep, mid-latitude continental convective clouds and analyzing their impacts on upper tropospheric composition and chemistry, DC3 improved models used to predict convective transport. DC3 improved knowledge of convection and chemistry and provided information necessary for understanding the processes relating to ozone in the upper troposphere.

  20. Genomic Typing, Antimicrobial Resistance Gene, Virulence Factor and Plasmid...

    • zenodo.org
    bin, pdf, txt
    Updated Jan 8, 2025
    + more versions
    Andrey Shelenkov; Andrey Shelenkov (2025). Genomic Typing, Antimicrobial Resistance Gene, Virulence Factor and Plasmid Replicon Dataset for the Important Pathogenic Bacteria Klebsiella pneumoniae [Dataset]. http://doi.org/10.5281/zenodo.14232547
    Explore at:
    bin, txt, pdfAvailable download formats
    Dataset updated
    Jan 8, 2025
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Andrey Shelenkov; Andrey Shelenkov
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Time period covered
    Apr 26, 2024
    Description

    The infections caused by various bacterial pathogens in both clinical and community settings represent a significant threat to public healthcare worldwide. The growing resistance to antimicrobial drugs acquired by bacterial species causing healthcare-associated infections has already become a life-threatening danger noticed by the World Health Organization. Several groups or lineages of bacterial isolates, usually called 'clones of high risk', often drive the spread of resistance within particular species.

    Thus, it is vitally important to reveal and track the spread of such clones and the mechanisms by which they acquire antibiotic resistance and enhance their survival skills. Currently, the analysis of whole genome sequences for bacterial isolates of interest is increasingly used for these purposes, including epidemiological surveillance and the development of spread-prevention measures. However, the availability and uniformity of the data derived from the genomic sequences often represent a bottleneck for such investigations.

    In this dataset, we present the results of a genomic epidemiology analysis of 61,857 genomes of the dangerous bacterial pathogen Klebsiella pneumoniae obtained from the NCBI GenBank database. Important typing information, including multilocus sequence typing (MLST)-based sequence types (STs), capsular (KL) and oligosaccharide (OL) types, CRISPR-Cas systems, and cgMLST profiles, is presented, as well as the assignment of particular isolates to clonal groups (CG). The presence of antimicrobial resistance and virulence genes, as well as plasmid replicons, within the genomes is also reported.

    These data will be useful for researchers in the field of K. pneumoniae genomic epidemiology, resistance analysis and prevention measure development.

