32 datasets found
  1. COVID-19 Cases and Deaths by Age Group - ARCHIVE

    • catalog.data.gov
    • data.ct.gov
    • +1 more
    Updated Aug 12, 2023
    Cite
    data.ct.gov (2023). COVID-19 Cases and Deaths by Age Group - ARCHIVE [Dataset]. https://catalog.data.gov/dataset/covid-19-cases-and-deaths-by-age-group
    Dataset updated: Aug 12, 2023
    Dataset provided by: data.ct.gov
    Description

    Note: DPH is updating and streamlining the COVID-19 cases, deaths, and testing data. As of 6/27/2022, the data will be published in four tables instead of twelve.

    The COVID-19 Cases, Deaths, and Tests by Day dataset contains cases and test data by date of sample submission. The death data are by date of death. This dataset is updated daily and contains information back to the beginning of the pandemic. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-Cases-Deaths-and-Tests-by-Day/g9vi-2ahj.

    The COVID-19 State Metrics dataset contains over 93 columns of data. This dataset is updated daily and currently contains information starting June 21, 2022 to the present. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-State-Level-Data/qmgw-5kp6.

    The COVID-19 County Metrics dataset contains 25 columns of data. This dataset is updated daily and currently contains information starting June 16, 2022 to the present. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-County-Level-Data/ujiq-dy22.

    The COVID-19 Town Metrics dataset contains 16 columns of data. This dataset is updated daily and currently contains information starting June 16, 2022 to the present. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-Town-Level-Data/icxw-cada. To protect confidentiality, if a town has fewer than 5 cases or positive NAAT tests over the past 7 days, those data will be suppressed.

    COVID-19 cases and associated deaths that have been reported among Connecticut residents, broken out by age group. All data in this report are preliminary; data for previous dates will be updated as new reports are received and data errors are corrected. Deaths reported to either the Office of the Chief Medical Examiner (OCME) or the Department of Public Health (DPH) are included in the daily COVID-19 update. Data are reported daily, with timestamps indicated in the daily briefings posted at: portal.ct.gov/coronavirus. Data are subject to future revision as reporting changes. Starting in July 2020, this dataset will be updated every weekday.

    Additional notes: A delay in the data pull schedule occurred on 06/23/2020. Data from 06/22/2020 was processed on 06/23/2020 at 3:30 PM. The normal data cycle resumed with the data for 06/23/2020.

    A network outage on 05/19/2020 resulted in a change in the data pull schedule. Data from 5/19/2020 was processed on 05/20/2020 at 12:00 PM. Data from 5/20/2020 was processed on 5/20/2020 at 8:30 PM. The normal data cycle resumed on 05/20/2020 with the 8:30 PM data pull. As a result of the network outage, the timestamp on the datasets on the Open Data Portal differs from the timestamp in DPH's daily PDF reports.

    Starting 5/10/2021, the date field will represent the date this data was updated on data.ct.gov. Previously the date the data was pulled by DPH was listed, which typically coincided with the date before the data was published on data.ct.gov. This change was made to standardize the COVID-19 data sets on data.ct.gov.

  2. Johns Hopkins COVID-19 Case Tracker

    • data.world
    csv, zip
    Updated Mar 25, 2025
    Cite
    The Associated Press (2025). Johns Hopkins COVID-19 Case Tracker [Dataset]. https://data.world/associatedpress/johns-hopkins-coronavirus-case-tracker
    Available download formats: zip, csv
    Dataset updated: Mar 25, 2025
    Authors: The Associated Press
    Time period covered: Jan 22, 2020 - Mar 9, 2023
    Description

    Updates

    • Notice of data discontinuation: Since the start of the pandemic, AP has reported case and death counts from data provided by Johns Hopkins University. Johns Hopkins University has announced that they will stop their daily data collection efforts after March 10. As Johns Hopkins stops providing data, the AP will also stop collecting daily numbers for COVID cases and deaths. The HHS and CDC now collect and visualize key metrics for the pandemic. AP advises using those resources when reporting on the pandemic going forward.

    • April 9, 2020

      • The population estimate data for New York County, NY has been updated to include all five New York City counties (Kings County, Queens County, Bronx County, Richmond County and New York County). This has been done to match the Johns Hopkins COVID-19 data, which aggregates counts for the five New York City counties to New York County.
    • April 20, 2020

      • Johns Hopkins death totals in the US now include confirmed and probable deaths in accordance with CDC guidelines as of April 14. One significant result of this change was an increase of more than 3,700 deaths in the New York City count. This change will likely result in increases for death counts elsewhere as well. The AP does not alter the Johns Hopkins source data, so probable deaths are included in this dataset as well.
    • April 29, 2020

      • The AP is now providing timeseries data for counts of COVID-19 cases and deaths. The raw counts are provided here unaltered, along with a population column with Census ACS-5 estimates and calculated daily case and death rates per 100,000 people. Please read the updated caveats section for more information.
    • September 1st, 2020

      • Johns Hopkins is now providing counts for the five New York City counties individually.
    • February 12, 2021

      • The Ohio Department of Health recently announced that as many as 4,000 COVID-19 deaths may have been underreported through the state’s reporting system, and that the "daily reported death counts will be high for a two to three-day period."
      • Because deaths data will be anomalous for consecutive days, we have chosen to freeze Ohio's rolling average for daily deaths at the last valid measure until Johns Hopkins is able to back-distribute the data. The raw daily death counts, as reported by Johns Hopkins and including the backlogged death data, will still be present in the new_deaths column.
    • February 16, 2021

      • Johns Hopkins has reconciled Ohio's historical deaths data with the state.

    Overview

    The AP is using data collected by the Johns Hopkins University Center for Systems Science and Engineering as our source for outbreak caseloads and death counts for the United States and globally.

    The Hopkins data is available at the county level in the United States. The AP has paired this data with population figures and county rural/urban designations, and has calculated caseload and death rates per 100,000 people. Be aware that caseloads may reflect the availability of tests -- and the ability to turn around test results quickly -- rather than actual disease spread or true infection rates.
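    To illustrate the rate calculation described above, here is a minimal Python sketch; the column names are assumptions for illustration, not the AP file's actual schema.

```python
import pandas as pd

# Hypothetical county rows; column names are illustrative, not the AP's actual schema.
counties = pd.DataFrame({
    "county": ["County A", "County B"],
    "cumulative_cases": [1200, 45],
    "cumulative_deaths": [30, 1],
    "population": [250_000, 8_000],
})

# Caseload and death rates per 100,000 people, as described above.
counties["cases_per_100k"] = counties["cumulative_cases"] / counties["population"] * 100_000
counties["deaths_per_100k"] = counties["cumulative_deaths"] / counties["population"] * 100_000
print(counties)
```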

    This data is from the Hopkins dashboard that is updated regularly throughout the day. Like all organizations dealing with data, Hopkins is constantly refining and cleaning up their feed, so there may be brief moments where data does not appear correctly. At this link, you’ll find the Hopkins daily data reports, and a clean version of their feed.

    The AP is updating this dataset hourly at 45 minutes past the hour.

    To learn more about AP's data journalism capabilities for publishers, corporations and financial institutions, go here or email kromano@ap.org.

    Queries

    Use AP's queries to filter the data or to join to other datasets we've made available to help cover the coronavirus pandemic

    Interactive

    The AP has designed an interactive map to track COVID-19 cases reported by Johns Hopkins.

    Interactive map: https://datawrapper.dwcdn.net/nRyaf/15/

    Interactive Embed Code

    <iframe title="USA counties (2018) choropleth map Mapping COVID-19 cases by county" aria-describedby="" id="datawrapper-chart-nRyaf" src="https://datawrapper.dwcdn.net/nRyaf/10/" scrolling="no" frameborder="0" style="width: 0; min-width: 100% !important;" height="400"></iframe><script type="text/javascript">(function() {'use strict';window.addEventListener('message', function(event) {if (typeof event.data['datawrapper-height'] !== 'undefined') {for (var chartId in event.data['datawrapper-height']) {var iframe = document.getElementById('datawrapper-chart-' + chartId) || document.querySelector("iframe[src*='" + chartId + "']");if (!iframe) {continue;}iframe.style.height = event.data['datawrapper-height'][chartId] + 'px';}}});})();</script>
    

    Caveats

    • This data represents the number of cases and deaths reported by each state and has been collected by Johns Hopkins from a number of sources cited on their website.
    • In some cases, deaths or cases of people who've crossed state lines -- either to receive treatment or because they became sick and couldn't return home while traveling -- are reported in a state they aren't currently in, because of state reporting rules.
    • In some states, there are a number of cases not assigned to a specific county -- for those cases, the county name is "unassigned to a single county"
    • This data should be credited to Johns Hopkins University's COVID-19 tracking project. The AP is simply making it available here for ease of use for reporters and members.
    • Caseloads may reflect the availability of tests -- and the ability to turn around test results quickly -- rather than actual disease spread or true infection rates.
    • Population estimates at the county level are drawn from 2014-18 5-year estimates from the American Community Survey.
    • The Urban/Rural classification scheme is from the Centers for Disease Control and Prevention's National Center for Health Statistics. It puts each county into one of six categories -- from Large Central Metro to Non-Core -- according to population and other characteristics. More details about the classifications can be found here.

    Johns Hopkins timeseries data

    • Johns Hopkins pulls data regularly to update their dashboard. Once a day, around 8pm EDT, Johns Hopkins adds the counts for all areas they cover to the timeseries file. These counts are snapshots of the latest cumulative counts provided by the source on that day. This can lead to inconsistencies if a source updates their historical data for accuracy, either increasing or decreasing the latest cumulative count.

    • Johns Hopkins periodically edits their historical timeseries data for accuracy. They provide a file documenting all errors in their timeseries files that they have identified and fixed here.
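    Because the timeseries stores daily snapshots of cumulative counts, day-over-day changes must be derived by differencing, and a historical correction can surface as a negative value. A minimal sketch with made-up numbers:

```python
import pandas as pd

# Made-up cumulative case counts; note the downward revision on the fourth day.
cumulative = pd.Series(
    [10, 15, 15, 14, 22],
    index=pd.date_range("2020-04-01", periods=5),
)

# Daily new counts are the day-over-day difference of the cumulative snapshots.
new_daily = cumulative.diff().fillna(cumulative.iloc[0])

# A negative value flags a historical correction rather than "negative cases".
print(new_daily)
```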

    Attribution

    This data should be credited to Johns Hopkins University COVID-19 tracking project

  3. COVID-19 Tests, Cases, Hospitalizations, and Deaths (Statewide) - ARCHIVE

    • data.ct.gov
    • gimi9.com
    • +1 more
    application/rdfxml +5
    Updated Jun 24, 2022
    Cite
    Department of Public Health (2022). COVID-19 Tests, Cases, Hospitalizations, and Deaths (Statewide) - ARCHIVE [Dataset]. https://data.ct.gov/Health-and-Human-Services/COVID-19-Tests-Cases-Hospitalizations-and-Deaths-S/rf3k-f8fg
    Available download formats: tsv, application/rdfxml, xml, json, csv, application/rssxml
    Dataset updated: Jun 24, 2022
    Dataset authored and provided by: Department of Public Health
    License

    U.S. Government Works (https://www.usa.gov/government-works)
    License information was derived automatically

    Description

    Note: DPH is updating and streamlining the COVID-19 cases, deaths, and testing data. As of 6/27/2022, the data will be published in four tables instead of twelve.

    The COVID-19 Cases, Deaths, and Tests by Day dataset contains cases and test data by date of sample submission. The death data are by date of death. This dataset is updated daily and contains information back to the beginning of the pandemic. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-Cases-Deaths-and-Tests-by-Day/g9vi-2ahj.

    The COVID-19 State Metrics dataset contains over 93 columns of data. This dataset is updated daily and currently contains information starting June 21, 2022 to the present. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-State-Level-Data/qmgw-5kp6 .

    The COVID-19 County Metrics dataset contains 25 columns of data. This dataset is updated daily and currently contains information starting June 16, 2022 to the present. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-County-Level-Data/ujiq-dy22 .

    The COVID-19 Town Metrics dataset contains 16 columns of data. This dataset is updated daily and currently contains information starting June 16, 2022 to the present. The data can be found at https://data.ct.gov/Health-and-Human-Services/COVID-19-Town-Level-Data/icxw-cada . To protect confidentiality, if a town has fewer than 5 cases or positive NAAT tests over the past 7 days, those data will be suppressed.

    COVID-19 tests, cases, and associated deaths that have been reported among Connecticut residents. All data in this report are preliminary; data for previous dates will be updated as new reports are received and data errors are corrected. Hospitalization data were collected by the Connecticut Hospital Association and reflect the number of patients currently hospitalized with laboratory-confirmed COVID-19. Deaths reported to either the Office of the Chief Medical Examiner (OCME) or the Department of Public Health (DPH) are included in the daily COVID-19 update.

    Data on Connecticut deaths were obtained from the Connecticut Deaths Registry maintained by the DPH Office of Vital Records. Cause of death was determined by a death certifier (e.g., physician, APRN, medical examiner) using their best clinical judgment. Additionally, all COVID-19 deaths, including suspected or related, are required to be reported to OCME. On April 4, 2020, CT DPH and OCME released a joint memo to providers and facilities within Connecticut providing guidelines for certifying deaths due to COVID-19 that were consistent with the CDC's guidelines and a reminder of the required reporting to OCME.25,26 As of July 1, 2021, OCME had reviewed every case reported and performed additional investigation on about one-third of reported deaths to better ascertain whether COVID-19 did or did not cause or contribute to the death. Some of these investigations resulted in the OCME performing postmortem swabs for PCR testing on individuals whose deaths were suspected to be due to COVID-19 but for whom an antemortem diagnosis could not be made.31 The OCME issued or re-issued about 10% of COVID-19 death certificates and, when appropriate, removed COVID-19 from the death certificate. For standardization and tabulation of mortality statistics, written cause of death statements made by the certifiers on death certificates are sent to the National Center for Health Statistics (NCHS) at the CDC, which assigns cause of death codes according to the International Classification of Diseases, 10th Revision (ICD-10).25,26 COVID-19 deaths in this report are defined as those for which the death certificate has an ICD-10 code of U07.1 as either a primary (underlying) or a contributing cause of death. More information on COVID-19 mortality can be found at the following link: https://portal.ct.gov/DPH/Health-Information-Systems--Reporting/Mortality/Mortality-Statistics
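    To make that definition concrete, here is a hedged sketch of how the selection could look on tabulated death records; the column layout is a made-up illustration, not DPH's or NCHS's actual schema.

```python
import pandas as pd

# Hypothetical death records with NCHS-assigned ICD-10 cause-of-death codes.
deaths = pd.DataFrame({
    "certificate_id": [1, 2, 3],
    "underlying_cause": ["U07.1", "I21.9", "J18.9"],
    "contributing_causes": [["E11.9"], ["U07.1", "J96.0"], ["I10"]],
})

# COVID-19 deaths: ICD-10 code U07.1 as either the underlying or a contributing cause.
is_covid_death = (
    deaths["underlying_cause"].eq("U07.1")
    | deaths["contributing_causes"].apply(lambda codes: "U07.1" in codes)
)
print(deaths[is_covid_death])
```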

    Data are reported daily, with timestamps indicated in the daily briefings posted at: portal.ct.gov/coronavirus. Data are subject to future revision as reporting changes.

    Starting in July 2020, this dataset will be updated every weekday.

    Additional notes: As of 11/5/2020, CT DPH has added antigen testing for SARS-CoV-2 to reported test counts in this dataset. The tests included in this dataset include both molecular and antigen tests. Molecular tests reported include polymerase chain reaction (PCR) and nucleic acid amplification (NAAT) tests.

    A delay in the data pull schedule occurred on 06/23/2020. Data from 06/22/2020 was processed on 06/23/2020 at 3:30 PM. The normal data cycle resumed with the data for 06/23/2020.

    A network outage on 05/19/2020 resulted in a change in the data pull schedule. Data from 5/19/2020 was processed on 05/20/2020 at 12:00 PM. Data from 5/20/2020 was processed on 5/20/2020 8:30 PM. The normal data cycle resumed on 05/20/2020 with the 8:30 PM data pull. As a result of the network outage, the timestamp on the datasets on the Open Data Portal differ from the timestamp in DPH's daily PDF reports.

    Starting 5/10/2021, the date field will represent the date this data was updated on data.ct.gov. Previously the date the data was pulled by DPH was listed, which typically coincided with the date before the data was published on data.ct.gov. This change was made to standardize the COVID-19 data sets on data.ct.gov.

    Starting April 4, 2022, negative rapid antigen and rapid PCR test results for SARS-CoV-2 are no longer required to be reported to the Connecticut Department of Public Health. Negative test results from laboratory-based molecular (PCR/NAAT) tests are still required to be reported, as are all positive test results from both molecular (PCR/NAAT) and antigen tests.

    On 5/16/2022, 8,622 historical cases were included in the data. The date range for these cases was from August 2021 – April 2022.

  4. INTRODUCTION OF COVID-NEWS-US-NNK AND COVID-NEWS-BD-NNK DATASET

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 19, 2024
    Cite
    Nafiz Sadman (2024). INTRODUCTION OF COVID-NEWS-US-NNK AND COVID-NEWS-BD-NNK DATASET [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4047647
    Dataset updated: Jul 19, 2024
    Dataset provided by: Kishor Datta Gupta, Nafiz Sadman, Nishat Anjum
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered: Bangladesh, United States
    Description

    Introduction

    There are several works based on Natural Language Processing on newspaper reports. Mining opinions from headlines [ 1 ] using Stanford NLP and SVM, Rameshbhai et al. compared several algorithms on a small and a large dataset. Rubin et al., in their paper [ 2 ], created a mechanism to differentiate fake news from real news by building a set of characteristics of news according to their types. The purpose was to contribute to the low-resource data available for training machine learning algorithms. Doumit et al. in [ 3 ] implemented LDA, a topic modeling approach, to study bias present in online news media.

    However, not much NLP research has been invested in studying COVID-19. Most applications include classification of chest X-rays and CT scans to detect the presence of pneumonia in the lungs [ 4 ], a consequence of the virus. Other research areas include studying the genome sequence of the virus [ 5 ][ 6 ][ 7 ] and replicating its structure to fight it and find a vaccine. This research is crucial in battling the pandemic. The few NLP-based research publications include sentiment classification of online tweets by Samuel et al. [ 8 ] to understand the fear persisting in people due to the virus. Similar work has been done using an LSTM network to classify sentiments from online discussion forums by Jelodar et al. [ 9 ]. To the best of our knowledge, the NKK dataset is the first study on a comparatively larger dataset of newspaper reports on COVID-19, contributing to awareness of the virus.

    2 Data-set Introduction

    2.1 Data Collection

    We accumulated 1000 online newspaper reports from the United States of America (USA) on COVID-19. The newspapers include The Washington Post (USA) and StarTribune (USA). We have named this collection "Covid-News-USA-NNK". We also accumulated 50 online newspaper reports from Bangladesh on the issue and named it "Covid-News-BD-NNK". The newspapers include The Daily Star (BD) and Prothom Alo (BD). All these newspapers are among the top providers and most read in their respective countries. The collection was done manually by 10 human data-collectors of age group 23- with university degrees. This approach was preferred over automation to ensure the news was highly relevant to the subject. The newspaper online sites had dynamic content with advertisements in no particular order, so automated scrapers had a high chance of collecting inaccurate news reports. One of the challenges while collecting the data was the requirement of subscriptions. Each newspaper required $1 per subscription. Some criteria for collecting the news reports, provided as guidelines to the human data-collectors, were as follows:

    The headline must have one or more words directly or indirectly related to COVID-19.

    The content of each news must have 5 or more keywords directly or indirectly related to COVID-19.

    The genre of the news can be anything as long as it is relevant to the topic. Political, social, and economic genres are prioritized.

    Avoid taking duplicate reports.

    Maintain a time frame for the above mentioned newspapers.

    To collect these data we used a Google Form for the USA and BD. We had two human editors go through each entry to check for any spam or troll entries.

    2.2 Data Pre-processing and Statistics

    Some pre-processing steps performed on the newspaper report dataset are as follows:

    Remove hyperlinks.

    Remove non-English alphanumeric characters.

    Remove stop words.

    Lemmatize text.

    While more pre-processing could have been applied, we tried to keep the data as unchanged as possible, since changing sentence structures could result in the loss of valuable information. While this was done with the help of a script, we also assigned the same human collectors to cross-check for the presence of the above-mentioned criteria.
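    A minimal sketch of these pre-processing steps in Python with NLTK is given below; it mirrors the listed steps but is not the authors' original script.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords")
nltk.download("wordnet")

STOP_WORDS = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()

def preprocess(text: str) -> str:
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)       # remove hyperlinks
    text = re.sub(r"[^A-Za-z0-9\s]", " ", text)              # keep English alphanumeric characters
    tokens = [t.lower() for t in text.split() if t.lower() not in STOP_WORDS]  # remove stop words
    return " ".join(LEMMATIZER.lemmatize(t) for t in tokens)  # lemmatize

print(preprocess("Officials confirmed 120 new COVID-19 cases; details at https://example.com"))
```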

    The primary data statistics of the two datasets are shown in Tables 1 and 2.

    Table 1: Covid-News-USA-NNK data statistics
    No of words per headline: 7 to 20
    No of words per body content: 150 to 2100

    Table 2: Covid-News-BD-NNK data statistics
    No of words per headline: 10 to 20
    No of words per body content: 100 to 1500

    2.3 Dataset Repository

    We used GitHub as our primary data repository under the account name NKK^1. Here, we created two repositories, USA-NKK^2 and BD-NNK^3. The dataset is available in both CSV and JSON formats. We are regularly updating the CSV files and regenerating the JSON using a Python script. We provide a Python script file for essential operations. We welcome all outside collaboration to enrich the dataset.

    3 Literature Review

    Natural Language Processing (NLP) deals with text (also known as categorical) data in computer science, utilizing numerous diverse methods like one-hot encoding, word embedding, etc., that transform text to machine language, which can be fed to multiple machine learning and deep learning algorithms.

    Some well-known applications of NLP include fraud detection on online media sites [ 10 ], authorship attribution in fallback authentication systems [ 11 ], intelligent conversational agents or chatbots [ 12 ], and the machine translation used by Google Translate [ 13 ]. While these are all downstream tasks, several exciting developments have been made in algorithms solely for Natural Language Processing tasks. The two most trending ones are BERT [ 14 ], which uses a bidirectional Transformer encoder architecture and can do near-perfect classification tasks and word predictions, and the GPT-3 models released by OpenAI [ 15 ], which can generate almost human-like text. However, these are all pre-trained models, since they carry a huge computation cost. Information Extraction is a generalized concept of retrieving information from a dataset. Information extraction from an image could be retrieving vital feature spaces or targeted portions of an image; information extraction from speech could be retrieving information about names, places, etc. [ 16 ]. Information extraction from texts could be identifying named entities and locations or other essential data. Topic modeling is a sub-task of NLP and also a process of information extraction. It clusters words and phrases of the same context together into groups. Topic modeling is an unsupervised learning method that gives us a brief idea about a set of texts. One commonly used topic model is Latent Dirichlet Allocation or LDA [17].

    Keyword extraction is a process of information extraction and a sub-task of NLP to extract essential words and phrases from a text. TextRank [ 18 ] is an efficient keyword extraction technique that uses a graph to calculate the weight of each word and picks the words with the highest weights.
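    As a rough illustration of the graph-based idea (a simplified sketch, not the exact TextRank formulation of [ 18 ]), one can build a word co-occurrence graph and rank its nodes with PageRank using networkx:

```python
import networkx as nx

def extract_keywords(text: str, window: int = 4, top_k: int = 5):
    tokens = [t.lower() for t in text.split() if t.isalpha()]
    graph = nx.Graph()
    # Connect words that co-occur within a sliding window.
    for i, word in enumerate(tokens):
        for other in tokens[i + 1:i + window]:
            if other != word:
                graph.add_edge(word, other)
    scores = nx.pagerank(graph)  # graph-based word weights
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(extract_keywords("the state reported new cases as the state expanded testing for new cases"))
```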

    Word clouds are a great visualization technique to understand the overall ’talk of the topic’. The clustered words give us a quick understanding of the content.

    4 Our experiments and Result analysis

    We used the wordcloud library^4 to create the word clouds (a minimal usage sketch is shown after the list of observations below). Figures 1 and 3 present the word clouds of the Covid-News-USA-NNK dataset by month from February to May. From Figures 1, 2, and 3, we can point out the following:

    In February, both newspapers talked about China and the source of the outbreak.

    StarTribune emphasized Minnesota as the most concerned state, and in April it appeared even more concerned.

    Both newspapers talked about the virus impacting the economy, i.e., banks, elections, administrations, and markets.

    Washington Post discussed global issues more than StarTribune.

    StarTribune in February mentioned the first precautionary measure, wearing masks, and the uncontrollable spread of the virus throughout the nation.

    While both newspapers mentioned the outbreak in China in February, the spread in the United States is highlighted more heavily throughout March till May, displaying the critical impact caused by the virus.
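    As mentioned above, the word clouds were generated with the wordcloud library; a minimal, illustrative sketch (the stand-in text below is made up):

```python
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Stand-in text; the actual figures were built from the full monthly news content.
text = "covid outbreak china minnesota economy masks election spread hospitals testing"

cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```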

    We used a script to extract all numbers related to certain keywords like 'Deaths', 'Infected', 'Died', 'Infections', 'Quarantined', 'Lock-down', 'Diagnosed', etc. from the news reports and created a case-count series for both newspapers. Figure 4 shows the statistics of this series. From this extraction technique, we can observe that April was the peak month for COVID cases, as the numbers gradually rose from February. Both newspapers clearly show that the rise in COVID cases from February to March was slower than the rise from March to April. This is an important indicator of possible recklessness in preparations to battle the virus. However, the steep fall from April to May also shows the positive response against the attack. We used VADER sentiment analysis to extract the sentiment of the headlines and the bodies. On average, the sentiments were from -0.5 to -0.9. The VADER sentiment scale ranges from -1 (highly negative) to 1 (highly positive). There were some cases where the sentiment scores of the headline and body contradicted each other, i.e., the sentiment of the headline was negative but the sentiment of the body was slightly positive. Overall, sentiment analysis can help us sort the most concerning (most negative) news from the positive ones, from which we can learn more about the indicators related to COVID-19 and the serious impact caused by it. Moreover, sentiment analysis can also provide information about how a state or country is reacting to the pandemic. We used the PageRank algorithm to extract keywords from the headlines as well as the body content. PageRank efficiently highlights important relevant keywords in the text. Some frequently occurring important keywords extracted from both datasets are: 'China', 'Government', 'Masks', 'Economy', 'Crisis', 'Theft', 'Stock market', 'Jobs', 'Election', 'Missteps', 'Health', 'Response'. Keyword extraction acts as a filter allowing quick searches for indicators in case of locating situations of the economy,
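    A minimal sketch of the VADER scoring step with the vaderSentiment package (the headline and body text below are made-up examples):

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

headline = "Deaths rise sharply as hospitals run out of beds"
body = "Officials say new supplies are arriving and volunteers are supporting recovery efforts."

# The 'compound' score is normalized to [-1, 1]: -1 is highly negative, +1 highly positive.
print("headline:", analyzer.polarity_scores(headline)["compound"])
print("body:", analyzer.polarity_scores(body)["compound"])
```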

  5. COVID-19 Press Briefings Corpus

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 2, 2020
    Cite
    COVID-19 Press Briefings Corpus [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3872416
    Dataset updated: Jun 2, 2020
    Dataset authored and provided by: Chatsiou, Kakia
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Coronavirus (COVID-19) Press Briefings Corpus is a work in progress to collect and present, as a machine-readable text dataset, the daily briefings given around the world by government authorities. During the peak of the pandemic, most countries around the world informed their citizens of the status of the pandemic (usually involving an update on the number of infection cases and the number of deaths) and of other policy-oriented decisions about dealing with the health crisis, such as advice about what to do to reduce the spread of the epidemic.

    Usually daily briefings did not occur on a Sunday.

    At the moment the dataset includes:

    UK/England: Daily Press Briefings by UK Government between 12 March 2020 - 01 June 2020 (70 briefings in total)

    Scotland: Daily Press Briefings by Scottish Government between 3 March 2020 - 01 June 2020 (76 briefings in total)

    Wales: Daily Press Briefings by Welsh Government between 23 March 2020 - 01 June 2020 (56 briefings in total)

    Northern Ireland: Daily Press Briefings by N. Ireland Assembly between 23 March 2020 - 01 June 2020 (56 briefings in total)

    World Health Organisation: Press Briefings occurring usually every 2 days between 22 January 2020 - 01 June 2020 (63 briefings in total)

    More countries will be added in due course, and we will be keeping this updated to cover the latest daily briefings available.

    The corpus is compiled to allow for further automated political discourse analysis (classification).

  6. COVID19 cases by Continent

    • kaggle.com
    zip
    Updated Jun 3, 2020
    Cite
    Juok (2020). COVID19 cases by Continent [Dataset]. https://www.kaggle.com/dsv/1211964
    Available download formats: zip (1325064 bytes)
    Dataset updated: Jun 3, 2020
    Authors: Juok
    Description

    Context

    Late in December 2019, the World Health Organisation (WHO) China Country Office obtained information about severe pneumonia of an unknown cause, detected in the city of Wuhan in Hubei province, China. This later turned out to be the novel coronavirus disease (COVID-19), an infectious disease caused by severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) of the coronavirus family. The disease causes respiratory illness characterized by primary symptoms like cough, fever, and in more acute cases, difficulty in breathing. WHO later declared COVID-19 as a Pandemic because of its fast rate of spread across the Globe.

    Content

    The COVID-19 datasets organized by continent contain daily level information about the COVID-19 cases in the different continents of the world. It is time-series data and the number of cases on any given day is cumulative. The original datasets can be found on this Johns Hopkins University GitHub repository. I will be updating the COVID-19 datasets on a daily basis, with every update from Johns Hopkins University. I have also included the World COVID-19 tests data scraped from Worldometer and the 2020 world population, also from Worldometer (https://www.worldometers.info/world-population/population-by-country/).

    The datasets

    COVID-19 cases: covid19_world.csv contains the cumulative number of COVID-19 cases from around the world since January 22, 2020, as compiled by Johns Hopkins University. covid19_asia.csv, covid19_africa.csv, covid19_europe.csv, covid19_northamerica.csv, covid19.southamerica.csv, covid19_oceania.csv, and covid19_others.csv contain the cumulative number of COVID-19 cases organized by continent.

    Field description
    - ObservationDate: date of observation in YY/MM/DD
    - Country_Region: name of Country or Region
    - Province_State: name of Province or State
    - Confirmed: the number of COVID-19 confirmed cases
    - Deaths: the number of deaths from COVID-19
    - Recovered: the number of recovered cases
    - Active: the number of people still infected with COVID-19
    Note: Active = Confirmed - (Deaths + Recovered)
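    The Active field follows directly from the other columns; a minimal check in Python with made-up numbers (column names as described above):

```python
import pandas as pd

# Made-up rows using the field names described above.
df = pd.DataFrame({
    "ObservationDate": ["2020/05/30", "2020/05/30"],
    "Country_Region": ["Country A", "Country B"],
    "Confirmed": [1000, 500],
    "Deaths": [20, 5],
    "Recovered": [300, 150],
})

# Active = Confirmed - (Deaths + Recovered)
df["Active"] = df["Confirmed"] - (df["Deaths"] + df["Recovered"])
print(df)
```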

    COVID-19 tests: covid19_tests.csv contains the cumulative number of COVID tests from Worldometer, conducted since the onset of the pandemic. Data available from June 01, 2020.

    Field description
    - Date: date in YY/MM/DD
    - Country, Other: Country, Region, or dependency
    - TotalTests: cumulative number of tests up till that date
    - Population: population of Country, Region, or dependency
    - Tests/1M pop: tests per 1 million of the population
    - 1 Test every X ppl: 1 test for every X number of people

    2020 world population: world_population(2020).csv contains the 2020 world population as reported by Worldometer.

    Field description
    - Country (or dependency): Country or dependency
    - Population (2020): population in 2020
    - Yearly Change: yearly change in population as a percentage
    - Net Change: the net change in population
    - Density (P/km2): population density
    - Land Area (km2): land area
    - Migrants (net): net number of migrants
    - Fert. Rate: fertility rate
    - Med. Age: median age
    - Urban pop: urban population
    - World Share: share of the world population as a percentage

    Acknowledgements

    1. Johns Hopkins University for making COVID-19 datasets available to the public: https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_daily_reports
    2. Johns Hopkins University COVID-19 Dashboard: https://coronavirus.jhu.edu/map.html
    3. COVID-19 Africa dashboard: http://covid-19-africa.sen.ovh/
    4. Worldometer: https://www.worldometers.info/
    5. SRk for uploading a global COVID-19 dataset on Kaggle: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset#covid_19_data.csv
    6. United Nations Department of General Assembly and Conference Management: https://www.un.org/depts/DGACM/RegionalGroups.shtml
    7. wallpapercave.com: https://wallpapercave.com/covid-19-wallpapers

    Inspiration

    Possible Insights
    1. The current number of COVID-19 cases in Africa
    2. The current number of COVID-19 cases by country
    3. The number of COVID-19 cases in Africa / African country(s) by May 30, 2020 (any future date)

  7. COVID Tracking Project — Testing in States

    • data.world
    csv, zip
    Updated Oct 14, 2024
    Cite
    The Associated Press (2024). COVID Tracking Project — Testing in States [Dataset]. https://data.world/associatedpress/covid-tracking-project-testing-in-states
    Available download formats: zip, csv
    Dataset updated: Oct 14, 2024
    Authors: The Associated Press
    Time period covered: Jan 13, 2020 - Mar 8, 2021
    Description

    Updates

    April 29, 2020

    • The AP is now providing historical time series data for testing counts and death counts from The COVID Tracking Project. The counts are provided here unaltered, along with a population column with Census ACS-1 estimates and calculated testing rate and death rate columns.

    October 13, 2020

    The COVID Tracking Project is releasing more precise total testing counts, and has changed the way it is distributing the data that ends up on this site. Previously, total testing had been represented by positive tests plus negative tests. As states are beginning to report more specific testing counts, The COVID Tracking Project is moving toward reporting those numbers directly.

    This may make it more difficult to compare your state against others in terms of positivity rate, but the net effect is we now have more precise counts:

    • Total Test Encounters: Total tests increase by one for every individual that is tested that day. Additional tests for that individual on that day (i.e., multiple swabs taken at the same time) are not included

    • Total PCR Specimens: Total tests increase by one for every testing sample retrieved from an individual. Multiple samples from an individual on a single day can be included in the count

    • Unique People Tested: Total tests increase by one the first time an individual is tested. The count will not increase in later days if that individual is tested again – even months later

    These three totals are not all available for every state. The COVID Tracking Project prioritizes the different count types for each state in this order:

    1. Total Test Encounters

    2. Total PCR Specimens

    3. Unique People Tested

    If the state does not provide any of those totals directly, The COVID Tracking Project falls back to the initial calculation of total tests that it has provided up to this point: positive + negative tests.

    One of the above total counts will be the number present in the cumulative_total_test_results and total_test_results_increase columns.

    • The positivity rates provided on this site will divide confirmed cases by one of these total_test_results columns.

      • Due to these changes, we advise comparing positivity rates between states only if the states being compared have the same type of total test count.
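    A minimal sketch of that positivity-rate calculation with made-up numbers; the cumulative_total_test_results column is named above, while the case column name is an assumption:

```python
import pandas as pd

states = pd.DataFrame({
    "state": ["State A", "State B"],
    "confirmed_cases": [5_000, 1_200],
    "cumulative_total_test_results": [120_000, 20_000],  # whichever total count type the state reports
})

# Positivity rate = confirmed cases / total test results.
states["positivity_rate"] = states["confirmed_cases"] / states["cumulative_total_test_results"]
print(states)

# Note: only compare positivity rates between states that use the same type of total test count.
```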

    Overview

    The AP is using data collected by the COVID Tracking Project to measure COVID-19 testing across the United States.

    The COVID Tracking Project data is available at the state level in the United States. The AP has paired this data with population figures and has calculated testing rates and death rates per 1,000 people.

    This data is from The COVID Tracking Project API that is updated regularly throughout the day. Like all organizations dealing with data, The COVID Tracking Project is constantly refining and cleaning up their feed, so there may be brief moments where data does not appear correctly. At this link, you’ll find The COVID Tracking Project daily data reports, and a clean version of their feed.

    A Note on timing: The COVID Tracking Project updates regularly throughout the day, but state numbers will come in at different times. The entire Tracking Project dataset will be updated between 4-5pm EDT daily. Keep this time in mind when reporting on stories comparing states. At certain times of day, one state may be more up to date than another. We have included the date_modified timestamp for state-level data, which represents the last time the state updated its data. The date_checked value in the state-level data reflects the last time The COVID Tracking Project checked the state source. We have also included the last_modified timestamp for the national-level data, which marks the last time the national data was updated.

    The AP is updating this dataset hourly at 45 minutes past the hour.

    To learn more about AP's data journalism capabilities for publishers, corporations and financial institutions, go here or email kromano@ap.org.

    About the data

    Caveats

    • The total_people_tested counts do not include pending tests. They are the total number of tests that have returned positive or negative.
    • The process for collecting testing data is different for each state. The COVID Tracking Project makes note of the difficulties specific to each state on their main data page.

    Attribution

    This data should be credited to The COVID Tracking Project

    Contact

    Nicky Forster — nforster@ap.org

  8. Replication dataset and calculations for PIIE PB 20-9, When more delivers...

    • piie.com
    Updated Jun 25, 2020
    Cite
    Jérémie Cohen-Setton; Jean Pisani-Ferry (2020). Replication dataset and calculations for PIIE PB 20-9, When more delivers less: Comparing the US and French COVID-19 crisis responses, by Jérémie Cohen-Setton and Jean Pisani-Ferry [Dataset]. https://www.piie.com/publications/policy-briefs/when-more-delivers-less-comparing-us-and-french-covid-19-crisis
    Dataset updated: Jun 25, 2020
    Dataset provided by: Peterson Institute for International Economics (http://www.piie.com/)
    Authors: Jérémie Cohen-Setton; Jean Pisani-Ferry
    Area covered: France, United States
    Description

    This data package includes the underlying data and files to replicate the calculations, charts, and tables presented in When more delivers less: Comparing the US and French COVID-19 crisis responses, PIIE Policy Brief 20-9. If you use the data, please cite as: Cohen-Setton, Jérémie, and Jean Pisani-Ferry. (2020). When more delivers less: Comparing the US and French COVID-19 crisis responses. PIIE Policy Brief 20-9. Peterson Institute for International Economics.

  9. Data from: A dataset to model Levantine landcover and land-use change...

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Dec 16, 2023
    Cite
    Michael Kempf; Michael Kempf (2023). A dataset to model Levantine landcover and land-use change connected to climate change, the Arab Spring and COVID-19 [Dataset]. http://doi.org/10.5281/zenodo.10396148
    Available download formats: zip
    Dataset updated: Dec 16, 2023
    Dataset provided by: Zenodo (http://zenodo.org/)
    Authors: Michael Kempf
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered: Dec 16, 2023
    Area covered: Levant
    Description

    Overview

    This dataset is the repository for the following paper submitted to Data in Brief:

    Kempf, M. A dataset to model Levantine landcover and land-use change connected to climate change, the Arab Spring and COVID-19. Data in Brief (submitted: December 2023).

    The Data in Brief article contains the supplement information and is the related data paper to:

    Kempf, M. Climate change, the Arab Spring, and COVID-19 - Impacts on landcover transformations in the Levant. Journal of Arid Environments (revision submitted: December 2023).

    Description/abstract

    The Levant region is highly vulnerable to climate change, experiencing prolonged heat waves that have led to societal crises and population displacement. Since 2010, the area has been marked by socio-political turmoil, including the Syrian civil war and currently the escalation of the so-called Israeli-Palestinian Conflict, which strained neighbouring countries like Jordan due to the influx of Syrian refugees and increases population vulnerability to governmental decision-making. Jordan, in particular, has seen rapid population growth and significant changes in land-use and infrastructure, leading to over-exploitation of the landscape through irrigation and construction. This dataset uses climate data, satellite imagery, and land cover information to illustrate the substantial increase in construction activity and highlights the intricate relationship between climate change predictions and current socio-political developments in the Levant.

    Folder structure

    The main folder after download contains all data; the following subfolders are stored as zipped files:

    “code” stores the above described 9 code chunks to read, extract, process, analyse, and visualize the data.

    “MODIS_merged” contains the 16-days, 250 m resolution NDVI imagery merged from three tiles (h20v05, h21v05, h21v06) and cropped to the study area, n=510, covering January 2001 to December 2022 and including January and February 2023.

    “mask” contains a single shapefile, which is the merged product of administrative boundaries, including Jordan, Lebanon, Israel, Syria, and Palestine (“MERGED_LEVANT.shp”).

    “yield_productivity” contains .csv files of yield information for all countries listed above.

    “population” contains two files with the same name but different format. The .csv file is for processing and plotting in R. The .ods file is for enhanced visualization of population dynamics in the Levant (Socio_cultural_political_development_database_FAO2023.ods).

    “GLDAS” stores the raw data of the NASA Global Land Data Assimilation System datasets that can be read, extracted (variable name), and processed using code “8_GLDAS_read_extract_trend” from the respective folder. One folder contains data from 1975-2022 and a second the additional January and February 2023 data.

    “built_up” contains the landcover and built-up change data from 1975 to 2022. This folder is subdivided into two subfolders, which contain the raw data and the already processed data. “raw_data” contains the unprocessed datasets and “derived_data” stores the cropped built_up datasets at 5-year intervals, e.g., “Levant_built_up_1975.tif”.

    Code structure

    1_MODIS_NDVI_hdf_file_extraction.R


    This is the first code chunk and refers to the extraction of MODIS data from the .hdf file format. The following packages must be installed, and the raw data must be downloaded using a simple mass downloader, e.g., from Google Chrome. Packages: terra. Download MODIS data after registration from: https://lpdaac.usgs.gov/products/mod13q1v061/ or https://search.earthdata.nasa.gov/search (MODIS/Terra Vegetation Indices 16-Day L3 Global 250m SIN Grid V061, last accessed 09th of October 2023). The code reads a list of files, extracts the NDVI, and saves each file to a single .tif file with the indication “NDVI”. Because the study area is quite large, we have to load three spatially different time series and merge them later. Note that the time series are temporally consistent.


    2_MERGE_MODIS_tiles.R


    In this code, we load and merge the three different stacks to produce large and consistent time series of NDVI imagery across the study area. We further use the package gtools to load the files in (1, 2, 3, 4, 5, 6, etc.). Here, we have three stacks from which we merge the first two (stack 1, stack 2) and store them. We then merge this stack with stack 3. We produce single files named NDVI_final_*consecutivenumber*.tif. Before saving the final output of single merged files, create a folder called “merged” and set the working directory to this folder, e.g., setwd("your directory_MODIS/merged").


    3_CROP_MODIS_merged_tiles.R


    Now we want to crop the derived MODIS tiles to our study area. We are using a mask, which is provided as .shp file in the repository, named "MERGED_LEVANT.shp". We load the merged .tif files and crop the stack with the vector. Saving to individual files, we name them “NDVI_merged_clip_*consecutivenumber*.tif. We now produced single cropped NDVI time series data from MODIS.
    The repository provides the already clipped and merged NDVI datasets.


    4_TREND_analysis_NDVI.R


    Now, we want to perform trend analysis on the derived data. The data we load are tricky, as they contain 16-day return periods across a year for a period of 22 years. Growing season sums contain MAM (March-May), JJA (June-August), and SON (September-November). December is represented as a single file, which means that the period DJF (December-February) is represented by 5 images instead of 6. For the last DJF period (December 2022), the data from January and February 2023 can be added. The code selects the respective images from the stack, depending on which period is under consideration. From these stacks, individual annually resolved growing season sums are generated and the slope is calculated. We can then extract the p-values of the trend and characterize all values with a high confidence level (0.05). Using the ggplot2 package and the melt function from the reshape2 package, we can create a plot of the reclassified NDVI trends together with a local smoother (LOESS) of value 0.3.
    To increase comparability and understand the amplitude of the trends, z-scores were calculated and plotted, which show the deviation of the values from the mean. This has been done for the NDVI values as well as the GLDAS climate variables as a normalization technique.
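    The repository's processing code is written in R (terra, ggplot2, reshape2); purely to illustrate the linear-trend and z-score steps described here, a short language-agnostic sketch in Python/numpy with made-up annual growing-season sums:

```python
import numpy as np

# Made-up annual growing-season NDVI sums for 22 years (2001-2022).
years = np.arange(2001, 2023)
rng = np.random.default_rng(0)
ndvi_sum = 3.0 + 0.01 * (years - 2001) + rng.normal(0.0, 0.05, years.size)

# Linear trend: slope of an ordinary least-squares fit against year.
slope, intercept = np.polyfit(years, ndvi_sum, 1)

# z-scores: deviation of each annual value from the mean, in units of standard deviation.
z_scores = (ndvi_sum - ndvi_sum.mean()) / ndvi_sum.std()

print(f"trend slope per year: {slope:.4f}")
print(np.round(z_scores, 2))
```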


    5_BUILT_UP_change_raster.R


    Let us look at the landcover changes now. We are working with the terra package and get raster data from here: https://ghsl.jrc.ec.europa.eu/download.php?ds=bu (last accessed 03. March 2023, 100 m resolution, global coverage). One can download the temporal coverage that is aimed for and reclassify it using the code after cropping it to the individual study area. Here, I summed up different rasters to characterize the built-up change in continuous values between 1975 and 2022.


    6_POPULATION_numbers_plot.R


    For this plot, one needs to load the .csv-file “Socio_cultural_political_development_database_FAO2023.csv” from the repository. The ggplot script provided produces the desired plot with all countries under consideration.


    7_YIELD_plot.R


    In this section, we are using the country productivity data from the supplement in the repository “yield_productivity” (e.g., "Jordan_yield.csv"). Each of the single-country yield datasets is plotted in a ggplot and combined using the patchwork package in R.


    8_GLDAS_read_extract_trend


    The last code provides the basis for the trend analysis of the climate variables used in the paper. The raw data can be accessed https://disc.gsfc.nasa.gov/datasets?keywords=GLDAS%20Noah%20Land%20Surface%20Model%20L4%20monthly&page=1 (last accessed 9th of October 2023). The raw data comes in .nc file format and various variables can be extracted using the [“^a variable name”] command from the spatraster collection. Each time you run the code, this variable name must be adjusted to meet the requirements for the variables (see this link for abbreviations: https://disc.gsfc.nasa.gov/datasets/GLDAS_CLSM025_D_2.0/summary, last accessed 09th of October 2023; or the respective code chunk when reading a .nc file with the ncdf4 package in R) or run print(nc) from the code or use names(the spatraster collection).
    Choosing one variable, the code uses the MERGED_LEVANT.shp mask from the repository to crop and mask the data to the outline of the study area.
    From the processed data, trend analysis are conducted and z-scores were calculated following the code described above. However, annual trends require the frequency of the time series analysis to be set to value = 12. Regarding, e.g., rainfall, which is measured as annual sums and not means, the chunk r.sum=r.sum/12 has to be removed or set to r.sum=r.sum/1 to avoid calculating annual mean values (see other variables). Seasonal subset can be calculated as described in the code. Here, 3-month subsets were chosen for growing seasons, e.g. March-May (MAM), June-July (JJA), September-November (SON), and DJF (December-February, including Jan/Feb of the consecutive year).
    From the data, mean values of 48 consecutive years are calculated and trend analysis are performed as describe above. In the same way, p-values are extracted and 95 % confidence level values are marked with dots on the raster plot. This analysis can be performed with a much longer time series, other variables, ad different spatial extent across the globe due to the availability of the GLDAS variables.

  10. Data from: PANACEA dataset - Heterogeneous COVID-19 Claims

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 15, 2022
    Cite
    Procter, Rob (2022). PANACEA dataset - Heterogeneous COVID-19 Claims [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6493846
    Dataset updated: Jul 15, 2022
    Dataset provided by: He, Yulan; Liakata, Maria; Procter, Rob; Zubiaga, Arkaitz; Arana-Catania, Miguel; Kochkina, Elena
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The peer-reviewed publication for this dataset has been presented in the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), and can be accessed here: https://arxiv.org/abs/2205.02596. Please cite this when using the dataset.

    This dataset contains a heterogeneous set of True and False COVID claims and online sources of information for each claim.

    The claims have been obtained from online fact-checking sources, existing datasets and research challenges. It combines different data sources with different foci, thus enabling a comprehensive approach that combines different media (Twitter, Facebook, general websites, academia), information domains (health, scholar, media), information types (news, claims) and applications (information retrieval, veracity evaluation).

    The processing of the claims included an extensive de-duplication process eliminating repeated or very similar claims. The dataset is presented in a LARGE and a SMALL version, accounting for different degrees of similarity between the remaining claims (excluding respectively claims with a 90% and 99% probability of being similar, as obtained through the MonoT5 model). The similarity of claims was analysed using BM25 (Robertson et al., 1995; Crestani et al., 1998; Robertson and Zaragoza, 2009) with MonoT5 re-ranking (Nogueira et al., 2020), and BERTScore (Zhang et al., 2019).

    The processing of the content also involved removing claims making only a direct reference to existing content in other media (audio, video, photos); automatically obtained content not representing claims; and entries with claims or fact-checking sources in languages other than English.

    The claims were analysed to identify types of claims that may be of particular interest, either for inclusion or exclusion depending on the type of analysis. The following types were identified: (1) Multimodal; (2) Social media references; (3) Claims including questions; (4) Claims including numerical content; (5) Named entities, including: PERSON − People, including fictional; ORGANIZATION − Companies, agencies, institutions, etc.; GPE − Countries, cities, states; FACILITY − Buildings, highways, etc. These entities have been detected using a RoBERTa base English model (Liu et al., 2019) trained on the OntoNotes Release 5.0 dataset (Weischedel et al., 2013) using Spacy.
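    A small sketch of that entity-detection step with spaCy; the pipeline name en_core_web_trf below is an assumption about which RoBERTa-based OntoNotes model was used, and the claim text is made up:

```python
import spacy

# Assumed transformer pipeline; install with: python -m spacy download en_core_web_trf
nlp = spacy.load("en_core_web_trf")

claim = "The WHO said a vaccine developed at Oxford University will be distributed in Brazil."
doc = nlp(claim)

# spaCy's OntoNotes label set: PERSON, ORG (organizations), GPE (countries, cities, states), FAC (facilities).
entities = [(ent.text, ent.label_) for ent in doc.ents
            if ent.label_ in {"PERSON", "ORG", "GPE", "FAC"}]
print(entities)
```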

    The original labels for the claims have been reviewed and homogenised from the different criteria used by each original fact-checker into the final True and False labels.

    The data sources used are:

    The LARGE dataset contains 5,143 claims (1,810 False and 3,333 True), and the SMALL version 1,709 claims (477 False and 1,232 True).

    The entries in the dataset contain the following information:

    • Claim. Text of the claim.

    • Claim label. The labels are: False, and True.

    • Claim source. The sources include mostly fact-checking websites, health information websites, health clinics, public institutions sites, and peer-reviewed scientific journals.

    • Original information source. Information about which general information source was used to obtain the claim.

    • Claim type. The different types, previously explained, are: Multimodal, Social Media, Questions, Numerical, and Named Entities.

    Funding. This work was supported by the UK Engineering and Physical Sciences Research Council (grant no. EP/V048597/1, EP/T017112/1). ML and YH are supported by Turing AI Fellowships funded by the UK Research and Innovation (grant no. EP/V030302/1, EP/V020579/1).

    References

    • Arana-Catania M., Kochkina E., Zubiaga A., Liakata M., Procter R., He Y.. Natural Language Inference with Self-Attention for Veracity Assessment of Pandemic Claims. NAACL 2022 https://arxiv.org/abs/2205.02596

    • Stephen E. Robertson, Steve Walker, Susan Jones, Micheline M. Hancock-Beaulieu, Mike Gatford, et al. 1995. Okapi at TREC-3. NIST Special Publication SP, 109:109.

    • Fabio Crestani, Mounia Lalmas, Cornelis J. Van Rijsbergen, and Iain Campbell. 1998. "Is this document relevant?... Probably": A survey of probabilistic models in information retrieval. ACM Computing Surveys (CSUR), 30(4):528–552.

    • Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Now Publishers Inc.

    • Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pre-trained sequence-to-sequence model. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 708–718.

    • Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.

    • Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

    • Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium, Philadelphia, PA, 23.

    • Limeng Cui and Dongwon Lee. 2020. CoAID: COVID-19 healthcare misinformation dataset. arXiv preprint arXiv:2006.00885.

    • Yichuan Li, Bohan Jiang, Kai Shu, and Huan Liu. 2020. MM-COVID: A multilingual and multimodal data repository for combating COVID-19 disinformation.

    • Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, and Sameer Singh. 2020. COVIDLies: Detecting COVID-19 misinformation on social media. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020, Online. Association for Computational Linguistics.

    • Ellen Voorhees, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, William R. Hersh, Kyle Lo, Kirk Roberts, Ian Soboroff, and Lucy Lu Wang. 2021. TREC-COVID: Constructing a pandemic information retrieval test collection. In ACM SIGIR Forum, volume 54, pages 1–12. ACM, New York, NY, USA.

  11. TRACES Bulgarian Twitter Dataset on Covid-19 Annotated with Linguistic...

    • zenodo.org
    • data.niaid.nih.gov
    Updated Dec 3, 2024
    Cite
    Irina Temnikova; Irina Temnikova; Silvia Gargova; Veneta Kireva; Tsvetelina Stefanova; Silvia Gargova; Veneta Kireva; Tsvetelina Stefanova (2024). TRACES Bulgarian Twitter Dataset on Covid-19 Annotated with Linguistic Markers of Lies [Dataset]. http://doi.org/10.5281/zenodo.7614247
    Explore at:
    Dataset updated
    Dec 3, 2024
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Irina Temnikova; Irina Temnikova; Silvia Gargova; Veneta Kireva; Tsvetelina Stefanova; Silvia Gargova; Veneta Kireva; Tsvetelina Stefanova
    Description

    This dataset has been created within Project TRACES (more information: https://traces.gate-ai.eu/). It contains 61,411 IDs of tweets written in Bulgarian, together with annotations. The dataset can be used for general purposes or for building lie and disinformation detection applications.

    Note: this dataset is not fact-checked; the social media messages have been retrieved via keywords. For fact-checked datasets, see our other datasets.

    The tweets (written between 1 Jan 2020 and 28 June 2022) have been collected via the Twitter API under academic access in June 2022 with the following keywords (a minimal collection sketch follows the keyword list):

    • (Covid OR коронавирус OR Covid19 OR Covid-19 OR Covid_19) - without replies and without retweets

    • (Корона OR корона OR Corona OR пандемия OR пандемията OR Spikevax OR SARS-CoV-2 OR бустерна доза) - with replies, but without retweets
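
    A minimal collection sketch under these settings is shown below, using the Twitter API v2 full-archive search endpoint available under academic access. The bearer token variable, the lang:bg filter and the lack of pagination are assumptions of the sketch, not part of the dataset documentation.

    # Sketch: retrieving tweet IDs with the Twitter API v2 full-archive search.
    # Assumes a bearer token with academic access in TWITTER_BEARER_TOKEN.
    import os
    import requests

    SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"
    BEARER = os.environ["TWITTER_BEARER_TOKEN"]

    QUERIES = [
        # without replies and without retweets
        "(Covid OR коронавирус OR Covid19 OR Covid-19 OR Covid_19) -is:reply -is:retweet",
        # with replies, but without retweets
        '(Корона OR корона OR Corona OR пандемия OR пандемията OR Spikevax OR SARS-CoV-2 OR "бустерна доза") -is:retweet',
    ]

    def search_page(query: str, next_token: str | None = None) -> dict:
        """Fetch one page (up to 500 tweets) matching `query` in the collection window."""
        params = {
            "query": query + " lang:bg",  # language filter is an assumption of this sketch
            "start_time": "2020-01-01T00:00:00Z",
            "end_time": "2022-06-28T23:59:59Z",
            "max_results": 500,
            "tweet.fields": "created_at,lang",
        }
        if next_token:
            params["next_token"] = next_token
        resp = requests.get(SEARCH_URL, params=params,
                            headers={"Authorization": f"Bearer {BEARER}"})
        resp.raise_for_status()
        return resp.json()

    page = search_page(QUERIES[0])
    tweet_ids = [t["id"] for t in page.get("data", [])]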

    Explanations of which fields can be used as markers of lies (or of intentional disinformation) are provided in our forthcoming paper (please cite it when using this dataset):

    Irina Temnikova, Silvia Gargova, Ruslana Margova, Veneta Kireva, Ivo Dzhumerov, Tsvetelina Stefanova and Hristiana Nikolaeva (2023). New Bulgarian Resources for Detecting Disinformation. 10th Language and Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics (LTC'23), Poznań, Poland.

  12. Replication dataset for PIIE PB 23-8, How did Korea’s fiscal accounts fare...

    • piie.com
    Updated Jun 26, 2023
    Cite
    Joseph E. Gagnon; Asher Rose (2023). Replication dataset for PIIE PB 23-8, How did Korea’s fiscal accounts fare during the COVID-19 pandemic? by Joseph E. Gagnon and Asher Rose (2023). [Dataset]. https://www.piie.com/publications/policy-briefs/2023/how-did-koreas-fiscal-accounts-fare-during-covid-19-pandemic
    Explore at:
    Dataset updated
    Jun 26, 2023
    Dataset provided by
    Peterson Institute for International Economicshttp://www.piie.com/
    Authors
    Joseph E. Gagnon; Asher Rose
    Area covered
    Korea
    Description

    This data package includes the underlying data files to replicate the data, tables, and charts presented in How did Korea’s fiscal accounts fare during the COVID-19 pandemic? PIIE Policy Brief 23-8.

    If you use the data, please cite as: Gagnon, Joseph E., and Asher Rose. 2023. How did Korea’s fiscal accounts fare during the COVID-19 pandemic? PIIE Policy Brief 23-8. Washington, DC: Peterson Institute for International Economics.

  13. Coronavirus Twitters NLP: Text Classification (PT)

    • kaggle.com
    zip
    Updated Jul 28, 2020
    Cite
    Francielle Vargas (2020). Coronavirus Twitters NLP: Text Classification (PT) [Dataset]. https://www.kaggle.com/francielleavargas/opcovidbr
    Explore at:
    zip(56500 bytes)Available download formats
    Dataset updated
    Jul 28, 2020
    Authors
    Francielle Vargas
    Description

    OPCovid-BR

    Twitter Data on Covid-19 for Aspect-Based Sentiment Analysis in Brazilian Portuguese

    Context

    Opinion mining, or sentiment analysis, is a particularly relevant task while the world is living through a pandemic. Extracting and classifying fine-grained opinions and polarity may give public and private agencies insight for making better decisions against the coronavirus pandemic. We therefore provide a new dataset of Twitter data related to the novel coronavirus (2019-nCoV), composed of 600 tweets manually annotated with aspect-level targets and binary polarity (positive or negative).

    Content

    OPCovid-BR is a dataset of Twitter data for fine-grained opinion mining and sentiment analysis applications in Portuguese. It is composed of 600 tweets annotated with opinion aspects, as well as binary document-level polarity (positive or negative).

    Acknowledgements

    The authors are grateful to CAPES and CNPq for supporting this work.

    Citing

    Vargas, F.A., Santos, R.S.S. and Rocha, P.R. (2020). Identifying fine-grained opinion and classifying polarity of twitter data on coronavirus pandemic. Proceedings of the 9th Brazilian Conference on Intelligent Systems (BRACIS 2020), Rio Grande, RS, Brazil.

    Bibtex

    @inproceedings{VargasEtAll2020,
      author    = {Francielle Alves Vargas and Rodolfo Sanches Saraiva Dos Santos and Pedro Regattieri Rocha},
      title     = {Identifying fine-grained opinion and classifying polarity of twitter data on coronavirus pandemic},
      booktitle = {Proceedings of the 9th Brazilian Conference on Intelligent Systems (BRACIS 2020)},
      pages     = {01-10},
      year      = {2020},
      address   = {Rio Grande, RS, Brazil},
      crossref  = {http://bracis2020.c3.furg.br/acceptedPapers.html},
    }

  14. USsummary time

    • covid-hub.gio.georgia.gov
    • prep-response-portal.napsgfoundation.org
    • +1more
    Updated Apr 11, 2020
    Cite
    CivicImpactJHU (2020). USsummary time [Dataset]. https://covid-hub.gio.georgia.gov/datasets/4cb598ae041348fb92270f102a6783cb
    Explore at:
    Dataset updated
    Apr 11, 2020
    Dataset authored and provided by
    CivicImpactJHU
    Area covered
    Description

    On March 10, 2023, the Johns Hopkins Coronavirus Resource Center ceased collecting and reporting global COVID-19 data. For updated cases, deaths, and vaccine data, please visit the following sources: Global: World Health Organization (WHO); U.S.: U.S. Centers for Disease Control and Prevention (CDC). For more information, visit the Johns Hopkins Coronavirus Resource Center. This feature layer contains the most up-to-date COVID-19 cases for the US. Data is pulled from the Coronavirus COVID-19 Global Cases by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University, the Red Cross, the Census American Community Survey, and the Bureau of Labor Statistics, and aggregated at the US county level. This web map was created and is maintained by the Centers for Civic Impact at Johns Hopkins University, and is supported by the Esri Living Atlas team and JHU Data Services. It is used in the COVID-19 United States Cases by County dashboard. For more information on Johns Hopkins University's response to COVID-19, visit the Johns Hopkins Coronavirus Resource Center, where our experts help to advance understanding of the virus, inform the public, and brief policymakers in order to guide a response, improve care, and save lives.

  15. Coronavirus in Italy (COVID-19)

    • kaggle.com
    zip
    Updated Jan 3, 2022
    Cite
    Paul Mooney (2022). Coronavirus in Italy (COVID-19) [Dataset]. https://www.kaggle.com/paultimothymooney/coronavirus-in-italy
    Explore at:
    zip(363311652 bytes)Available download formats
    Dataset updated
    Jan 3, 2022
    Authors
    Paul Mooney
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Area covered
    Italy
    Description

    Context

    To inform citizens and make the collected data available, the Department of Civil Protection has developed an interactive geographic dashboard accessible at the addresses http://arcg.is/C1unv (desktop version) and http://arcg.is/081a51 (mobile version) and makes available, with CC-BY-4.0 license, the following information updated daily at 18:30 (after the Head of Department press conference). For more detail, see https://github.com/pcm-dpc/COVID-19.

    Content

    COVID-19 data Italy

    National trend, JSON data, provinces data, regions data, summary cards, areas.

    Repository structure:

    COVID-19/
    ├── national-trend/
    │   ├── dpc-covid19-eng-national-trend-yyyymmdd.csv
    ├── areas/
    │   ├── geojson/
    │   │   ├── dpc-covid19-ita-aree.geojson
    │   ├── shp/
    │   │   ├── dpc-covid19-eng-areas.shp
    ├── data-provinces/
    │   ├── dpc-covid19-ita-province-yyyymmdd.csv
    ├── data-json/
    │   ├── dpc-covid19-eng-*.json
    ├── data-regions/
    │   ├── dpc-covid19-eng-regions-yyyymmdd.csv
    ├── summary-sheets/
    │   ├── provinces/
    │   │   ├── dpc-covid19-ita-scheda-province-yyyymmdd.pdf
    │   ├── regions/
    │   │   ├── dpc-covid19-eng-card-regions-yyyymmdd.pdf

    Data by Region
    Directory: data-regions
    Daily file structure: dpc-covid19-ita-regions-yyyymmdd.csv (e.g. dpc-covid19-ita-regions-20200224.csv)
    Overall file: dpc-covid19-eng-regions.csv
    An overall JSON file of all dates is made available in the "data-json" folder: dpc-covid19-eng-regions.json

    Data by Province
    Directory: data-provinces
    Daily file structure: dpc-covid19-ita-province-yyyymmdd.csv (e.g. dpc-covid19-ita-province-20200224.csv)
    Overall file: dpc-covid19-ita-province.csv
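
    As a small usage sketch, assuming the daily CSV files have been downloaded locally with the directory and naming scheme described above, one day of regional data can be loaded with pandas:

    # Sketch: loading one day of regional data from a local copy of the repository.
    # The path follows the naming scheme above; column names are defined upstream.
    import pandas as pd

    day = "20200224"
    regions = pd.read_csv(f"data-regions/dpc-covid19-ita-regions-{day}.csv")
    print(regions.head())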

    Acknowledgements

    Banner photo by CDC on Unsplash

    Data from https://github.com/pcm-dpc/COVID-19 is released under a CC BY 4.0 license. See https://github.com/pcm-dpc/COVID-19 for more detail.

  16. India Coronavirus COVID-19 Cases

    • tradingeconomics.com
    csv, excel, json, xml
    Updated Dec 15, 2017
    + more versions
    Cite
    TRADING ECONOMICS (2017). India Coronavirus COVID-19 Cases [Dataset]. https://tradingeconomics.com/india/coronavirus-cases
    Explore at:
    excel, xml, csv, jsonAvailable download formats
    Dataset updated
    Dec 15, 2017
    Dataset authored and provided by
    TRADING ECONOMICS
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 4, 2020 - May 17, 2023
    Area covered
    India
    Description

    India recorded 44,983,152 Coronavirus Cases since the epidemic began, according to the World Health Organization (WHO). In addition, India reported 531,794 Coronavirus Deaths. This dataset includes a chart with historical data for India Coronavirus Cases.

  17. National Social Life, Health, and Aging Project (NSHAP): Round 3 and...

    • icpsr.umich.edu
    ascii, delimited, r +3
    Updated Sep 9, 2024
    Cite
    Waite, Linda J.; Cagney, Kathleen A.; Dale, William; Hawkley, Louise C.; Huang, Elbert S.; Lauderdale, Diane S.; Laumann, Edward O.; McClintock, Martha K.; O'Muircheartaigh, Colm A.; Schumm, L. Philip (2024). National Social Life, Health, and Aging Project (NSHAP): Round 3 and COVID-19 Study, [United States], 2015-2016, 2020-2021 [Dataset]. http://doi.org/10.3886/ICPSR36873.v9
    Explore at:
    stata, sas, delimited, ascii, r, spssAvailable download formats
    Dataset updated
    Sep 9, 2024
    Dataset provided by
    Inter-university Consortium for Political and Social Researchhttps://www.icpsr.umich.edu/web/pages/
    Authors
    Waite, Linda J.; Cagney, Kathleen A.; Dale, William; Hawkley, Louise C.; Huang, Elbert S.; Lauderdale, Diane S.; Laumann, Edward O.; McClintock, Martha K.; O'Muircheartaigh, Colm A.; Schumm, L. Philip
    License

    https://www.icpsr.umich.edu/web/ICPSR/studies/36873/termshttps://www.icpsr.umich.edu/web/ICPSR/studies/36873/terms

    Time period covered
    2015 - 2016
    Area covered
    United States
    Description

    The National Social Life, Health and Aging Project (NSHAP) is a population-based study of health and social factors on a national scale, aiming to understand the well-being of older, community-dwelling Americans by examining the interactions among physical health, illness, medication use, cognitive function, emotional health, sensory function, health behaviors, and social connectedness. It is designed to provide health providers, policy makers, and individuals with useful information and insights into these factors, particularly social and intimate relationships. The National Opinion Research Center (NORC), along with Principal Investigators at the University of Chicago, conducted more than 3,000 interviews during 2005 and 2006 with a nationally representative sample of adults aged 57 to 85. Face-to-face interviews and biomeasure collection took place in respondents' homes.

    Round 3 was conducted from September 2015 through November 2016; 2,409 surviving Round 2 respondents were re-interviewed, and a New Cohort consisting of adults born between 1948 and 1965, together with their spouses or co-resident partners, was added. Altogether, 4,777 respondents were interviewed in Round 3. The following files constitute Round 3: Core Data, Social Networks Data, Disposition of Returning Respondent Partner Data, and Proxy Data. The Core files (Datasets 1 and 2) include demographic characteristics, such as gender, age, education, race, and ethnicity. Other topics cover respondents' social networks, social and cultural activity, physical and mental health including cognition, well-being, illness, history of sexual and intimate partnerships, and patient-physician communication, in addition to bereavement items. In addition, data on a panel of biomeasures, including weight, waist circumference, height, and blood pressure, was collected. The Social Networks files (Datasets 3 and 4) detail respondents' current relationship status with each person identified on the network roster. The Disposition of Returning Respondent Partner files (Datasets 5 and 6) detail information derived from Section 6A items regarding the partner from Rounds 1 and 2 within the questionnaire; this provides a complete history for respondent partners across both rounds. The Proxy files (Datasets 7 and 8) contain final health data for Round 1 and Round 2 respondents who could not participate in NSHAP due to disability or death.

    The COVID-19 sub-study, administered to NSHAP Round 3 respondents in the Fall of 2020, was a brief self-report questionnaire that probed how the coronavirus pandemic changed older adults' lives. The questionnaire was limited to assessing specific domains in which respondents may have been affected by the pandemic, including: (1) COVID experiences, (2) health and health care, (3) job and finances, (4) social support, (5) marital status and relationship quality, (6) social activity and engagement, (7) living arrangements, (8) household composition and size, (9) mental health, (10) elder mistreatment, (11) health behaviors, and (12) positive impacts of the coronavirus pandemic. Questions about engagement in racial justice issues since the death of George Floyd in police custody were also added to facilitate analysis of the independent and compounding effects of both the COVID-19 pandemic and the reckoning with longstanding racial injustice in America.

  18. COVID-19: Basic overview

    • data.europa.eu
    csv
    Updated Sep 8, 2024
    + more versions
    Cite
    Ministerstvo zdravotnictví (2024). COVID-19: Basic overview [Dataset]. https://data.europa.eu/data/datasets/https-data-mzcr-cz-lkod-00024341-datove-sady-59/embed
    Explore at:
    csvAvailable download formats
    Dataset updated
    Sep 8, 2024
    Dataset authored and provided by
    Ministerstvo zdravotnictví
    License

    https://data.gov.cz/zdroj/datové-sady/00024341/ce2d7c22cc9c45d774086502e69d37dc/distribuce/3423a57c60ed2a05ca2fd6db7cc12bfb/podmínky-užitíhttps://data.gov.cz/zdroj/datové-sady/00024341/ce2d7c22cc9c45d774086502e69d37dc/distribuce/3423a57c60ed2a05ca2fd6db7cc12bfb/podmínky-užití

    Description

    A brief overview of basic epidemiological data on the COVID-19 pandemic in the Czech Republic. The dataset contains the current cumulative number of PCR and antigen tests performed (including information for the previous day), confirmed cases in total and in the 65+ age group (including information for the previous day), active cases (including reinfections), recovered patients, deaths, vaccinations and hospitalised patients.

  19. DCMS Coronavirus Impact Business Survey - Round 2

    • gov.uk
    Updated Sep 23, 2020
    Cite
    Department for Digital, Culture, Media & Sport (2020). DCMS Coronavirus Impact Business Survey - Round 2 [Dataset]. https://www.gov.uk/government/statistics/dcms-coronavirus-impact-business-survey-round-2
    Explore at:
    Dataset updated
    Sep 23, 2020
    Dataset provided by
    GOV.UKhttp://gov.uk/
    Authors
    Department for Digital, Culture, Media & Sport
    Description

    These are the key findings from the second of three rounds of the DCMS Coronavirus Business Survey. These surveys are being conducted to help DCMS understand how our sectors are responding to the ongoing Coronavirus pandemic. The data collected are not longitudinal: responses are voluntary, so businesses have no obligation to complete multiple rounds of the survey, and businesses that did not submit a response to one round are not excluded from response collection in following rounds.

    The indicators and analysis presented in this bulletin are based on responses from the voluntary business survey, which captures organisations responses on how their turnover, costs, workforce and resilience have been affected by the coronavirus (COVID-19) outbreak. The results presented in this release are based on 3,870 completed responses collected between 17 August and 8 September 2020.

    1. Experimental Statistics

    This is the first time we have published these results as Official Statistics. An earlier round of the business survey can be found on gov.uk.

    We have designated these as Experimental Statistics, which are newly developed or innovative statistics. These are published so that users and stakeholders can be involved in the assessment of their suitability and quality at an early stage.

    We expect to publish a third round of the survey before the end of the financial year. To inform that release, we would welcome any user feedback on the presentation of these results to evidence@dcms.gov.uk by the end of November 2020.

    2. Data sources

    The survey was run simultaneously through DCMS stakeholder engagement channels and via a YouGov panel.

    The two sets of results have been merged to create one final dataset.

    Invitations to submit a response to the survey were circulated to businesses in relevant sectors through DCMS stakeholder engagement channels, prompting 2,579 responses.

    YouGov’s business omnibus panel elicited a further 1,288 responses. YouGov’s respondents are part of its panel of over one million adults in the UK. Pre-screened information on these panellists allows YouGov to target senior decision-makers of organisations in DCMS sectors.

    3. Quality

    One purpose of the survey is to highlight the characteristics of organisations in DCMS sectors whose viability is under threat in order to shape further government support. The timeliness of these results is essential, and some limitations arise from the need for this timely information:

    • Estimates from the DCMS Coronavirus (COVID-19) Impact Business Survey are currently unweighted (i.e., each business was assigned the same weight regardless of turnover, size or industry) and should be treated with caution when used to evaluate the impact of COVID-19 across the UK economy.
    • Survey responses through DCMS stakeholder comms are likely to contain an element of self-selection bias as those businesses that are more severely negatively affected have a greater incentive to report their experience.
    • Due to time constraints, we have yet to undertake any statistical significance testing or to provide confidence intervals.

    The UK Statistics Authority

    This release is published in accordance with the Code of Practice for Statistics, as produced by the UK Statistics Authority. The Authority has the overall objective of promoting and safeguarding the production and publication of official statistics that serve the public good. It monitors and reports on all official statistics, and promotes good practice in this area.

    The responsible statistician for this release is Alex Bjorkegren. For further details about the estimates, or to be added to a distribution list for future updates, please email us at evidence@dcms.gov.uk.

    Pre-release access

    The document above contains a list of ministers and officials who have received privileged early access to this release. In line with best practice, the list has been kept to a minimum and those given access for briefing purposes had a maximum of 24 hours.

  20. Data Bandwidth Measurement for Conference Call Data Usage

    • ieee-dataport.org
    Updated Sep 18, 2020
    + more versions
    Cite
    Dikamsiyochi Young UDOCHI (2020). Data Bandwidth Measurement for Conference Call Data Usage [Dataset]. http://doi.org/10.21227/g9v0-hm89
    Explore at:
    Dataset updated
    Sep 18, 2020
    Dataset provided by
    IEEE Dataport
    Authors
    Dikamsiyochi Young UDOCHI
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    During the first half of 2020, the COVID-19 pandemic shifted social gatherings to online business and social interaction. Worldwide travel bans and national lockdowns prevented social gatherings, leading learning institutions and businesses to adopt online platforms for learning and business transactions. This development led to the incorporation of video conferencing into daily activities. This data article presents broadband data usage measurements collected with Glasswire software on various conference calls made between July and August. The services considered in this work are Google Meet, Zoom, Mixir, and Hangout. The data were recorded in Microsoft Excel 2016 running on a personal computer, then cleaned and processed using Google Colaboratory, which runs Python scripts in the browser. Exploratory data analysis is conducted on the dataset, and linear regression is used to build a predictive model that assesses which service offers the best quality of service for online video and voice conferencing. The data is useful to learning institutions running online programs and to learners accessing online programs in smart cities and developing countries. The data is presented in tables and graphs.
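
    A minimal sketch of this kind of analysis is given below, assuming a CSV export of the Glasswire measurements with hypothetical column names (service, duration_min, data_mb):

    # Sketch: per-service linear regression of data usage against call duration.
    # File name and column names are hypothetical placeholders.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    df = pd.read_csv("conference_call_usage.csv")

    for service, grp in df.groupby("service"):  # e.g. Google Meet, Zoom, Mixir, Hangout
        model = LinearRegression().fit(grp[["duration_min"]], grp["data_mb"])
        print(f"{service}: ~{model.coef_[0]:.1f} MB per minute "
              f"(intercept {model.intercept_:.1f} MB)")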
