Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset contains a list of COVID-19 fake news items and claims that have been shared across the internet.
Content
Headlines: string attribute containing the headline/claim that was shared.
Outcome: binary label, where 0 means the headline is fake and 1 means it is true.
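For illustration, here is a minimal baseline sketch of how these two columns could be used to train a headline classifier with scikit-learn; the CSV file name is an assumption, and only the column names come from the description above.

```python
# Minimal baseline sketch; "covid_fake_news.csv" is a hypothetical file name.
# The columns "Headlines" and "Outcome" follow the dataset description above.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("covid_fake_news.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["Headlines"], df["Outcome"], test_size=0.2, random_state=42)

vectorizer = TfidfVectorizer(stop_words="english")
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

predictions = clf.predict(vectorizer.transform(X_test))
print("held-out accuracy:", accuracy_score(y_test, predictions))
```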
Inspiration
A common question on many research portals was whether a combined COVID-19 fake news dataset is available. This led to the publication of this dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
India
A global study conducted in March 2020 gathered data on consumers' attitudes to, experiences of, and issues with news consumption regarding the coronavirus pandemic, and found that ** percent of respondents were concerned about the amount of fake news being spread about the virus, which impeded their efforts to find the facts they needed to stay informed. Others faced challenges when seeking out trustworthy and reliable information, and ** percent felt that the public should be given more coronavirus news and updates from scientists and less from politicians.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The United States of America
The dataset contains fake and real news. There are 16,898 unique rows, which also indicates the number of news items. The dataset was merged from two datasets: one from a CBC news source (link: https://zenodo.org/record/4722470) and the other from different web portals (link: https://zenodo.org/record/4282522). Data description: Text: the text of the news item, which is either fake or real. Outcome: the status of the news item, either fake or real. Data source link 1: https://www.kaggle.com/ryanxjhan/cbc-news-coronavirusarticles-march-26 Data source link 2: https://zenodo.org/record/4722470
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Coronavirus disease 2019 (COVID-19) time series listing confirmed cases, reported deaths, and reported recoveries. Data is broken down by country (and sometimes by sub-region).
Coronavirus disease (COVID-19) is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and has had a worldwide effect. On March 11, 2020, the World Health Organization (WHO) declared it a pandemic, at which point more than 118,000 cases of coronavirus disease had been reported in more than 110 countries and territories around the world.
This dataset contains the latest news related to COVID-19, fetched using the Newsdata.io news API.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Free dataset from news sites, message boards, and blogs about the coronavirus (4 months of data, 5.2M posts). The time frame of the data is December 2019 to March 2020. The posts are in English and mention at least one of the following: "Covid" OR "CoronaVirus" OR "Corona Virus".
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset consists of a collection of true and fake news related to COVID-19, covering the period December 2019 to July 2020.
According to a study conducted in March 2020, ** percent of adults worldwide aged between 18 and 35 years old were getting most of their information about the coronavirus pandemic via social media, compared to ** percent of those aged 55 or above. Major news organizations were overall a more popular source of information about COVID-19, but younger consumers were more evenly split in terms of which platforms they were using the most to keep themselves updated about the virus, whereas older adults were far more likely to turn to major news outlets.
JHU Coronavirus COVID-19 Global Cases, by country
PHS updates the Coronavirus Global Cases dataset three times per week (Monday, Wednesday, and Friday) from Cloud Marketplace.
This data comes from the data repository for the 2019 Novel Coronavirus Visual Dashboard operated by the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE). This database was created in response to the Coronavirus public health emergency to track reported cases in real-time. The data include the location and number of confirmed COVID-19 cases, deaths, and recoveries for all affected countries, aggregated at the appropriate province or state. It was developed to enable researchers, public health authorities and the general public to track the outbreak as it unfolds. Additional information is available in the blog post.
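As a rough illustration, the time series can be read directly from the JHU CSSE GitHub repository with pandas; the raw URL below reflects the publicly documented repository layout and may change over time.

```python
# Sketch: read the JHU CSSE global confirmed-case time series and aggregate it
# to country level. The raw URL is assumed from the public repository layout.
import pandas as pd

URL = ("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
       "csse_covid_19_data/csse_covid_19_time_series/"
       "time_series_covid19_confirmed_global.csv")

confirmed = pd.read_csv(URL)
# One row per province/state; drop coordinates and sum up to country level.
by_country = (confirmed.drop(columns=["Province/State", "Lat", "Long"])
                       .groupby("Country/Region").sum())
print(by_country.loc["US"].tail())  # most recent cumulative counts for the US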
Visual Dashboard (desktop): https://www.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6
Included Data Sources are:
**Terms of Use:**
This GitHub repo and its contents herein, including all data, mapping, and analysis, copyright 2020 Johns Hopkins University, all rights reserved, is provided to the public strictly for educational and academic research purposes. The Website relies upon publicly available data from multiple sources, that do not always agree. The Johns Hopkins University hereby disclaims any and all representations and warranties with respect to the Website, including accuracy, fitness for use, and merchantability. Reliance on the Website for medical guidance or use of the Website in commerce is strictly prohibited.
**U.S. county-level characteristics relevant to COVID-19**
Chin, Kahn, Krieger, Buckee, Balsari and Kiang (forthcoming) show that counties differ significantly in biological, demographic and socioeconomic factors that are associated with COVID-19 vulnerability. A range of publicly available county-specific data identifying these key factors, guided by international experiences and consideration of epidemiological parameters of importance, have been combined by the authors and are available for use:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains world news related to COVID-19 and vaccines, along with the available metadata for each news article.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A corpus of news articles on the topic of the COVID-19 pandemic, published in major Slovenian daily newspapers and news portals in the six-month early pandemic period (March 2020 to September 2020).
The corpus is designed to facilitate research on crisis discourses, crisis communication, as well as pandemic-time linguistic innovation. It is available in plain text version and XML with full metadata.
Covid-NEWS-SLO is complemented with a separate corpus of citizen metalanguage comments, i.e. online comments to the news articles, available as Covid-NEWS-Comm-SLO. Parallel versions from Croatia and Serbia are also available.
The project leading to this publication has received funding from the European Union’s Horizon 2020 research and innovation programme under the H2020-EU.4. - SPREADING EXCELLENCE AND WIDENING PARTICIPATION programme Widening fellowships grant agreement No 101038047.
As the United States battles the coronavirus, news consumers across the country have been attempting to keep themselves updated with how the pandemic is progressing, and a survey held in March 2020 revealed that the most trusted news source for details on COVID-19 was the CDC, with ** percent of respondents saying that they trusted the centers to provide accurate information on the topic. Following closely behind was the World Health Organization and then the state government, but just ** percent of consumers said that they trusted social media sites to publish reliable and accurate news about the coronavirus outbreak.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A corpus of news articles on the topic of the COVID-19 pandemic, published in major Croatian daily newspapers and news portals in the six-month early pandemic period (March 2020 to September 2020).
The corpus is designed to facilitate research on crisis discourses, crisis communication, as well as pandemic-time linguistic innovation. It is available in plain text version and XML with full metadata.
Covid-NEWS-HR is complemented with a separate corpus of citizen metalanguage comments, i.e. online comments to the news articles, available as Covid-NEWS-Comm-HR. Parallel versions from Slovenia and Serbia are also available.
The project leading to this publication has received funding from the European Union’s Horizon 2020 research and innovation programme under the H2020-EU.4. - SPREADING EXCELLENCE AND WIDENING PARTICIPATION programme Widening fellowships grant agreement No 101038047.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
There are several works based on Natural Language Processing applied to newspaper reports. Rameshbhai et al. [1] mined opinions from headlines using Stanford NLP and SVM, comparing several algorithms on a small and a large dataset. Rubin et al., in their paper [2], created a mechanism to differentiate fake news from real news by building a set of characteristics of news according to their types; the purpose was to contribute to the low-resource data available for training machine learning algorithms. Doumit et al. [3] implemented LDA, a topic modeling approach, to study bias present in online news media.
However, not much NLP research has been devoted to studying COVID-19. Most applications involve classifying chest X-rays and CT scans to detect the presence of pneumonia in the lungs [4], a consequence of the virus. Other research areas include studying the genome sequence of the virus [5][6][7] and replicating its structure to fight it and find a vaccine. This research is crucial in battling the pandemic. The few NLP-based research publications include sentiment classification of online tweets by Samuel et al. [8] to understand the fear persisting in people due to the virus; similar work was done by Jelodar et al. [9], who used an LSTM network to classify sentiments from online discussion forums. To the best of our knowledge, the NKK dataset is the first study on a comparatively large dataset of newspaper reports on COVID-19, contributing to awareness of the virus.
2 Data-set Introduction
2.1 Data Collection
We accumulated 1,000 online newspaper reports from the United States of America (USA) on COVID-19. The newspapers include The Washington Post (USA) and StarTribune (USA). We have named this collection "Covid-News-USA-NNK". We also accumulated 50 online newspaper reports from Bangladesh on the issue and named that collection "Covid-News-BD-NNK". The newspapers include The Daily Star (BD) and Prothom Alo (BD). All of these newspapers are top providers and among the most read in their respective countries. The collection was done manually by 10 human data collectors of age group 23- with university degrees. This approach was preferable to automation in ensuring that the news was highly relevant to the subject: the newspapers' online sites had dynamic content with advertisements in no particular order, so there was a high chance of online scrapers collecting inaccurate news reports. One challenge while collecting the data was the requirement of a subscription; each newspaper required $1 per subscription. Some criteria for collecting the news reports, provided as guidelines to the human data collectors, were as follows:
The headline must have one or more words directly or indirectly related to COVID-19.
The content of each news must have 5 or more keywords directly or indirectly related to COVID-19.
The genre of the news can be anything as long as it is relevant to the topic; political, social, and economic genres are to be prioritized.
Avoid taking duplicate reports.
Maintain a consistent time frame for the above-mentioned newspapers.
To collect these data we used a Google Form each for the USA and BD. Two human editors went through each entry to check for spam or troll entries.
2.2 Data Pre-processing and Statistics
Some pre-processing steps performed on the newspaper report dataset are as follows:
Remove hyperlinks.
Remove non-English alphanumeric characters.
Remove stop words.
Lemmatize text.
While more pre-processing could have been applied, we tried to keep the data as unchanged as possible, since altering sentence structures could result in the loss of valuable information. While this was done with the help of a script, we also assigned the same human collectors to cross-check each entry against the above-mentioned criteria.
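A minimal sketch of such a pre-processing pipeline, assuming NLTK for stop-word removal and lemmatization (the paper does not name the exact tools used):

```python
# Illustrative pre-processing pipeline: remove hyperlinks, keep English
# alphanumerics, drop stop words, lemmatize. NLTK is an assumed choice of tool.
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords")
nltk.download("wordnet")

STOP_WORDS = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()

def preprocess(text: str) -> str:
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # remove hyperlinks
    text = re.sub(r"[^A-Za-z0-9\s]", " ", text)          # keep English alphanumerics
    tokens = [t.lower() for t in text.split() if t.lower() not in STOP_WORDS]
    return " ".join(LEMMATIZER.lemmatize(t) for t in tokens)

print(preprocess("COVID-19 cases are rising; see https://example.com for details."))
```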
The primary data statistics of the two datasets are shown in Tables 1 and 2.
Table 1: Covid-News-USA-NNK data statistics
No. of words per headline: 7 to 20
No. of words per body content: 150 to 2100

Table 2: Covid-News-BD-NNK data statistics
No. of words per headline: 10 to 20
No. of words per body content: 100 to 1500
2.3 Dataset Repository
We used GitHub as our primary data repository under the account name NKK^1. There we created two repositories, USA-NKK^2 and BD-NNK^3. The dataset is available in both CSV and JSON format. We regularly update the CSV files and regenerate the JSON using a Python script, and we provide a Python script for the essential operations. We welcome outside collaboration to enrich the dataset.
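The regeneration step can be as simple as the following sketch; the file names are illustrative rather than the actual repository contents.

```python
# Illustrative CSV-to-JSON regeneration script; file names are hypothetical.
import csv
import json

def csv_to_json(csv_path: str, json_path: str) -> None:
    with open(csv_path, newline="", encoding="utf-8") as src:
        rows = list(csv.DictReader(src))       # one dict per news report
    with open(json_path, "w", encoding="utf-8") as dst:
        json.dump(rows, dst, ensure_ascii=False, indent=2)

csv_to_json("Covid-News-USA-NNK.csv", "Covid-News-USA-NNK.json")
csv_to_json("Covid-News-BD-NNK.csv", "Covid-News-BD-NNK.json")
```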
3 Literature Review
Natural Language Processing (NLP) deals with text (also known as categorical) data in computer science, using numerous diverse methods, such as one-hot encoding and word embeddings, that transform text into a machine-readable representation which can be fed to machine learning and deep learning algorithms.
Some well-known applications of NLP include fraud detection on online media sites [10], authorship attribution in fallback authentication systems [11], intelligent conversational agents or chatbots [12], and the machine translation used by Google Translate [13]. While these are all downstream tasks, several exciting developments have been made in algorithms devoted solely to Natural Language Processing. The two most trending ones are BERT [14], which uses the Transformer's bidirectional encoder architecture and can perform near-perfect classification and word-prediction tasks, and the GPT-3 models released by OpenAI [15], which can generate almost human-like text. However, these are all pre-trained models, since they carry a huge computation cost.

Information extraction is a generalized concept of retrieving information from a dataset. Information extraction from an image could mean retrieving vital feature spaces or targeted portions of an image; information extraction from speech could mean retrieving information about names, places, etc. [16]. Information extraction from text could mean identifying named entities, locations, or other essential data. Topic modeling is a sub-task of NLP and also a form of information extraction: it clusters words and phrases of the same context together into groups. Topic modeling is an unsupervised learning method that gives us a brief idea about a set of texts. One commonly used topic modeling method is Latent Dirichlet Allocation (LDA) [17].
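As a small illustration of the LDA approach mentioned above, a scikit-learn sketch on a placeholder corpus (the documents are stand-ins, not data from the dataset):

```python
# LDA topic modeling sketch with scikit-learn; the documents are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "coronavirus cases rise as hospitals fill up",
    "stock markets fall amid pandemic fears",
    "new vaccine trial shows promising results",
]

vectorizer = CountVectorizer(stop_words="english")
term_matrix = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(term_matrix)
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:]]
    print(f"topic {i}: {top_terms}")
```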
Keyword extraction is a form of information extraction and a sub-task of NLP that extracts essential words and phrases from a text. TextRank [18] is an efficient keyword extraction technique that uses a graph to compute a weight for each word and picks the words with the highest weights.
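A simplified TextRank-style sketch, building a word co-occurrence graph over a sliding window and ranking nodes with PageRank; this is illustrative and not the exact formulation of [18]:

```python
# Simplified TextRank-style keyword extraction: co-occurrence graph + PageRank.
import networkx as nx

def textrank_keywords(text: str, window: int = 3, top_k: int = 5):
    words = [w.lower() for w in text.split() if w.isalpha()]
    graph = nx.Graph()
    for i in range(len(words)):
        for j in range(i + 1, min(i + window, len(words))):
            graph.add_edge(words[i], words[j])   # words co-occurring in the window
    scores = nx.pagerank(graph)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(textrank_keywords(
    "the government response to the coronavirus crisis shaped the economy and the election"))
```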
Word clouds are a useful visualization technique for understanding the overall 'talk of the topic'. The clustered words give a quick understanding of the content.
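A short sketch of producing such a word cloud with the wordcloud library referenced below; the headline text here is invented for illustration.

```python
# Word cloud sketch with the wordcloud library; the input text is illustrative.
import matplotlib.pyplot as plt
from wordcloud import WordCloud

headlines = " ".join([
    "coronavirus outbreak china economy",
    "masks and lockdown across the nation",
    "election and stock market under pressure",
])

cloud = WordCloud(width=800, height=400, background_color="white").generate(headlines)
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```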
4 Our experiments and Result analysis
We used the wordcloud library^4 to create the word clouds. Figures 1 and 3 present the word clouds of the Covid-News-USA-NNK dataset by month from February to May. From Figures 1, 2, and 3, we can point out the following:
In February, both newspapers talked about China and the source of the outbreak.
StarTribune emphasized Minnesota as the most concerned state; in April, this concern appeared to grow.
Both newspapers talked about the virus impacting the economy, i.e., banks, elections, administrations, and markets.
The Washington Post discussed global issues more than StarTribune.
In February, StarTribune mentioned the first precautionary measure, wearing masks, as well as the uncontrollable spread of the virus throughout the nation.
While both newspapers mentioned the outbreak in China in February, the spread in the United States is highlighted more heavily from March through May, reflecting the critical impact caused by the virus.
We used a script to extract all numbers associated with certain keywords such as 'Deaths', 'Infected', 'Died', 'Infections', 'Quarantined', 'Lock-down', and 'Diagnosed' from the news reports and built a case-count series for both newspapers. Figure 4 shows the statistics of this series. From this extraction we can observe that April was the peak month for COVID-19 cases, after a gradual rise from February. Both newspapers clearly show that the rise in cases from February to March was slower than the rise from March to April, an important indicator of possible recklessness in preparations to battle the virus. However, the steep fall from April to May also shows a positive response against the outbreak.

We used VADER sentiment analysis to extract the sentiment of the headlines and the bodies. On average, the sentiment scores ranged from -0.5 to -0.9; the VADER sentiment scale ranges from -1 (highly negative) to 1 (highly positive). There were some cases where the sentiment scores of the headline and body contradicted each other, i.e., the sentiment of the headline was negative but the sentiment of the body was slightly positive. Overall, sentiment analysis can help us separate the most concerning (most negative) news from the positive news, from which we can learn more about the indicators related to COVID-19 and the serious impact it caused. Moreover, sentiment analysis can also tell us how a state or country is reacting to the pandemic.

We used the PageRank algorithm to extract keywords from the headlines as well as the body content; PageRank efficiently highlights important, relevant keywords in the text. Some frequently occurring important keywords extracted from both datasets are: 'China', 'Government', 'Masks', 'Economy', 'Crisis', 'Theft', 'Stock market', 'Jobs', 'Election', 'Missteps', 'Health', and 'Response'. Keyword extraction acts as a filter allowing quick searches for indicators in case of locating situations of the economy,
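A brief sketch of the headline/body scoring described above, using the VADER sentiment analyzer; the example texts are invented.

```python
# VADER sentiment sketch; compound scores range from -1 (negative) to +1 (positive).
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
headline = "Deaths surge as hospitals struggle with the outbreak"
body = "Officials report early signs that new infections may finally be slowing."

print("headline compound:", analyzer.polarity_scores(headline)["compound"])
print("body compound:", analyzer.polarity_scores(body)["compound"])
```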
CC0 1.0: https://choosealicense.com/licenses/cc0-1.0/
Dataset Card for COVID News Articles (2020 - 2022)
Dataset Summary
The dataset encapsulates approximately half a million news articles collected over a period of two years during the onset and surge of the coronavirus pandemic. It consists of three columns: title, content, and category. title refers to the headline of the news article, content refers to the article itself, and category denotes the overall context of the news article at a high level. See the full description on the dataset page: https://huggingface.co/datasets/osanseviero/covid_news.
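A minimal sketch of loading the dataset with the Hugging Face datasets library; the identifier comes from the page linked above, while the split name is an assumption.

```python
# Load the dataset from the Hugging Face Hub; the split name "train" is assumed.
from datasets import load_dataset

news = load_dataset("osanseviero/covid_news", split="train")
print(news.column_names)   # expected per the card: title, content, category
print(news[0]["title"])
```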
According to a survey conducted in March 2020, ** percent of U.S. news consumers said that they were seeking out the latest information about the coronavirus via news media in general, including TV news, radio news, online news, and newspapers. In fact, ** percent of adults aged 55 or above were getting most of their news about the virus this way, compared to just ** percent of ** to 24-year-olds who were more likely than their older peers to turn to websites or social media posts from government or health agencies.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A corpus of readers' news comments posted below news articles on the topic of the COVID-19 pandemic, published in major Serbian daily newspapers and news portals in the six-month early pandemic period (March 2020 to September 2020). The corpus is designed to facilitate research on crisis discourses, crisis communication, as well as pandemic-time linguistic innovation. It is available in plain text version and XML with full metadata. The corpus complements a separate corpus of news articles, the Serbian Coronavirus Corpus NewsSR. Parallel versions from Croatia and Slovenia are also available. The project leading to this publication has received funding from the European Union's Horizon 2020 research and innovation programme under the H2020-EU.4. - SPREADING EXCELLENCE AND WIDENING PARTICIPATION programme, Widening fellowships grant agreement No 101038047.
National newspapers, television, and radio were the go-to sources of COVID-19 news and information among Generation Z and Millennials as of ************, with a survey revealing that **** percent of the respondents selected these sources as the most used. Actively searching using search engines, as well as international media were among the other most preferred sources, followed by different types of social media content.
Database Contents License (DbCL) 1.0: http://opendatacommons.org/licenses/dbcl/1.0/
Amidst the COVID-19 outbreak, the world is facing a great crisis in every way, and the values and things we have built as a human race are facing tremendous challenges. This is a small effort to provide a curated dataset on the novel coronavirus to accelerate forecasting and analytical experiments in coping with this critical situation. It will help to visualize country-level outbreaks and to keep track of regularly added new incidents.
This dataset contains country-wise, public-domain time series information on the COVID-19 outbreak. The data is sorted alphabetically by country name and date of observation.
The data set contains the following columns:
ObservationDate: The date on which the incidents are observed
country: Country of the Outbreak
Confirmed: Number of confirmed cases till observation date
Deaths: Number of death cases till observation date
Recovered: Number of recovered cases till observation date
New Confirmed: Number of new confirmed cases on observation date
New Deaths: Number of New death cases on observation date
New Recovered: Number of New recovered cases on observation date
latitude: Latitude of the affected country
longitude: Longitude of the affected country
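A small sketch of working with the columns listed above using pandas; the CSV file name is an assumption.

```python
# Sketch of basic use of the listed columns; the file name is hypothetical.
import pandas as pd

df = pd.read_csv("covid19_country_timeseries.csv", parse_dates=["ObservationDate"])

# Latest cumulative counts per country.
latest = (df.sort_values("ObservationDate")
            .groupby("country")
            .tail(1)[["country", "Confirmed", "Deaths", "Recovered"]])
print(latest.head())

# Daily new confirmed cases for one country.
india = df[df["country"] == "India"].set_index("ObservationDate")
print(india["New Confirmed"].tail())
```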
This dataset is a cleaner version of https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset, with added geolocation information and regularly updated incident counts. I would like to thank SRK for this great effort.
Johns Hopkins University
MoBS lab: https://www.mobs-lab.org/2019ncov.html
World Health Organization (WHO): https://www.who.int/
DXY.cn. Pneumonia. 2020: http://3g.dxy.cn/newh5/view/pneumonia
BNO News: https://bnonews.com/index.php/2020/02/the-latest-coronavirus-cases/
National Health Commission of the People’s Republic of China (NHC): http://www.nhc.gov.cn/xcs/yqtb/list_gzbd.shtml
China CDC (CCDC): http://weekly.chinacdc.cn/news/TrackingtheEpidemic.htm
Hong Kong Department of Health: https://www.chp.gov.hk/en/features/102465.html
Macau Government: https://www.ssm.gov.mo/portal/
Taiwan CDC: https://sites.google.com/cdc.gov.tw/2019ncov/taiwan?authuser=0
US CDC: https://www.cdc.gov/coronavirus/2019-ncov/index.html
Government of Canada: https://www.canada.ca/en/public-health/services/diseases/coronavirus.html
Australia Government Department of Health: https://www.health.gov.au/news/coronavirus-update-at-a-glance
European Centre for Disease Prevention and Control (ECDC): https://www.ecdc.europa.eu/en/geographical-distribution-2019-ncov-cases
Ministry of Health Singapore (MOH): https://www.moh.gov.sg/covid-19
Italy Ministry of Health: http://www.salute.gov.it/nuovocoronavirus