Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The dataset was collected as part of Prac1 of the subject Typology and Data Life Cycle in the Master's Degree in Data Science at the Universitat Oberta de Catalunya (UOC).
The dataset contains 25 variables and 52,478 records corresponding to books on the GoodReads Best Books Ever list (the largest list on the site).
The original code used to retrieve the dataset can be found in the GitHub repository: github.com/scostap/goodreads_bbe_dataset
The data was retrieved in two batches: the first 30,000 books and then the remaining 22,478. Dates were not parsed and reformatted for the second batch, so publishDate and firstPublishDate are represented in mm/dd/yyyy format for the first 30,000 records and in Month Day Year format for the rest.
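If a single date format is needed, both representations can be normalised after loading. A minimal sketch, assuming a hypothetical CSV filename (the actual file name is in the repository):

```python
import pandas as pd
from dateutil import parser as dateparser

books = pd.read_csv("goodreads_books.csv")  # hypothetical filename

def parse_date(value):
    """Parse either 'mm/dd/yyyy' or 'Month Day Year' strings; return None for anything else."""
    try:
        return dateparser.parse(str(value))
    except (ValueError, OverflowError):
        return None

for col in ["publishDate", "firstPublishDate"]:
    books[col] = books[col].map(parse_date)
```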
Book cover images can optionally be downloaded from the URL in the 'coverImg' field. Python code for doing so and an example can be found in the GitHub repository.
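As a rough sketch only (not the repository's script, and with a hypothetical CSV filename), such a download could look like:

```python
import os
import pandas as pd
import requests

books = pd.read_csv("goodreads_books.csv")  # hypothetical filename
os.makedirs("covers", exist_ok=True)

for book_id, url in zip(books["bookId"], books["coverImg"]):
    if pd.isna(url):
        continue
    response = requests.get(url, timeout=30)
    if response.ok:
        # bookId values are used as file names here; sanitise them if needed.
        with open(os.path.join("covers", f"{book_id}.jpg"), "wb") as fh:
            fh.write(response.content)
```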
The 25 fields of the dataset are:
| Attribute | Definition | Completeness (%) |
| ------------- | ------------- | ------------- |
| bookId | Book Identifier as in goodreads.com | 100 |
| title | Book title | 100 |
| series | Series Name | 45 |
| author | Book's Author | 100 |
| rating | Global goodreads rating | 100 |
| description | Book's description | 97 |
| language | Book's language | 93 |
| isbn | Book's ISBN | 92 |
| genres | Book's genres | 91 |
| characters | Main characters | 26 |
| bookFormat | Type of binding | 97 |
| edition | Type of edition (e.g. Anniversary Edition) | 9 |
| pages | Number of pages | 96 |
| publisher | Publisher | 93 |
| publishDate | Publication date | 98 |
| firstPublishDate | Publication date of first edition | 59 |
| awards | List of awards | 20 |
| numRatings | Number of total ratings | 100 |
| ratingsByStars | Number of ratings by stars | 97 |
| likedPercent | Derived field, percent of ratings over 2 stars (as in GoodReads) | 99 |
| setting | Story setting | 22 |
| coverImg | URL to cover image | 99 |
| bbeScore | Score in Best Books Ever list | 100 |
| bbeVotes | Number of votes in Best Books Ever list | 100 |
| price | Book's price (extracted from Iberlibro) | 73 |
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains article metadata and information about Open Science Indicators for approximately 139,000 research articles published in PLOS journals from 1 January 2018 to 30 March 2025, and for a set of approximately 28,000 comparator articles published in non-PLOS journals. This is the tenth release of this dataset, which will be updated with new versions on an annual basis. This version of the Open Science Indicators dataset shares the indicators seen in the previous versions as well as fully operationalised protocols and study registration indicators, which were previously only shared in preliminary form. The v10 dataset focuses on detection of five Open Science practices by analysing the XML of published research articles:
- Sharing of research data, in particular data shared in data repositories
- Sharing of code
- Posting of preprints
- Sharing of protocols
- Sharing of study registrations
The dataset provides data and code generation and sharing rates, and the location of shared data and code (whether in Supporting Information or in an online repository). It also provides preprint, protocol and study registration sharing rates, as well as details of the shared output, such as publication date, URL/DOI/Registration Identifier and platform used. Additional data fields are also provided for each article analysed. This release has been run using an updated preprint detection method (see OSI-Methods-Statement_v10_Jul25.pdf for details). Further information on the methods used to collect and analyse the data can be found in Documentation. Further information on the principles and requirements for developing Open Science Indicators is available in https://doi.org/10.6084/m9.figshare.21640889.
Data folders/files
Data Files folder: This folder contains the main OSI dataset files, PLOS-Dataset_v10_Jul25.csv and Comparator-Dataset_v10_Jul25.csv, which contain descriptive metadata (e.g. article title, publication date, author countries) taken from the article .xml files, along with additional information around the Open Science Indicators derived algorithmically. The OSI-Summary-statistics_v10_Jul25.xlsx file contains the summary data for both PLOS-Dataset_v10_Jul25.csv and Comparator-Dataset_v10_Jul25.csv.
Documentation folder: This folder contains documentation related to the main data files. The file OSI-Methods-Statement_v10_Jul25.pdf describes the methods underlying the data collection and analysis. OSI-Column-Descriptions_v10_Jul25.pdf describes the fields used in PLOS-Dataset_v10_Jul25.csv and Comparator-Dataset_v10_Jul25.csv. OSI-Repository-List_v1_Dec22.xlsx lists the repositories, and their characteristics, used to identify specific repositories in the PLOS-Dataset_v10_Jul25.csv and Comparator-Dataset_v10_Jul25.csv repository fields. The folder also contains documentation originally shared alongside the preliminary versions of the protocols and study registration indicators, in order to give fuller details of their detection methods.
Contact details for further information:
Iain Hrynaszkiewicz, Director, Open Research Solutions, PLOS, ihrynaszkiewicz@plos.org / plos@plos.org
Lauren Cadwallader, Open Research Manager, PLOS, lcadwallader@plos.org / plos@plos.org
Acknowledgements: Thanks to Allegra Pearce, Tim Vines, Asura Enkhbayar, Scott Kerr and Parth Sarin of DataSeer for contributing to data acquisition and supporting information.
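A minimal loading sketch for the files named above (reading the .xlsx summary requires the optional openpyxl dependency; field definitions are in OSI-Column-Descriptions_v10_Jul25.pdf):

```python
import pandas as pd

plos = pd.read_csv("PLOS-Dataset_v10_Jul25.csv")
comparator = pd.read_csv("Comparator-Dataset_v10_Jul25.csv")
summary = pd.read_excel("OSI-Summary-statistics_v10_Jul25.xlsx")

print(plos.shape, comparator.shape, summary.shape)
```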
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a republication of the CoAID dataset, originally dedicated to fake news detection. We have repurposed it here for use in the context of event tracking in press documents.
Cui, Limeng, and Dongwon Lee. 2020. "CoAID: COVID-19 Healthcare Misinformation Dataset." arXiv:2006.00885 [cs], November. http://arxiv.org/abs/2006.00885.
In this dataset, we provide multiple features extracted from the text itself. Please note that the texts are omitted from the published CSV files for copyright reasons; you can download the original datasets and manually add the missing texts from the original publications.
Features are extracted using:
- A corpus of reference articles in multiple languages for TF-IDF weighting (features_news) [1]
- A corpus of tweets reporting news for TF-IDF weighting (features_tweets) [1]
- An S-BERT model [2] that uses distiluse-base-multilingual-cased-v1 (called features_use) [3]
- An S-BERT model [2] that uses paraphrase-multilingual-mpnet-base-v2 (called features_mpnet) [4]
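For illustration only (this is not the authors' extraction pipeline), a minimal sentence-transformers sketch of how such embeddings could be recomputed once the missing texts have been restored:

```python
from sentence_transformers import SentenceTransformer

texts = ["Example news item restored from the original CoAID release."]  # placeholder

use_model = SentenceTransformer("distiluse-base-multilingual-cased-v1")
mpnet_model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

features_use = use_model.encode(texts)      # one 512-dimensional vector per text
features_mpnet = mpnet_model.encode(texts)  # one 768-dimensional vector per text
```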
References:
[1]: Guillaume Bernard. (2022). Resources to compute TF-IDF weightings on press articles and tweets (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.6610406
[2]: Reimers, Nils, and Iryna Gurevych. 2019. "Sentence-BERT: Sentence Embeddings Using Siamese BERT-Networks." In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 3982-3992. Hong Kong, China: Association for Computational Linguistics. https://doi.org/10.18653/v1/D19-1410.
[3]: https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1
[4]: https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset Details: The INCLUDE dataset has 4,292 videos (the paper mentions 4,287 videos, but 5 videos were added later). The videos used for training are listed in train.csv (3,475 files), while those used for testing are listed in test.csv (817 files). Each video is a recording of one ISL sign, signed by deaf students from St. Louis School for the Deaf, Adyar, Chennai.
INCLUDE50 has 766 train videos and 192 test videos.
Train-Test Split: Please download the train-test split for INCLUDE and INCLUDE50 from here: Train-Test Split
Publication Link: https://dl.acm.org/doi/10.1145/3394171.3413528
AI4Bharat website: https://sign-language.ai4bharat.org/
Download Instructions
For ease of access, we have prepared a Shell Script to download all the parts of the dataset and extract them to form the complete INCLUDE dataset.
You can find the script here: http://bit.ly/include_dl
Paper Abstract: Indian Sign Language (ISL) is a complete language with its own grammar, syntax, vocabulary and several unique linguistic attributes. It is used by over 5 million deaf people in India. Currently, there is no publicly available dataset on ISL to evaluate Sign Language Recognition (SLR) approaches. In this work, we present the Indian Lexicon Sign Language Dataset - INCLUDE - an ISL dataset that contains 0.27 million frames across 4,287 videos over 263 word signs from 15 different word categories. INCLUDE is recorded with the help of experienced signers to provide close resemblance to natural conditions. A subset of 50 word signs is chosen across word categories to define INCLUDE-50 for rapid evaluation of SLR methods with hyperparameter tuning. The best-performing model achieves an accuracy of 94.5% on the INCLUDE-50 dataset and 85.6% on the INCLUDE dataset.
A cultural and social history of language cannot be written without a broad range of primary sources. For a history of French in Russia, some documents are already available in published form, but a great number of relevant documents from Russian and some other archives are still to be published. Even documents which have been published have rarely been analysed in terms of the cultural and social history of French understood as a lingua franca, a prestige language, a medium for new cultural values and notions, a tool of cultural propaganda and so forth. Our purpose in collecting these documents is therefore to provide the beginnings of a corpus on which such a history can be based. The dataset contains pairs of documents. In each pair there is (i) a text, or excerpts from a text, or a set of texts or excerpts (with our editorial notes) and (ii) an essay (also with notes) which introduces the text(s) in question. The collection of texts published here represents the first batch of a small sample of the innumerable primary sources available for a multidisciplinary study of the history of the French language in Russia. Complete download (zip, 18.8 MiB)
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a republication of the FibVid dataset, originally dedicated to fake news detection. We have repurposed it here for use in the context of event tracking in press documents.
Kim, Jisu, Jihwan Aum, SangEun Lee, Yeonju Jang, Eunil Park, and Daejin Choi. 2021. "FibVID: Comprehensive Fake News Diffusion Dataset during the COVID-19 Period." Telematics and Informatics 64 (November): 101688. https://doi.org/10.1016/j.tele.2021.101688.
In this dataset, we provide multiple features extracted from the text itself. Please note that the texts are omitted from the published CSV files for copyright reasons; you can download the original datasets and manually add the missing texts from the original publications.
Features are extracted using:
- A corpus of reference articles in multiple languages for TF-IDF weighting (features_news) [1]
- A corpus of tweets reporting news for TF-IDF weighting (features_tweets) [1]
- An S-BERT model [2] that uses distiluse-base-multilingual-cased-v1 (called features_use) [3]
- An S-BERT model [2] that uses paraphrase-multilingual-mpnet-base-v2 (called features_mpnet) [4]
References:
[1]: Guillaume Bernard. (2022). Resources to compute TF-IDF weightings on press articles and tweets (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.6610406
[2]: Reimers, Nils, and Iryna Gurevych. 2019. "Sentence-BERT: Sentence Embeddings Using Siamese BERT-Networks." In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 3982-3992. Hong Kong, China: Association for Computational Linguistics. https://doi.org/10.18653/v1/D19-1410.
[3]: https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1
[4]: https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
LScD (Leicester Scientific Dictionary)
April 2020, by Neslihan Suzen, PhD student at the University of Leicester (ns433@leicester.ac.uk / suzenneslihan@hotmail.com). Supervised by Prof Alexander Gorban and Dr Evgeny Mirkes.
[Version 3] The third version of LScD (Leicester Scientific Dictionary) is created from the updated LSC (Leicester Scientific Corpus) - Version 2*. All pre-processing steps applied to build the new version of the dictionary are the same as in Version 2** and can be found in the description of Version 2 below; we do not repeat the explanation here. After the pre-processing steps, the total number of unique words in the new version of the dictionary is 972,060. The files provided with this description are the same as those described for LScD Version 2 below.
* Suzen, Neslihan (2019): LSC (Leicester Scientific Corpus). figshare. Dataset. https://doi.org/10.25392/leicester.data.9449639.v2
** Suzen, Neslihan (2019): LScD (Leicester Scientific Dictionary). figshare. Dataset. https://doi.org/10.25392/leicester.data.9746900.v2
[Version 2] Getting Started
This document provides the pre-processing steps for creating an ordered list of words from the LSC (Leicester Scientific Corpus) [1] and the description of LScD (Leicester Scientific Dictionary). This dictionary is created to be used in future work on the quantification of the meaning of research texts. R code for producing the dictionary from LSC and instructions for usage of the code are available in [2]. The code can also be used for lists of texts from other sources; amendments to the code may be required.
LSC is a collection of abstracts of articles and proceedings papers published in 2014 and indexed by the Web of Science (WoS) database [3]. Each document contains the title, list of authors, list of categories, list of research areas, and times cited. The corpus contains only documents in English. The corpus was collected in July 2018 and contains the number of citations from publication date to July 2018. The total number of documents in LSC is 1,673,824.
LScD is an ordered list of words from the texts of abstracts in LSC. The dictionary stores 974,238 unique words and is sorted by the number of documents containing the word, in descending order. All words in the LScD are in stemmed form. The LScD contains the following information:
1. Unique words in abstracts
2. Number of documents containing each word
3. Number of appearances of a word in the entire corpus
Processing the LSC
Step 1. Downloading the LSC Online: Use of the LSC is subject to acceptance of a request for the link by email. To access the LSC for research purposes, please email ns433@le.ac.uk. The data are extracted from Web of Science [3]. You may not copy or distribute these data in whole or in part without the written consent of Clarivate Analytics.
Step 2. Importing the Corpus to R: The full R code for processing the corpus can be found on GitHub [2]. All following steps can be applied to an arbitrary list of texts from any source with changes of parameters. The structure of the corpus, such as the file format and the names (and positions) of fields, should be taken into account when applying our code. The organisation of the CSV files of LSC is described in the README file for LSC [1].
Step 3. Extracting Abstracts and Saving Metadata: Metadata, which include all fields in a document excluding abstracts, and the field of abstracts are separated. Metadata are then saved as MetaData.R. Fields of the metadata are: List_of_Authors, Title, Categories, Research_Areas, Total_Times_Cited and Times_cited_in_Core_Collection.
Step 4. Text Pre-processing Steps on the Collection of Abstracts: In this section, we present our approaches to pre-processing the abstracts of the LSC.
1. Removing punctuation and special characters: This is the process of substituting all non-alphanumeric characters with a space. We did not substitute the character "-" in this step, because we need to keep words like "z-score", "non-payment" and "pre-processing" in order not to lose the actual meaning of such words. A process of uniting prefixes with words is performed in later steps of pre-processing.
2. Lowercasing the text data: Lowercasing is performed to avoid treating words like "Corpus", "corpus" and "CORPUS" differently. The entire collection of texts is converted to lowercase.
3. Uniting prefixes of words: Words containing prefixes joined with the character "-" are united into a single word. The list of prefixes united for this research is given in the file "list_of_prefixes.csv". Most of the prefixes are extracted from [4]. We also added commonly used prefixes: 'e', 'extra', 'per', 'self' and 'ultra'.
4. Substitution of words: Some words joined with "-" in the abstracts of the LSC require an additional process of substitution to avoid losing the meaning of the word before removing the character "-". Some examples of such words are "z-test", "well-known" and "chi-square". These words have been substituted with "ztest", "wellknown" and "chisquare". Identification of such words is done by sampling abstracts from LSC. The full list of such words and the decisions taken for substitution are presented in the file "list_of_substitution.csv".
5. Removing the character "-": All remaining "-" characters are replaced by a space.
6. Removing numbers: All digits that are not part of a word are replaced by a space. All words that contain both digits and letters are kept, because alphanumeric tokens such as chemical formulas might be important for our analysis. Some examples are "co2", "h2o" and "21st".
7. Stemming: Stemming is the process of converting inflected words into their word stem. This step unites several forms of words with similar meaning into one form, and also saves memory space and time [5]. All words in the LScD are stemmed to their word stem.
8. Stop word removal: Stop words are words that are extremely common but provide little value in a language. Some common stop words in English are 'I', 'the', 'a', etc. We used the 'tm' package in R to remove stop words [6]. There are 174 English stop words listed in the package.
Step 5. Writing the LScD into CSV Format: There are 1,673,824 plain processed texts for further analysis. All unique words in the corpus are extracted and written to the file "LScD.csv".
The Organisation of the LScD
The total number of words in the file "LScD.csv" is 974,238. Each field is described below:
Word: Contains the unique words from the corpus. All words are in lowercase and in their stem forms. The field is sorted by the number of documents that contain the word, in descending order.
Number of Documents Containing the Word: Here a binary calculation is used: if a word exists in an abstract, there is a count of 1; if the word exists more than once in a document, the count is still 1. The total number of documents containing the word is counted as the sum of 1s over the entire corpus.
Number of Appearances in Corpus: Contains how many times a word occurs in the corpus when the corpus is considered as one large document.
Instructions for R Code
LScD_Creation.R is an R script for processing the LSC to create an ordered list of words from the corpus [2]. Outputs of the code are saved as an RData file and in CSV format. Outputs of the code are:
Metadata File: Includes all fields in a document excluding abstracts. Fields are List_of_Authors, Title, Categories, Research_Areas, Total_Times_Cited and Times_cited_in_Core_Collection.
File of Abstracts: Contains all abstracts after the pre-processing steps defined in Step 4.
DTM: The Document Term Matrix constructed from the LSC [6]. Each entry of the matrix is the number of times the word occurs in the corresponding document.
LScD: An ordered list of words from LSC as defined in the previous section.
The code can be used as follows:
1. Download the folder 'LSC', 'list_of_prefixes.csv' and 'list_of_substitution.csv'
2. Open the LScD_Creation.R script
3. Change the parameters in the script: replace with the full path of the directory with source files and the full path of the directory to write output files
4. Run the full code.
References
[1] N. Suzen. (2019). LSC (Leicester Scientific Corpus) [Dataset]. Available: https://doi.org/10.25392/leicester.data.9449639.v1
[2] N. Suzen. (2019). LScD-LEICESTER SCIENTIFIC DICTIONARY CREATION. Available: https://github.com/neslihansuzen/LScD-LEICESTER-SCIENTIFIC-DICTIONARY-CREATION
[3] Web of Science. (15 July). Available: https://apps.webofknowledge.com/
[4] A. Thomas, "Common Prefixes, Suffixes and Roots," Center for Development and Learning, 2013.
[5] C. Ramasubramanian and R. Ramya, "Effective pre-processing activities in text mining using improved Porter's stemming algorithm," International Journal of Advanced Research in Computer and Communication Engineering, vol. 2, no. 12, pp. 4536-4538, 2013.
[6] I. Feinerer, "Introduction to the tm Package: Text Mining in R," available online: https://cran.r-project.org/web/packages/tm/vignettes/tm.pdf, 2013.
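For readers who do not work in R, a rough Python sketch of the Step 4 pre-processing sequence (the published pipeline is the R script LScD_Creation.R; the full prefix and substitution lists live in list_of_prefixes.csv and list_of_substitution.csv, so the substitution step and the complete prefix list are omitted here and the step order is lightly rearranged):

```python
import re
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords  # requires nltk.download("stopwords")

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))
prefixes = ("pre", "non", "self", "ultra", "extra")  # illustrative subset only

def preprocess(abstract: str) -> list[str]:
    text = abstract.lower()                                    # step 2: lowercasing
    text = re.sub(r"[^a-z0-9\s-]", " ", text)                  # step 1: drop non-alphanumerics, keep "-"
    text = re.sub(rf"\b({'|'.join(prefixes)})-", r"\1", text)  # step 3: unite listed prefixes
    text = text.replace("-", " ")                              # step 5: remove remaining hyphens
    text = re.sub(r"\b\d+\b", " ", text)                       # step 6: drop standalone numbers
    tokens = [t for t in text.split() if t not in stop_words]  # step 8: remove stop words
    return [stemmer.stem(t) for t in tokens]                   # step 7: stemming

print(preprocess("Pre-processing of the CO2 data from 2014 is well documented."))
```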
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a republication of the Event Registry dataset originally published by:
Rupnik, Jan, Andrej Muhic, Gregor Leban, Primoz Skraba, Blaz Fortuna, and Marko Grobelnik. 2016. "News Across Languages - Cross-Lingual Document Similarity and Event Tracking." Journal of Artificial Intelligence Research 55 (January): 283-316. https://doi.org/10.1613/jair.4780.
And reorganised for document tracking by:
Miranda, Sebastião, Artūrs Znotiņš, Shay B. Cohen, and Guntis Barzdins. 2018. "Multilingual Clustering of Streaming News." In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 4535-4544. Brussels, Belgium: Association for Computational Linguistics. https://www.aclweb.org/anthology/D18-1483/.
In this dataset, we provide multiple features extracted from the text itself. Please note that the texts are omitted from the published CSV files for copyright reasons; you can download the original datasets and manually add the missing texts from the original publications.
Features are extracted using:
- A corpus of reference articles in multiple languages for TF-IDF weighting (features_news) [1]
- A corpus of tweets reporting news for TF-IDF weighting (features_tweets) [1]
- An S-BERT model [2] that uses distiluse-base-multilingual-cased-v1 (called features_use) [3]
- An S-BERT model [2] that uses paraphrase-multilingual-mpnet-base-v2 (called features_mpnet) [4]
References:
[1]: Guillaume Bernard. (2022). Resources to compute TF-IDF weightings on press articles and tweets (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.6610406
[2]: Reimers, Nils, and Iryna Gurevych. 2019. "Sentence-BERT: Sentence Embeddings Using Siamese BERT-Networks." In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 3982-3992. Hong Kong, China: Association for Computational Linguistics. https://doi.org/10.18653/v1/D19-1410.
[3]: https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1
[4]: https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2
ODC Public Domain Dedication and Licence (PDDL) v1.0http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
This dataset contains posts from 28 subreddits (15 mental health support groups) from 2018-2020. We used this dataset to understand the impact of COVID-19 on mental health support groups from January to April 2020, and included older timeframes to obtain baseline posts from before COVID-19.
Please cite if you use this dataset:
Low, D. M., Rumker, L., Torous, J., Cecchi, G., Ghosh, S. S., & Talkar, T. (2020). Natural Language Processing Reveals Vulnerable Mental Health Support Groups and Heightened Health Anxiety on Reddit During COVID-19: Observational Study. Journal of medical Internet research, 22(10), e22635.
@article{low2020natural, title={Natural Language Processing Reveals Vulnerable Mental Health Support Groups and Heightened Health Anxiety on Reddit During COVID-19: Observational Study}, author={Low, Daniel M and Rumker, Laurie and Torous, John and Cecchi, Guillermo and Ghosh, Satrajit S and Talkar, Tanya}, journal={Journal of medical Internet research}, volume={22}, number={10}, pages={e22635}, year={2020}, publisher={JMIR Publications Inc., Toronto, Canada} }
License
This dataset is made available under the Public Domain Dedication and License v1.0 whose full text can be found at: http://www.opendatacommons.org/licenses/pddl/1.0/
It was downloaded using the Pushshift API. Re-use of this data is subject to the Reddit API terms.
Reddit Mental Health Dataset
Contains posts and text features for the following timeframes from 28 mental health and non-mental health subreddits:
Filenames and corresponding timeframes:
- post: Jan 1 to April 20, 2020 (called "mid-pandemic" in the manuscript; r/COVID19_support appears). Unique users: 320,364.
- pre: Dec 2018 to Dec 2019. A full year, which provides more data for a baseline of Reddit posts. Unique users: 327,289.
- 2019: Jan 1 to April 20, 2019 (r/EDAnonymous appears). A control for seasonal fluctuations to match the post data. Unique users: 282,560.
- 2018: Jan 1 to April 20, 2018. A control for seasonal fluctuations to match the post data. Unique users: 177,089.
Unique users across all time windows (pre and 2019 overlap): 826,961.
See manuscript Supplementary Materials (https://doi.org/10.31234/osf.io/xvwcy) for more information.
Note: if subsampling (e.g., to balance subreddits), we recommend bootstrapping analyses for unbiased results.
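As a hedged illustration of that recommendation (the filename, the subreddit column and the word_count metric below are placeholders, not necessarily the dataset's actual fields):

```python
import pandas as pd

# Repeatedly draw a balanced subsample (equal posts per subreddit, sampled with
# replacement) and recompute the statistic of interest, so conclusions do not
# hinge on a single arbitrary subsample.
posts = pd.read_csv("reddit_posts.csv")  # placeholder filename

n_per_subreddit, n_iterations = 500, 100
stats = []
for _ in range(n_iterations):
    balanced = posts.groupby("subreddit", group_keys=False).apply(
        lambda g: g.sample(n=min(len(g), n_per_subreddit), replace=True)
    )
    stats.append(balanced["word_count"].mean())

print(pd.Series(stats).quantile([0.025, 0.5, 0.975]))
```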
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sports Facilities in County Roscommon. Dataset Publisher: Roscommon County Council, Dataset language: English, Spatial Projection: Web Mercator, Date of Creation: 2011, Update Frequency: As Required. Roscommon County Council provides this information with the understanding that it is not guaranteed to be accurate, correct or complete. Roscommon County Council accepts no liability for any loss or damage suffered by those using this data for any purpose.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Community Facilities in County Roscommon including Community Centres, Youth Facilities, and Education and Development Facilities. Dataset Publisher: Roscommon County Council, Dataset language: English, Spatial Projection: Web Mercator, Date of Creation: 2011, Update Frequency: As Required. Roscommon County Council provides this information with the understanding that it is not guaranteed to be accurate, correct or complete. Roscommon County Council accepts no liability for any loss or damage suffered by those using this data for any purpose.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
(see https://tblock.github.io/10kGNAD/ for the original dataset page)
This page introduces the 10k German News Articles Dataset (10kGNAD), a German topic classification dataset. The 10kGNAD is based on the One Million Posts Corpus and is available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. You can download the dataset here.
English text classification datasets are common. Examples are the big AG News, the class-rich 20 Newsgroups and the large-scale DBpedia ontology datasets for topic classification, and the commonly used IMDb and Yelp datasets for sentiment analysis. Non-English datasets, especially German datasets, are less common. There is a collection of sentiment analysis datasets assembled by the Interest Group on German Sentiment Analysis. However, to my knowledge, no German topic classification dataset is available to the public.
Due to grammatical differences between the English and the German language, a classifier might be effective on an English dataset but not as effective on a German dataset. German is more highly inflected, and long compound words are quite common compared to English. One would need to evaluate a classifier on multiple German datasets to get a sense of its effectiveness.
The 10kGNAD dataset is intended to solve part of this problem as the first German topic classification dataset. It consists of 10,273 German-language news articles from an Austrian online newspaper, categorized into nine topics. These articles are a previously unused part of the One Million Posts Corpus.
In the One Million Posts Corpus each article has a topic path, for example Newsroom/Wirtschaft/Wirtschaftpolitik/Finanzmaerkte/Griechenlandkrise. The 10kGNAD uses the second part of the topic path, here Wirtschaft, as the class label. As a result, the dataset can be used for multi-class classification.
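A one-line sketch of that label extraction:

```python
# Take the second element of the topic path as the class label.
topic_path = "Newsroom/Wirtschaft/Wirtschaftpolitik/Finanzmaerkte/Griechenlandkrise"
label = topic_path.split("/")[1]
print(label)  # Wirtschaft
```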
I created and used this dataset in my thesis to train and evaluate four text classifiers on the German language. By publishing the dataset I hope to support the advancement of tools and models for the German language. Additionally, this dataset can be used as a benchmark dataset for German topic classification.
As in most real-world datasets, the class distribution of the 10kGNAD is not balanced. The biggest class, Web, consists of 1,678 articles, while the smallest class, Kultur, contains only 539. However, articles from the Web class have on average the fewest words, while articles from the Kultur class have the second most words.
I propose a stratified split of 10% for testing and the remaining articles for training.
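A sketch of such a split with scikit-learn; the CSV layout and column names here are assumptions rather than a guaranteed match to the project's files:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Assumed two-column layout (label, text); adjust sep/names to the actual files.
articles = pd.read_csv("articles.csv", sep=";", names=["label", "text"])

train, test = train_test_split(
    articles, test_size=0.10, stratify=articles["label"], random_state=42
)
print(len(train), len(test))
```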
To use the dataset as a benchmark dataset, please use the train.csv and test.csv files located in the project root.
Python scripts to extract the articles and split them into a train and a test set are available in the code directory of this project. Make sure to install the requirements. The original corpus.sqlite3 is required to extract the articles (download here (compressed) or here (uncompressed)).
This dataset is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Please consider citing the authors of the One Million Posts Corpus if you use the dataset.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains data from Roscommon County Council’s Annual Budget. The budget is comprised of Tables A to F and Appendix 1. Each table is represented by a separate data file. Dataset Name: Budget 2018: Appendix 1, Dataset Publisher: Roscommon County Council, Dataset Language: English, Date of Creation: November 2017, Last Updated: November 2017, Update Frequency: Annual. The published annual budget document can be viewed at http://www.roscommoncoco.ie/en/Download-It/Finance-Publications/Annual_Budget/
Appendix 1 is the Summary of the Central Management Charge. It contains the ‘Expenditure’ for each Central Management ‘Service’ for the Budget Year and the ‘Expenditure’ for each Central Management ‘Service’ for the previous Year. Data fields for Appendix 1 are as follows:
- Heading: Indicates sections in the Table. Appendix 1 is comprised of one section, therefore the Heading value for all records = 1
- Ref: Service Reference
- Desc: Service Description
- CY_Exp: Expenditure for Budget Year
- PY_Exp: Expenditure for previous Financial Year
Roscommon County Council provides this information with the understanding that it is not guaranteed to be accurate, correct or complete. Roscommon County Council accepts no liability for any loss or damage suffered by those using this data for any purpose.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Playgrounds owned and managed by Clare County Council Dataset Publisher: Clare County Council Dataset language: English Spatial Projection: Web Mercator Date of Creation: 2020 Update Frequency: As Required Clare County Council provides this information with the understanding that it is not guaranteed to be accurate, correct or complete. Clare County Council accepts no liability for any loss or damage suffered by those using this data for any purpose.
The Armed Conflict Location & Event Data Project (ACLED) is a US-registered non-profit whose mission is to provide the highest quality real-time data on political violence and demonstrations globally. The information collected includes the type of event, its date, the location, the actors involved, a brief narrative summary, and any reported fatalities. ACLED users rely on our robust global dataset to support decision-making around policy and programming, accurately analyze political and country risk, support operational security planning, and improve supply chain management. ACLED’s transparent methodology, expert team composed of 250 individuals speaking more than 70 languages, real-time coding system, and weekly update schedule are unrivaled in the field of data collection on conflict and disorder.
- Global Coverage: We track political violence, demonstrations, and strategic developments around the world, covering more than 240 countries and territories.
- Published Weekly: Our data are collected in real time and published weekly. It is the only dataset of its kind to provide such a high update frequency, with peer datasets most often updating monthly or yearly.
- Historical Data: Our dataset contains at least two full years of data for all countries and territories, with more extensive coverage available for multiple regions.
- Experienced Researchers: Our data are coded by experienced researchers with local, country, and regional expertise and language skills.
- Thorough Data Collection and Sourcing: Pulling from traditional media, reports, local partner data, and verified new media, ACLED uses a tailor-made sourcing methodology for individual regions/countries.
- Extensive Review Process: Our data go through an exhaustive multi-stage quality assurance process to ensure their accuracy and reliability. This process includes both manual and automated error checking and contextual review.
- Clean, Standardized, and Validated: Our data can be easily connected with internal dashboards through our API or downloaded through the Data Export Tool on our website.
Resources Available on ESRI’s Living Atlas
ACLED data are available through the Living Atlas for the most recent 12-month period. The data are mapped to the centroid of first administrative divisions (“admin1”) within countries (e.g., states, districts, provinces) and aggregated by month. Variables in the data include:
- The number of events per admin1-month, disaggregated by event type (protests, riots, battles, violence against civilians, explosions/remote violence, and strategic developments)
- A conservative estimate of reported fatalities per admin1-month
- The total number of distinct violent actors active in the corresponding admin1 for each month
This Living Atlas item is a Web Map, which provides a pre-configured view of ACLED event data in a few layers:
- ACLED Event Counts layer: events per admin1-month, styled by the predominant event type for each location.
- ACLED Violent Actors layer: the number of distinct violent actors per admin1-month.
- ACLED Fatality Estimates layer: the estimated number of fatalities from political violence per admin1-month.
These layers are based on the ACLED Conflict and Demonstrations Event Data Feature Layer, which has the same data but only a basic default styling that is similar to the Event Counts layer. The Web Map layers are configured with a time-slider component to account for the multiple months of data per admin1 unit.
These indicators are also available in the ACLED Conflict and Demonstrations Data Key Indicators Group Layer, which includes the same preconfigured layers but without the time-slider component or background layers.
Resources Available on the ACLED Website
The fully disaggregated dataset is available for download on ACLED’s website, including:
- Date (day, month, year)
- Actors, associated actors, and actor types
- Location information (ADMIN1, ADMIN2, ADMIN3, location and geo coordinates)
- A conservative fatality estimate
- Disorder type, event types, and sub-event types
- Tags further categorizing the data
- A notes column providing a narrative of the event
For more information, please see the ACLED Codebook.
To explore ACLED’s full dataset, please register on the ACLED Access Portal, following the instructions available in this Access Guide. Upon registration, you’ll receive access to ACLED data on a limited basis. Commercial users have access to 3 free data downloads company-wide, with access to up to one year of historical data. Public sector users have access to 6 downloads of up to three years of historical data organization-wide. To explore options for extended access, please reach out to our Access Team (access@acleddata.com).
With an ACLED license, users can also leverage ACLED’s interactive Global Dashboard and check in for weekly data updates and analysis tracking key political violence and protest trends around the world. ACLED also has several analytical tools available, such as our Early Warning Dashboard, Conflict Alert System (CAST), and Conflict Index Dashboard.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Clare County Council Departments Dataset Publisher: Clare County Council Dataset language: English Spatial Projection: Web Mercator Date of Creation: 2019 Update Frequency: As Required Clare County Council provides this information with the understanding that it is not guaranteed to be accurate, correct or complete. Clare County Council accepts no liability for any loss or damage suffered by those using this data for any purpose.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Law 190/2012 requires public administrations and publicly controlled companies to publish data regarding contracting and tendering procedures. The data must be published on the institution’s website (or, in the case of a publicly owned company that does not have a website, on the website of the parent administration) within the sections “Amministrazione Trasparente” or “Società Trasparente”, for public administrations and publicly owned or controlled corporations respectively, in XML format.
The main defect of the regulation is that the data are not collected in a single database but remain scattered across the websites of public administrations, which hinders large-scale analysis. The following database aims to fill this gap by aggregating all the information provided by all the subjects of Law 190/2012.
The aforementioned set of regulations requires public administrations and publicly controlled companies to publish data regarding contracting and tendering procedures. The law gives indications of a content nature, leaving the technical aspects to the implementing decrees. In general, it is important to notice that the data must be published on the institution’s website (or, in the case of a publicly owned company that does not have a website, on the website of the parent administration) within the sections “Amministrazione Trasparente” or “Società Trasparente”, for public administrations and publicly owned or controlled corporations respectively.
The legislator indicates XML as the format to be used. The use of XML, which is the same format used for the recently implemented electronic invoicing system, presents several advantages: it is platform and programming language independent, which supports technological change when it happens, and data stored and transported using XML can be changed at any point in time without affecting the data presentation. Despite the technical advantages brought by XML, from a citizen’s perspective (with very low data literacy) it presents some drawbacks. Although the XML format is human-readable and some browsers can open it, it is harder to manipulate and query than a more popular format like an Excel file. To overcome this obstacle, it is desirable that each public administration website be equipped with an interface that allows citizens to view the information contained in these files and to conduct simple queries. Unfortunately, from a high-level overview, this best practice does not yet seem widespread.
Another important aspect is that the data are not collected in a single database but remain scattered across the websites of public administrations. This organizational framework allows citizens and investigative journalists to inspect the data only within the perimeter of a single institution (public administration or publicly owned company), while extending the analysis along other dimensions remains out of reach for the average user. For example, while it is relatively easy to see all the tenders, with the respective winning tenderers, of a given institution, it is not possible without further processing to see all the winning bids of a given tenderer, or the details of all published tenders in a given geographical region. In general, a systematic analysis of the state of health of public tenders is not possible without intensive prior aggregation and processing activity.
From here derives the need for a comprehensive database that contains all the information provided by all the subjects of Law 190/2012. Unfortunately, such a database is provided neither by the public authority nor by the controlling authority ANAC. Hence the creation of this dataset.
As already mentioned, ANAC does not collect and organize the data required by Law 190/2012 in a single database; however, each subject of the law has to send ANAC each year a PEC (certified email) containing the link to the requested XML file, hosted on the institutional website or by a third-party provider.
ANAC conducts an access test to check that the link provided by the public administration is functioning and that the file complies with the prescribed XML structure. ANAC then publishes the result of this activity in its open data section, showing the details of the institutions, the result of the test (pass/fail), and the link to the file.
It is noteworthy that ANAC does not actively pursue those institutions that present irregularities in the data provided, nor does it check whether all the institutions provide the requested data. ANAC’s perimeter of action in this sense is limited to the publication of test data, while it is the responsibility of the public administration to check the outcome and resolve any rep...
Monksland Bellanamullia Local Area Plan Zonings (2016-2022). Dataset Publisher: Forward Planning Section, Roscommon County Council, Dataset language: English, Spatial Projection: Web Mercator, Date of Creation: 2016, Last Updated: 2016, Update Frequency: As Required. Roscommon County Council provides this information with the understanding that it is not guaranteed to be accurate, correct or complete. Roscommon County Council accepts no liability for any loss or damage suffered by those using this data for any purpose.
CC0 1.0 Universal (CC0 1.0)https://creativecommons.org/publicdomain/zero/1.0/
This repository contains the results of three benchmarks that compare natural language understanding services offering:
1. Built-in intents (Apple’s SiriKit, Amazon’s Alexa, Microsoft’s Luis, Google’s API.ai, and Snips.ai) on a selection of various intents. This benchmark was performed in December 2016. Its results are described at length in the following post.
2. Custom intent engines (Google's API.ai, Facebook's Wit, Microsoft's Luis, Amazon's Alexa, and Snips' NLU) for seven chosen intents. This benchmark was performed in June 2017. Its results are described in a paper and a blog post.
3. An extension of Braun et al., 2017 (Google's API.AI, Microsoft's Luis, IBM's Watson, Rasa). This experiment replicates the analysis made by Braun et al., 2017, published in "Evaluating Natural Language Understanding Services for Conversational Question Answering Systems" as part of the SIGDIAL 2017 proceedings. Snips and Rasa are added. Details are available in a paper and a blog post.
The data is provided for each benchmark and more details about the methods are available in the README file in each folder.
Any publication based on these datasets must include a full citation to the following paper in which the results were published by the Snips Team:
"https://arxiv.org/abs/1805.10190">Coucke A. et al., "Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces." 2018,
accepted for a spotlight presentation at the Privacy in Machine Learning and Artificial Intelligence workshop colocated with ICML 2018.
The Snips team joined Sonos in November 2019. These open datasets remain available and their access is now managed by the Sonos Voice Experience Team. Please email sve-research@sonos.com with any questions.
Apache License, v2.0https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This repository contains the PG-19 dataset. It includes a set of books extracted from the Project Gutenberg books project (https://www.gutenberg.org), that were published before 1919. It also contains metadata of book titles and publication dates.