https://choosealicense.com/licenses/unlicense/
English Valid Words
This repository contains CSV files with valid English words along with their frequency, stem, and stem valid probability. Dataset GitHub link: https://github.com/Maximax67/English-Valid-Words
Files included
valid_words_sorted_alphabetically.csv:
* N: Counter for each word entry.
* Word: The English word itself.
* Frequency count: The number of occurrences of the word in the 1-grams dataset.
* Stem: The stem of the word.
* Stem valid probability: Probability…

See the full description on the dataset page: https://huggingface.co/datasets/Maximax67/English-Valid-Words.
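For reference, a minimal sketch of reading the file with pandas; the header strings are assumed from the column description above and may differ in the released CSV.

```python
# Minimal sketch: load the word list and look up one entry.
# Header names are assumed from the column description above.
import pandas as pd

words = pd.read_csv("valid_words_sorted_alphabetically.csv")
row = words[words["Word"] == "running"]
print(row[["Frequency count", "Stem", "Stem valid probability"]])
```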
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A compressed folder containing 31,075 numerical vectors. Each vector represents the word frequencies of an e-book from Project Gutenberg written in English and is named after the ID number of its corresponding text in Project Gutenberg.
Word frequency is an important variable in cognitive processing. High-frequency words are perceived and produced faster and more efficiently than low-frequency words. At the same time, they are easier to recall but more difficult to recognize in episodic memory tasks.
Brysbaert & New compiled a new frequency measure on the basis of American subtitles (51 million words in total). There are two measures:
This data set is taken from the Ghent University "SUBTLEX-US American Word Frequency" list compiled by Brysbaert & New; see the SUBTLEX-US website and Brysbaert & New's full analysis paper for details.
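As a worked example of how such corpus counts are typically normalised, a raw count from a 51-million-word corpus can be converted to occurrences per million words; the count below is illustrative, not taken from the actual list.

```python
# Convert a raw corpus count to a frequency-per-million measure.
CORPUS_TOKENS = 51_000_000  # total size of the subtitle corpus

def per_million(raw_count: int) -> float:
    return raw_count / CORPUS_TOKENS * 1_000_000

print(per_million(10_200))  # -> 200.0 occurrences per million words
```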
How frequently a word occurs in a language is an important piece of information for natural language processing and for linguists. In natural language processing, very frequent words tend to be less informative than less frequent ones and are often removed during preprocessing. Human language users are also sensitive to word frequency. How often a word is used affects language processing in humans: for example, very frequent words are read and understood more quickly and can be understood more easily in background noise.
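As a quick illustration of that preprocessing step, the following sketch drops the most frequent word types from a token list; the threshold and tokenisation are illustrative, not prescribed by this dataset.

```python
from collections import Counter

def remove_top_words(tokens: list[str], top_n: int) -> list[str]:
    """Drop the top_n most frequent word types from a token list."""
    stop = {w for w, _ in Counter(tokens).most_common(top_n)}
    return [t for t in tokens if t not in stop]

tokens = "the cat sat on the mat and the dog sat too".split()
print(remove_top_words(tokens, top_n=2))  # removes 'the' and 'sat'
```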
This dataset contains the counts of the 333,333 most commonly used single words on the English language web, as derived from the Google Web Trillion Word Corpus.
Data files were derived from the Google Web Trillion Word Corpus (as described by Thorsten Brants and Alex Franz, and distributed by the Linguistic Data Consortium) by Peter Norvig. You can find more information on these files and the code used to generate them here.
The code used to generate this dataset is distributed under the MIT License.
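A minimal loader sketch, assuming the counts are distributed as plain text with one tab-separated word/count pair per line; check the actual file layout before use.

```python
def load_counts(path: str) -> dict[str, int]:
    """Read 'word<TAB>count' lines into a dictionary."""
    counts: dict[str, int] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, count = line.rstrip("\n").split("\t")
            counts[word] = int(count)
    return counts

# counts = load_counts("count_1w.txt")  # filename is an assumption
```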
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Written word frequency is a key variable used in many psycholinguistic studies and is central in explaining visual word recognition. Indeed, methodological advances on single word frequency estimates have helped to uncover novel language-related cognitive processes, fostering new ideas and studies. In an attempt to support and promote research on a related emerging topic, visual multi-word recognition, we extracted from the exhaustive Google Ngram datasets a selection of millions of multi-word sequences and computed their associated frequency estimate. Such sequences are presented with Part-of-Speech information for each individual word. An online behavioral investigation making use of the French 4-gram lexicon in a grammatical decision task was carried out. The results show an item-level frequency effect of word sequences. Moreover, the proposed datasets were found useful during the stimulus selection phase, allowing more precise control of the multi-word characteristics.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Frequency count of the Brown corpus of present-day American English. Available with prior consent of depositor for research purposes only.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Database of English words categorized by usage frequency
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The current database contains English words which appear in Croatian in their original, unadapted form (e.g. show, boxer, zombie, skin, etc.). The list of words is based on The Database of English Words in Croatian (Bogunović & Kučić 2022; https://repository.pfri.uniri.hr/islandora/object/pfri:2495) and was further complemented with words obtained from the corpus hrWaC (Ljubešić & Erjavec 2011; Ljubešić & Klubička 2014) using the platform SketchEngine (Kilgarriff et al. 2004). The same platform was used to check the list of English words against the corpora ENGRI (Bogunović et al. 2021; Bogunović & Kučić 2021) and hrWaC by consulting concordances and using CQL. The tagger Xf was used to filter out all English sentences embedded in Croatian texts. Corpus results were then manually checked using the random sample and filter tools to remove, e.g., proper nouns, false cognates, false pairs, etc. The database also lists Croatian equivalents (and their corresponding frequencies in the corpora) for each English word, if they exist in Croatian. The choice of the Croatian equivalent depended greatly on the available corpus data on word frequency as well as on Croatian online dictionaries. Furthermore, single-word and multi-word English expressions are represented separately in the database for reasons of visual transparency and simplification of word search. We would like to stress that the database by no means represents a final product and is not a definitive representation of data on English words in Croatian; it is, however, representative of their current status in the Croatian language. Further efforts will be made to update the database and incorporate new data.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset contains English words in column B. Corresponding to each word, the other columns contain its frequency (fre), length (len), part of speech (PS), the number of undergraduate students who marked it as difficult (difficult_ug), and the number of postgraduate students who marked it as difficult (difficult_pg). The dataset has a total of 5368 unique words. 680 words were marked as difficult by undergraduate students and 151 by postgraduate students; the remaining 4537 words were not marked as difficult by either undergraduate or postgraduate students and are hence considered easy. A hyphen (-) in the difficult_ug column means that the word was not present in the text circulated to undergraduate students; likewise, a hyphen (-) in the difficult_pg column means the word was not present in the text circulated to postgraduate students. The data was collected from students of Jammu and Kashmir (a Union Territory of India), latitude and longitude 32.2778° N, 75.3412° E.
The files attached are described as follows:
The dataset_english CSV file is the original dataset containing the English words, their length, frequency, part of speech, and the number of undergraduate and postgraduate students who marked particular words as difficult.
The dataset_numerical CSV file contains the original dataset with the string fields transformed into numerical values.
The "English language difficulty level measurement - Questionnaire (1-6)" and PG1, PG2, PG3, PG4 .docx files contain the questionnaires supplied to college and university students, who were asked to underline difficult words in the English text.
The IGNOU English.zip file contains the Indira Gandhi National Open University (IGNOU) English textbooks for undergraduate and postgraduate students. The texts for the above questionnaires were taken from these IGNOU English textbooks.
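Given the hyphen convention described above, here is a sketch of loading the file so those placeholders become missing values; the column names are taken from the description and should be verified against the actual header.

```python
import pandas as pd

# Treat the '-' placeholder as a missing value rather than a string.
df = pd.read_csv("dataset_english.csv", na_values=["-"])

# Words marked difficult by at least one undergraduate student:
print(len(df[df["difficult_ug"] > 0]))
```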
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset offers a rich collection of over 600,000 unique Tanglish words and their cleaned forms. These words were extracted from a large body of more than 650,000 comments and transcripts gathered from 1,260 videos. It serves as a valuable resource for Natural Language Processing (NLP) tasks, particularly those involving Tamil-English mixed text, often referred to as "Tanglish." Key features include a substantial lexicon, preprocessed and cleaned text to ensure high-quality inputs for machine learning, and specific focus on Tamil-English text, making it useful for multilingual and low-resource NLP research. It is applicable to tasks such as text classification, sentiment analysis, and transliteration.
The dataset is typically provided in a CSV format. It comprises over 600,000 unique Tanglish words, derived from over 650,000 comments and transcripts. While the exact number of rows in the full dataset is not specified, it represents a substantial collection of word-frequency pairs. The sample provided shows a structure of a word and its corresponding count. The dataset was listed on 08/06/2025.
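A small sketch of reading those word/count pairs and listing the most frequent entries; the filename and the "word"/"count" header names are assumptions based on the sample structure mentioned above.

```python
import pandas as pd

# Hypothetical filename and column names; adjust to the released CSV.
tanglish = pd.read_csv("tanglish_words.csv")
print(tanglish.sort_values("count", ascending=False).head(20))
```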
This dataset is ideal for various applications and use cases, including:
* Building and refining language models tailored for Tanglish.
* Creating datasets for machine translation and transliteration projects.
* Advancing linguistic studies focused on code-switching and low-resource languages.
* General NLP tasks such as text classification, sentiment analysis, and transliteration.
The dataset's regional coverage is global. Its linguistic scope is focused on Tamil-English mixed text, specifically "Tanglish." The data originates from comments and transcripts collected from 1,260 videos. Specific notes on data availability for certain groups or years are not detailed beyond the general collection from video comments.
CC0
This dataset is particularly useful for:
* The Natural Language Processing (NLP) community.
* Researchers and developers working on regional and multilingual languages.
* Individuals or teams focused on building and fine-tuning language models for Tanglish.
* Those developing solutions for machine translation and transliteration tasks involving Tamil-English content.
* Linguists interested in code-switching phenomena and low-resource language studies.
Original Data Source: Tamil and Tanglish Transliterated Words Dataset
How frequently a word occurs in a language is an important piece of information for natural language processing and for linguists. In natural language processing, very frequent words tend to be less informative than less frequent ones and are often removed during preprocessing.
This dataset contains frequency information on Korean, which is spoken by 80 million people. For each item, both the frequency (the number of times it occurs in the corpus) and its rank relative to other lemmas are provided.
This dataset contains six sub-files with frequency information. The files have been renamed in English, but no changes have been made to the file contents. The files and their headers are listed below. The text in this dataset is UTF-8.
This dataset was collected and made available by the National Institute of Korean Language. The dataset and additional documentation (in Korean) can be found here.
This dataset is distributed under a Korean Open Government Licence, type 4. It may be redistributed with attribution, without derivatives, and not for commercial purposes.
No description was included in this dataset collected from the OSF.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The COVID-19 infodemic, characterized by the rapid spread of misinformation and unverified claims related to the pandemic, presents a significant challenge. This paper presents a comparative analysis of the COVID-19 infodemic in the English and Chinese languages, utilizing textual data extracted from social media platforms. To ensure a balanced representation, two infodemic datasets were created by augmenting previously collected social media textual data. Through word frequency analysis, the 30 most frequently occurring infodemic words are identified, shedding light on prevalent discussions surrounding the infodemic. Moreover, topic clustering analysis uncovers thematic structures and provides a deeper understanding of primary topics within each language context. Additionally, sentiment analysis enables comprehension of the emotional tone associated with COVID-19 information on social media platforms in English and Chinese. This research contributes to a better understanding of the COVID-19 infodemic phenomenon and can guide the development of strategies to combat misinformation during public health crises across different languages.
The CELEX database comprises three searchable lexical databases: Dutch, English, and German. The lexical data contained in each database is divided into five categories: orthography, phonology, morphology, syntax (word class), and word frequency.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
*Estimate. The table shows the spoken frequency counts of the numbers 1–7 as they occur prenominally (e.g., "six hats"). The counts are taken from the 385-million-word Corpus of Contemporary American English (COCA) [53] and the 100-million-word Corpus del Español (CORDES) [54], respectively. (Note: the English-Spanish comparison is slightly complicated because "uno" is gendered in Spanish: it takes the form "una" with some nouns, and "una" is not used exclusively as a number word. The figure for "uno" presented here is a weighted estimate: number-word+noun sequences relative to tokens of each number word in the corpus.)
https://borealisdata.ca/api/datasets/:persistentId/versions/1.1/customlicense?persistentId=doi:10.5683/SP2/XGW4WY
Introduction

This corpus contains ASCII versions of the CELEX lexical databases of English (Version 2.5), Dutch (Version 3.1) and German (Version 2.0). CELEX was developed as a joint enterprise of the University of Nijmegen, the Institute for Dutch Lexicology in Leiden, the Max Planck Institute for Psycholinguistics in Nijmegen, and the Institute for Perception Research in Eindhoven. Pre-mastering and production was done by the LDC. For each language, this data set contains detailed information on:

* orthography (variations in spelling, hyphenation)
* phonology (phonetic transcriptions, variations in pronunciation, syllable structure, primary stress)
* morphology (derivational and compositional structure, inflectional paradigms)
* syntax (word class, word class-specific subcategorizations, argument structures)
* word frequency (summed word and lemma counts, based on recent and representative text corpora)

The databases have not been tailored to fit any particular database management program. Instead, the information is in ASCII files in a UNIX directory tree that can be queried with tools such as AWK or ICON. Unique identity numbers allow the linking of information from different files. Some kinds of information have to be computed online; wherever necessary, AWK functions have been provided to recover this information. README files specify the details of their use. A detailed User Guide describing the various kinds of lexical information available is supplied. All sections of this guide are PostScript files, except for some additional notes on the German lexicon in plain ASCII.

CELEX-2

The second release of CELEX contains an enhanced, expanded version of the German lexical database (2.5), featuring approximately 1,000 new lemma entries, revised morphological parses, verb argument structures, inflectional paradigm codes and a corpus type lexicon. A complete PostScript version of the Germanic Linguistic Guide is also included, in both European A4 format and American Letter format. For German, the total number of lemmas included is now 51,728, while all their inflected forms number 365,530. Moreover, phonetic syllable frequencies have been added for (British) English and Dutch. Apart from this, and the provision of frequency information alongside every lexical feature, no changes have been made to the Dutch and English lexicons. Complete AWK scripts are now provided to compute representations not found in the (plain ASCII) lexical data files, corresponding to the features described in the CELEX User Guide, which is included as well. For each language, i.e. English, German and Dutch, the data contains detailed information on the orthography (variations in spelling, hyphenation), the phonology (phonetic transcriptions, variations in pronunciation, syllable structure, primary stress), the morphology (derivational and compositional structure, inflectional paradigms), the syntax (word class, word-class specific subcategorisation, argument structures) and word frequency (summed word and lemma counts, based on recent and representative text corpora) of both wordforms and lemmas. Unique identity numbers allow the linking of information from different files with the aid of an efficient, index-based C program. Like its predecessor, this release is mastered using the ISO 9660 data format with the Rock Ridge extensions, allowing it to be used in VMS, MS-DOS, Macintosh and UNIX environments. As the new release does not omit any data from the first edition, the current release replaces the old one.
Updates

Petra Steiner has developed a number of scripts to modify and update CELEX2 to a modern format; they are available on her GitHub page. LREC papers related to these updates are accessible at the following URLs: http://aclweb.org/anthology/W17-7619 and http://www.lrec-conf.org/proceedings/lrec2016/summaries/761.html.
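For illustration, a sketch of the kind of identity-number join the documentation describes doing with AWK, written here in Python. The backslash field separator and the IdNum in the first column follow the CELEX file layout as commonly documented; the file paths are hypothetical, so check the README of your release before relying on this.

```python
def index_by_id(path: str) -> dict[str, list[str]]:
    """Index a CELEX ASCII file by its unique identity number (first field)."""
    records: dict[str, list[str]] = {}
    with open(path, encoding="ascii") as f:
        for line in f:
            fields = line.rstrip("\n").split("\\")  # assumed field separator
            records[fields[0]] = fields
    return records

# Hypothetical paths; link orthography and frequency records on IdNum:
# orth = index_by_id("english/eow/eow.cd")
# freq = index_by_id("english/efw/efw.cd")
# merged = {i: orth[i] + freq[i][1:] for i in orth.keys() & freq.keys()}
```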
The tvarchive dataset contains word-frequency and other non-consumptive-use data about 1,205,844 English-language transcriptions of U.S. television news broadcasts. The documents were scraped from the Internet Archive's TV News Archive, which includes automatic captions of select U.S. news broadcasts since 2009. While the complete TV News Archive contains over 2.2 million transcripts, WE1S researchers were only able to collect about 1.2 million documents containing complete transcripts. The full TV News Archive includes transcripts from 33 networks and hundreds of shows. Unlike other WE1S datasets, the tvarchive dataset was not collected using keyword searches for specific terms (i.e., documents containing the word "humanities"). (See WE1S Research Materials Overview for the relation between the project's "datasets" and "collections.") WE1S makes word-frequency data available for "non-consumptive use" only; this dataset cannot be used to access, read, or reconstruct the original texts. The data has been archived in jsonl format (each JSON document is delimited by a line break).
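Since each JSON document sits on its own line, the archive can be streamed without loading everything at once. A minimal sketch; the filename is hypothetical.

```python
import json

def read_jsonl(path: str):
    """Yield one parsed JSON document per non-empty line."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# for doc in read_jsonl("tvarchive.jsonl"):  # hypothetical filename
#     ...  # each doc carries word-frequency data for one transcript
```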
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Correlation matrix over all languages for mean frequency of word groups (z-transformed) from 1970 until 2019.
Our Spanish language datasets are carefully compiled and annotated by language and linguistic experts, and are available for licensing:
Key Features (approximate numbers):
Our Spanish monolingual dictionary data reliably offers clear definitions and examples, a large volume of headwords, and comprehensive coverage of the Spanish language.
The bilingual data provides translations in both directions, from English to Spanish and from Spanish to English. It is annually reviewed and updated by our in-house team of language experts, and offers significant coverage of the language, providing a large volume of high-quality translated words.
Spanish sentences retrieved from the corpus are ideal for NLP model training, comprising approximately 20 million words. The sentences provide broad coverage of Spanish-speaking countries and are tagged to a particular country or dialect accordingly.
This Spanish language dataset offers a rich collection of synonyms and antonyms, accompanied by detailed definitions and part-of-speech (POS) annotations, making it a comprehensive resource for building linguistically aware AI systems and language technologies.
Curated word-level audio data for the Spanish language, covering all varieties of world Spanish and providing rich dialectal diversity.
This language data contains a carefully curated and comprehensive list of 450,000 Spanish words.
Use Cases:
We consistently work with our clients on new use cases as language technology continues to evolve. These include NLP applications, TTS, dictionary display tools, games, translation, word embedding, and word sense disambiguation (WSD).
If you have a specific use case in mind that isn't listed here, we’d be happy to explore it with you. Don’t hesitate to get in touch with us at Oxford.Languages@oup.com to start the conversation.
Pricing:
Oxford Languages offers flexible pricing based on use case and delivery format. Our datasets are licensed via term-based IP agreements and tiered pricing for API-delivered data. Whether you’re integrating into a product, training an LLM, or building custom NLP solutions, we tailor licensing to your specific needs.
Contact our team or email us at Oxford.Languages@oup.com to explore pricing options and discover how our language data can support your goals.
This is the data and code from a word-monitoring task in which participants responded to the word 'to' in verb + to-infinitive structures (V-to-Vinf) in English, where 'to' could occur in a full or reduced pronunciation. Accuracy and response times were analysed with mixed-effects generalized additive models (GAMMs); the code also includes visualisations of these models. The paper is accepted for publication in Cognitive Linguistics. The experiment was run with OpenSesame (version 3.0.7 for Mac, cf. Mathôt et al. 2012). The data include information on frequencies of occurrence of words and bigrams, extracted from the Corpus of Contemporary American English (COCA, Davies 2008–). We used R (R Core Team 2017) for all data analyses, hence the code can best be replicated in R.

Abstract: Frequently used linguistic structures become entrenched in memory; this is often assumed to make their consecutive parts more predictable, as well as to fuse them into a single unit (chunking). High frequency moreover leads to a propensity for phonetic reduction. We present a word recognition experiment which tests how frequency information (string frequency, transitional probability) interacts with reduction in speech perception. Detection of the element to is tested in V-to-Vinf sequences in English (e.g. need to Vinf), where to can undergo reduction ("needa"). Results show that reduction impedes recognition, but this can be mitigated by the predictability of the item. Recognition generally benefits from surface frequency, while a modest chunking effect is found in delayed responses to reduced forms of high-frequency items. Transitional probability shows a facilitating effect on reduced but not on full forms. Reduced forms also pose more difficulty when the phonological context obscures the onset of to. We conclude that listeners draw on frequency information in a predictive manner to cope with reduction. High-frequency structures are not inevitably perceived as chunks; whether they are depends on cues in the phonetic form: reduction leads to perceptual prominence of the whole over the parts and thus promotes holistic access.