License: CC0-1.0 (https://bioregistry.io/spdx:CC0-1.0)
Wikidata is a collaboratively edited knowledge base operated by the Wikimedia Foundation. It is intended to provide a common source of certain types of data which can be used by Wikimedia projects such as Wikipedia. Wikidata functions as a document-oriented database, centred on individual items. Items represent topics, for which basic information is stored that identifies each topic.
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
This dataset contains 417,937 biographical records of notable individuals, extracted from Wikidata using SPARQL queries via the Wikidata Query Service.
Mandatory fields include country_of_birth and image_url (see the Notes column below). The broad occupation groups used in occupation_groups.csv are Science & Academia, Arts & Culture, Public Figures, Sports, and Business.

| Column | Description | Notes |
|---|---|---|
| wikidata_url | Unique Wikidata URL identifier for the entry | Mandatory |
| label | Primary name/label of the person (usually in English) | Mandatory |
| name_in_native_languages | Name(s) in the person's native language(s) | ;-separated values |
| pseudonyms | Alternative names or aliases used by the person | ;-separated values |
| sex_or_gender | Gender information | Mandatory |
| date_of_birth | Birth date | Mandatory |
| place_of_birth | City or region of birth | |
| country_of_birth | Country of birth | Mandatory |
| date_of_death | Death date (if applicable) | |
| place_of_death | City or region of death (if applicable) | |
| country_of_death | Country of death (if applicable) | |
| citizenships | Nationalities or citizenships held | ;-separated values |
| occupations | Specific occupations or roles | Mandatory, ;-separated |
| occupation_groups | Broad occupational categories | Mandatory, ;-separated |
| awards | Awards, honors, or recognitions received | ;-separated values |
| signature_url | URL to an image of the person's signature | |
| image_url | URL to the person's image/portrait | Mandatory |
| date_of_image | Date when the image was created (if available) | |
The data may contain inaccuracies due to inconsistencies or errors in the original Wikidata entries; this is most visible in date fields, especially date_of_image.
The files referenced by image_url and signature_url are hosted on Wikimedia Commons and may carry individual licenses (e.g., CC BY-SA, Public Domain). Please check the license terms on the source page before use.
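As a rough illustration of how such biographical records are pulled from the Wikidata Query Service, the sketch below builds a SPARQL query for humans (Q5) with a date of birth (P569) and an image (P18). The property IDs and endpoint are real, but the exact query used to build this dataset is not published here, so this is an assumption-laden example only.

```python
# Minimal sketch: query the public Wikidata Query Service for people with
# the dataset's mandatory fields. Hypothetical query shape, not the
# dataset authors' actual extraction code.
import json
import urllib.parse
import urllib.request

WDQS = "https://query.wikidata.org/sparql"

def build_query(limit: int = 10) -> str:
    """Return a SPARQL query for humans with a label, birth date, and image."""
    return f"""
    SELECT ?person ?personLabel ?dob ?image WHERE {{
      ?person wdt:P31 wd:Q5 ;    # instance of: human
              wdt:P569 ?dob ;    # date of birth (mandatory column)
              wdt:P18 ?image .   # image (mandatory column)
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    LIMIT {limit}"""

def fetch(limit: int = 10):
    """Run the query against the live endpoint (network access required)."""
    url = WDQS + "?" + urllib.parse.urlencode(
        {"query": build_query(limit), "format": "json"})
    req = urllib.request.Request(url, headers={"User-Agent": "sketch/0.1"})
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["results"]["bindings"]
```

A production extraction of 417K records would page through results rather than issue one bounded query; WDQS enforces a 60-second query timeout.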
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
This is a dump of Wikidata from 2018-12-17 in JSON. It is no longer available from Wikidata. It was downloaded originally from https://dumps.wikimedia.org/other/wikidata/20181217.json.gz and recompressed to fit on Zenodo.
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
This dataset provides entity mappings between Freebase and Wikidata, enabling seamless integration between two large-scale knowledge graphs. It is based on the Wikidata data dump from October 28, 2013, and was originally published by Google under the CC0 (Public Domain) license.
The mappings are carefully filtered to ensure high reliability.
This strict filtering results in high-confidence entity alignments, making the dataset useful for research and real-world applications in knowledge graph systems.
With this feature the user can extend CSV datasets with existing information in the Wikidata KG. The tool applies entity linking to all concepts in the same column and enables the user to use the extracted entities to extend the dataset.
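The column-extension idea can be sketched as follows. Everything here is hypothetical scaffolding (the function names, the link/fetch callables, and the column-naming convention are assumptions), meant only to show the shape of linking a column's cells to QIDs and then materializing one Wikidata property as a new column.

```python
# Hypothetical sketch of CSV column extension via entity linking.
# `link` maps cell text -> QID (or None); `fetch_property` maps a QID to
# the value of one chosen Wikidata property. Neither is this tool's API.
def extend_column(rows, column, link, fetch_property):
    """rows: list of dicts (parsed CSV). Adds two derived columns."""
    for row in rows:
        qid = link(row[column])
        row[column + "_qid"] = qid
        row[column + "_extended"] = fetch_property(qid) if qid else None
    return rows
```

In practice the linking step would call a real entity-linking service and the fetch step would query Wikidata; here both are plain lookups so the flow is visible.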
License: MIT License (https://opensource.org/licenses/MIT)
Dataset Summary

The Triple-to-Text Alignment dataset aligns Knowledge Graph (KG) triples from Wikidata with diverse, real-world textual sources extracted from the web. Unlike previous datasets that rely primarily on Wikipedia text, this dataset provides a broader range of writing styles, tones, and structures by leveraging Wikidata references from various sources such as news articles, government reports, and scientific literature. Large language models (LLMs) were used to extract and validate text spans corresponding to KG triples, ensuring high-quality alignments. The dataset can be used for training and evaluating relation extraction (RE) and knowledge graph construction systems.

Data Fields

Each row in the dataset consists of the following fields:
- subject (str): The subject entity of the knowledge graph triple.
- rel (str): The relation that connects the subject and object.
- object (str): The object entity of the knowledge graph triple.
- text (str): A natural language sentence that entails the given triple.
- validation (str): LLM-based validation results, including:
  - Fluent Sentence(s): TRUE/FALSE
  - Subject mentioned in Text: TRUE/FALSE
  - Relation mentioned in Text: TRUE/FALSE
  - Object mentioned in Text: TRUE/FALSE
  - Fact Entailed By Text: TRUE/FALSE
  - Final Answer: TRUE/FALSE
- reference_url (str): URL of the web source from which the text was extracted.
- subj_qid (str): Wikidata QID for the subject entity.
- rel_id (str): Wikidata Property ID for the relation.
- obj_qid (str): Wikidata QID for the object entity.

Dataset Creation

The dataset was created through the following process:
1. Triple-Reference Sampling and Extraction: All relations from Wikidata were extracted using SPARQL queries. A sample of KG triples with associated reference URLs was collected for each relation.
2. Domain Analysis and Web Scraping: URLs were grouped by domain, and sampled pages were analyzed to determine their primary language. English-language web pages were scraped and processed to extract plaintext content.
3. LLM-Based Text Span Selection and Validation: LLMs were used to identify text spans from web content that correspond to KG triples. A Chain-of-Thought (CoT) prompting method was applied to validate whether the extracted text entailed the triple. The validation process included checking for fluency, subject mention, relation mention, object mention, and final entailment.
4. Final Dataset Statistics: 12.5K Wikidata relations were analyzed, leading to 3.3M triple-reference pairs. After filtering for English content, 458K triple-web content pairs were processed with LLMs. 80.5K validated triple-text alignments were included in the final dataset.
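A consumer of this dataset will typically want to filter rows by the LLM validation verdicts. The sketch below parses the validation field into booleans; the check names come from the field list above, but the exact line-based serialization of the field is an assumption about the released files.

```python
# Hypothetical sketch: parse the `validation` field into booleans and keep
# only rows whose final LLM verdict is TRUE. The "Name: TRUE/FALSE" line
# format is assumed, not confirmed by the dataset card.
def parse_validation(validation: str) -> dict:
    """Map each check name (e.g. 'Final Answer') to a bool."""
    checks = {}
    for line in validation.splitlines():
        if ":" not in line:
            continue
        name, _, value = line.partition(":")
        checks[name.strip()] = value.strip().upper() == "TRUE"
    return checks

def keep_row(row: dict) -> bool:
    """Retain a row only if the LLM's final entailment answer is TRUE."""
    return parse_validation(row["validation"]).get("Final Answer", False)
```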
License: Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
Mapping between Freebase and Wikidata entities
This dataset maps Freebase IDs to Wikidata IDs and labels. It is useful for visualising and better understanding datasets like FB15k-237.
How it was created:
1. Download the Freebase-Wikidata mapping from here. [compressed size: 21.2 MB]
2. Download the Wikidata entities data from here. [compressed size: 81 GB]
3. Align the labels with the Freebase and Wikidata IDs.
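Step 3 above is essentially a join between the ID mapping and a label table. The sketch below shows that join on in-memory data; the tuple/dict shapes are assumptions, since the actual files are large dumps that would be streamed rather than loaded whole.

```python
# Hypothetical sketch of the alignment step: join (freebase_id, wikidata_id)
# pairs with an English-label lookup. Data shapes are assumed for
# illustration; the real dumps would be processed in a streaming fashion.
def align(mapping_rows, labels):
    """mapping_rows: iterable of (freebase_id, wikidata_id) pairs.
    labels: dict mapping wikidata_id -> English label ('' if missing)."""
    return [(fb, qid, labels.get(qid, "")) for fb, qid in mapping_rows]
```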
License: Attribution-NonCommercial 4.0 (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)
Persons of interest profiles from Wikidata, the structured data version of Wikipedia.
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
This is the so-called "truthy" dump of Wikidata from on or about May 21, 2022, shared for use in the SemTab 2022 Challenge.
Downloaded from https://www.wikidata.org/wiki/Wikidata:Database_download
See the License section of the above page for license information.
THIS DATA IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
This is a collection of Gender Indicators from Wikidata and Wikipedia of Human Biographies. Data is derived from the 2016-01-03 Wikidata snapshot. Each file describes the humans in Wikidata aggregated by Gender (Property:P21) and disaggregated by the following Wikidata properties:
- Date of Birth (P569)
- Date of Death (P570)
- Place of Birth (P19)
- Country of Citizenship (P27)
- Ethnic Group (P172)
- Field of Work (P101)
- Occupation (P106)
- Wikipedia Language ("Sitelinks")
Further aggregations of the data are:
- World Map (countries derived from place of birth and citizenship)
- World Cultures (Inglehart-Welzel Map applied to World Map)
- Gender Co-Occurrence (humans with multiple genders)
Wikidata labels have been translated to English for convenience where possible. You may still see "QID" values, which means no English translation was available. Where there are multiple values, such as for occupation, we count the gender as co-occurring with each occupation separately. For more information: http://wigi.wmflabs.org/
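The multi-value counting rule described above (a gender is counted once per occupation when a person holds several) can be sketched as a simple aggregation. The record shape is an assumption for illustration; the real pipeline works over the full Wikidata snapshot.

```python
# Hypothetical sketch of the gender-by-occupation aggregation: a person
# with several occupations contributes one count per (gender, occupation)
# pair, matching the co-occurrence rule in the description above.
from collections import Counter

def aggregate(humans):
    """humans: iterable of dicts with 'gender' and 'occupations' (list)."""
    counts = Counter()
    for person in humans:
        for occupation in person["occupations"]:
            counts[(person["gender"], occupation)] += 1
    return counts
```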
License: Attribution-NonCommercial 4.0 (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)
Category-based imports from Wikidata, the structured data version of Wikipedia.
License: CC0-1.0 (https://choosealicense.com/licenses/cc0-1.0/)
Wikidata Extraction
This dataset contains all RDF triples extracted from the latest Wikidata, converted from the N-Triples format to Parquet. The data originates from Wikidata, a free and open knowledge base that acts as central storage for structured data used by Wikipedia and other Wikimedia projects. The source file is the "truthy" N-Triples dump (latest-truthy.nt.bz2), which contains only the current, non-deprecated statements. The code to extract this data is available at… See the full description on the dataset page: https://huggingface.co/datasets/piebro/wikidata-extraction.
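Converting an N-Triples dump to Parquet starts by splitting each line into subject, predicate, and object. The sketch below handles the common case; it is a simplification (a full parser must also handle escaped quotes inside literals and comment lines), and it is not the repository's actual conversion code.

```python
# Hypothetical sketch: split one N-Triples line into its three terms.
# Works because subject and predicate never contain spaces; the object
# (IRI or literal) is everything up to the trailing " ." terminator.
def parse_ntriple(line: str):
    subject, predicate, rest = line.split(" ", 2)
    obj = rest.rstrip().rstrip(".").strip()  # drop the statement terminator
    return subject, predicate, obj
```

Rows of (subject, predicate, object) strings can then be written in batches to Parquet with a columnar library.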
License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
RDF dump of wikidata produced with wdumps.
View on wdumper: https://tools.wmflabs.org/wdumps/dump/317
Entity count: 48701, statement count: 868911, triple count: 3044412
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
This dataset, "Global Companies from Wikidata", is a curated collection of 3,579 global companies with information sourced from Wikidata.
License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
Wikidata is a free and open knowledge base that can be read and edited by both humans and machines. Wikidata acts as central storage for the structured data of its Wikimedia sister projects including Wikipedia, Wikivoyage, Wiktionary, Wikisource, and others.
License: Attribution-NonCommercial 4.0 (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)
Profiles of politically exposed persons from Wikidata, the structured data version of Wikipedia.
License: Attribution-ShareAlike 3.0 (CC BY-SA 3.0) (https://creativecommons.org/licenses/by-sa/3.0/)
Regularly published dataset of PageRank scores for Wikidata entities. The underlying link graph is formed by a union of all links across all Wikipedia language editions. Computation is performed by Andreas Thalhammer with 'danker', available at https://github.com/athalhammer/danker. If you find the downloads here useful, please feel free to leave a GitHub ⭐ at the repository and buy me a ☕: https://www.buymeacoffee.com/thalhamm
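For intuition about what these scores mean, here is a toy power-iteration PageRank on a small link graph. This is not the 'danker' implementation (which is built for Wikipedia-scale graphs); it also ignores dangling-node mass redistribution, which matters at scale.

```python
# Toy PageRank via power iteration, for intuition only. Assumes every
# node with outgoing links is a key in `links`; dangling nodes simply
# leak rank mass (a real implementation redistributes it).
def pagerank(links, damping=0.85, iters=50):
    """links: dict node -> list of outgoing neighbours."""
    nodes = set(links) | {n for out in links.values() for n in out}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, out in links.items():
            if not out:
                continue
            share = damping * rank[n] / len(out)  # split rank over out-links
            for m in out:
                new[m] += share
        rank = new
    return rank
```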
License: not specified (https://academictorrents.com/nolicensespecified)
Database dump of the Wikidata wiki. Wikipedia Multistream 2026-01-01.
RDF dump of Wikidata produced with wdumps. Mappings to Google Knowledge Graph, both Freebase and new. Entity count: 0, statement count: 0, triple count: 0.