A synthetic dataset modelling real-world COVID-19 observations of hospitalized patients, derived from the WHO COVID-19 RAPID version case report forms (CRFs), for the hypothesis under study. Originally created by the TWOC project.
Graffiti is an urban phenomenon that is increasingly attracting the interest of the sciences. To the best of our knowledge, no suitable data corpora have been available for systematic research until now. The Information System Graffiti in Germany project (Ingrid) closes this gap by dealing with graffiti image collections that have been made available to the project for public use. Within Ingrid, the graffiti images are collected, digitized and annotated. With this work, we aim to support rapid access to a comprehensive data source on Ingrid, targeted especially at researchers. In particular, we present IngridKG, an RDF knowledge graph of annotated graffiti that abides by the Linked Data and FAIR principles. We update IngridKG weekly by adding the newly annotated graffiti to our knowledge graph. Our generation pipeline applies RDF data conversion, link discovery and data fusion approaches to the original data. The current version of IngridKG contains 460,640,154 triples and is linked to 3 other knowledge graphs by over 200,000 links. In our use case studies, we demonstrate the usefulness of our knowledge graph for different applications.
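As a purely illustrative sketch of the link discovery step mentioned above (not the project's actual pipeline), the following Python snippet proposes owl:sameAs links between two small RDF graphs by exact label matching; the file names are hypothetical.

# Illustrative link discovery: propose owl:sameAs links between two small
# RDF graphs by matching rdfs:label values exactly. Not IngridKG's actual
# pipeline; file names are hypothetical.
from rdflib import Graph
from rdflib.namespace import OWL, RDFS

source = Graph().parse("ingrid_sample.ttl", format="turtle")        # hypothetical file
target = Graph().parse("external_kg_sample.ttl", format="turtle")   # hypothetical file

# Index target resources by their (lower-cased) label.
target_by_label = {}
for s, _, label in target.triples((None, RDFS.label, None)):
    target_by_label.setdefault(str(label).lower(), set()).add(s)

# Emit one owl:sameAs triple per exact label match.
links = Graph()
for s, _, label in source.triples((None, RDFS.label, None)):
    for match in target_by_label.get(str(label).lower(), set()):
        links.add((s, OWL.sameAs, match))

links.serialize("links.ttl", format="turtle")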
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Citations are the bridges that enable people to pass from one scholarly work (e.g. a conference paper) to others (e.g. journal articles and book chapters). At present, unrestricted travel over the entire network of bridges using existing services requires one to pay an expensive fee, which is affordable only by rich institutions, such as universities or other research institutes. The general populace is excluded. In this paper, we introduce the OpenCitations Corpus, an open repository of scholarly citation data available in RDF and published according to the FAIR principles, which is an attempt to provide open bridges between scholarly works.
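To illustrate how such open citation links can be traversed programmatically, here is a minimal Python sketch that retrieves a few citing/cited pairs from a SPARQL endpoint; the endpoint URL and the cito:cites property are assumptions that should be checked against the current OpenCitations documentation.

# Minimal sketch: fetch a few citation links (citing -> cited) from a SPARQL
# endpoint. Endpoint URL and cito:cites are assumptions; verify against the
# current OpenCitations documentation.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://opencitations.net/sparql")  # assumed endpoint
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX cito: <http://purl.org/spar/cito/>
    SELECT ?citing ?cited
    WHERE { ?citing cito:cites ?cited . }
    LIMIT 10
""")
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["citing"]["value"], "cites", row["cited"]["value"])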
The collection of Fotopersbureau De Boer is particularly notable for its abundance of subjects and its size. It contains valuable material for research on current topics such as the environment, energy, and social inequality, and provides a glimpse into the everyday lives of people. The 'FAIR Photos' project has enriched the metadata of the collection by linking it to thesauri of locations, persons, and keywords in order to further open up the collection for use in research and the cultural heritage sector. This newly added information is reintegrated into the archive's collection management system to ensure long-term storage sustainability. This data deposition contains the enriched metadata in both CSV and RDF formats.

For more information:
- Noord-Hollands Archief: https://noord-hollandsarchief.nl/
- Fotopersbureau De Boer: https://noord-hollandsarchief.nl/collecties/beeld/collectie-fotopersbureau-de-boer/
- GitHub repository with more information on the reconciliation process: https://github.com/noord-hollandsarchief/CLARIAH-Data-Call-Fotopersbureau-De-Boer
- CLARIAH FAIR Data Call 2023: https://www.clariah.nl/clariah-fair-data-call-2023
Or use the contact option in the data deposit.

Data
The data are available in CSV and RDF Turtle format. Due to their abundance, the HisVis AI tags are only available in the RDF. Please note that at the moment of deposit, the URIs prefixed with https://data.noord-hollandsarchief.nl/ do not resolve and it is unclear whether this will be made possible in the future. The UUIDs are, however, stable and should remain usable to retrieve the resources in the future.

Almost all resources are modelled with the schema.org vocabulary (https://schema.org), with the exception of person observations and reconstructions (cf. the roar vocabulary, https://w3id.org/roar) and person names (cf. the pnv vocabulary, https://w3id.org/pnv). To indicate the certainty of an AI tag, the rico:certainty property from the rico ontology (https://www.ica.org/standards/RiC/ontology) is used. Where the schema.org vocabulary is not specific enough, a concept from the AAT (https://www.getty.edu/research/tools/vocabularies/aat/) or the GTAA (https://www.beeldengeluid.nl/kennis/kennisthemas/metadata/gemeenschappelijke-thesaurus-audiovisuele-archieven) is attributed using the schema.org/additionalType property.

Only person data (observations and reconstructions) of 'public persons' are included. This may lead to URIs in the report data (via a schema:about property) that do not carry any other information. These can be filled in later (once the person is set to 'public', e.g. through disambiguation). See below for an example of the RDF description of a single photograph and related resources.

CSV
The CSV files are generated from the RDF with the SPARQL queries in the queries folder.

photographs.csv - Each line is an individual photograph with its attributes and relations to other entities (e.g. reports, persons, locations) through one or more UUIDs. Information is given on:
- The permalink of the photograph (the handle given by the Noord-Hollands Archief)
- Its identifier
- A URL to a thumbnail (cf. IIIF Image API)
- UUID of the report the photo is part of
- Name of the report the photo is part of
- Date of the report the photo is part of
- Any person observations made in this report. UUID and name are included in separate columns. Multiple entries are separated by a semicolon and a space (;).
- Any locations mentioned in this report. UUID and name are included in separate columns. Multiple entries are separated by a semicolon and a space (;).
- Any concepts that classify this report. URI and name are included in separate columns. Multiple entries are separated by a semicolon and a space (;).

personobservations.csv - Each line is an individual person observation with its attributes. For name attributes, the property names from the PNV vocabulary (https://w3id.org/pnv) are used. Only information on 'public persons' is included. Information is given on:
- UUID of the person observation
- Label of the person observation entry (e.g. "Jansen, Jan")
- Name of the person observation entry (e.g. "Jan Jansen")
- Prefix of the person's name
- Initials of the person's name
- Given name of the person's name
- Infix title of the person's name
- Surname prefix of the person's name
- Base surname of the person's name
- Patronym of the person's name
- Disambiguating description of the person's name

personreconstructions.csv - Each line is an individual person reconstruction with its attributes. For name attributes, the property names from the PNV vocabulary (https://w3id.org/pnv) are used. Only information on 'public persons' is included. Information is given on:
- UUID of the person reconstruction
- Name of the person reconstruction entry (e.g. "Jan Jansen")
- Prefix of the person's name
- Initials of the person's name
- Given name of the person's name
- Infix title of the person's name
- Surname prefix of the person's name
- Base surname of the person's name
- Patronym of the person's name
- Disambiguating description of the person's name
- UUID(s) and names of the person observation(s) that lead to this reconstruction. Multiple entries are separated by a semicolon and a space (;).
- A URI to the Wikidata entry of the person
- A URI to the GTAA entry of the person

locations.csv - Each line is an individual location with its attributes. Information is given on:
- UUID of the location
- Name of the location
- The location type (URI and label), as defined by the AAT
- The location geometry in WKT format
- A URI to the Wikidata entry of the location
- A URI to the GTAA entry of the location

concepts.csv - Each line is an individual subject card concept with its attributes. Information is given on:
- URI of the concept
- Preferred label of the concept in Dutch and English (if available)
- Alternative labels of the concept in Dutch and English (if available)
- URI and preferred label in Dutch and English of the broader concept
- Any related concepts (URI). Multiple entries are separated by a semicolon and a space (;).
- Any close matches (URI), for instance a concept from the HisVis AI tags concept scheme. Multiple entries are separated by a semicolon and a space (;).
- Any exact matches (URI). Multiple entries are separated by a semicolon and a space (;).

tags.csv - Each line is an individual HisVis AI tag concept with its attributes. Information is given on:
- URI of the concept
- Preferred label of the concept in Dutch and English (if available)
- Alternative labels of the concept in Dutch and English (if available)
- URI and preferred label in Dutch and English of the broader concept
- Any related concepts (URI). Multiple entries are separated by a semicolon and a space (;).
- Any close matches (URI), for instance a concept from the subject card concept scheme. Multiple entries are separated by a semicolon and a space (;).
- Any exact matches (URI). Multiple entries are separated by a semicolon and a space (;).
- Definitions of the concept in Dutch and English (if available)

RDF
An example of the description of a photograph (https://hdl.handle.net/21.12102/a55d7380-e5b6-aaee-94ae-8f03096b77a4) is given below:

@prefix aat: <http://vocab.getty.edu/aat/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix geo: <http://www.opengis.net/ont/geosparql#> .
@prefix gtaa: <http://data.beeldengeluid.nl/gtaa/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix pnv: <https://w3id.org/pnv#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rico: <https://www.ica.org/standards/RiC/ontology#> .
@prefix roar: <https://w3id.org/roar#> .
@prefix schema: <https://schema.org/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix wd: <http://www.wikidata.org/entity/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<https://hdl.handle.net/21.12102/a55d7380-e5b6-aaee-94ae-8f03096b77a4> a schema:Photograph ;
    schema:about [ a schema:Role ;
            schema:about <https://digitaalerfgoed.poolparty.biz/nha/a2c47bdc-ae1f-8dd4-8e2d-3df8a28a03c6> ;
            rico:certainty "0.45"^^xsd:float ],
        [ a schema:Role ;
            schema:about <https://digitaalerfgoed.poolparty.biz/nha/f0881540-bb3e-3755-dbac-735be0b40890> ;
            rico:certainty "1.0"^^xsd:float ],
        [ a schema:Role ;
            schema:about <https://digitaalerfgoed.poolparty.biz/nha/c7d7de0a-1aad-64b8-4208-0d188cacef9d> ;
            rico:certainty "0.4"^^xsd:float ] ;
    schema:identifier "NL-HlmNHA_1478_13925B00_01" ;
    schema:image <https://maior-images.memorix.nl/ranh/iiif/9bc8e0c5-b530-2354-a20a-92ea12cf2f85> ;
    schema:isPartOf <https://data.noord-hollandsarchief.nl/collection/FotopersbureauDeBoer/report/c9a72d42-5843-b781-264c-632a381f9a1f> .

<https://maior-images.memorix.nl/ranh/iiif/9bc8e0c5-b530-2354-a20a-92ea12cf2f85> a schema:ImageObject ;
    schema:contentUrl <https://maior-images.memorix.nl/ranh/iiif/9bc8e0c5-b530-2354-a20a-92ea12cf2f85/full/max/0/default.jpg> ;
    schema:thumbnailUrl <https://maior-images.memorix.nl/ranh/iiif/9bc8e0c5-b530-2354-a20a-92ea12cf2f85/full/,250/0/default.jpg> .

<https://data.noord-hollandsarchief.nl/collection/FotopersbureauDeBoer/report/c9a72d42-5843-b781-264c-632a381f9a1f> a schema:CreativeWork ;
    schema:about <https://data.noord-hollandsarchief.nl/collection/FotopersbureauDeBoer/location/0d7a7fe6-acd6-594b-b552-27e28fb46db1>,
        <https://data.noord-hollandsarchief.nl/collection/FotopersbureauDeBoer/person/observation/fa906ea0-8d10-11ee-9aac-ac1f6ba5b082>,
        <https://digitaalerfgoed.poolparty.biz/nha/a9208ebc-d5f7-3fe6-14c1-05a880ba0690>,
        <https://digitaalerfgoed.poolparty.biz/nha/acd4a20a-ec61-bc6e-01f0-412a54c37524> ;
    schema:additionalType gtaa:30294 ;
    schema:dateCreated "1957-03-11"^^xsd:date ;
    schema:identifier "18948" ;
    schema:isPartOf <https://data.noord-hollandsarchief.nl/collection/FotopersbureauDeBoer/serie/vlakfilms> ;
    schema:name "Romanschrijver Harry Mulisch op het dak van de Sint-Bavokerk in Haarlem" .
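To give a sense of how the CSV views relate to the RDF, here is a minimal Python (rdflib) sketch that loads the example above (assumed saved as example.ttl, a hypothetical file name) and lists each photograph with the name and date of its report; the query is illustrative and is not one of the SPARQL queries shipped in the repository's queries folder.

# Minimal sketch: load the example Turtle above and list each photograph
# with the name and date of the report it is part of. Requires rdflib.
from rdflib import Graph

g = Graph()
g.parse("example.ttl", format="turtle")  # hypothetical local copy of the example

query = """
    PREFIX schema: <https://schema.org/>
    SELECT ?photo ?reportName ?reportDate
    WHERE {
        ?photo a schema:Photograph ;
               schema:isPartOf ?report .
        ?report schema:name ?reportName ;
                schema:dateCreated ?reportDate .
    }
"""
for photo, name, date in g.query(query):
    print(photo, name, date)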
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Adverse Outcome Pathways (AOPs) have been proposed to facilitate mechanistic understanding of interactions of chemicals/materials with biological systems. Each AOP starts with a molecular initiating event (MIE) and possibly ends with adverse outcome(s) (AOs) via a series of key events (KEs). So far, the interaction of engineered nanomaterials (ENMs) with biomolecules, biomembranes, cells, and biological structures in general is not yet fully elucidated. There is also a substantial lack of information on which AOPs are ENM-relevant or ENM-specific, despite numerous published data on toxicological endpoints they trigger, such as oxidative stress and inflammation. We propose to integrate the related data and knowledge collected recently. Our approach combines the annotation of nanomaterials and their MIEs with ontology annotation to demonstrate how we can then query AOPs and biological pathway information for these materials. We conclude that a FAIR (Findable, Accessible, Interoperable, Reusable) representation of the ENM-MIE knowledge simplifies integration with other knowledge.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
In 2018 the IPERION-CH Grounds Database was presented, examining how the data produced through the scientific examination of historic painting preparation, or grounds, samples from multiple institutions could be combined in a flexible digital form, exploring the presentation of interrelated high-resolution images, text, complex metadata and procedural documentation. The original main user interface is live, though password protected at this time. Work within the SSHOC project aimed to reformat the data to create a more FAIR data-set, so, in addition to mapping it to a standard ontology to increase interoperability, it has also been made available in the form of open linkable data combined with a SPARQL end-point. A draft version of this live data presentation can be found here.
This is a draft data-set and further work is planned to debug and improve its semantic structure. This deposit contains the CIDOC-CRM mapped data formatted in XML and an example model diagram representing some of the key relationships covered in the data-set.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This is 1000 lines of sample data from UniProt that I will use to demonstrate the ability of FAIR Projection to dynamically project it out as RDF triples
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Additional file 1. List of possible errors displayed in the SPHN Schema Forge log and their interpretations. Dataset2RDF list of errors provided to the user. To mitigate the risk of datasets that are not compliant with the SPHN rules, the SPHN Schema Forge generates a report for the user. This report contains error messages produced by the Dataset2RDF tool, which aim to guide the user in resolving potential issues. Additional File 1 shows the error messages that may appear in this report and their interpretations. Each error specifies the line in the Dataset where it occurred. The term 'concept' refers to the semantic element of interest defined in the Dataset, which is translated as a 'Class' in RDF. Similarly, the term 'composedOf' corresponds to attributes defined in the Dataset, which are translated as 'Properties' in RDF.
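As a purely illustrative sketch of the mapping described above, the Python (rdflib) snippet below turns a hypothetical Dataset 'concept' and one of its 'composedOf' attributes into an RDF class and property; the namespace and names are invented and do not reproduce the actual SPHN Schema Forge / Dataset2RDF output.

# Illustrative only: a 'concept' becomes an RDF class, a 'composedOf'
# attribute becomes a property. Namespace and names are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("https://example.org/sphn-demo#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Concept -> Class
g.add((EX.BloodPressureMeasurement, RDF.type, OWL.Class))
g.add((EX.BloodPressureMeasurement, RDFS.label, Literal("Blood Pressure Measurement")))

# composedOf attribute -> Property
g.add((EX.hasSystolicPressure, RDF.type, OWL.DatatypeProperty))
g.add((EX.hasSystolicPressure, RDFS.domain, EX.BloodPressureMeasurement))

print(g.serialize(format="turtle"))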
The SSHOC Reference Ontology (SSHOCro) is developed in the context of the Social Sciences and Humanities Open Cloud (SSHOC) project. The SSHOC project aims to support the discipline-specific part of the European Open Science Cloud (EOSC) focused on the Social Sciences and Humanities (SSH). The SSHOCro was developed within Task 4.7 Modelling the SSHOC Life Cycle as part of SSHOC Work Package 4, focusing on innovations in data production. The goal of the SSHOCro is to establish a common framework for organising knowledge around all steps in the research data life cycle within the SSH domain. The model, described in Resource Description Framework (RDF) schema, is event-based and aims to capture all the relevant scientific activities of the data lifecycle, including the tools, datasets and services used at each phase. SSHOCro is based on the CIDOC-CRM, an ontology for information integration in the field of cultural heritage. The work on SSHOCro was coordinated by FORTH, in particular by Athina Kritsotaki, Eleni Tsouloucha, Chrysoula Bekiari and Maria Theodoridou, hereafter referred to as "the authors" of the SSHOCro. Various SSHOC partners and stakeholders were consulted in SSHOCro's development, including data archives and repositories, research infrastructures and data catalogues. The work on SSHOCro started in March 2019 and a final model will be presented at the end of the SSHOC project in March 2022. The ontology will, however, be further developed and refined after the project.
https://dataverse.nl/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.34894/FO2VHB
In 2007 the Raphael Research Resource project began to examine how complex conservation, scientific and art historical research could be combined in a flexible digital form, exploring the presentation of interrelated high-resolution images and text, along with how the data could be stored in relation to an event-driven ontology in the form of RDF triples. The original main user interface is still live. In 2021/21, as part of the SSHOC project, the raw data stored within the system was mapped to the CIDOC CRM using a custom set of Python scripts (https://doi.org/10.5281/zenodo.6461654). The SSHOC work aimed to make this data more FAIR, so, in addition to mapping it to a standard ontology to increase interoperability, it has also been made available in the form of open linkable data combined with a SPARQL end-point. This live data presentation can be found here. This deposit contains the CIDOC-CRM mapped data formatted in XML and an example model diagram representing some of the key relationships covered in the data-set. Live access to this data, with documentation and worked examples, can be found at: https://rdf.ng-london.org.uk/sshoc
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
NG-Tax 2.0 is a semantic framework for FAIR high-throughput analysis and classification of marker gene amplicon sequences, including bacterial and archaeal 16S ribosomal RNA (rRNA), eukaryotic 18S rRNA and ribosomal intergenic transcribed spacer sequences. It can directly use single or merged reads, paired-end reads and unmerged paired-end reads from long-range fragments as input to generate de novo amplicon sequence variants (ASVs). Using the RDF data model, ASVs can be automatically stored in a graph database as objects that link ASV sequences with the full data-wise and element-wise provenance, thereby achieving the level of interoperability required to utilize such data to its full potential. The graph database can be directly queried, allowing for comparative analyses of thousands of samples, and is connected to an interactive R Shiny toolbox for analysis and visualization of (meta)data. Additionally, NG-Tax 2.0 exports an extended BIOM 1.0 (JSON) file as a starting point for further analyses by other means. The extended BIOM file contains new attribute types to include information about the command arguments used, the sequences of the ASVs formed and classification confidence scores, and is backwards compatible. The performance of NG-Tax 2.0 was compared with DADA2, using the plugin in the QIIME 2 analysis pipeline. Fourteen 16S rRNA gene amplicon mock community samples were obtained from the literature and evaluated. Precision of NG-Tax 2.0 was significantly higher, with an average of 0.95 vs 0.58 for QIIME2-DADA2, while recall was comparable, with averages of 0.85 and 0.77, respectively. NG-Tax 2.0 is written in Java. The code, the ontology, a Galaxy platform implementation, the analysis toolbox, tutorials and example SPARQL queries are freely available at http://wurssb.gitlab.io/ngtax under the MIT License.
This dataset provides a collection of semantically enriched RDF graphs representing both architectural and structural aspects of a timber structure. The data is organized into modular Turtle (.ttl) files and one RIF rule definition (.txt) for reasoning purposes.

Included in the dataset:
- Architectural Graph.ttl: Captures the spatial and material characteristics of the architectural model, defined using standard RDF vocabularies.
- Structural Graph.ttl: Encodes structural elements and their relationships, including support systems and load-bearing components, structured for semantic querying.
- Neutral Building Model (Architectural + Structural).ttl: A consolidated RDF representation integrating architectural and structural elements. All proprietary references (e.g. BHoM) have been removed to ensure vendor neutrality and interoperability. Architectural columns are linked to structural bars, and architectural floors to structural panels. The model is fully queryable using SPARQL (see the sketch after this description) and adheres to open-access, GDPR-compliant standards.
- RDF_RIF_Rule.pie.txt: A rule expressed in RDF/RIF Core syntax that demonstrates reasoning capabilities on the dataset.

This dataset supports the findings of a related journal paper (currently under submission) and is complemented by a GitHub repository containing the scripts and tools used to generate the RDF data. It is intended for researchers and professionals working on Linked Building Data, semantic modeling, ontology design, and integrated architectural/structural workflows in BIM. All files are formatted using open standards (RDF, Turtle, RIF) and designed for use in FAIR-compliant, interdisciplinary design environments.

A GitHub repository is linked as a secondary location for the same core dataset. It includes the main RDF graphs (architectural, structural, and integrated models) and the RIF rule file. Some additional files in the GitHub repo (Architectural Graph to link.ttl and Structural Graph to link.ttl) represent intermediate or experimental versions used in earlier stages of the project or during paper development; these also contain the links used in the integrated model. They are not essential for reproducing the results but are provided for completeness and transparency.
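A minimal sketch of how the modular graphs can be combined and queried with SPARQL is given below; the file names follow the listing above, and the class-listing query is deliberately generic because the specific building vocabularies used in the graphs are not detailed here.

# Minimal sketch: merge the architectural, structural and integrated graphs
# into one rdflib graph and list the classes in use, with instance counts.
# The query is generic and does not assume any particular building vocabulary.
from rdflib import Graph

g = Graph()
for path in ["Architectural Graph.ttl",
             "Structural Graph.ttl",
             "Neutral Building Model (Architectural + Structural).ttl"]:
    with open(path, "rb") as f:
        g.parse(f, format="turtle")

query = """
    SELECT ?class (COUNT(?s) AS ?instances)
    WHERE { ?s a ?class . }
    GROUP BY ?class
    ORDER BY DESC(?instances)
"""
for cls, count in g.query(query):
    print(cls, count)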
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset corresponds to the RDF Linked Data representation of the measurements of 61 known metabolites (all annotated with resolvable ChEBI identifiers and InChI strings), measured by gas chromatography mass-spectrometry (GC-MS) in 6 different rose cultivars (all annotated with resolvable NCBI Taxonomy identifiers) and 3 organism parts (all annotated with resolvable Plant Ontology identifiers). The quantitation types are annotated with resolvable STATO terms. Most of the semantic resources belong to the OBO Foundry.
The transformation to RDF was performed on a Frictionless Tabular Data Package (https://frictionlessdata.io/specs/tabular-data-package/), holding the data extracted from a supplementary material table, available from https://static-content.springer.com/esm/art%3A10.1038%2Fs41588-018-0110-3/MediaObjects/41588_2018_110_MOESM3_ESM.zip and published alongside the Nature Genetics manuscript identified by the following doi: https://doi.org/10.1038/s41588-018-0110-3, published in June 2018. This supplementary material table was deposited to Zenodo and is identified by the following doi: https://doi.org/10.5281/zenodo.2598799
This dataset is used to demonstrate how to make data Findable, Accessible, Interoperable and Reusable (FAIR) and how Frictionless Tabular Data Package representations can be easily mobilised for reanalysis and data science.
It is associated with the following project: https://github.com/proccaserra/rose2018ng-notebook, which provides all the necessary information, executable code and tutorials in the form of Jupyter notebooks.
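To illustrate the tabular-to-RDF idea, here is a hedged Python sketch that reads rows from a Frictionless Tabular Data Package and emits simple triples; the resource name, column names and demo namespace are hypothetical, and the actual conversion is the one documented in the rose2018ng-notebook repository.

# Illustrative tabular-to-RDF sketch. Resource name ('measurements'), column
# names ('metabolite', 'chebi_id', 'value') and the demo namespace are
# hypothetical; see the rose2018ng-notebook for the real conversion.
from urllib.parse import quote

from frictionless import Package
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("https://example.org/rose-demo#")  # hypothetical namespace
CHEBI = "http://purl.obolibrary.org/obo/CHEBI_"

g = Graph()
pkg = Package("datapackage.json")  # path to the data package descriptor
for row in pkg.get_resource("measurements").read_rows():
    m = EX["measurement/" + quote(str(row["metabolite"]))]
    g.add((m, RDF.type, EX.Measurement))
    g.add((m, EX.about, URIRef(CHEBI + str(row["chebi_id"]))))
    g.add((m, EX.value, Literal(row["value"], datatype=XSD.double)))

print(g.serialize(format="turtle"))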
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We demonstrate that semantic modeling with ontologies provides a robust and enduring approach to achieving FAIR data in our experimental environment. By endowing data with self-describing semantics through ontological definitions and inference, we enable them to 'speak' for themselves. Building on PaNET, we define techniques in ESRFET by their characteristic building blocks. The outcome is a standards-based framework (RDF, OWL, SWRL, SPARQL, SHACL) that encodes the semantics of experimental techniques and underpins a broader facility ontology. Our approach illustrates that, by using differential definitions, semantic enrichment through linking to multiple ontologies, and documented semantic negotiation, we standardize the descriptions and annotations of experimental techniques, ensuring enhanced discoverability, reproducibility, and integration within the FAIR data ecosystem. This talk was held in the course of the DAPHNE4NFDI TA1 Data for Science lecture series on April 29, 2025.
Fair trade certifications such as Max Havelaar are widely recognized for labelling food products; however, to a lesser extent, they can also be found on many other types of goods. In 2022, ** percent of fair trade products sold in France were food products. The remaining **** percent was split between craftsmanship, cosmetics, clothing, tourism and flowers.
The fairdatapoint extension for CKAN provides a harvester specifically designed for FAIR (Findable, Accessible, Interoperable, Reusable) Data Points. It facilitates the integration of metadata from FAIR Data Points into CKAN, treating them as harvestable sources, with future enhancements potentially extending support for the FAIR Data Point API. This allows CKAN to ingest structured metadata and datasets from FAIR Data Points.

Key Features:
- Three-Stage Harvesting Process: The extension operates through gather, fetch, and import stages, offering structured control over the harvesting workflow.
- FAIR Data Point Record Provider: The gather stage utilizes a FairDataPointRecordProvider, which identifies catalog and dataset identifiers within the FAIR Data Point that should be harvested. In the future, collections might be included.
- Data Fetching: The fetch stage downloads the RDF content, which can optionally incorporate additional metadata from other sources to align with CKAN's DCAT profile requirements.
- Application Profile Configuration: The import stage uses application profiles to map the RDF data from the FAIR Data Point to CKAN packages and resources. Custom profiles can be defined in a Python class and registered for specific data point configurations (see the sketch after this description).
- Command-Line Interface: The extension includes command-line utilities for running the harvester and rebuilding the search index, providing administration options.

Technical Integration: The fairdatapoint extension integrates with CKAN using a flexible architecture that includes custom record providers and application profiles. The extension depends on ckanext-scheming, ckanext-harvester and ckanext-dcat. It relies on the CKAN harvester framework to manage the harvesting process, with configuration through setup.py to define RDF profiles.

Benefits & Impact: By providing a dedicated harvester for FAIR Data Points, this extension enhances CKAN's ability to support the FAIR data principles. This leads to increased discoverability and reusability of data managed within FAIR Data Points, enhancing the value of CKAN as a data management platform for research and open data initiatives.
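Since custom application profiles are defined as Python classes, a heavily hedged sketch of what such a profile might look like is shown below. It assumes the extension follows the RDFProfile pattern from ckanext-dcat; the class name, the mapped predicate and the extras key are illustrative, so consult the ckanext-fairdatapoint documentation for the actual profile interface and registration mechanism.

# Hedged sketch of a custom application profile, assuming the RDFProfile
# pattern from ckanext-dcat. Class name, mapped predicate and extras key are
# illustrative; registration is typically done via an entry point in setup.py.
from rdflib.namespace import Namespace
from ckanext.dcat.profiles import RDFProfile

DCT = Namespace("http://purl.org/dc/terms/")

class ExampleFairDataPointProfile(RDFProfile):
    """Maps one extra predicate from a FAIR Data Point record to a CKAN field."""

    def parse_dataset(self, dataset_dict, dataset_ref):
        # Copy dcterms:accrualPeriodicity into a CKAN extra, if present.
        frequency = self._object_value(dataset_ref, DCT.accrualPeriodicity)
        if frequency:
            dataset_dict.setdefault("extras", []).append(
                {"key": "update_frequency", "value": frequency}
            )
        return dataset_dict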
The net cash of Cedar Fair, headquartered in the United States, amounted to ****** million U.S. dollars in 2023. The reported fiscal year ends on December 31. Compared to the earliest depicted value, from 2019, this is a total decrease of approximately ***** million U.S. dollars. The trend from 2019 to 2023 shows, however, that this decrease did not happen continuously.
This statistic shows the outcome of a survey in which respondents were asked how often they buy fair trade products. As of 2022, nearly ** percent of the respondents indicated that they bought fair trade products at least once a month.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is the FAIRnets dataset. It contains information about publicly available neural networks in RDF*. A search API to query the dataset can be found at https://km.aifb.kit.edu/services/fairnets/
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data set is a collection of environmental records associated with individual events. It has been generated using the serdif-api wrapper (https://github.com/navarral/serdif-api) when sending a CSV file with example events for the Republic of Ireland. The serdif-api sends a semantic query that (i) selects the environmental data sets within the region of the event, (ii) filters them by the specific period of interest for the event, and (iii) aggregates the data sets using the minimum, maximum, average or sum for each of the available variables for a specific time unit (an illustrative query pattern is sketched after this description). The aggregation method and the time unit can be passed to the serdif-api through the Command Line Interface (CLI) (see the example in https://github.com/navarral/serdif-api). The resulting data set format can also be specified as a data table (CSV) or as a graph (RDF) for analysis and publication as FAIR data. The research-ready data is retrieved as a zip file that contains:
(i) data as CSV: environmental data associated with particular events as a data table;
(ii) data as RDF: environmental data associated with particular events as a graph;
(iii) metadata for publication as RDF: a metadata record with generalized information about the data that does not contain personal data and is therefore publishable;
(iv) metadata for research as RDF: metadata records with detailed information about the data, such as individual dates, regions, data sets used and data lineage, which could lead to data privacy issues if published without approval from the Data Protection Officer (DPO) and data controller.
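To illustrate the aggregation pattern described in step (iii) above, here is a generic SPARQL sketch embedded in Python that averages an observed value per day; the file name, namespace and predicates are hypothetical and this is not the actual query sent by serdif-api.

# Illustrative only: a generic aggregation in the spirit of step (iii),
# averaging an observed value per day. File name, namespace and predicates
# are hypothetical; this is not serdif-api's actual query.
from rdflib import Graph

g = Graph()
g.parse("environmental_data.ttl", format="turtle")  # hypothetical file

query = """
    PREFIX ex: <https://example.org/env#>
    SELECT ?day (AVG(?value) AS ?avgValue)
    WHERE {
        ?obs ex:value ?value ;
             ex:date ?day .
    }
    GROUP BY ?day
    ORDER BY ?day
"""
for day, avg in g.query(query):
    print(day, avg)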