The Common Entity Data Standards (CEDS) Domain Entity Schema (DES) provides a hierarchy of domains, entities, categories, and elements. It is intended for use primarily by people as an index to search, map, and organize elements in a logical way. [from homepage]
The NIST Extensible Resource Data Model (NERDm) is a set of schemas for encoding in JSON format metadata that describe digital resources. The variety of digital resources it can describe includes not only digital data sets and collections, but also software, digital services, web sites and portals, and digital twins. It was created to serve as the internal metadata format used by the NIST Public Data Repository and Science Portal to drive rich presentations on the web and to enable discovery; however, it was also designed to enable programmatic access to resources and their metadata by external users. Interoperability was also a key design aim: the schemas are defined using the JSON Schema standard, metadata are encoded as JSON-LD, and their semantics are tied to community ontologies, with an emphasis on DCAT and the US federal Project Open Data (POD) models. Finally, extensibility is also central to its design: the schemas are composed of a central core schema and various extension schemas. New extensions to support richer metadata concepts can be added over time without breaking existing applications. Validation is central to NERDm's extensibility model. Consuming applications should be able to choose which metadata extensions they care to support and ignore terms and extensions they don't support. Furthermore, they should not fail when a NERDm document leverages extensions they don't recognize, even when on-the-fly validation is required. To support this flexibility, the NERDm framework allows documents to declare what extensions are being used and where.
We have developed an optional extension to the standard JSON Schema validation (see ejsonschema below) to support flexible validation: while a standard JSON Schema validator can validate a NERDm document against the NERDm core schema, our extension will validate a NERDm document against any recognized extensions and ignore those that are not recognized. The NERDm data model is based around the concept of a resource, semantically equivalent to a schema.org Resource, and as in schema.org, there can be different types of resources, such as data sets and software. A NERDm document indicates what types the resource qualifies as via the JSON-LD "@type" property. All NERDm Resources are described by metadata terms from the core NERDm schema; however, different resource types can be described by additional metadata properties (often drawing on particular NERDm extension schemas). A Resource contains Components of various types (including DCAT-defined Distributions) that are considered part of the Resource; specifically, these can include downloadable data files, hierarchical data collections, links to web sites (like software repositories), software tools, or other NERDm Resources. Through the NERDm extension system, domain-specific metadata can be included at either the resource or component level. The direct semantic and syntactic connections to the DCAT, POD, and schema.org schemas are intended to ensure unambiguous conversion of NERDm documents into those schemas. As of this writing, the Core NERDm schema and its framework stands at version 0.7 and is compatible with the "draft-04" version of JSON Schema. Version 1.0 is projected to be released in 2025. In that release, the NERDm schemas will be updated to the "draft2020" version of JSON Schema. Other improvements will include stronger support for RDF and the Linked Data Platform through its support of JSON-LD.
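The flexible-validation behavior described above can be sketched in miniature. The following is a toy stand-in, not the actual ejsonschema implementation: required core terms are enforced, while terms from unrecognized extension schemas are ignored rather than treated as failures. The schema terms and the record are invented illustrations.

```python
# Toy sketch (not ejsonschema): enforce required core terms, skip
# unrecognized ones -- the permissive behavior NERDm validation aims for.
CORE_REQUIRED = {"@type", "title"}            # stand-in core-schema terms
RECOGNIZED = CORE_REQUIRED | {"description"}  # terms this consumer knows

def validate_core(record: dict) -> list:
    """Return a list of problems; unknown terms are skipped, not errors."""
    problems = [f"missing required term: {t}"
                for t in sorted(CORE_REQUIRED - record.keys())]
    ignored = sorted(record.keys() - RECOGNIZED)
    if ignored:
        print("ignoring unrecognized terms:", ignored)
    return problems

record = {
    "@type": ["nrdp:DataPublication", "dcat:Dataset"],
    "title": "Example resource",
    "myext:specialField": 42,   # from a hypothetical extension schema
}
print(validate_core(record))    # -> []
```

A real consumer would drive the same decision from the extension declarations in the document rather than a hard-coded term list.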
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Machine-readable metadata available from landing pages for datasets facilitate data citation by enabling easy integration with reference managers and other tools used in a data citation workflow. Embedding these metadata using the schema.org standard with JSON-LD is emerging as the community standard. This dataset is a listing of data repositories that have implemented this approach or are in the process of doing so.
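The embedding pattern this listing tracks can be illustrated with a short sketch: a schema.org `Dataset` record carried in an `application/ld+json` script block on a landing page, which a citation tool can extract and parse. The HTML and all metadata values below are invented for illustration.

```python
# Hedged sketch: pull schema.org JSON-LD metadata out of a (fabricated)
# dataset landing page, as a reference manager might.
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect application/ld+json blocks from an HTML page."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buf = []
        self.blocks = []
    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True
    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)
    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self.blocks.append(json.loads("".join(self._buf)))
            self._buf = []
            self._in_jsonld = False

page = """<html><head><script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Dataset",
 "name": "Example dataset",
 "license": "https://creativecommons.org/licenses/by/4.0/"}
</script></head><body>...</body></html>"""

extractor = JSONLDExtractor()
extractor.feed(page)
meta = extractor.blocks[0]
print(meta["@type"], "-", meta["name"])   # -> Dataset - Example dataset
```

Accumulating the script content and parsing it only at the closing tag keeps the sketch robust if the parser delivers the text in several chunks.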
This is the first version of this dataset and was generated via community consultation. We expect to update this dataset, as an increasing number of data repositories adopt this approach, and we hope to see this information added to registries of data repositories such as re3data and FAIRsharing.
In addition to the listing of data repositories, we provide information on the schema.org properties supported by these data repositories, focusing on the required and recommended properties from the "Data Citation Roadmap for Scholarly Data Repositories".
The specifications and guidelines in this Data Management Plan will improve data consistency and the availability of information. They will ensure that all levels of government and the public have access to the most up-to-date information; reduce or eliminate overlapping data requests and redundant data maintenance; ensure metadata is consistently created; and ensure that data services can be delivered to consumers in the output format of their choice.
This dataset represents a reference implementation of the Unit Manufacturing Process (UMP) information model presented in ASTM E3012, Standard Guide for Characterizing Environmental Aspects of Manufacturing Processes. A version of this schema is used in the UMP Builder, a web-based toolkit for recording and storing UMP models.
In 2007, Washington State legislators requested a trails database, but funding to complete that statewide project was not made available at the time. In 2009, the Federal Government outlined the need for a trails database schema in their Data Standards Review Committee, stressing the efficiency in management decisions that a streamlined database can provide. “The collection, storage and management of trail related data are important components of everyday business activities in many federal and state land-managing agencies, trail organizations and businesses. From a management perspective, trails data must often mesh closely with other types of infrastructure, resource and facility enterprise data.” In 2014, the Washington State Office of the Chief Information Officer's (OCIO) Geospatial Program Office acquired a Nonhighway and Off-Road Vehicle Activities (NOVA) Program grant through the Washington State Recreation and Conservation Office (RCO), giving the OCIO initial funding to develop a statewide trails database based on Federal Geographic Data Committee standards. Using the same standard for all trails data will allow land managers and recreational users throughout the state to access and use the data regardless of administrative boundary. “Data standards will make it easier for trail information to be accessed and exchanged and used by more than one individual agency or group…Ease in sharing data increases the capability for enhanced and consistent mapping, inventory, monitoring, conditions assessment, maintenance, costing, budgeting, information retrieval, and summary reporting for internal and external needs.” Along with streamlining data and facilitating efficiency in management practices across agencies, the database will provide a source of trails information that is open and free to the public. Additional details about the project can be found here: https://ocio.wa.gov/initiatives/washington-state-trails-database-project
Schemas describing the core HXL hashtags and attributes. Starting with version 1.1, the standards documentation listing HXL hashtags and attributes at hxlstandard.org is generated directly from this dataset.
See the documentation on the HXL schema format, and the HXL Proxy validation service. Note that this is just a generic default schema—you can also create your own, project-specific HXL schemas.
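The hashtag-and-attribute structure that these schemas describe can be illustrated with a small helper (not part of the HXL tooling itself) that splits a tag spec into its hashtag and attributes; the example tags are standard HXL forms.

```python
# Illustrative helper: split an HXL tag spec like "#affected+f+children"
# into the hashtag and its attribute list.
def parse_hxl_tag(spec: str):
    """Split '#hashtag+attr+attr' into ('#hashtag', ['attr', ...])."""
    head, *attrs = spec.lstrip("#").split("+")
    return "#" + head.strip(), [a.strip() for a in attrs]

print(parse_hxl_tag("#affected+f+children"))  # -> ('#affected', ['f', 'children'])
print(parse_hxl_tag("#adm1"))                 # -> ('#adm1', [])
```

A real HXL validator would additionally check the parsed hashtag and attributes against the allowed values defined in schemas like the ones in this dataset.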
This dataset contains data used in the investigation of Universally Unique Identifiers (UUIDs) that will enable a standards-based digital thread of product data in ISO 10303-242. Included are EXPRESS schema used for implementation and .stp files that were exported from native CAD (CATIA V5, Creo, and NX). UUIDs were assigned to CAD features during .stp export for each of the four design iterations.
Data Model Schema, Feature Attributes, Relationship Classes, Field Domains (Version 2, 2019)
This dataset consists of the XML schema for the NIST 1500-100 Election Results Reporting Common Data Format Specification Version 1.0.
The Custom Schema extension for CKAN enables administrators to extend the default dataset schema with custom metadata fields. This allows users to provide more detailed information about datasets than is possible with the standard CKAN fields. By enabling the inclusion of specific, user-defined fields, the extension aims to improve data discoverability and accuracy. Key Features: Customizable Metadata Fields: Allows administrators to add new, custom metadata fields to the dataset schema, tailored to specific needs and data types. Dataset Edit Integration: The added custom fields are seamlessly integrated into the dataset edit form, providing a user-friendly interface for data entry and management. Mono-Repo Structure: Operates from a mono-repo structure where each branch could potentially contain schemas specific to a customer's unique needs. Schema Definition: Leverages the ckanext-scheming extension for defining and managing the custom schema, providing a structured approach to schema modification. Technical Integration: The Custom Schema extension integrates with CKAN by adding new plugins and configuring the existing dataset edit page. It requires the ckanext-scheming extension to be installed and enabled. To ensure proper functionality, custom_schema must be placed before scheming_datasets in the ckan.plugins line of the CKAN configuration file. Benefits & Impact: Implementing the Custom Schema extension can significantly improve the quality and discoverability of datasets within a CKAN instance. By allowing users to add tailored metadata, organizations can capture more specific information about their data, facilitating more precise searches and analyses. Ultimately, this leads to enhanced data governance and more effective utilization of data resources.
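The plugin-ordering requirement above can be sketched as a configuration excerpt. The plugin names come from the text; the surrounding file layout is the standard CKAN ini format, shown here only as an illustration:

```ini
# ckan.ini (excerpt) -- custom_schema must appear before
# scheming_datasets in the plugin list for the extension to work.
ckan.plugins = custom_schema scheming_datasets
```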
Open Government Licence 3.0http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
License information was derived automatically
This is the Environment Agency list of internally published data standards. These data standards were identified through an assessment of the IT and datasets belonging to the Environment Agency. They have been verified and published on the Environment Agency Quality Management System (QMS). Attribution statement: © Environment Agency copyright and/or database right 2016. All rights reserved.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This schema defines a metadata model specifically for dataset-graph datasets with provenance tracking capabilities. It captures essential publication metadata including creators, versioning, licensing, and distribution information. The schema is in full compliance with DCAT (Data Catalog Vocabulary) standards and provenance tracking through PROV-O ontology integration. This schema is described in detail in the HRA KG paper.
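A minimal record in the spirit of the schema described above might combine DCAT publication metadata with PROV-O provenance as follows. All titles, names, IDs, and values are invented placeholders, not taken from the HRA KG; only the namespace URIs are the standard DCAT, Dublin Core, and PROV-O ones.

```python
# Illustrative JSON-LD-style record: DCAT dataset metadata plus
# PROV-O provenance, expressed as a Python dict for inspection.
record = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
        "prov": "http://www.w3.org/ns/prov#",
    },
    "@type": "dcat:Dataset",
    "dct:title": "Example dataset-graph",                       # placeholder
    "dct:creator": [{"@type": "prov:Agent", "name": "Jane Example"}],
    "dct:hasVersion": "1.0",
    "dct:license": "https://creativecommons.org/licenses/by/4.0/",
    "dcat:distribution": [{"@type": "dcat:Distribution",
                           "dcat:mediaType": "text/turtle"}],
    "prov:wasDerivedFrom": {"@id": "https://example.org/source-data"},
}
print(record["@type"])   # -> dcat:Dataset
```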
Bibliography:
The ckanext_datavic_odp_schema extension is designed for CKAN (Comprehensive Knowledge Archive Network) to implement the DataVic Open Data Platform (ODP) schema. This implies it provides specific configurations, customizations, and potentially data validation rules to ensure CKAN datasets conform to the Victorian Government's open data standards. Given the lack of a README, it's reasonable to assume the extension streamlines data publishing and management processes within the DataVic ODP's specific context. Key Features (Inferred): DataVic ODP Schema Implementation: Enforces the DataVic ODP schema requirements for dataset metadata within CKAN, ensuring compliance with government standards. Custom Metadata Fields: Introduces or customizes metadata fields within CKAN to align with the DataVic ODP schema specifications. This ensures that datasets can capture all required information. Data Validation: Includes validation rules to ensure that datasets meet the mandatory requirements of the DataVic ODP schema, enhancing data quality and consistency. User Interface Customization: Potentially customizes the data entry forms within CKAN to make it easier for users to input data in accordance with the schema requirements. Schema Versioning Support: Might offer support for different versions of the DataVic ODP schema if the schema evolves over time, allowing CKAN implementations to adapt to changes. Bulk Schema Updates: Possibly provides features to implement schema changes to existing datasets in bulk using scripts to maintain compliance when updates happen. Use Cases (Inferred): * Government Agencies in Victoria: Organizations within the Victorian Government can utilize this extension to seamlessly publish open datasets conforming to the DataVic ODP using CKAN. * Research Institutions: Research institutions that collaborate with the Victorian government can use the extension to prepare datasets which meet the schema, making data interchange much easier.
Technical Integration (Inferred): The extension likely integrates with CKAN through plugins that modify the dataset schema, validation rules, and user interface. It probably hooks into CKAN's existing extension points to add new fields, validation functions, and potentially custom templates for displaying metadata. Customizations may be done within CKAN's configuration files. Benefits & Impact (Inferred): By implementing the ckanext_datavic_odp_schema extension, organizations can significantly simplify the process of publishing open data that complies with Victorian government standards. This improves data discoverability and reusability, fosters greater transparency, facilitates collaboration, and ensures that datasets conform to the required specifications.
Open Government Licence - Canada 2.0https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
This file provides a metadata mapping between the Government of Canada’s Open Data Metadata Element Set and Canadian Provincial and Territorial Open Data Metadata Element Sets, where applicable. This was completed as part of a commitment made in the Government of Canada’s 4th National Action Plan, 10.6 Implement a pilot project to move toward cross-jurisdictional common data standards in line with the International Open Data Charter and other international standards – A Cross-jurisdictional metadata mapping is completed with a common set of core elements. Metadata elements were collected from open data portals throughout Canada, and this metadata mapping was completed in collaboration with the contributing provinces and territories.
Open Government Licence 3.0http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
License information was derived automatically
This is the controlled list of the 3 Environment Agency Operational Hubs and is the standard list for re-use across the Agency. An operational hub is a grouping of geographically linked Areas, accountable through a Director of Operations. Attribution statement: (c) Environment Agency copyright and/or database right 2016. All rights reserved.
The 'Standard' extension for CKAN appears to be designed to enforce or promote adherence to specific standards within a CKAN instance. Given the lack of a README, its precise functionality is unclear. However, based on common uses of such extensions, it likely provides tools or validation mechanisms to ensure data and metadata conform to predefined rules, potentially improving data quality and interoperability. It likely assists organizations in maintaining consistent data management practices. Key Features (Assumed, based on the name 'standard'): * Metadata Validation: Enforce schema validation rules on metadata entries, ensuring datasets adhere to a specific metadata representation standard. * Data Quality Checks: Implement routines to check the structural consistency and format conformance of datasets during upload or ingestion. * Standardized Vocabulary: Provide a controlled vocabulary or taxonomy that promotes consistent categorization and tagging of datasets. * Access Control Policies: Define standard access control policies for resources and datasets, ensuring consistent data security and privacy measures. * Automated Compliance Reporting: Generate reports on compliance levels of datasets and resources against defined standards, highlighting areas needing improvement. * Customizable Rules: Allow administrators to customize validation rules and compliance checks to meet specific organizational or regulatory requirements. * Plugin Integration: Hooks into the CKAN workflow to interrupt uploads based on schema deviations or other preset parameters. Use Cases (Assumed): 1. Government Agencies: Ensure datasets published by different departments adhere to a common governmental data standard. 2. Research Data Repositories: Validate that submitted research data meets discipline-specific metadata standards, enhancing discoverability and usability. 3. Open Data Portals: Maintain a high level of data quality and standard adherence to ensure credible open data resources. Technical Integration (Assumed): Without a README, specific integration details are unknown. It likely uses CKAN's plugin system to add validation and standardization steps at various stages such as dataset creation, upload, or retrieval. It probably introduces new CKAN configuration options to define the standards to be enforced and how violations are reported. The new configurations likely cover logging parameters to indicate how errors are to be recorded, and to dictate how new validation rules are adopted. Benefits & Impact (Assumed): This extension is likely useful to organizations aiming for improved data quality, uniformity, and compliance with predetermined data standards and policies. It increases the reliability and usability of datasets by ensuring they adhere to expected formats and metadata structures, leading to more effective use of the information. By standardizing data quality, it can enhance data consistency and interoperability across the catalog.
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The HED schema library for the Standardized Computer-based Organized Reporting of EEG (SCORE) can be used to add annotations for BIDS datasets. The annotations are machine-readable and can be validated with the BIDS and HED validators.
This example is related to the following preprint: Dora Hermes, Tal Pal Attia, Sándor Beniczky, Jorge Bosch-Bayard, Arnaud Delorme, Brian Nils Lundstrom, Christine Rogers, Stefan Rampp, Seyed Yahya Shirazi, Dung Truong, Pedro Valdes-Sosa, Greg Worrell, Scott Makeig, Kay Robbins. Hierarchical Event Descriptor library schema for EEG data annotation. arXiv preprint arXiv:2310.15173. 2024 Oct 27.
This BIDS example dataset includes iEEG data from one subject that were measured during clinical photic stimulation. Intracranial EEG data were collected at Mayo Clinic Rochester, MN under IRB#: 15-006530.
The events are annotated according to the HED-SCORE schema library. Data are annotated by adding a column for annotations in the _events.tsv. The levels and annotations in this column are defined in the _events.json sidecar as HED tags.
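The annotation pattern described above can be sketched as follows: an extra column in `_events.tsv` whose levels are defined as HED tags in the `_events.json` sidecar. The column name, level, and tag below are invented for illustration and are not copied from the actual dataset.

```python
# Hedged sketch of the BIDS/HED sidecar pattern: look up the HED tags
# for each event's annotation level. All names/values are illustrative.
import csv
import io

events_tsv = (
    "onset\tduration\tannotation\n"
    "12.0\t10.0\tphotic_stim\n"
)
sidecar = {
    "annotation": {
        "HED": {"photic_stim": "Sensory-event, Visual-presentation"}
    }
}

rows = list(csv.DictReader(io.StringIO(events_tsv), delimiter="\t"))
tags = [sidecar["annotation"]["HED"][row["annotation"]] for row in rows]
print(tags)   # -> ['Sensory-event, Visual-presentation']
```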
HED: https://www.hedtags.org/ HED schema library for SCORE: https://github.com/hed-standard/hed-schema-library
Dora Hermes: hermes.dora@mayo.edu
As part of KBase’s commitment to promote open science, we offer users the ability to obtain a DOI (Digital Object Identifier) for their work, which can then be cited in an associated science publication. To further support the community-wide shift towards FAIR (Findable, Accessible, Interoperable, Reusable) data, KBase is expanding our data descriptors so that KBase DOIs have comprehensive citations for datasets, in addition to referencing publications or software used in the workflow. This helps encourage a culture of giving attribution for all research inputs and outputs; standard practice for literature, but still relatively new for software products or datasets. It also promotes open science by building trust that contributors get credit for their work, and accelerates knowledge discovery by supporting and incentivizing the release of data.
A variance is required when an application has submitted a proposed project to the Department of Permitting Services and it is determined that the construction, alteration or extension does not conform to the development standards (in the zoning ordinance) for the zone in which the subject property is located. A variance may be required in any zone and includes accessory structures as well as primary buildings or dwellings. Update Frequency : Daily