Metadata form template for Tempe Open Data.
This template covers section 2.5 Resource Fields: Entity and Attribute Information of the Data Discovery Form cited in the Open Data DC Handbook (2022). It completes documentation elements that are required for publication. Each field column (attribute) in the dataset needs a description clarifying its contents. Data originators are encouraged to enter the column's code values (domains) to help end users interpret its contents where needed, especially when lookup tables do not exist.
Data Dictionary template for Tempe Open Data.
Public Domain Mark 1.0: https://creativecommons.org/publicdomain/mark/1.0/
This dataset contains templates of policies and MoUs on data sharing. You can download the Word templates and adapt the documents to your national context.
Abstract: This document describes the deliverable “Data and Computing Landscape Documentation System and Interview Template”. These are tools necessary for the work of Task 4.1. The interview template gives guidance for EUDAT2020 interviewers interviewing community experts and managers. The documentation system allows EUDAT2020 to coordinate its contacts with the communities. This document also describes the rationale behind the deliverable, its status, how it relates to the EUDAT2020 (internal) workflow and other EUDAT2020 information systems, and how we expect the systems to evolve.
WUR Library (data librarians and research data management support) developed the WUR documentation templates, guidance, and examples to assist researchers in documenting data. The files in the package can be used independently of whether the data is archived at WUR or published in a repository. Note that the metadata JSON file is a filled-in example. You can fill in your own metadata JSON using the Yoda metadata editor at https://utrechtuniversity.github.io/yoda-portal/. Please ignore the Zenodo preview and scroll down to the files below. The filled examples are partially based on the project described in the fictional data management plan (https://doi.org/10.5281/zenodo.7096699, see 'related identifiers'). See the version history txt file for an indication of the changes made.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Document templates with correct naming structure for use with TARPD automated documentation code.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This repository contains data and code from an experiment comparing different template notations for requirements documentation in semi-formal natural language.
Terms and conditions: https://www.koncile.ai/en/termsandconditions
Automatically extract critical data from Key Information Documents (DIC) with Koncile's intelligent OCR: fast structuring into usable formats (Excel, JSON).
Shared data license agreement: https://pacific-data.sprep.org/resource/shared-data-license-agreement
This dataset includes PDF and Word document versions of the Memorandum of Understanding for use and adaptation by the Vanuatu Government for inter-agency data sharing.
This document contains a template for taking notes during meetings of the Biologist and Graph Interpretation (BioGraphI) Network’s semester-long Faculty Mentoring Network (FMN). These notes are used by FMN participants to record summaries of meeting discussions.
This data package contains three templates that can be used for creating README files and issue templates, written in the Markdown language, that support community-led data reporting formats. We created these templates based on the results of a systematic review (see related references) that explored how groups developing data standard documentation use the version control platform GitHub to collaborate on supporting documents. Based on our review of 32 GitHub repositories, we make recommendations for the content of README files (e.g., provide a user license, indicate how users can contribute), and 'README_template.md' accordingly includes headings for each section. The two issue templates we include ('issue_template_for_all_other_changes.md' and 'issue_template_for_documentation_change.md') can be used in a GitHub repository to help structure user-submitted issues, or can be modified to suit the needs of data standard developers. We used these templates when establishing ESS-DIVE's community space on GitHub (https://github.com/ess-dive-community), which includes documentation for community-led data reporting formats. We also include file-level metadata ('flmd.csv') that describes the contents of each file within this data package. Lastly, the temporal range that we indicate in our metadata is the time range during which we searched for data standards documented on GitHub.
Since 1963, the International Heat Flow Commission (IHFC | www.ihfc-iugg.org) has been dedicated to providing standards for heat flow measurements and maintaining the Global Heat Flow Database (GHFDB), a collection of heat flow data from around the world. The first quality framework for heat-flow-density data was proposed by Jessop et al. (1976), reflecting the state of knowledge, measurement techniques, and technical developments at that time. In 2019, the IHFC initiated a major revision of the GHFDB to develop an authenticated and quality-assessed database. This initiative involved multinational working groups and led to a comprehensive update of key parameters affecting heat-flow calculations. These updates included measurement methods for both temperature and thermal conductivity, as well as metadata structures.

The new standard for a revised GHFDB structure was developed through a collaborative community approach and published in 2021 (Fuchs et al., 2021). This standard reflected changes in database technology and scientific documentation and served as a template for users submitting data to the GHFDB. It was further developed into the currently valid data and metadata standard in 2023, which also introduced an enhanced quality evaluation framework (Fuchs et al., 2023). The ongoing assessment work and the latest release of the GHFDB (Global Heat Flow Database Assessment Group et al., 2024), along with its frequent use, revealed the need for additional refinements, particularly in aspects related to metadata consistency, measurement techniques, and classification criteria. Consequently, further updates were implemented to improve the reliability and applicability of the dataset, ensuring a more robust evaluation of global heat-flow data.

Here, we present the 2025.05 version of the GHFDB Data Template. The previous template introduced by Fuchs et al. (2023) has been improved based on the latest data assessment process. The current version of the template incorporates the advancements in data collection methodologies, the IHFC quality evaluation framework, and metadata management, ensuring that data submitted to the GHFDB follows the IHFC standards. To promote open access, the template is also hosted on the official GitHub repository of the IHFC: https://github.com/ihfc-iugg. Users can download both the original version from 2023 and the revised template. Maintaining the GHFDB Data Template in a version-controlled environment ensures transparency regarding changes over time and fosters a documentation style that sets high standards to support the reproducibility of research results. Moreover, it supports a smooth and fast integration of data from the research community into the Global Heat Flow Database of the IHFC.
There are many useful strategies for preparing GIS data for Next Generation 9-1-1. One step of preparation is making sure that all of the required fields exist (and, where required, are populated) before loading into the system. While some localities add needed fields to their local data, others use an extract, transform, and load process to transform their local data into a Next Generation 9-1-1 GIS data model, and still others may do a combination of both. There are several strategies and considerations when loading data into a Next Generation 9-1-1 GIS data model. The best place to start is a GIS data model schema template, or an empty file with the needed data layout to which you can append your data. Here are some resources to help you out.
1) The National Emergency Number Association (NENA) has a GIS template available on the Next Generation 9-1-1 GIS Data Model page.
2) The NENA GIS Data Model template uses a WGS84 coordinate system and pre-builds many domains. The slides from the Virginia NG9-1-1 User Group meeting in May 2021 explain these elements and offer some tips and suggestions for working with them, including tips on using the field calculator.
3) VGIN adapted the NENA GIS Data Model into versions for Virginia State Plane North and Virginia State Plane South, as Virginia recommends uploading in your local coordinates and having the upload tools consistently transform your data to the WGS84 (4326) parameters required by the Next Generation 9-1-1 system. These customized versions only include the Site Structure Address Point and Street Centerlines feature classes. Address Point domains are set for address number, state, and country. Street Centerline domains are set for address ranges, parity, one way, state, and country.
4) A sample extract, transform, and load (ETL) script for NG9-1-1 upload is also available.
Additional resources and recommendations on GIS-related topics are available on the VGIN 9-1-1 & GIS page.
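As a rough illustration of the transform step (not the VGIN sample script), the sketch below uses geopandas to rename hypothetical local street centerline fields to NENA-style names and reproject to WGS84; all file paths and field names are illustrative assumptions, not the actual schema.

```python
import geopandas as gpd

# Hypothetical local data and field names; substitute your own data and the
# exact field names from the NENA GIS Data Model (or VGIN state plane) template.
local = gpd.read_file("local_street_centerlines.shp")

field_map = {
    "ST_NAME": "St_Name",          # illustrative local -> NENA-style names
    "FROM_ADDR_L": "FromAddr_L",
    "TO_ADDR_L": "ToAddr_L",
    "FROM_ADDR_R": "FromAddr_R",
    "TO_ADDR_R": "ToAddr_R",
}
staged = local.rename(columns=field_map)

# The national NENA template uses WGS84 (EPSG:4326); VGIN's state plane versions
# keep local coordinates, so reproject only if appending to the national template.
staged = staged.to_crs(epsg=4326)
staged.to_file("ng911_centerlines_staged.gpkg", driver="GPKG")
```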
Background: Trauma is a significant public health issue that affects both mental and physical health. Healthcare delivery based on trauma-informed care (TIC) principles is designed to mitigate the risk of re-traumatization in healthcare settings to improve patient outcomes. Chronic pain is a common comorbidity of trauma and a common reason that people seek healthcare, including chiropractic care. The extent to which TIC training is integrated into chiropractic education and Doctor of Chiropractic Programs (DCPs) remains unclear.
Objective: This study aims to evaluate the presence of TIC principles in educational curricula documents from accredited DCPs across the United States and Canada to identify potential gaps in trauma-sensitive education within chiropractic training.
Methods: A scoping document analysis will be conducted using educational curricula documents (program handbooks, course catalogs, and course syllabi) from DCPs accredited by the Council on Chiropractic Education (CCE-USA). Documents will be evaluated for TIC-related search terms based on established frameworks from the Substance Abuse and Mental Health Services Administration and the Harvard Medical School TIC Core Competencies. The analysis will assess the presence of TIC principles such as safety, trust, empowerment, and cultural sensitivity. A phased approach will be used for data extraction, ensuring a comprehensive review of TIC integration.
Results: The study will quantify the inclusion of TIC principles in chiropractic education in the United States and Canada and identify trends or gaps related to TIC education.
Conclusion: Our findings can inform future curriculum review and development, ensuring DCPs integrate TIC effectively to enhance care for trauma-exposed patients.
Operational Analysis is a method of examining the current and historical performance of the operations and maintenance investments and measuring that performance against an established set of cost, schedule, and performance parameters. The Operational Analysis template is used as a guide in preparing and documenting SSA's Operational Analyses.
This workshop is a continuation of the DDI PowerPoint presentation given at the previous year's DLI Training in Kingston. It is intended as a primer for those interested in understanding the basic concepts of the Data Documentation Initiative (DDI) and the Document Type Definition (DTD) statements. This time participants will have the opportunity to take a closer look, examine the tags, determine criteria for selection, and create an XML template.
Prior template for requested daily data reports on testing, capacity and utilization, and patient flows to facilitate the public health response to the 2019 Novel Coronavirus (COVID-19).
Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:
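A minimal sketch of such a query in Python, assuming Splitgraph's SQL-over-HTTP endpoint on the Data Delivery Network; the table reference is a placeholder to be replaced with the repository shown on this dataset's Splitgraph page, and the "rows" key reflects the assumed JSON response shape:

```python
import requests

# Run a SQL query against the Splitgraph Data Delivery Network (DDN).
# "namespace/repository"."table" is a placeholder for the dataset's actual
# Splitgraph reference.
resp = requests.post(
    "https://data.splitgraph.com/sql/query/ddn",
    json={"sql": 'SELECT * FROM "namespace/repository"."table" LIMIT 10'},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json().get("rows", []):
    print(row)
```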
See the Splitgraph documentation for more information.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The dataset consists of 10000 JPG images with white backgrounds, 10000 JPG images with colored backgrounds (the same colors used in the paper), as well as 3×10000 JSON annotation files. The images are generated from 50 different templates. For each template, 200 images were generated. We provide annotations in three formats: our own original format, the COCO format, and a format compatible with HuggingFace Transformers. Background color varies across templates but not across instances from the same template.
In terms of objects, the dataset contains 24 different classes. The classes vary considerably in their numbers of occurrences and thus, the dataset is somewhat imbalanced.
The annotations contain bounding box coordinates, bounding box text and object classes.
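As a rough illustration of reading the COCO-format annotations, the sketch below uses Python's standard json module; the file name and the key holding the bounding-box text are assumptions, while the images/annotations/categories keys and the [x, y, width, height] box layout are standard COCO conventions.

```python
import json

# Hypothetical annotation file name for the COCO-format export.
with open("annotations_coco.json") as f:
    coco = json.load(f)

# Map category ids to class names (standard COCO structure).
categories = {c["id"]: c["name"] for c in coco["categories"]}

for ann in coco["annotations"][:5]:
    x, y, w, h = ann["bbox"]            # COCO boxes are [x, y, width, height]
    label = categories[ann["category_id"]]
    text = ann.get("text", "")          # assumed field carrying the box text
    print(f"{label}: '{text}' at ({x}, {y}, {w}, {h})")
```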
We propose two methods for training and evaluating models. The models were trained until convergence, i.e., until the model reached its best performance on the validation split and started overfitting. The model version used for evaluation is the one with the best validation performance.
First evaluation strategy:
For each template, the generated images are randomly split into 3 subsets: training, validation and testing.
In this scenario, the model trains on all templates and is thus tested on new images rather than new layouts.
Second evaluation strategy:
The real templates are randomly split into a training set and a common set of templates for validation and testing. All the variants created from the training templates are used as the training dataset. The same is done to form the validation and testing datasets. The validation and testing sets are thus made up of the same templates but of different images.
This approach tests the models' performance on different unseen templates/layouts, rather than the same templates with different content.
We provide the data splits we used for every evaluation scenario. We also provide the background colors we used as augmentation for each template.
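To make the two evaluation strategies concrete, here is a minimal Python sketch, assuming each image is described by a record carrying a template_id field; the record layout, split ratios, and number of held-out templates are assumptions for illustration, not the published splits.

```python
import random
from collections import defaultdict

def split_within_templates(records, seed=0, ratios=(0.8, 0.1, 0.1)):
    """First strategy: every template contributes images to train/val/test."""
    rng = random.Random(seed)
    by_template = defaultdict(list)
    for rec in records:
        by_template[rec["template_id"]].append(rec)
    train, val, test = [], [], []
    for recs in by_template.values():
        rng.shuffle(recs)
        n_train = int(len(recs) * ratios[0])
        n_val = int(len(recs) * ratios[1])
        train += recs[:n_train]
        val += recs[n_train:n_train + n_val]
        test += recs[n_train + n_val:]
    return train, val, test

def split_by_template(records, seed=0, n_holdout_templates=10):
    """Second strategy: whole templates are held out; validation and test share
    the held-out templates but use different images from them."""
    rng = random.Random(seed)
    templates = sorted({rec["template_id"] for rec in records})
    rng.shuffle(templates)
    held_out = set(templates[:n_holdout_templates])
    train = [r for r in records if r["template_id"] not in held_out]
    rest = [r for r in records if r["template_id"] in held_out]
    rng.shuffle(rest)
    half = len(rest) // 2
    return train, rest[:half], rest[half:]
```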
This is the reporting template for SDG indicator 1.4.2, which UN-Habitat sends to countries on an annual basis to submit the most recent data at the city and national levels. Please click on the [DOWNLOAD] button to get the .xlsx template. Last updated: November 2024.