CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The datasets containing metadata in MODS for the entire BHL collection (both hosted and externally linked content) can be downloaded from the following locations:
bhlitem.mods.xml bhlitem.mods.xml.zip bhlpart.mods.xml bhlpart.mods.xml.zip bhltitle.mods.xml bhltitle.mods.xml.zip
For contextual information and key definitions about this dataset see the Biodiversity Heritage Library Open Data Collection.
Data Dictionary: https://www.loc.gov/standards/mods/v3/mods-3-8.xsd
Release Date: First of the month
Frequency: Monthly
bureauCode: 452:11
Access Level: public
Rights: http://rightsstatements.org/vocab/NoC-US/
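As a rough illustration of working with these files, a MODS record can be read with Python's standard library. The sample record below is invented for the sketch, but the element names and the http://www.loc.gov/mods/v3 namespace follow the MODS v3 schema referenced in the data dictionary:

```python
import xml.etree.ElementTree as ET

MODS_NS = {"mods": "http://www.loc.gov/mods/v3"}

# Invented sample record; real files wrap many <mods> records in a collection.
sample = """<modsCollection xmlns="http://www.loc.gov/mods/v3">
  <mods>
    <titleInfo><title>Example title</title></titleInfo>
    <identifier type="uri">https://www.biodiversitylibrary.org/item/12345</identifier>
  </mods>
</modsCollection>"""

root = ET.fromstring(sample)
for record in root.findall("mods:mods", MODS_NS):
    # findtext returns the text of the first matching element, or None.
    title = record.findtext("mods:titleInfo/mods:title", namespaces=MODS_NS)
    uri = record.findtext("mods:identifier[@type='uri']", namespaces=MODS_NS)
    print(title, uri)
```

For the full bhlitem.mods.xml file, the same pattern can be applied with ET.iterparse to avoid loading the whole collection into memory.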
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The datasets containing metadata in MODS for only items hosted by BHL can be downloaded from the following locations:
bhlitem.mods.xml bhlitem.mods.xml.zip bhlpart.mods.xml bhlpart.mods.xml.zip bhltitle.mods.xml bhltitle.mods.xml.zip
For contextual information and key definitions about this dataset see the Biodiversity Heritage Library Open Data Collection.
Data Dictionary: https://www.loc.gov/standards/mods/v3/mods-3-8.xsd
Release Date: First of the month
Frequency: Monthly
bureauCode: 452:11
Access Level: public
Rights: http://rightsstatements.org/vocab/NoC-US/
Custom license: https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.7910/DVN/ILMCGS
Developers and other Harvard data customers can find help guides, definitions, and other key information on working with data import and export here. Still looking for what you need? Please contact iam@harvard.edu. You will be required to log in before downloading these files.
For All Data Customers: XML Schema Definition for Harvard People Data
For Import Customers: IdM Import Developers' Guide: File Names and Delivery; IdM Import User's Guide: Log File Errors and Warnings; IdM XML Email Import Provider's Guide; IdM XML Student Directory Listing and Emergency Contact Data Provider's Guide
For Export Customers: Developers' Guide to Data Display and Applying Privacy; IdM XML Export Data User's Guide; IdM Export Developers' Guide: File Names and Privacy
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
TPB Public Register
To view XML content follow these steps:
Click ‘OK’ for any prompts that are displayed
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The published data is research data from the publication "Extending Information Delivery Specifications for digital building permit requirements" (https://doi.org/10.1016/j.dibe.2024.100560). This publication examines the potential for extending the Information Delivery Specification (IDS) schema to facilitate its integration into the building permit process.
IDS is an open specification based on XML for defining and verifying information requirements for digital building models in the IFC format (an open format for BIM models).
The publication presents concepts for extending IDS to define information requirements for escape route analysis and code compliance checks against Austrian fire resistance regulations.
This dataset contains the results of the mentioned publication. This includes the edited IDS schema (XML Schema Definition - XSD), the created IDS files, and BCF files (BIM Collaboration Format) used for the validation. The IDS files were used as input for a self-developed IDS software to check two IFC test models. The two test models are published in related datasets:
Custom test model for escape route analysis in IFC format: https://doi.org/10.48436/hx8gz-zw339
Real-world test model for escape route analysis in IFC format: https://doi.org/10.48436/fnmrh-crh59
The checking results were saved in the BCF files. Therefore, the checking results can be visualised by applying the published BCF files to the corresponding IFC models.
The dataset contains three folders, one for each of the three data types: the edited IDS schema (XSD), the IDS files, and the BCF files.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The purpose of Directory (directory.gov.au) is to provide a guide to the structure, organisations and key people in the Australian Government.
A new and improved Directory.gov.au system was released in mid-2017 which consolidated the former Directory, AusGovBoards and Australian Government Organisations Register, into one system. The consolidated system contains key information about Australian Government entities, contact details of key stakeholders within organisations, and a listing of government board appointments.
This XML dataset is a full extract of the Directory content, updated daily.
Current (undetermined) and the most recent 100 days of determined planning applications at Surrey County Council, Surrey, UK. N.B. Surrey County Council does not oversee all planning applications for the county; SCC oversees applications of a specific nature, for example those related to waste, minerals, highways and schools. Data for the whole of Surrey County (all planning authorities), including API access, can be visualised and accessed here: http://digitalservices.surreyi.gov.uk/. Data is updated daily and formatted according to the ODUG/LGA national schema; see: http://schemas.opendata.esd.org.uk/PlanningApplications. Also available as JSON, XML and CSV. To save as CSV, use the CSV download link and 'save page as' .CSV for correct comma-separated formatting; this will give you a neatly formatted CSV file which can be opened in a spreadsheet package such as Excel or LibreOffice Calc. PSMA End User Licence: http://www.ordnancesurvey.co.uk/business-and-government/public-sector/mapping-agreements/end-user-licence.html
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Access to Data: The Differentially Expressed Protein Database (DEPD) is a publicly available, web-based database. It was designed to store the output of comparative proteomics studies and provides a query and analysis platform for further data mining. Currently, the DEPD contains information about more than 3,000 DEPs, manually extracted from the published literature, mostly from studies of serious human diseases including lung, breast and liver cancer. Towards establishing a data exchange standard for comparative proteomics, DEPD provides a new XML schema named CPXS 0.1 (Comparative Proteomics XML Schema). Additionally, a user-friendly web interface has been set up with tools for querying, visualizing and analyzing the results of published comparative proteomics studies. All of the DEPD data can be downloaded freely from the web site (http://protchem.hunnu.edu.cn/depd/).
This preliminary dataset contains the application/vnd.zenodo.v1+json JSON records of Zenodo deposits as retrieved on 2019-09-16.

Files:
zenodo-records-json-2019-09-16.tar.xz (Zenodo JSON records): XZ-compressed tar archive of individual JSON records as retrieved from Zenodo. Filenames reflect the record, e.g. 1310621.json was retrieved from https://zenodo.org/api/records/1310621 using content negotiation for application/vnd.zenodo.v1+json.
zenodo-records-json-2019-09-16-filtered.jsonseq.xz (concatenated Zenodo JSON records): XZ-compressed RFC 7464 JSON Sequence stream, readable by jq; a concatenation of the Zenodo JSON records. Order is not significant.
zenodo-records.sh (retrieve Zenodo JSON records): a retrospectively created Bash shell script that shows the commands used to retrieve the JSON files and concatenate them into the JSON sequence.
ro-crate-metadata.jsonld: RO-Crate 0.2 structured metadata.
ro-crate-preview.html: browser rendering of the RO-Crate structured metadata.
README.md: this dataset description.

License: This dataset is provided under the Apache License, version 2.0. Copyright 2019 The University of Manchester. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

CC0 for Zenodo metadata: The Zenodo metadata in zenodo-records-json-2019-09-16.tar.xz is reused under the terms of https://creativecommons.org/publicdomain/zero/1.0/.

Reproducibility: To retrieve the Zenodo JSON it was deemed necessary to use undocumented parts of the Zenodo API.
From the Zenodo source code it was identified that the REST template https://zenodo.org/api/records/{pid_value} could be used with pid_value as the numeric part of the OAI-PMH identifier, e.g. for oai:zenodo.org:1310621 the Zenodo JSON can be retrieved at https://zenodo.org/api/records/1310621. The JSON API supports content negotiation; the content types supported as of 2019-09-20 include:
application/vnd.zenodo.v1+json: the Zenodo record in Zenodo's internal JSON schema (v1)
application/ld+json: JSON-LD Linked Data using the http://schema.org/ vocabulary
application/x-datacite-v41+xml: DataCite v4 XML
application/marcxml+xml: MARC 21 XML

Using these (currently) undocumented parts of the Zenodo API avoids the need for HTML scraping while also giving individual complete records that are suitable to redistribute in a filtered dataset. This preliminary exploration will be adapted into a reproducible CWL workflow; for now it is included as a Bash script, zenodo-records.sh. Execution time was about 3 days from a server on the University of Manchester network with a single 1 Gbps network link.

The script does the following:
1) Retrieve each of the first 3.5 million Zenodo records as Zenodo JSON by iterating over possible numeric IDs (the maximum ID 3450000 was estimated from "Recent uploads").
2) Filter the list to exclude records that are not found, moved or deleted; the presence of the key conceptrecid is used as the marker.
3) Use jq to ensure each JSON record is on a single line.
4) Join the JSON files using the ASCII Record Separator (RS, 0x1e) to make an application/json-seq JSON text sequence stream.
5) Save the JSON stream as a single compressed file using xz.
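The filter-and-join steps the script performs can be sketched in Python. This is an illustrative sketch: the records below are invented stand-ins for JSON fetched from https://zenodo.org/api/records/{pid_value} with content negotiation for application/vnd.zenodo.v1+json; the RS-prefixed framing follows RFC 7464:

```python
import json

RS = "\x1e"  # ASCII Record Separator framing application/json-seq (RFC 7464)

# Invented stand-ins for retrieved records; a deleted/missing record's JSON
# lacks the conceptrecid key, which the script uses as its filter marker.
records = [
    {"conceptrecid": "1310620", "id": 1310621, "metadata": {"title": "x"}},
    {"status": 410, "message": "deleted"},  # no conceptrecid: filtered out
]

# Step 2: keep only records that carry the conceptrecid marker key.
kept = [r for r in records if "conceptrecid" in r]

# Steps 3-4: one single-line JSON text per record, each prefixed with RS.
jsonseq = "".join(RS + json.dumps(r, separators=(",", ":")) + "\n" for r in kept)
```

At the scale of ~3.5 million records this stream is what the final xz step compresses into a single manageable file.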
This data set was prepared by BORIS staff by processing the original vector data into raster files. The original data were received as ARC/INFO coverages or as export files from SERM. The data include information on forest parameters for the BOREAS SSA MSA. The data are stored in binary, image format files.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Rough lexicographic data for 62 Jula lexemes collected during January 2019 in western Burkina Faso by Coleman Donaldson in the following formats:
1) a .lift file (of the LIFT XML schema) exported from LexiquePro.
2) a .txt file in Toolbox format.
3) a .pdf file of a formatted export from LexiquePro.
4) a .rtf file of an export from LexiquePro.
5) a .pdf file of a formatted export from Toolbox's MDF tool.
6) a .rtf file of an export from Toolbox's MDF tool.
I follow the de facto official phonemic orthography, synthesizing the various national standards that linguists use, while also marking tone. Grave diacritics mark low tones and acute diacritics mark high tones. An unmarked vowel carries the same tone as the last marked vowel before it. A lexeme without any diacritics means I am unsure of its tone.
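The tone-marking rule can be made concrete with a small sketch (the function name and the vowel set are my assumptions for illustration, not part of the dataset):

```python
import unicodedata

LOW, HIGH = "\u0300", "\u0301"  # combining grave (low tone), acute (high tone)
VOWELS = set("aeiouɛɔ")  # assumed vowel inventory, for illustration only

def tones(word):
    """Per-vowel tones ('L', 'H', or None): an unmarked vowel carries the
    tone of the last marked vowel before it; no marks at all means unknown."""
    out, last = [], None
    # NFD puts each combining diacritic right after its base vowel.
    for ch in unicodedata.normalize("NFD", word):
        if ch in (LOW, HIGH) and out:
            last = "L" if ch == LOW else "H"
            out[-1] = last  # the mark applies to the vowel just seen
        elif ch.lower() in VOWELS:
            out.append(last)  # unmarked vowel inherits the last marked tone
    return out
```

For example, a word marked only on its first vowel propagates that tone to the following unmarked vowels, and a wholly unmarked word yields only None values.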
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
Historical SEC dataset containing all insider transactions (Form 4 filings). The data is public and sourced from the SEC's EDGAR database via its XML filings, lightly processed for easier consumption. Covers all Form 4 filings from Jan/20 to Jun/25.
Form 4s are noisy to work with: amended filings, multiple insiders per transaction, and inconsistent tables. This dataset provides clean, normalized insider-transaction data with a stable schema so you can backtest signals and monitor insider activity without scraping.
Update cadence: monthly (moving to daily as we scale). Source: U.S. SEC EDGAR Form 4 filings.
We're building a real-time API for new filings with clean JSON endpoints and low latency. If interested, sign up to our waiting list: 👉 https://secfilingapi.com/?utm_source=kaggle&utm_medium=dataset&utm_campaign=form4
Feedback & requests welcome in the Discussion tab.
DISCLAIMER: It is possible that inaccuracies or other errors were introduced into the data set during the process of extracting and compiling the data. The data set is intended to assist the public in analyzing data contained in Commission filings; however, it is not a substitute for those filings. Investors should review the full Commission filings before making any investment decision.
Licence Ouverte / Open Licence 1.0: https://www.etalab.gouv.fr/wp-content/uploads/2014/05/Open_Licence.pdf
License information was derived automatically
Maritime links connecting the islands and the mainland, managed by the Brittany region. The islands concerned are Bréhat, Ouessant, Molène, Sein, Groix, Belle-Île, Houat, Hoedic and Arz.