The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of gas-phase molecules. The goals are to provide a benchmark set of experimental data for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of gas-phase thermochemical properties. The data files linked to this record are a subset of the experimental data present in the CCCBDB.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data on water utilities for 151 national jurisdictions, for a range of years up to and including 2017 (the year range varies greatly by country and utility), covering service and utility parameters (Benchmark Database) and tariffs for 211 jurisdictions (Tariffs Database). Information includes cost recovery, connections, population served, financial performance, non-revenue water, residential and total supply, and total production. Data can be called up by utility, by group of utilities, and by comparison between utilities, including across the whole (global) utility database, enabling both country-level and global comparison for individual utilities. Data can be downloaded in xls format.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Performance comparison on the noisy benchmark database.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In the last two decades, alignment analyses have become an important technique in quantitative historical linguistics and dialectology. Phonetic alignment plays a crucial role in the identification of regular sound correspondences and deeper genealogical relations between and within languages and language families. Surprisingly, to date there are no easily accessible benchmark data sets for phonetic alignment analyses. Here we present a publicly available database of manually edited phonetic alignments which can serve as a platform for testing and improving the performance of automatic alignment algorithms. The database consists of a great variety of alignments drawn from a large number of different sources. The data is arranged in such a way that typical problems encountered in phonetic alignment analyses (metathesis, diversity of phonetic sequences) are represented and can be directly tested.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
You can see our case study results by using this database to select components and benchmark bridgeless buck-boost PFC converters.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Berlin SPARQL Benchmark (BSBM) is a suite of benchmarks built around an e-commerce use case [1]. We generated 21 versions of the dataset with different scale factors. The first dataset, with a scale factor of 100, contains about 7,000 vertices and 75,000 edges. We generated versions with scale factors between 2,000 and 40,000 in steps of 2,000. The largest dataset contains about 1.3 M vertices and 13 M edges. For our experiments in [2], we first use the different versions ordered from smallest to largest (version 0 to 20) to simulate a growing graph database. Subsequently, we reverse the order to emulate a shrinking graph database. Over all versions, the mean degree is 8.1 (+- 0.5), the mean in-degree is 4.6 (+- 0.3), and the mean out-degree is 9.8 (+- 0.2).
1. Christian Bizer, Andreas Schultz: The Berlin SPARQL Benchmark. Int. J. Semantic Web Inf. Syst. 5(2): 1-24 (2009)
2. Till Blume, David Richerby, Ansgar Scherp: Incremental and Parallel Computation of Structural Graph Summaries for Evolving Graphs. CIKM 2020: 75-84
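The growing-then-shrinking version ordering described above can be sketched as follows (a minimal sketch of the experiment schedule, assuming versions are simply indexed 0 to 20):

```python
# Sketch (assumption): simulate a growing then shrinking graph database by
# ordering the 21 BSBM dataset versions as described above. Version 0 is the
# smallest dataset (scale factor 100), version 20 the largest (40,000).
versions = list(range(21))

growing = versions                      # smallest -> largest: growing database
shrinking = list(reversed(versions))    # largest -> smallest: shrinking database

schedule = growing + shrinking          # full experiment schedule
```

Each scheduled version would then be loaded in turn and the incremental summary computation applied to the difference from the previous version.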
The following dataset includes "Active Benchmarks," which are provided to facilitate the identification of City-managed standard benchmarks. Standard benchmarks are for public and private use in establishing a point in space. Note: The benchmarks are referenced to the Chicago City Datum = 0.00 (CCD = 579.88 feet above mean tide New York). The City of Chicago Department of Water Management's (DWM) Topographic Benchmark is the source of the benchmark information contained in this online database. The information contained in the index card system was compiled by scanning the original cards, then transcribing some of this information to prepare a table and map. Over time, the DWM will contract services to field-verify the data and update the index card system and this online database. This dataset was last updated September 2011. Coordinates are estimated. To view the map, go to https://data.cityofchicago.org/Buildings/Elevation-Benchmarks-Map/kmt9-pg57 or, for a PDF map, go to http://cityofchicago.org/content/dam/city/depts/water/supp_info/Benchmarks/BMMap.pdf. Please read the Terms of Use: http://www.cityofchicago.org/city/en/narr/foia/data_disclaimer.html.
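The datum relationship stated above (CCD = 0.00, sitting 579.88 feet above mean tide New York) implies a simple offset conversion, sketched here for illustration:

```python
# Sketch: convert an elevation referenced to the Chicago City Datum (CCD)
# into feet above mean tide New York, using the offset stated in the
# dataset description (CCD 0.00 = 579.88 ft above mean tide New York).
CCD_OFFSET_FT = 579.88

def ccd_to_mean_tide_ny(elev_ccd_ft: float) -> float:
    """Convert a CCD-referenced elevation (feet) to feet above mean tide NY."""
    return elev_ccd_ft + CCD_OFFSET_FT

# A benchmark at CCD elevation 12.50 ft lies 592.38 ft above mean tide New York.
elevation_mt = ccd_to_mean_tide_ny(12.50)
```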
Attribution-NonCommercial-NoDerivs 2.5 (CC BY-NC-ND 2.5) https://creativecommons.org/licenses/by-nc-nd/2.5/
License information was derived automatically
NADA (Not-A-Database) is an easy-to-use geometric shape data generator that allows users to define non-uniform multivariate parameter distributions to test novel methodologies. The full open-source package is provided at GIT:NA_DAtabase. See Technical Report for details on how to use the provided package.
This database includes 3 repositories:
Each image can be used for classification (shape/color) or regression (radius/area) tasks.
All datasets can be modified and adapted to the user's research question using the included open source data generator.
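To illustrate how one generated sample can serve both task types, here is a minimal sketch; the field names and the beta radius distribution are illustrative assumptions, not the NADA package's actual API:

```python
import math
import random

# Sketch (assumption): one NADA-style sample with a non-uniform (beta)
# radius distribution; field names are illustrative, not the package's API.
def make_sample(rng: random.Random) -> dict:
    radius = 1.0 + 9.0 * rng.betavariate(2, 5)  # skewed towards small radii
    return {
        "shape": rng.choice(["circle", "square", "triangle"]),  # classification
        "color": rng.choice(["red", "green", "blue"]),          # classification
        "radius": radius,                                        # regression
        "area": math.pi * radius ** 2,  # regression (circle area shown here)
    }

sample = make_sample(random.Random(42))
```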
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Results of the IGUANA Benchmark in 2015/16 for a truncated DBpedia dataset containing 50% of the initial (100%) dataset.
The Benchmark Energy & Geometry Database (BEGDB) collects results of highly accurate quantum mechanics (QM) calculations of molecular structures, energies and properties. These data can serve as benchmarks for testing and parameterization of other computational methods.
This resource is the implementation in XML Schema [1] of a data model that describes the Additive Manufacturing Benchmark (AM Bench) 2022 series data. It provides a robust set of metadata for the build processes, their resulting specimens, and measurements made on these in the context of the AM Bench 2022 project.

The schema was designed to support typical science questions that users of a database with metadata about the AM Bench results might wish to pose. The metadata include identifiers assigned to build products, derived specimens, and measurements; links to relevant journal publications, documents, and illustrations; provenance of specimens, such as source materials and details of the build process; measurement geometry, instruments, and other configurations used in measurements; and access information for raw and processed data, as well as analysis descriptions of these datasets.

This data model is an abstraction of these metadata, designed using the concepts of inheritance, normalization, and reusability of an object-oriented language for ease of extensibility and maintenance. It is simple to incorporate new metadata as needed.

A CDCS [2] database at NIST was filled with metadata provided by the contributors to the AM Bench project. They entered values for the metadata fields for an AM Bench measurement, specimen, or build process in tabular spreadsheets. These entries were translated to XML documents compliant with the schema using a set of Python scripts. The generated XML documents were loaded into the database with a persistent identifier (PID) assigned by the database.

[1] https://www.w3.org/XML/Schema
[2] https://www.nist.gov/itl/ssd/information-systems-group/configurable-data-curation-system-cdcs/about-cdcs
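The spreadsheet-to-XML translation step can be sketched roughly as follows; the element and field names here are illustrative assumptions, not the actual AM Bench schema:

```python
import xml.etree.ElementTree as ET

# Sketch (assumption): turn one tabular metadata row into an XML document.
# Tag and field names are illustrative; the real AM Bench schema differs.
def row_to_xml(row: dict) -> ET.Element:
    doc = ET.Element("measurement", id=row["id"])
    for field in ("specimen", "instrument", "publication"):
        if row.get(field):  # emit only the metadata fields that were filled in
            child = ET.SubElement(doc, field)
            child.text = row[field]
    return doc

row = {"id": "AMB2022-01-M1", "specimen": "bridge build", "instrument": "XCT"}
xml_text = ET.tostring(row_to_xml(row), encoding="unicode")
```

In the actual pipeline, the generated documents would additionally be validated against the XML Schema before being loaded into the CDCS database.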
This dataset contains two Wi-Fi databases (one for training and one for test/estimation purposes in indoor positioning applications), collected in a crowdsourced mode (i.e., via 21 different devices and different users), together with a benchmarking utility software (in Matlab and Python) to illustrate various algorithms of indoor positioning based solely on WiFi information (MAC addresses and RSS values). The data was collected in a 4-floor university building in Tampere, Finland, during Jan-Aug 2017 and it comprises 687 training fingerprints and 3951 test or estimation fingerprints. 13.10.2017: Version 2 uploaded; the revised version contains improved readme files and improved Python SW. The dataset and/or the associated software are to be cited as follows: E.S. Lohan, J. Torres-Sospedra, P. Richter, H. Leppäkoski, J. Huerta, A. Cramariuc, “Crowdsourced WiFi-fingerprinting database and benchmark software for indoor positioning”, Zenodo repository, DOI 10.5281/zenodo.889798
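A minimal sketch of the kind of fingerprint-based positioning the bundled software illustrates (plain nearest-neighbour matching on RSS vectors; the data layout below is hypothetical, not the dataset's actual format):

```python
# Sketch (assumption): 1-nearest-neighbour indoor positioning on WiFi RSS
# fingerprints. Each fingerprint maps an access point MAC address to an RSS
# value (dBm); APs missing from a fingerprint get a very weak floor value.
MISSING_RSS = -110.0

def distance(fp_a: dict, fp_b: dict) -> float:
    """Euclidean distance between two MAC -> RSS fingerprints."""
    macs = set(fp_a) | set(fp_b)
    return sum((fp_a.get(m, MISSING_RSS) - fp_b.get(m, MISSING_RSS)) ** 2
               for m in macs) ** 0.5

def locate(query: dict, training: list) -> tuple:
    """Return the (x, y, floor) of the closest training fingerprint."""
    best = min(training, key=lambda entry: distance(query, entry["rss"]))
    return best["position"]

training = [
    {"rss": {"aa:bb": -40.0, "cc:dd": -70.0}, "position": (1.0, 2.0, 1)},
    {"rss": {"aa:bb": -80.0, "cc:dd": -45.0}, "position": (9.0, 3.0, 2)},
]
estimate = locate({"aa:bb": -42.0, "cc:dd": -68.0}, training)
```

The benchmarking software in the dataset implements more elaborate variants of this idea (e.g. weighted k-nearest neighbours) in Matlab and Python.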
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
Benchmark is a Point FeatureClass representing land-surveyed benchmarks in Cupertino. Benchmarks are stable sites used to provide elevation data. It is primarily used as a reference layer. The layer is updated as needed by the GIS department. Benchmark has the following fields:
OBJECTID: Unique identifier automatically generated by Esri type: OID, length: 4, domain: none
ID: Unique identifier assigned to the Benchmark type: Integer, length: 4, domain: none
REF_MARK: The reference mark associated with the Benchmark type: String, length: 10, domain: none
ELEV: The elevation of the Benchmark type: Double, length: 8, domain: none
Shape: Field that stores geographic coordinates associated with feature type: Geometry, length: 4, domain: none
Description: A more detailed description of the Benchmark type: String, length: 200, domain: none
Owner: The owner of the Benchmark type: String, length: 10, domain: none
GlobalID: Unique identifier automatically generated for features in enterprise database type: GlobalID, length: 38, domain: none
Operator: The user responsible for updating this database type: String, length: 255, domain: OPERATOR
last_edited_date: The date the database row was last updated type: Date, length: 8, domain: none
created_date: The date the database row was initially created type: Date, length: 8, domain: none
VerticalDatum: The vertical datum associated with the Benchmark type: String, length: 100, domain: none
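For readers consuming this layer programmatically, the attribute table above can be modelled roughly as follows (a plain-Python sketch following the field list; it is not an official client for the layer):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Sketch: a plain-Python model of the Benchmark attribute table described
# above. Geometry and GlobalID are kept as simple strings here; the real
# feature class stores them as Esri Geometry and GlobalID types.
@dataclass
class Benchmark:
    objectid: int                 # OBJECTID, generated by Esri
    id: int                       # ID assigned to the Benchmark
    ref_mark: str                 # REF_MARK, up to 10 chars
    elev: float                   # ELEV, elevation of the Benchmark
    description: str              # up to 200 chars
    owner: str                    # up to 10 chars
    operator: str                 # constrained by the OPERATOR domain
    vertical_datum: str = ""      # VerticalDatum, up to 100 chars
    last_edited_date: Optional[date] = None
    created_date: Optional[date] = None

bm = Benchmark(1, 101, "RM-7", 236.5, "Brass disk at City Hall",
               "City", "gis_user")
```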
The IARPA Janus Benchmark A (IJB-A) database was developed to make the face recognition task more challenging by collecting facial images with wide variations in pose, illumination, expression, resolution, and occlusion. IJB-A was constructed by collecting 5,712 images and 2,085 videos from 500 identities, an average of 11.4 images and 4.2 videos per identity.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The European Business Performance database describes the performance of the largest enterprises in the twentieth century. It covers eight countries that together consistently account for over 80 per cent of western European GDP: Great Britain, Germany, France, Belgium, Italy, Spain, Sweden, and Finland. Data have been collected for five benchmark years, namely on the eve of WWI (1913), before the Great Depression (1927), at the extremes of the golden age (1954 and 1972), and in 2000. The database comprises two distinct datasets. The Small Sample (625 firms) includes the largest enterprises in each country across all industries (economy-wide). To avoid over-representation of certain countries and sectors, countries contribute a number of firms roughly proportionate to the size of the economy: 30 firms from Great Britain, 25 from Germany, 20 from France, 15 from Italy, 10 each from Belgium, Spain, and Sweden, and 5 from Finland. By the same token, a cap has been set on the number of financial firms entering the sample, ranging from up to 6 for Britain down to 1 for Finland. The second dataset, or Large Sample (1,167 firms), is made up of the largest firms per industry. Industries are selected so as to take into account long-term technological developments and the rise of entirely new products and services. Firms have been individually classified using the two-digit ISIC Rev. 3.1 codes, then grouped under a manageable number of industries. Broadly speaking, the two samples have rather distinct focuses: the Small Sample is biased in favour of sheer bigness, whereas the Large Sample emphasizes industries. As far as size and performance indicators are concerned, total assets was chosen as the main size measure for the first three benchmarks and turnover for 1972 and 2000 (financial intermediaries, though, are ranked by total assets throughout the database).
Performance is gauged by means of two financial ratios, namely return on equity and shareholders' return, i.e. the percentage year-on-year change in share price based on year-end values. In order to smooth out volatility, performance figures at each benchmark have been averaged over three consecutive years (for instance, performance in 1913 reflects average performance in 1911, 1912, and 1913). All figures were collected in national currency and converted to US dollars at current year-average exchange rates.
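The shareholders' return computation described above can be sketched as follows (the prices are illustrative, not data from the database):

```python
# Sketch: shareholders' return as the percentage year-on-year change in
# year-end share price, averaged over three consecutive years as described
# above (e.g. the 1913 figure averages the 1911, 1912, and 1913 returns).
def yearly_return(prev_close: float, close: float) -> float:
    return 100.0 * (close - prev_close) / prev_close

def benchmark_return(year_end_prices: dict, benchmark_year: int) -> float:
    years = [benchmark_year - 2, benchmark_year - 1, benchmark_year]
    returns = [yearly_return(year_end_prices[y - 1], year_end_prices[y])
               for y in years]
    return sum(returns) / len(returns)

# Illustrative year-end prices in national currency:
prices = {1910: 100.0, 1911: 110.0, 1912: 99.0, 1913: 108.9}
avg = benchmark_return(prices, 1913)  # averages +10%, -10%, +10%
```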
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The usefulness of metadata in the automatic version of ACMANTv5 was tested. A benchmark database has been developed, consisting of 41 datasets with 20,500 networks of 170,000 synthetic monthly temperature time series and the related metadata dates. The research was supported by the Catalan Meteorological Service. The research results will be published in the open-access MDPI journal Atmosphere.
See more in the "Readme.txt" file of the dataset.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the list of unbound receptors, peptides, and natives that was used for the PatchMAN BSA filtering paper.
It also contains the databases used for 1) searching with MASTER and 2) extracting fragments with MASTER.
BibTeX reference:

@article{li2024can,
  title={Can LLM Already Serve as a Database Interface? A Big Bench for Large-Scale Database Grounded Text-to-SQLs},
  author={Li, Jinyang and Hui, Binyuan and Qu, Ge and Yang, Jiaxi and Li, Binhua and Li, Bowen and Wang, Bailin and Qin, Bowen and Geng, Ruiying and Huo, Nan and others},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  year={2024}
}
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Investments in infrastructure have been on the development agenda of Latin American and Caribbean (LCR) countries as they move towards economic and social progress. Investing in infrastructure is investing in human welfare by providing access to, and quality of, basic infrastructure services. Improving the performance of the electricity sector is one such major infrastructure initiative and the focus of this benchmarking data. A key initiative for both publicly and privately owned distribution utilities has been to upgrade their efficiency as well as to increase the coverage and quality of service. To accomplish this goal, this initiative serves as a clearing house for information on country- and utility-level performance of the electricity distribution sector. It allows countries and utilities to benchmark their performance against comparator utilities and countries. In doing so, this benchmarking data contributes to the improvement of the electricity sector by filling knowledge gaps in the identification of the best performers (and practices) in the region. The benchmarking database consists of detailed information on 25 countries and 249 utilities in the region. The data collected for this benchmarking project represents 88 percent of the electrification in the region. Through in-house and field data collection, consultants compiled data on accomplishments in output, coverage, input, labor productivity, operating performance, quality of service, prices, and ownership. By serving as a mirror of good performance, the report allows for a comparative analysis and the ranking of utilities and countries according to the indicators used to measure performance. Although significant efforts have been made to ensure data comparability and consistency across time and utilities, the World Bank and ESMAP do not guarantee the accuracy of the data included in this work.
Acknowledgement: This benchmarking database was prepared by a core team consisting of Luis Alberto Andres (Co-Task Team Leader), Jose Luis Guasch (Co-Task Team Leader), Julio A. Gonzalez, Georgeta Dragoiu, and Natalie Giannelli. The team benefited from data contributions by Jordan Z. Schwartz (Senior Infrastructure Specialist, LCSTR), Lucio Monari (Lead Energy Economist, LCSEG), Katharina B. Gassner (Senior Economist, FEU), and Martin Rossi (consultant). Funding was provided by the Energy Sector Management Assistance Program (ESMAP) and the World Bank. Comments and suggestions are welcome; contact Luis Andres (landres@worldbank.org).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Results of the IGUANA Benchmark in 2015/16 for the initial 100% DBpedia dataset.