100+ datasets found
  1. NIST Computational Chemistry Comparison and Benchmark Database - SRD 101

    • catalog.data.gov
    • data.amerigeoss.org
    • +1 more
    Updated Jul 9, 2025
    Cite
    NIST Computational Chemistry Comparison and Benchmark Database - SRD 101 [Dataset]. https://catalog.data.gov/dataset/nist-computational-chemistry-comparison-and-benchmark-database-srd-101-e19c1
    Dataset provided by
    National Institute of Standards and Technology (http://www.nist.gov/)
    Description

    The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of gas-phase molecules. The goals are to provide a benchmark set of experimental data for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of gas-phase thermochemical properties. The data files linked to this record are a subset of the experimental data present in the CCCBDB.
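
    Below is a minimal sketch of the comparison the CCCBDB is built for: experimental versus computed enthalpies of formation over the benchmark molecule set. The file and column names are illustrative assumptions, not the CCCBDB's actual export layout.

    ```python
    # Hypothetical file/column names; the real CCCBDB export layout may differ.
    import pandas as pd

    exp = pd.read_csv("cccbdb_experimental.csv")   # columns: molecule, dHf_exp (kJ/mol)
    calc = pd.read_csv("method_predictions.csv")   # columns: molecule, dHf_calc (kJ/mol)

    merged = exp.merge(calc, on="molecule")        # align on the benchmark molecule set
    merged["error"] = merged["dHf_calc"] - merged["dHf_exp"]
    print("MAE:", merged["error"].abs().mean(), "kJ/mol")
    ```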

  2. IBNET Benchmarking Database

    • wbwaterdata.org
    Updated Mar 18, 2020
    Cite
    (2020). IBNET Benchmarking Database [Dataset]. https://wbwaterdata.org/dataset/ibnet-benchmarking-database
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data on water utilities in 151 national jurisdictions, for a range of years up to and including 2017 (the year range varies greatly by country and utility), covering service and utility parameters (Benchmark Database), plus tariffs for 211 jurisdictions (Tariffs Database). Information includes cost recovery, connections, population served, financial performance, non-revenue water, residential and total supply, and total production. Data can be retrieved for a single utility, for a group of utilities, or as comparisons between utilities, up to the whole (global) utility database, enabling both country-level and global comparisons for individual utilities. Data can be downloaded in xls format.
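
    As a sketch of the cross-utility comparison this enables once an xls file is downloaded (the file and column names are assumptions for illustration):

    ```python
    # Assumed file/column names; the real IBNET export layout may differ.
    import pandas as pd

    df = pd.read_excel("ibnet_benchmark.xls")             # .xls needs the xlrd engine
    nrw = df.groupby("utility")["non_revenue_water"].mean()
    print(nrw.sort_values().head(10))                     # ten best performers
    ```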

  3. Data from: Benchmark AFLOW Data Sets for Machine Learning

    • figshare.com
    zip
    Updated Mar 8, 2020
    Cite
    Conrad Clement; Steven Kauwe; Taylor Sparks (2020). Benchmark AFLOW Data Sets for Machine Learning [Dataset]. http://doi.org/10.6084/m9.figshare.11954742.v1
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Conrad Clement; Steven Kauwe; Taylor Sparks
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Materials informatics is increasingly finding ways to exploit machine learning algorithms. Techniques such as decision trees, ensemble methods, support vector machines, and a variety of neural network architectures are used to predict likely material characteristics and property values. Supplemented with laboratory synthesis, applications of machine learning to compound discovery and characterization represent one of the most promising research directions in materials informatics. A shortcoming of this trend, in its current form, is a lack of standardized materials data sets on which to train, validate, and test model effectiveness. Applied machine learning research depends on benchmark data to make sense of its results. Fixed, predetermined data sets allow for rigorous model assessment and comparison. Machine learning publications that don't refer to benchmarks are often hard to contextualize and reproduce. In this data descriptor article, we present a collection of data sets of different material properties taken from the AFLOW database. We describe them, the procedures that generated them, and their use as potential benchmarks. We provide a compressed ZIP file containing the data sets, and a GitHub repository of associated Python code. Finally, we discuss opportunities for future work incorporating the data sets and creating similar benchmark collections.
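
    A minimal sketch of using one of these sets as a fixed benchmark (the CSV name and target column are assumptions; see the associated GitHub repository for the authors' actual code). A fixed random_state keeps the split reproducible, which is the point of a predetermined benchmark.

    ```python
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Assumed file/column names for one property set extracted from the ZIP.
    df = pd.read_csv("aflow_bulk_modulus.csv")
    X, y = df.drop(columns=["target"]), df["target"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
    print("R^2 on the held-out benchmark split:", model.score(X_te, y_te))
    ```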

  4. Global Data Literacy Benchmark

    • datatothepeople.org
    Updated Aug 14, 2020
    Cite
    Data To The People (2020). Global Data Literacy Benchmark [Dataset]. https://www.datatothepeople.org/gdlb
    Dataset authored and provided by
    Data To The People
    Description

    Dataset enabling organizations to benchmark their data literacy capability globally.

  5. Data from: Benchmark Database Containing Binary-System-High-Quality-Certified Data for Cross-Comparing Thermodynamic Models and Assessing Their Accuracy

    • figshare.com
    • acs.figshare.com
    xlsx
    Updated Jun 1, 2023
    Cite
    Jean-Noël Jaubert; Yohann Le Guennec; Andrés Piña-Martinez; Nicolas Ramirez-Velez; Silvia Lasala; Bastian Schmid; Ilias K. Nikolaidis; Ioannis G. Economou; Romain Privat (2023). Benchmark Database Containing Binary-System-High-Quality-Certified Data for Cross-Comparing Thermodynamic Models and Assessing Their Accuracy [Dataset]. http://doi.org/10.1021/acs.iecr.0c01734.s003
    Dataset provided by
    ACS Publications
    Authors
    Jean-Noël Jaubert; Yohann Le Guennec; Andrés Piña-Martinez; Nicolas Ramirez-Velez; Silvia Lasala; Bastian Schmid; Ilias K. Nikolaidis; Ioannis G. Economou; Romain Privat
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    In the last two centuries, equations of state (EoSs) have become a key tool for the correlation and prediction of thermodynamic properties of fluids. They can be applied to pure substances as well as to mixtures, and they constitute the heart of commercially available computer-aided process-design software. In the last 20 years, thousands of publications have been devoted to the development of sophisticated models or to the improvement of existing EoSs. Chemical engineering thermodynamics is thus a field under steady development, and to assess the accuracy of a thermodynamic model or to cross-compare two models, it is necessary to confront model predictions with experimental data. In this context, a reliable free-to-access benchmark database is pivotal and absolutely necessary. The goal of this paper is thus to present a database specifically designed to assess the accuracy of a thermodynamic model or to cross-compare models, to explain how it was developed, and to show how to use it. A total of 200 nonelectrolytic binary systems have been selected and divided into nine groups according to the associating character of the components, i.e., their ability to be involved in a hydrogen bond (the nature and strength of the association phenomena are indeed considered a measure of the complexity of modeling the thermodynamic properties of mixtures). The methodology for assessing the performance of a given model is then described. As an illustration, the Peng–Robinson EoS with classical van der Waals mixing rules and a temperature-dependent binary interaction parameter (kij) has been used to correlate the numerous data included in the proposed database, and its performance has been assessed following the proposed methodology.
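
    A minimal sketch (not the authors' code) of the model named above: the Peng–Robinson EoS with classical van der Waals mixing rules and a binary interaction parameter kij, here for a binary mixture. The critical constants are standard tabulated values; the k12 value is illustrative, not fitted to this database.

    ```python
    import math

    R = 8.314  # J/(mol*K)

    def pr_pure(T, Tc, Pc, omega):
        """Pure-component Peng-Robinson parameters a(T) and b."""
        kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
        alpha = (1 + kappa * (1 - math.sqrt(T / Tc))) ** 2
        a = 0.45724 * R**2 * Tc**2 / Pc * alpha
        b = 0.07780 * R * Tc / Pc
        return a, b

    def vdw_mix(x, a, b, k12):
        """Classical van der Waals one-fluid mixing rules for a binary."""
        a12 = math.sqrt(a[0] * a[1]) * (1 - k12)
        a_mix = x[0] ** 2 * a[0] + 2 * x[0] * x[1] * a12 + x[1] ** 2 * a[1]
        b_mix = x[0] * b[0] + x[1] * b[1]
        return a_mix, b_mix

    def pr_pressure(T, v, a_mix, b_mix):
        """Pressure-explicit Peng-Robinson form P(T, v)."""
        return R * T / (v - b_mix) - a_mix / (v**2 + 2 * b_mix * v - b_mix**2)

    # Equimolar CO2(1)/ethane(2) at 250 K and a molar volume of 1e-4 m3/mol.
    a1, b1 = pr_pure(250.0, 304.13, 7.3773e6, 0.2239)  # CO2
    a2, b2 = pr_pure(250.0, 305.32, 4.8722e6, 0.0995)  # ethane
    a_m, b_m = vdw_mix([0.5, 0.5], [a1, a2], [b1, b2], k12=0.13)  # illustrative k12
    print(pr_pressure(250.0, 1.0e-4, a_m, b_m), "Pa")
    ```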

  6. Elevation Benchmarks

    • catalog.data.gov
    • data.cityofchicago.org
    • +3 more
    Updated Dec 2, 2023
    Cite
    data.cityofchicago.org (2023). Elevation Benchmarks [Dataset]. https://catalog.data.gov/dataset/elevation-benchmarks
    Dataset provided by
    data.cityofchicago.org
    Description

    The following dataset includes "Active Benchmarks," which are provided to facilitate the identification of City-managed standard benchmarks. Standard benchmarks are for public and private use in establishing a point in space. Note: the benchmarks are referenced to the Chicago City Datum = 0.00 (CCD = 579.88 feet above mean tide New York). The City of Chicago Department of Water Management's (DWM) Topographic Benchmark is the source of the benchmark information contained in this online database. The information contained in the index card system was compiled by scanning the original cards, then transcribing some of this information to prepare a table and map. Over time, the DWM will contract services to field-verify the data and update the index card system and this online database. This dataset was last updated September 2011. Coordinates are estimated. To view the map, go to https://data.cityofchicago.org/Buildings/Elevation-Benchmarks-Map/kmt9-pg57 or, for a PDF map, go to http://cityofchicago.org/content/dam/city/depts/water/supp_info/Benchmarks/BMMap.pdf. Please read the Terms of Use: http://www.cityofchicago.org/city/en/narr/foia/data_disclaimer.html.
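
    The datum note above implies a fixed-offset conversion; a minimal sketch (the function name is ours, for illustration):

    ```python
    # Chicago City Datum (CCD) 0.00 = 579.88 ft above mean tide, New York.
    def ccd_to_ny_mean_tide_ft(elev_ccd_ft: float) -> float:
        return elev_ccd_ft + 579.88

    print(ccd_to_ny_mean_tide_ft(12.47))  # a 12.47 ft CCD benchmark -> 592.35 ft
    ```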

  7. Bayesian Network benchmark Datasets and mixed data

    • ieee-dataport.org
    Updated Sep 6, 2023
    Cite
    Ruijing Cui (2023). Bayesian Network benchmark Datasets and mixed data [Dataset]. https://ieee-dataport.org/documents/bayesian-network-benchmark-datasets-and-mixed-data
    Authors
    Ruijing Cui
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Contains the benchmark Bayesian network dataset

  8. Performance comparison on the benchmark noisy database.

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Matthieu Doyen; Di Ge; Alain Beuchée; Guy Carrault; Alfredo I. Hernández (2023). Performance comparison on the benchmark noisy database. [Dataset]. http://doi.org/10.1371/journal.pone.0223785.t003
    Dataset provided by
    PLOS ONE
    Authors
    Matthieu Doyen; Di Ge; Alain Beuchée; Guy Carrault; Alfredo I. Hernández
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Performance comparison on the benchmark noisy database.

  9. agentic-data-access-benchmark

    • huggingface.co
    Updated Nov 1, 2024
    Cite
    Hasura (2024). agentic-data-access-benchmark [Dataset]. https://huggingface.co/datasets/hasura/agentic-data-access-benchmark
    Available as: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset provided by
    Hasura, Inc.
    Authors
    Hasura
    Description

    Agentic Data Access Benchmark (ADAB)

    The Agentic Data Access Benchmark is a set of real-world questions over a few "closed domains," intended to illustrate the evaluation of closed-domain AI assistants/agents. Closed domains are domains whose data is not implicitly available in the LLM because it resides in secure or private systems, e.g., enterprise databases, SaaS applications, etc., and AI solutions require mechanisms to connect an LLM to such data. If you are evaluating an AI product or building your… See the full description on the dataset page: https://huggingface.co/datasets/hasura/agentic-data-access-benchmark.
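
    A minimal sketch of pulling the benchmark from the Hugging Face Hub with the `datasets` library (the split name is an assumption; check the dataset card):

    ```python
    from datasets import load_dataset

    # Assumes a "train" split; see the dataset card for the actual splits.
    adab = load_dataset("hasura/agentic-data-access-benchmark", split="train")
    print(adab[0])  # one real-world question over a closed domain
    ```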

  10. NADA-SynShapes: A synthetic shape benchmark for testing probabilistic deep learning models

    • zenodo.org
    text/x-python, zip
    Updated Apr 16, 2025
    Cite
    Giulio Del Corso; Volpini Federico; Claudia Caudai; Davide Moroni; Sara Colantonio (2025). NADA-SynShapes: A synthetic shape benchmark for testing probabilistic deep learning models [Dataset]. http://doi.org/10.5281/zenodo.15194187
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Giulio Del Corso; Volpini Federico; Claudia Caudai; Davide Moroni; Sara Colantonio
    License

    Attribution-NonCommercial-NoDerivs 2.5 (CC BY-NC-ND 2.5): https://creativecommons.org/licenses/by-nc-nd/2.5/
    License information was derived automatically

    Time period covered
    Dec 18, 2024
    Description

    NADA (Not-A-Database) is an easy-to-use geometric shape data generator that allows users to define non-uniform multivariate parameter distributions to test novel methodologies. The full open-source package is provided at GIT:NA_DAtabase. See the Technical Report for details on how to use the provided package.

    This database includes 3 repositories:

    • NADA_Dis: Is the model able to correctly characterize/Disentangle a complex latent space?
      The repository contains 3x100,000 synthetic black and white images to test the ability of the models to correctly define a proper latent space (e.g., autoencoders) and disentangle it. The first 100,000 images contain 4 shapes and uniform parameter space distributions, while the other images have a more complex underlying distribution (truncated Gaussian and correlated marginal variables).

    • NADA_OOD: Does the model identify Out-Of-Distribution images?
      The repository contains 100,000 training images (4 different shapes with 3 possible colors located in the upper left corner of the canvas) and 6x100,000 increasingly different sets of images (changing the color class balance, reducing the radius of the shape, moving the shape to the lower left corner) providing increasingly challenging out-of-distribution images.
      This can help to test not only the capability of a model, but also methods that produce reliability estimates and should correctly classify OOD elements as "unreliable" as they are far from the original distributions.

    • NADA_AlEp: Does the model distinguish between different types (Aleatoric/Epistemic) of uncertainty?
      The repository contains 5x100,000 images with different types of noise/uncertainty (the white-noise variant is sketched in code after this list):
      • NADA_AlEp_0_Clean: Dataset free of noise, to use as a possible training set.
      • NADA_AlEp_1_White_Noise: Epistemic white-noise dataset. Each image is perturbed with an amount of white noise randomly sampled from 0% to 90%.
      • NADA_AlEp_2_Deformation: Dataset with epistemic deformation noise. Each image is deformed by a random amount uniformly sampled between 0% and 90%; 0% corresponds to the original image, while 100% is a full deformation to the circumscribing circle.
      • NADA_AlEp_3_Label: Dataset with label noise. Formally, 20% of triangles of a given color are misclassified as a square with a random color (among blue, orange, and brown) and vice versa (squares to triangles). Label noise introduces aleatoric uncertainty because it is inherent in the data and cannot be reduced.
      • NADA_AlEp_4_Combined: Combined dataset with all previous sources of uncertainty.

    Each image can be used for classification (shape/color) or regression (radius/area) tasks.

    All datasets can be modified and adapted to the user's research question using the included open source data generator.
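
    As an illustration of the white-noise perturbation described for NADA_AlEp_1, a minimal sketch (ours, not the NADA generator; the blending scheme is an assumption):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def perturb(img: np.ndarray) -> np.ndarray:
        """img: float array in [0, 1]; returns a copy blended with white noise."""
        frac = rng.uniform(0.0, 0.9)                  # 0% to 90% noise
        noise = rng.uniform(0.0, 1.0, size=img.shape)
        return (1 - frac) * img + frac * noise

    noisy = perturb(np.zeros((64, 64)))  # e.g., a blank 64x64 canvas
    ```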

  11. Benchmark data sets

    • data.mendeley.com
    • narcis.nl
    Updated Dec 27, 2017
    Cite
    Haonan Tong (2017). Benchmark data sets [Dataset]. http://doi.org/10.17632/923xvkk5mm.1
    Authors
    Haonan Tong
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    A total of 12 software defect data sets from NASA were used in this study. Five data sets (part I), including CM1, JM1, KC1, KC2, and PC1, were obtained from the PROMISE software engineering repository (http://promise.site.uottawa.ca/SERepository/); the other seven data sets (part II) were obtained from the tera-PROMISE repository (http://openscience.us/repo/defect/mccabehalsted/).

  12. Data from: DAISY Benchmark Performance Data

    • catalog.data.gov
    • mhkdr.openei.org
    • +3 more
    Updated May 24, 2025
    Cite
    University of Washington (2025). DAISY Benchmark Performance Data [Dataset]. https://catalog.data.gov/dataset/daisy-benchmark-performance-data-cc485
    Dataset provided by
    University of Washington
    Description

    This repository contains the underlying data from benchmark experiments for Drifting Acoustic Instrumentation SYstems (DAISYs) in waves and currents described in "Performance of a Drifting Acoustic Instrumentation SYstem (DAISY) for Characterizing Radiated Noise from Marine Energy Converters" (https://link.springer.com/article/10.1007/s40722-024-00358-6). DAISYs consist of a surface expression connected to a hydrophone recording package by a tether. Both elements are instrumented to provide metadata (e.g., position, orientation, and depth). Information about how to build DAISYs is available at https://www.pmec.us/research-projects/daisy.

    The repository's primary content is three compressed archives (.zip format), each containing multiple MATLAB binary data files (.mat format). A table relating individual data files to figures in the paper, as well as the structure of each file, is included in the repository as a Word document (Data Description MHK-DR.docx). Most of the files contain time series information for a single DAISY deployment (file naming convention: [site]DAISY[Drift #].mat) consisting of processed hydrophone data and associated metadata. For a limited number of DAISY deployments, the hydrophone package was replaced with an acoustic Doppler velocimeter (file naming convention: [site]DAISY[Drift #]_ADV.mat).

    Data were collected over several years at three locations: (1) Sequim Bay at Pacific Northwest National Laboratory's Marine & Coastal Research Laboratory (MCRL) in Sequim, WA, (2) the energetic tidal channel in Admiralty Inlet, WA (Admiralty Inlet), and (3) the U.S. Navy's Wave Energy Test Site (WETS) in Kaneohe, HI. Brief descriptions of the data files at each location follow.

    MCRL - (1) Drift #4 and #16 contrast the performance of a DAISY and a reference hydrophone (icListen HF Reson), respectively, in the quiescent interior of Sequim Bay (September 2020). (2) Drift #152 and #153 are velocity measurements for a drifting acoustic Doppler velocimeter in the tidally energetic entrance channel, inside a flow shield and exposed to the flow, respectively (January 2018). (3) Two non-standard files are also included: DAISY_data.mat corresponds to a subset of a DAISY drift over an Adaptable Monitoring Package (AMP), and AMP_data.mat corresponds to approximately co-temporal data for a stationary hydrophone on the AMP (February 2019).

    Admiralty Inlet - (1) Drift #1-12 correspond to tests with flow-shielded DAISYs, unshielded DAISYs, a reference hydrophone, and a drifting acoustic Doppler velocimeter with 5, 10, and 15 m tether lengths between surface expression and hydrophone recording package (July 2022). (2) Drift #13-20 correspond to tests of flow-shielded DAISYs with three different tether materials (rubber cord, nylon line, and faired nylon line) in lengths of 5, 10, and 15 m (July 2022).

    WETS - (1) Drift #30-32 correspond to tests with a heave plate incorporated into the tether (the standard configuration for wave sites), rubber cord only, and rubber cord with a flow-shielded hydrophone (November 2022). (2) Drift #49-58 and Drift #65-68 correspond to measurements around mooring infrastructure at the 60 m berth, where time-delay-of-arrival localization was demonstrated for different DAISY arrangements and hydrophone depths (November 2022).
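
    Since the deployment files are MATLAB binaries, a minimal reading sketch (the file name follows the convention above and is assumed; the variables inside each file are documented in Data Description MHK-DR.docx):

    ```python
    from scipy.io import loadmat

    drift = loadmat("MCRLDAISY4.mat")  # assumed name for MCRL Drift #4
    print([k for k in drift if not k.startswith("__")])  # list stored variables
    ```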

  13. Benchmark

    • catalog.data.gov
    • data.brla.gov
    • +1 more
    Updated Feb 2, 2024
    Cite
    data.brla.gov (2024). Benchmark [Dataset]. https://catalog.data.gov/dataset/benchmark-3b4b6
    Dataset provided by
    data.brla.gov
    Description

    Point geometry with attributes displaying geodetic control stations (benchmarks) in East Baton Rouge Parish, Louisiana.

  14. PatchMAN BSA filtering databases and benchmark input and natives

    • zenodo.org
    application/gzip, zip
    Updated Jul 31, 2024
    Cite
    Ora Schueler-Furman; Alisa Khramushin; Julia Kornélia Varga (2024). PatchMAN BSA filtering databases and benchmark input and natives [Dataset]. http://doi.org/10.5281/zenodo.13118411
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Ora Schueler-Furman; Alisa Khramushin; Julia Kornélia Varga
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains the list of unbound receptors, peptides, and natives that was used for the PatchMAN BSA filtering paper.

    It also contains the databases that are used for 1) searching with MASTER and 2) extracting fragments with MASTER.

  15. Benchmark Results: DBpedia 50%

    • figshare.com
    zip
    Updated Apr 28, 2017
    Cite
    Felix Conrads; Jens Lehmann; Muhammad Saleem; Mohamed Morsey; Axel-Cyrille Ngonga Ngomo (2017). Benchmark Results: DBpedia 50% [Dataset]. http://doi.org/10.6084/m9.figshare.3205435.v1
    Dataset provided by
    figshare
    Authors
    Felix Conrads; Jens Lehmann; Muhammad Saleem; Mohamed Morsey; Axel-Cyrille Ngonga Ngomo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Results of the IGUANA benchmark in 2015/16 for the truncated DBpedia dataset. This dataset is a 50% sample of the full (100%) DBpedia dataset.

  16. LDBC-SNB SF-0001 and SF-0003 Datasets

    • zenodo.org
    • data.niaid.nih.gov
    application/gzip
    Updated Jan 21, 2020
    Cite
    Arnau Prat-Pérez (2020). LDBC-SNB SF-0001 and SF-0003 Datasets [Dataset]. http://doi.org/10.5281/zenodo.3452106
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Arnau Prat-Pérez
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These datasets were generated with the LDBC SNB Data Generator:

    https://github.com/ldbc/ldbc_snb_datagen

    They correspond to Scale Factors 1 and 3 and are used in the following paper:

    An early look at the LDBC social network benchmark's business intelligence workload

    DOI: 10.1145/3210259.3210268

  17. Benchmarking data and outputs for CLASSIC v. 1.0

    • zenodo.org
    application/gzip
    Updated Jan 24, 2020
    Cite
    Joe R. Melton; Lina Teckentrup; Matthew Fortier (2020). Benchmarking data and outputs for CLASSIC v. 1.0 [Dataset]. http://doi.org/10.5281/zenodo.3525336
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Joe R. Melton; Lina Teckentrup; Matthew Fortier
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    CLASSIC v. 1.0 model inputs and outputs for benchmarking

    This dataset is used by scripts in the CLASSIC codebase along with the CLASSIC Singularity software container. Please ensure you obtain them prior to using this dataset. Instructions are provided in the CLASSIC Quick Start Guide.

    This dataset contains FLUXNET2015 data that are used to benchmark the Canadian Land Surface Scheme including Biogeochemical Cycles (CLASSIC) v. 1.0. All model inputs required for the 31 FLUXNET sites (for version 1.0) are provided, along with example outputs that benchmark CLASSIC v. 1.0. The model outputs include raw model outputs, plots of select variables, and benchmarking results from the Automated Model Benchmarking (AMBER) package. Following the CLASSIC Quick Start Guide will generate all outputs on the user's own machine.

    This work used eddy covariance data acquired and shared by the FLUXNET community, including these networks: AmeriFlux, AfriFlux, AsiaFlux, CarboAfrica, CarboEuropeIP, CarboItaly, CarboMont, ChinaFlux, Fluxnet-Canada, GreenGrass, ICOS, KoFlux, LBA, NECC, OzFlux-TERN, TCOS-Siberia, and USCCC. The ERA-Interim reanalysis data are provided by ECMWF and processed by LSCE. The FLUXNET eddy covariance data processing and harmonization was carried out by the European Fluxes Database Cluster, AmeriFlux Management Project, and Fluxdata project of FLUXNET, with the support of CDIAC and ICOS Ecosystem Thematic Center, and the OzFlux, ChinaFlux and AsiaFlux offices.

    We thank C. Le Quéré for allowing us to distribute her CO2 record that was originally made for the TRENDY project.

  18. Chicago Energy Benchmarking

    • catalog.data.gov
    • cloud.csiss.gmu.edu
    • +5 more
    Updated Feb 7, 2025
    Cite
    data.cityofchicago.org (2025). Chicago Energy Benchmarking [Dataset]. https://catalog.data.gov/dataset/chicago-energy-benchmarking
    Dataset provided by
    data.cityofchicago.org
    Area covered
    Chicago
    Description

    The Chicago Building Energy Use Benchmarking Ordinance calls on existing municipal, commercial, and residential buildings larger than 50,000 square feet to track whole-building energy use, report to the City annually, and verify data accuracy every three years. The law, which was phased in from 2014 to 2017, covers less than 1% of Chicago's buildings, which together account for approximately 20% of the total energy used by all buildings. For more details, including ordinance text, rules and regulations, and timing, please visit www.CityofChicago.org/EnergyBenchmarking. The ordinance authorizes the City to share property-specific information with the public, beginning with the second year in which a building is required to comply. The dataset represents self-reported and publicly available property information by calendar year. Please note that the "Data Year" column refers to the year to which the data apply, not the year in which they were reported. That column, and the filtered views under "Related Content," can be used to isolate specific years.
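
    A minimal sketch of isolating one reporting year with the "Data Year" column called out above (the CSV export file name is an assumption):

    ```python
    import pandas as pd

    df = pd.read_csv("Chicago_Energy_Benchmarking.csv")  # portal CSV export
    df_2019 = df[df["Data Year"] == 2019]                # year the data apply to
    print(len(df_2019), "properties reported for 2019")
    ```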

  19. Data from: Big Data Machine Learning Benchmark on Spark

    • ieee-dataport.org
    Updated Jun 6, 2019
    Cite
    Jairson Rodrigues (2019). Big Data Machine Learning Benchmark on Spark [Dataset]. https://ieee-dataport.org/open-access/big-data-machine-learning-benchmark-spark
    Authors
    Jairson Rodrigues
    Description

    net traffic

  20. Additive Manufacturing Benchmark 2022 Schema

    • catalog.data.gov
    • datasets.ai
    Updated Mar 14, 2025
    Cite
    National Institute of Standards and Technology (2025). Additive Manufacturing Benchmark 2022 Schema [Dataset]. https://catalog.data.gov/dataset/additive-manufacturing-benchmark-2022-schema-41490
    Dataset provided by
    National Institute of Standards and Technology (http://www.nist.gov/)
    Description

    This resource is the implementation in XML Schema [1] of a data model that describes the Additive Manufacturing Benchmark 2022 series data. It provides a robust set of metadata for the build processes and their resulting specimens, and for measurements made on these in the context of the AM Bench 2022 project.

    The schema was designed to support typical science questions that users of a database with metadata about the AM Bench results might wish to pose. The metadata include identifiers assigned to build products, derived specimens, and measurements; links to relevant journal publications, documents, and illustrations; provenance of specimens, such as source materials and details of the build process; measurement geometry, instruments, and other configurations used in measurements; and access information for raw and processed data as well as analysis descriptions of these datasets.

    This data model is an abstraction of these metadata, designed using the concepts of inheritance, normalization, and reusability of an object-oriented language for ease of extensibility and maintenance. It is simple to incorporate new metadata as needed.

    A CDCS [2] database at NIST was filled with metadata provided by the contributors to the AM Bench project. They entered values for the metadata fields for an AM Bench measurement, specimen, or build process in tabular spreadsheets. These entries were translated to XML documents compliant with the schema using a set of Python scripts. The generated XML documents were loaded into the database with a persistent identifier (PID) assigned by the database.

    [1] https://www.w3.org/XML/Schema
    [2] https://www.nist.gov/itl/ssd/information-systems-group/configurable-data-curation-system-cdcs/about-cdcs
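
    A minimal sketch (ours, not the NIST scripts) of the described step that turns one row of tabular metadata into an XML document; the element and field names are illustrative, not the actual AM Bench schema:

    ```python
    import xml.etree.ElementTree as ET

    # One spreadsheet row of metadata; field names are hypothetical.
    row = {"specimenId": "AMB2022-01", "buildProcess": "laser powder bed fusion"}

    doc = ET.Element("amSpecimen")
    for field, value in row.items():
        ET.SubElement(doc, field).text = value  # one child element per field

    ET.ElementTree(doc).write("AMB2022-01.xml", encoding="utf-8", xml_declaration=True)
    ```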
