100+ datasets found
  1. Data from: DAISY Benchmark Performance Data

    • catalog.data.gov
    • mhkdr.openei.org
    • +3 more
    Updated May 24, 2025
    Cite
    University of Washington (2025). DAISY Benchmark Performance Data [Dataset]. https://catalog.data.gov/dataset/daisy-benchmark-performance-data-cc485
    Explore at:
    Dataset updated
    May 24, 2025
    Dataset provided by
    University of Washington
    Description

    This repository contains the underlying data from benchmark experiments for Drifting Acoustic Instrumentation SYstems (DAISYs) in waves and currents described in "Performance of a Drifting Acoustic Instrumentation SYstem (DAISY) for Characterizing Radiated Noise from Marine Energy Converters" (https://link.springer.com/article/10.1007/s40722-024-00358-6). DAISYs consist of a surface expression connected to a hydrophone recording package by a tether. Both elements are instrumented to provide metadata (e.g., position, orientation, and depth). Information about how to build DAISYs is available at https://www.pmec.us/research-projects/daisy. The repository's primary content is three compressed archives (.zip format), each containing multiple MATLAB binary data files (.mat format). A table relating individual data files to figures in the paper, as well as the structure of each file, is included in the repository as a Word document (Data Description MHK-DR.docx). Most of the files contain time series information for a single DAISY deployment (file naming convention: [site]DAISY[Drift #].mat) consisting of processed hydrophone data and associated metadata. For a limited number of DAISY deployments, the hydrophone package was replaced with an acoustic Doppler velocimeter (file naming convention: [site]DAISY[Drift #]_ADV.mat).

    Data were collected over several years at three locations: (1) Sequim Bay at Pacific Northwest National Laboratory's Marine & Coastal Research Laboratory (MCRL) in Sequim, WA; (2) the energetic tidal channel in Admiralty Inlet, WA (Admiralty Inlet); and (3) the U.S. Navy's Wave Energy Test Site (WETS) in Kaneohe, HI. Brief descriptions of the data files at each location follow.

    MCRL - (1) Drift #4 and #16 contrast the performance of a DAISY and a reference hydrophone (icListen HF Reson), respectively, in the quiescent interior of Sequim Bay (September 2020). (2) Drift #152 and #153 are velocity measurements for a drifting acoustic Doppler velocimeter in the tidally-energetic entrance channel, inside a flow shield and exposed to the flow, respectively (January 2018). (3) Two non-standard files are also included: DAISY_data.mat corresponds to a subset of a DAISY drift over an Adaptable Monitoring Package (AMP) and AMP_data.mat corresponds to approximately co-temporal data for a stationary hydrophone on the AMP (February 2019).

    Admiralty Inlet - (1) Drift #1-12 correspond to tests with flow shielded DAISYs, unshielded DAISYs, a reference hydrophone, and a drifting acoustic Doppler velocimeter with 5, 10, and 15 m tether lengths between surface expression and hydrophone recording package (July 2022). (2) Drift #13-20 correspond to tests of flow shielded DAISYs with three different tether materials (rubber cord, nylon line, and faired nylon line) in lengths of 5, 10, and 15 m (July 2022).

    WETS - (1) Drift #30-32 correspond to tests with a heave plate incorporated into the tether (the standard configuration for wave sites), rubber cord only, and rubber cord with a flow shielded hydrophone (November 2022). (2) Drift #49-58 and Drift #65-68 correspond to measurements around mooring infrastructure at the 60 m berth, where time-delay-of-arrival localization was demonstrated for different DAISY arrangements and hydrophone depths (November 2022).
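    As a quick orientation to the archive layout, the sketch below loads one drift file in Python. The file name follows the [site]DAISY[Drift #].mat convention from the description above, but is hypothetical; the variable names inside each .mat file are not specified here, so the snippet simply lists them (see Data Description MHK-DR.docx for the actual structure).

        # Minimal sketch: inspect one DAISY drift file with SciPy.
        from scipy.io import loadmat

        data = loadmat("MCRLDAISY4.mat", squeeze_me=True)  # hypothetical drift file

        # Keys starting with "__" are MATLAB header metadata; the rest are the
        # processed hydrophone data and deployment metadata variables.
        for name, value in data.items():
            if not name.startswith("__"):
                print(name, type(value))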

  2. European Business Performance Database

    • openicpsr.org
    Updated Sep 15, 2018
    Cite
    Youssef Cassis; Harm Schroeter; Andrea Colli (2018). European Business Performance Database [Dataset]. http://doi.org/10.3886/E106060V2
    Explore at:
    Dataset updated
    Sep 15, 2018
    Dataset provided by
    Bergen University
    Bocconi University
    EUI, Florence
    Authors
    Youssef Cassis; Harm Schroeter; Andrea Colli
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Europe
    Description

    The European Business Performance database describes the performance of the largest enterprises in the twentieth century. It covers eight countries that together consistently account for above 80 per cent of western European GDP: Great Britain, Germany, France, Belgium, Italy, Spain, Sweden, and Finland. Data have been collected for five benchmark years, namely on the eve of WWI (1913), before the Great Depression (1927), at the extremes of the golden age (1954 and 1972), and in 2000.

    The database comprises two distinct datasets. The Small Sample (625 firms) includes the largest enterprises in each country across all industries (economy-wide). To avoid over-representation of certain countries and sectors, countries contribute a number of firms roughly proportionate to the size of the economy: 30 firms from Great Britain, 25 from Germany, 20 from France, 15 from Italy, 10 each from Belgium, Spain, and Sweden, and 5 from Finland. By the same token, a cap has been set on the number of financial firms entering the sample, ranging from up to 6 for Britain down to 1 for Finland.

    The second dataset, or Large Sample (1,167 firms), is made up of the largest firms per industry. Industries are selected so as to take into account long-term technological developments and the rise of entirely new products and services. Firms have been individually classified using the two-digit ISIC Rev. 3.1 codes, then grouped under a manageable number of industries. Broadly speaking, the two samples have distinct focuses: the Small Sample is biased in favour of sheer bigness, whereas the Large Sample emphasizes industries.

    As far as size and performance indicators are concerned, total assets has been picked as the main size measure in the first three benchmarks and turnover in 1972 and 2000 (financial intermediaries, though, are ranked by total assets throughout the database). Performance is gauged by means of two financial ratios, namely return on equity and shareholders' return, i.e., the percentage year-on-year change in share price based on year-end values. In order to smooth out volatility, at each benchmark performance figures have been averaged over three consecutive years (for instance, performance in 1913 reflects average performance in 1911, 1912, and 1913). All figures were collected in national currency and converted to US dollars at current year-average exchange rates.
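    As a toy illustration of the smoothing rule (the share prices below are invented, not drawn from the database), the 1913 benchmark figure for shareholders' return averages the year-on-year price changes of 1911, 1912, and 1913:

        # Shareholders' return: percentage year-on-year change in share price,
        # based on year-end values, averaged over three consecutive years.
        year_end_price = {1910: 100.0, 1911: 108.0, 1912: 103.0, 1913: 110.0}

        def annual_return(year):
            return 100.0 * (year_end_price[year] / year_end_price[year - 1] - 1.0)

        benchmark_1913 = sum(annual_return(y) for y in (1911, 1912, 1913)) / 3
        print(f"1913 benchmark shareholders' return: {benchmark_1913:.2f}%")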

  3. IBNET Benchmarking Database

    • wbwaterdata.org
    Updated Mar 18, 2020
    Cite
    (2020). IBNET Benchmarking Database [Dataset]. https://wbwaterdata.org/dataset/ibnet-benchmarking-database
    Explore at:
    Dataset updated
    Mar 18, 2020
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data on water utilities in 151 national jurisdictions, for a range of years up to and including 2017 (the year range varies greatly by country and utility), covering service and utility parameters (Benchmark Database), plus tariffs for 211 jurisdictions (Tariffs Database). Information includes cost recovery, connections, population served, financial performance, non-revenue water, residential and total supply, and total production. Data can be called up by utility, by group of utilities, and by comparison between utilities, including against the whole (global) utility database, enabling both country-level and global comparison for individual utilities. Data can be downloaded in xls format.

  4. Benchmarking Performance Ranges by Building Type 2015-Present

    • catalog.data.gov
    • data.seattle.gov
    • +1 more
    Updated Jan 31, 2025
    Cite
    data.seattle.gov (2025). Benchmarking Performance Ranges by Building Type 2015-Present [Dataset]. https://catalog.data.gov/dataset/benchmarking-performance-ranges-by-building-type-2015-present
    Explore at:
    Dataset updated
    Jan 31, 2025
    Dataset provided by
    data.seattle.gov
    Description

    Summary energy and building characteristics by building type for non-residential and multifamily buildings greater than 20,000 square feet that benchmark energy data with the City of Seattle. This dataset summarizes information from the full 2015-2023 Building Energy Benchmarking dataset but excludes likely or known errors.

  5. Performance comparison on the benchmark noisy database.

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Matthieu Doyen; Di Ge; Alain Beuchée; Guy Carrault; Alfredo I. Hernández (2023). Performance comparison on the benchmark noisy database. [Dataset]. http://doi.org/10.1371/journal.pone.0223785.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Matthieu Doyen; Di Ge; Alain Beuchée; Guy Carrault; Alfredo I. Hernández
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Performance comparison on the benchmark noisy database.

  6. Data from: Data for "A benchmarking method to rank the performance of...

    • produccioncientifica.ucm.es
    • zenodo.org
    Updated 2024
    Cite
    Gómez-Novell, Octavi; Visini, Francesco; Pace, Bruno; Álvarez Gómez, José Antonio; Herrero-Barbero, Paula (2024). Data for "A benchmarking method to rank the performance of physics-based earthquake simulations" [Dataset]. https://produccioncientifica.ucm.es/documentos/668fc40eb9e7c03b01bd3714
    Explore at:
    Dataset updated
    2024
    Authors
    Gómez-Novell, Octavi; Visini, Francesco; Pace, Bruno; Álvarez Gómez, José Antonio; Herrero-Barbero, Paula
    Description

    This repository contains the datasets and codes supplementary to the article "A benchmarking method to rank the performance of physics-based earthquake simulations" submitted to Seismological Research Letters.

    The datasets include the codes to run the ranking analyses, inputs and outputs for the RSQSim earthquake simulation cases explained in the paper: a single fault and the fault system of the Eastern Betics Shear Zone (simulations from Herrero-Barbero et al. 2021). The results and data are stored in a separate folder for each case study presented in the paper: "Single fault" and "EBSZ". Each folder contains a series of subfolders and a Python script to run the ranking analysis for that specific case study. The script contains the default path references to read all necessary input files for the analysis and automatically save all the outputs. The subfolders are:

    ./Inputs: This folder contains the input files required for the RSQSim simulations. This includes:

    a. The fault model ("Nodes_RSQSim.flt" and "EBSZ_model.csv" for the single fault and EBSZ cases, respectively), which specifies the coordinate nodes of the fault triangular meshes and fault properties such as rake (º) and slip rate (m/yr).

    b. Neighbor file ("neighbors.dat"/"neighbors.12") listing the triangular patches of the fault model that neighbor one another. This file is used in RSQSim.

    c. Input parameter file ("Input_Parameters.txt"): this file specifies the parameters that are variable in each catalogue. This file is just for information purposes and is not used for the calculations.

    d. Parameter file(s) to run the RSQSim calculations.

    *For the single fault, this file is common ("test_normal.in") and is updated during the calculation by the "Run.sh" script, executed in the terminal when running RSQSim. "Run.sh" loops through the input parameters a, b, and normal stress explored in the study and changes the input parameter file accordingly in each iteration.

    *For the EBSZ, this file is specific for each simulation ("param_EBSZ_(n).in"), as each simulation was run separately.

    e. (Only for the EBSZ case) Input paleoseismic data for the paleorate benchmark. One file ("coord_sites_EBSZ.csv") contains a list of UTM coordinates of each paleoseismic site in the EBSZ and another ("paleo_rates_EBSZ.csv") contains the mean recurrence intervals and annual paleoearthquake rates in those sites (data from Herrero-Barbero et al., 2021).

    ./Simulation_models: contains several subfolders, one for each simulated catalogue (96 for the single fault case and 11 for the EBSZ). Each subfolder contains data that is read by the ranking code to perform the analysis.

    *For the single fault, the folder names follow the structure "model_(normal stress)(a)(b)".

    *For the EBSZ, the folder names are "cat-(n)".

    ./Ranking_results: contains the outputs of the ranking analysis, which are two figures and one text file.

    *Figure 1 ("Final_ranking.pdf"): visualization of the final ranking analysis for all models against the analyzed benchmarks.

    *Figure 2 ("Parameter_sensitivity.pdf"): visualization of the final and benchmark performance versus the input parameter of the models.

    *Text file ("Ranking_results.txt"): contains the final and benchmark scores of each simulation model. This file is written out so that the user can reproduce and customize figures from the ranking results.

    To use the ranking codes on your own datasets, please replicate the folder structure explained above. Use the code that best suits your data: the single-fault code if you do not wish to use the paleorate benchmarks, and the EBSZ code if you wish to include these data in your analysis. At the beginning of the respective codes (before the "Start" block comment) you will find the variables where the file names of the fault model and paleoseismic data are indicated; change them to match your data. There you can also assign weights to the respective benchmarks in the analysis (the default is equal weight for all benchmarks).
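    As a hypothetical example of customizing figures from the ranking output, the sketch below reloads "Ranking_results.txt" with pandas. The delimiter and column names are assumptions, so inspect the file written by the ranking script for the actual layout.

        # Assumed: whitespace-separated columns including a model identifier and
        # a final score; adjust to the real header of Ranking_results.txt.
        import pandas as pd
        import matplotlib.pyplot as plt

        scores = pd.read_csv("EBSZ/Ranking_results/Ranking_results.txt", sep=r"\s+")
        scores = scores.sort_values("final_score", ascending=False)

        scores.plot.bar(x="model", y="final_score", legend=False)
        plt.ylabel("Final ranking score")
        plt.tight_layout()
        plt.savefig("Custom_ranking.pdf")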

    For updates of the code please visit our GitHub: https://github.com/octavigomez/Ranking-physics-based-EQ-simulations

  7. Data from: Benchmark Database Containing...

    • figshare.com
    • acs.figshare.com
    xlsx
    Updated Jun 1, 2023
    Cite
    Jean-Noël Jaubert; Yohann Le Guennec; Andrés Piña-Martinez; Nicolas Ramirez-Velez; Silvia Lasala; Bastian Schmid; Ilias K. Nikolaidis; Ioannis G. Economou; Romain Privat (2023). Benchmark Database Containing Binary-System-High-Quality-Certified Data for Cross-Comparing Thermodynamic Models and Assessing Their Accuracy [Dataset]. http://doi.org/10.1021/acs.iecr.0c01734.s003
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    ACS Publications
    Authors
    Jean-Noël Jaubert; Yohann Le Guennec; Andrés Piña-Martinez; Nicolas Ramirez-Velez; Silvia Lasala; Bastian Schmid; Ilias K. Nikolaidis; Ioannis G. Economou; Romain Privat
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    In the last two centuries, equations of state (EoSs) have become a key tool for the correlation and prediction of thermodynamic properties of fluids. Not only can they be applied to pure substances as well as to mixtures, they also constitute the heart of commercially available computer-aided-process-design software. In the last 20 years, thousands of publications have been devoted to the development of sophisticated models or to the improvement of already existing EoSs. Chemical engineering thermodynamics is thus a field under steady development, and to assess the accuracy of a thermodynamic model or to cross-compare two models, it is necessary to confront model predictions with experimental data. In this context, a reliable free-to-access benchmark database is pivotal and becomes absolutely necessary. The goal of this paper is thus to present a database specifically designed to assess the accuracy of a thermodynamic model or to cross-compare models, to explain how it was developed, and to show how to use it. A total of 200 nonelectrolytic binary systems have been selected and divided into nine groups according to the associating character of the components, i.e., their ability to be involved in a hydrogen bond (the nature and strength of the association phenomena are indeed considered a measure of the complexity of modeling the thermodynamic properties of mixtures). The methodology for assessing the performance of a given model is then described. As an illustration, the Peng–Robinson EoS with classical van der Waals mixing rules and a temperature-dependent binary interaction parameter (kij) has been used to correlate the numerous data included in the proposed database, and its performance has been assessed following the proposed methodology.
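    For readers unfamiliar with the model named above, the following sketch computes the Peng-Robinson pure-component parameters and applies the classical van der Waals mixing rules with a temperature-dependent kij. The critical constants are approximate literature values and the kij(T) correlation is an illustrative placeholder, not a value from the database.

        import math

        R = 8.314  # J/(mol K)

        def pr_pure(Tc, Pc, omega, T):
            # Peng-Robinson a(T) and b from critical constants and acentric factor.
            kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
            alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc)))**2
            return 0.45724 * R**2 * Tc**2 / Pc * alpha, 0.07780 * R * Tc / Pc

        def vdw_mix(x1, a1, b1, a2, b2, k12):
            # Classical van der Waals one-fluid mixing rules for a binary system.
            a12 = math.sqrt(a1 * a2) * (1.0 - k12)
            a_mix = x1**2 * a1 + 2.0 * x1 * (1.0 - x1) * a12 + (1.0 - x1)**2 * a2
            b_mix = x1 * b1 + (1.0 - x1) * b2
            return a_mix, b_mix

        T = 300.0                               # K
        k12 = 0.10 + 1.0e-4 * (T - 298.15)      # placeholder kij(T) correlation
        a1, b1 = pr_pure(304.13, 7.377e6, 0.224, T)  # approx. CO2 constants
        a2, b2 = pr_pure(305.32, 4.872e6, 0.099, T)  # approx. ethane constants
        print(vdw_mix(0.4, a1, b1, a2, b2, k12))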

  8. WaterBench-Iowa: A Large-scale Benchmark Dataset for Data-Driven Streamflow...

    • zenodo.org
    • data.niaid.nih.gov
    csv, txt, zip
    Updated Sep 18, 2022
    Cite
    Ibrahim Demir; Zhongrun Xiang; Bekir Z Demiray; Muhammed Sit (2022). WaterBench-Iowa: A Large-scale Benchmark Dataset for Data-Driven Streamflow Forecasting [Dataset]. http://doi.org/10.5281/zenodo.7087806
    Explore at:
    Available download formats: zip, txt, csv
    Dataset updated
    Sep 18, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Ibrahim Demir; Zhongrun Xiang; Bekir Z Demiray; Muhammed Sit
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Iowa
    Description

    WaterBench-Iowa is a comprehensive benchmark dataset for streamflow forecasting. It follows FAIR data principles, is prepared with a focus on convenience for use in data-driven and machine learning studies, and provides benchmark performance of state-of-the-art deep learning architectures on the dataset for comparative analysis. By aggregating datasets of streamflow, precipitation, watershed area, slope, soil types, and evapotranspiration from federal agencies and state organizations (i.e., NASA, NOAA, USGS, and the Iowa Flood Center), we provide WaterBench for hourly streamflow forecast studies. The dataset has a high temporal and spatial resolution with rich metadata and relational information, and can be used for a variety of deep learning and machine learning research. To some extent, WaterBench makes up for the lack of a unified benchmark in earth science research. We highly encourage researchers to use WaterBench for deep learning research in hydrology.
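    A hypothetical loading sketch for one of the WaterBench CSV files follows; the file name and column names are assumptions (the actual layout is documented in the Zenodo record), and the chronological split mirrors common practice for forecasting benchmarks.

        import pandas as pd

        df = pd.read_csv("streamflow_data.csv", parse_dates=["datetime"])  # assumed names
        df = df.set_index("datetime").sort_index()

        # Train on the earlier 80% of the record, test on the remainder.
        split = int(len(df) * 0.8)
        train, test = df.iloc[:split], df.iloc[split:]
        print(len(train), "training rows,", len(test), "test rows")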

  9. SQL Databases for Students and Educators

    • zenodo.org
    bin, html
    Updated Oct 28, 2020
    Cite
    Mauricio Vargas Sepúlveda (2020). SQL Databases for Students and Educators [Dataset]. http://doi.org/10.5281/zenodo.4136985
    Explore at:
    Available download formats: bin, html
    Dataset updated
    Oct 28, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Mauricio Vargas Sepúlveda
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Publicly accessible databases often impose query limits or require registration. Even though I maintain public, limit-free APIs, I never wanted to host a public database because I tend to think that connection strings are a problem for the user.

    I’ve decided to host several light/medium-size databases using PostgreSQL, MySQL and SQL Server backends (in strict descending order of preference!).

    Why 3 database backends? I think there are a ton of small edge cases when moving between DB backends, so testing lots with live databases is quite valuable. With this resource you can benchmark speed, compression, and DDL types.

    Please send me a tweet if you need the connection strings for your lectures or workshops. My Twitter username is @pachamaltese. See the SQL dumps on each section to have the data locally.
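    Once you have a connection string, querying any of the backends is straightforward with SQLAlchemy; the credentials, host, and database name below are placeholders, since the real strings are shared on request.

        from sqlalchemy import create_engine, text

        engine = create_engine(
            "postgresql+psycopg2://student:secret@example.org:5432/somedb"  # placeholder
        )
        with engine.connect() as conn:
            rows = conn.execute(text(
                "SELECT table_name FROM information_schema.tables "
                "WHERE table_schema = 'public'"
            ))
            for (name,) in rows:
                print(name)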

  10. Daily streamflow performance benchmark defined by D-score (v0.1) for the...

    • catalog.data.gov
    • data.usgs.gov
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Daily streamflow performance benchmark defined by D-score (v0.1) for the National Water Model (v2.1) at benchmark streamflow locations [Dataset]. https://catalog.data.gov/dataset/daily-streamflow-performance-benchmark-defined-by-d-score-v0-1-for-the-national-water-mode
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    This data release contains the D-score (version 0.1) daily streamflow performance benchmark results for the National Water Model (NWM) Retrospective version 2.1 computed at streamgage benchmark locations (version 1) as defined by Foks and others (2022). Model error was determined by evaluating predicted daily mean streamflow (aggregated from an hourly timestep) versus observed daily mean streamflow. Using those errors, the D-score performance benchmark computes the mean squared logarithmic error (MSLE), then decomposes the overall MSLE into orthogonal components such as bias, distribution, and sequence (Hodson and others, 2021). For easier interpretation, the MSLE components can be passed through a scoring function as described in Hodson and others (2021). References: Foks, S.S., Towler, E., Hodson, T.O., Bock, A.R., Dickinson, J.E., Dugger, A.L., Dunne, K.A., Essaid, H.I., Miles, K.A., Over, T.M., Penn, C.A., Russell, A.M., Saxe, S.W., and Simeone, C.E., 2022, Streamflow benchmark locations for conterminous United States (cobalt gages): U.S. Geological Survey data release, https://doi.org/10.5066/P972P42Z. Hodson, T.O., Over, T.M., and Foks, S.S., 2021. Mean squared error, deconstructed. Journal of Advances in Modeling Earth Systems, 13, e2021MS002681. https://doi.org/10.1029/2021MS002681.
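    A simplified reading of that decomposition is sketched below (this is not the released D-score code): the squared mean of the log-errors gives the bias term, the MSLE between sorted series isolates distributional mismatch, and the remainder is timing (sequence) error.

        import numpy as np

        def msle_components(pred, obs):
            e = np.log(pred) - np.log(obs)
            msle = np.mean(e**2)
            bias = np.mean(e)**2  # squared mean of the log-errors
            # MSLE between sorted series ignores timing, leaving bias + distribution.
            sorted_msle = np.mean((np.sort(np.log(pred)) - np.sort(np.log(obs)))**2)
            distribution = sorted_msle - bias
            sequence = msle - sorted_msle
            return {"bias": bias, "distribution": distribution, "sequence": sequence}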

  11. Daily streamflow performance benchmark defined by D-score (v0.1) for the NHM...

    • data.usgs.gov
    • datasets.ai
    • +2more
    Cite
    Timothy Hodson; Sydney Foks; Krista Dunne; Katherine Miles; Thomas Over; Colin Penn; Samuel Saxe; Caelan Simeone; Erin Towler; Roland Viger; Jesse Dickinson, Daily streamflow performance benchmark defined by D-score (v0.1) for the NHM (v1 byObs Muskingum) at benchmark streamflow locations [Dataset]. http://doi.org/10.5066/P9PZLHYZ
    Explore at:
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Authors
    Timothy Hodson; Sydney Foks; Krista Dunne; Katherine Miles; Thomas Over; Colin Penn; Samuel Saxe; Caelan Simeone; Erin Towler; Roland Viger; Jesse Dickinson
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Time period covered
    Oct 1, 1983 - Dec 31, 2016
    Description

    This data release contains the D-score (version 0.1) daily streamflow performance benchmark results for the National Hydrologic Model Infrastructure application of the Precipitation-Runoff Modeling System (NHM-PRMS) version 1 "byObs" calibration with Muskingum routing (Hay and LaFontaine, 2020) computed at streamflow benchmark locations (version 1.0) as defined by Foks and others (2022). Model error was determined by evaluating predicted daily mean streamflow versus observed daily mean streamflow. Using those errors, the D-score performance benchmark computes the mean squared logarithmic error (MSLE), then decomposes the overall MSLE into orthogonal components such as bias, distribution, and sequence (Hodson and others, 2021). For easier interpretation, the MSLE components can be passed through a scoring function as described in Hodson and others (2021).

  12. Daily streamflow performance benchmark defined by the standard statistical...

    • data.usgs.gov
    Updated Mar 30, 2023
    Cite
    Erin Towler; Sydney Foks; Leah Staub; Jesse Dickinson; Aubrey Dugger; Hedeff Essaid; David Gochis; Timothy Hodson; Roland Viger; Yongxin Zhang (2023). Daily streamflow performance benchmark defined by the standard statistical suite (v1.0) for the National Water Model Retrospective (v2.1) at benchmark streamflow locations for the conterminous United States (ver 3.0, March 2023) [Dataset]. http://doi.org/10.5066/P9QT1KV7
    Explore at:
    Dataset updated
    Mar 30, 2023
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Authors
    Erin Towler; Sydney Foks; Leah Staub; Jesse Dickinson; Aubrey Dugger; Hedeff Essaid; David Gochis; Timothy Hodson; Roland Viger; Yongxin Zhang
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Time period covered
    Oct 1, 1983 - Dec 31, 2016
    Area covered
    Contiguous United States, United States
    Description

    This data release contains the standard statistical suite (version 1.0) daily streamflow performance benchmark results for the National Water Model Retrospective (v2.1) at streamflow benchmark locations defined by Foks and others (2022). Modeled hourly timesteps were converted to mean daily timesteps. Model error was determined by evaluating predicted daily mean streamflow versus observed daily mean streamflow using various statistics: the Nash-Sutcliffe efficiency (NSE), the Kling-Gupta efficiency (KGE), the logNSE, the Pearson correlation coefficient, the Spearman correlation coefficient, the ratio of the standard deviations, the percent bias, the percent bias in flow duration curve midsegment slope, the percent bias in the flow duration curve high-segment volume, and the percent bias in flow duration curve low-segment volume. Two climatological KGE benchmarks are included that are calculated using daily mean streamflow observations and interannual daily mean or median flows. Add ...
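    For reference, two of the suite's statistics in their standard textbook forms (this is not the USGS release code):

        import numpy as np

        def nse(pred, obs):
            # Nash-Sutcliffe efficiency: 1 minus error variance over observed variance.
            return 1.0 - np.sum((pred - obs)**2) / np.sum((obs - obs.mean())**2)

        def kge(pred, obs):
            # Kling-Gupta efficiency from correlation, variability, and bias ratios.
            r = np.corrcoef(pred, obs)[0, 1]
            alpha = pred.std() / obs.std()
            beta = pred.mean() / obs.mean()
            return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)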

  13. TrafPy: Benchmarking Data Centre Network Systems

    • rdr.ucl.ac.uk
    zip
    Updated Jun 30, 2021
    Cite
    Christopher Parsonson (2021). TrafPy: Benchmarking Data Centre Network Systems [Dataset]. http://doi.org/10.5522/04/14815853
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 30, 2021
    Dataset provided by
    University College London
    Authors
    Christopher Parsonson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data set contains data related to the paper 'TrafPy: Benchmarking Data Centre Network Systems'. The data have been split into 3 files to avoid needing to download all data sets if only some are needed:

    1) plotData: The data plotted in the paper for each of the benchmarks, averaged across 5 runs.

    2) trafficData: The flow-centric traffic requests used in each of the simulations.

    3) simulationData: Each individual benchmark run, with full access to the simulation history, metrics, and so on. When unzipped, this file is ~2.5 TB in size.

  14. China CN: Banks' Wealth Management Product: Net-Value Performance Benchmark:...

    • ceicdata.com
    Updated Feb 15, 2025
    Cite
    CEICdata.com (2025). China CN: Banks' Wealth Management Product: Net-Value Performance Benchmark: Fixed Income [Dataset]. https://www.ceicdata.com/en/china/banks-wealth-management-product-index-series/cn-banks-wealth-management-product-netvalue-performance-benchmark-fixed-income
    Explore at:
    Dataset updated
    Feb 15, 2025
    Dataset provided by
    CEICdata.com
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2021 - Dec 1, 2021
    Area covered
    China
    Description

    China Banks' Wealth Management Product: Net-Value Performance Benchmark: Fixed Income data was reported at 4.430 % in Dec 2021. This records a decrease from the previous number of 4.530 % for Nov 2021. China Banks' Wealth Management Product: Net-Value Performance Benchmark: Fixed Income data is updated monthly, averaging 4.595 % from May 2018 (Median) to Dec 2021, with 44 observations. The data reached an all-time high of 5.360 % in Jul 2018 and a record low of 4.040 % in Nov 2020. China Banks' Wealth Management Product: Net-Value Performance Benchmark: Fixed Income data remains active status in CEIC and is reported by Puyi Standard. The data is categorized under China Premium Database’s Financial Market – Table CN.ZAM: Banks' Wealth Management Product: Index Series.

  15. PQC Algorithms Benchmark Data

    • kaggle.com
    Updated Apr 15, 2025
    Cite
    Rana Tariq (2025). PQC Algorithms Benchmark Data [Dataset]. https://www.kaggle.com/datasets/ranatariq09/pqc-algorithms-benchmark-data
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 15, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Rana Tariq
    License

    GNU General Public License 3.0: https://www.gnu.org/licenses/gpl-3.0.html

    Description

    Dataset Descriptions

    These datasets contain comprehensive benchmarking results for various Post-Quantum Cryptography (PQC) algorithms, specifically Key Encapsulation Mechanisms (KEM) and Digital Signature algorithms, using the Open Quantum Safe (OQS) library.

    The performance was measured over a range of random text/message sizes (from 0.5 KB to 128 KB). For each algorithm, key cryptographic operations were evaluated, including key generation, encapsulation/signature, decapsulation/verification, correctness checks, execution times, and ciphertext or signature overhead.

    1. KEM Benchmark Dataset (kem_benchmark_results.csv). This dataset evaluates the performance of KEM algorithms for secure key exchange operations.

    algorithm: Name of the KEM algorithm tested (e.g., Kyber512, FrodoKEM-640-AES).
    type: Type of cryptographic algorithm ("KEM").
    text_size_kb: Size of the random message (dummy data) in kilobytes (KB).
    text_length_bytes: Size of the random message in bytes.
    keygen_time_ms: Time taken (in milliseconds) to generate the public/private key pair.
    encap_time_ms: Time taken (in milliseconds) to encapsulate (encrypt) the shared secret.
    decap_time_ms: Time taken (in milliseconds) to decapsulate (decrypt) the ciphertext to recover the shared secret.
    ciphertext_length: Length of the ciphertext (encapsulated secret) in bytes.
    shared_secret_length: Length of the shared secret generated, in bytes.
    overhead_bytes: Additional overhead bytes (ciphertext length - shared secret length).
    total_time_ms: Total time (in milliseconds) for keygen + encapsulation + decapsulation.
    correctness: Boolean indicating if the shared secret was correctly recovered (True/False).
    run_id: Sequential identifier for each experiment run.
    timestamp: Date and time (ISO format, UTC) when the measurement was recorded.
    security_level_bits: Security strength in bits (128, 192, or 256-bit security).
    security_level: Numerical encoding for security level (1 for 128 bits, 2 for 192 bits, 3 for 256 bits).
    error (optional): Error message if benchmarking failed for any reason (e.g., algorithm initialization failure).

    2. Digital Signature Benchmark Dataset (signature_benchmark_results.csv). This dataset evaluates digital signature algorithms for authentication and integrity verification.

    algorithm: Name of the digital signature algorithm tested (e.g., Dilithium2, Falcon-512).
    type: Type of cryptographic algorithm ("Signature").
    text_size_kb: Size of the random message to be signed in kilobytes (KB).
    text_length_bytes: Size of the random message in bytes.
    keygen_time_ms: Time taken (in milliseconds) to generate the public/private key pair.
    sign_time_ms: Time taken (in milliseconds) to sign the message.
    verify_time_ms: Time taken (in milliseconds) to verify the generated signature.
    signature_length: Length of the generated digital signature in bytes.
    overhead_bytes: Additional overhead bytes (signature length - message length).
    total_time_ms: Total time (in milliseconds) for keygen + signing + verification operations.
    correctness: Boolean indicating if the signature was correctly verified (True/False).
    run_id: Sequential identifier for each experiment run.
    timestamp: Date and time (ISO format, UTC) when the measurement was recorded.
    security_level_bits: Security strength in bits (128, 192, or 256-bit security).
    security_level: Numerical encoding for security level (1 for 128 bits, 2 for 192 bits, 3 for 256 bits).
    error (optional): Error message if benchmarking failed for any reason (e.g., algorithm initialization failure).

    Use Cases and Applications

    These datasets serve as valuable resources for:

    • Evaluating the computational performance of PQC algorithms.
    • Training machine learning models for predicting cryptographic operation performance.
    • Comparing overheads and latency in realistic secure communication scenarios.
    • Assisting in algorithm selection based on resource constraints and performance requirements.
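    To make the measured operations concrete, here is a sketch of how one KEM row could be produced with the liboqs-python bindings for the Open Quantum Safe library (the method names follow the liboqs-python examples; timings are wall-clock and will not match the published runs).

        import time
        import oqs

        with oqs.KeyEncapsulation("Kyber512") as kem:
            t0 = time.perf_counter()
            public_key = kem.generate_keypair()
            t1 = time.perf_counter()
            ciphertext, shared_secret = kem.encap_secret(public_key)
            t2 = time.perf_counter()
            recovered = kem.decap_secret(ciphertext)
            t3 = time.perf_counter()

        print("keygen_time_ms:", 1000 * (t1 - t0))
        print("encap_time_ms:", 1000 * (t2 - t1))
        print("decap_time_ms:", 1000 * (t3 - t2))
        print("overhead_bytes:", len(ciphertext) - len(shared_secret))
        print("correctness:", recovered == shared_secret)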

  16. China CN: WMCP: Average Performance Benchmark: On Sale Close-end: City...

    • ceicdata.com
    Updated Feb 15, 2025
    Cite
    CEICdata.com (2025). China CN: WMCP: Average Performance Benchmark: On Sale Close-end: City Commercial Bank [Dataset]. https://www.ceicdata.com/en/china/puyi-standard-average-performance-benchmark-on-sale-wealth-management-company-product/cn-wmcp-average-performance-benchmark-on-sale-closeend-city-commercial-bank
    Explore at:
    Dataset updated
    Feb 15, 2025
    Dataset provided by
    CEIC Data
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Mar 1, 2024 - Feb 1, 2025
    Area covered
    China
    Description

    China WMCP: Average Performance Benchmark: On Sale Close-end: City Commercial Bank data was reported at 2.800 % in Feb 2025. This records a decrease from the previous number of 2.840 % for Jan 2025. China WMCP: Average Performance Benchmark: On Sale Close-end: City Commercial Bank data is updated monthly, averaging 3.540 % from Aug 2022 (Median) to Feb 2025, with 31 observations. The data reached an all-time high of 4.370 % in Sep 2022 and a record low of 2.800 % in Feb 2025. China WMCP: Average Performance Benchmark: On Sale Close-end: City Commercial Bank data remains active status in CEIC and is reported by Puyi Standard. The data is categorized under China Premium Database’s Financial Market – Table CN.ZAM: Puyi Standard: Average Performance Benchmark: On Sale: Wealth Management Company Product.

  17. DC Energy Benchmarking

    • catalog.data.gov
    • opendata.dc.gov
    Updated Feb 5, 2025
    Cite
    City of Washington, DC (2025). DC Energy Benchmarking [Dataset]. https://catalog.data.gov/dataset/dc-energy-benchmarking
    Explore at:
    Dataset updated
    Feb 5, 2025
    Dataset provided by
    City of Washington, DC
    Area covered
    Washington
    Description

    Energy benchmarking means tracking a building's energy and water use and using a standard metric to compare the building's performance against its past performance and against its peers nationwide. These comparisons have been shown to drive energy efficiency upgrades and increase occupancy rates and property values. The Clean and Affordable Energy Act of 2008 (CAEA) requires that owners of all large private buildings (over 50,000 gross square feet) annually benchmark their energy and water efficiency and report the results to DOEE for public disclosure. The District government also must annually benchmark and disclose the energy and water efficiency of District government buildings over 10,000 gross square feet. Starting with the calendar year 2021 data (due April 1, 2022), all privately-owned buildings over 25,000 square feet are required to benchmark, and starting with the calendar year 2024 data (due April 1, 2025), all privately-owned buildings over 10,000 square feet will be required to benchmark, as mandated under the Clean Energy DC Omnibus Act of 2018.

  18. speed.pypy.org database dump

    • figshare.com
    zip
    Updated May 31, 2023
    Cite
    Carl Friedrich Bolz; Maciej Fiałkowski; The PyPy Team (2023). speed.pypy.org database dump [Dataset]. http://doi.org/10.6084/m9.figshare.1517608.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    May 31, 2023
    Dataset provided by
    figshare
    Authors
    Carl Friedrich Bolz; Maciej Fiałkowski; The PyPy Team
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A database dump of the http://speed.pypy.org data, made on August 23, 2015.

  19. China CN: WMWMP: Average Performance Benchmark: On Sale Open-end: City...

    • ceicdata.com
    Cite
    CEICdata.com, China CN: WMWMP: Average Performance Benchmark: On Sale Open-end: City Commercial Bank [Dataset]. https://www.ceicdata.com/en/china/puyi-standard-average-performance-benchmark-on-sale-whole-market-wealth-management-product/cn-wmwmp-average-performance-benchmark-on-sale-openend-city-commercial-bank
    Explore at:
    Dataset provided by
    CEIC Data
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Mar 1, 2024 - Feb 1, 2025
    Area covered
    China
    Description

    China WMWMP: Average Performance Benchmark: On Sale Open-end: City Commercial Bank data was reported at 2.540 % in Mar 2025. This records a decrease from the previous number of 2.560 % for Feb 2025. China WMWMP: Average Performance Benchmark: On Sale Open-end: City Commercial Bank data is updated monthly, averaging 3.385 % from Aug 2022 (Median) to Mar 2025, with 32 observations. The data reached an all-time high of 3.830 % in Aug 2022 and a record low of 2.540 % in Mar 2025. China WMWMP: Average Performance Benchmark: On Sale Open-end: City Commercial Bank data remains active status in CEIC and is reported by Puyi Standard. The data is categorized under China Premium Database’s Financial Market – Table CN.ZAM: Puyi Standard: Average Performance Benchmark: On Sale: Whole Market Wealth Management Product.

  20. GGG-BenchmarkSfM: Dataset for Benchmarking Close-range SfM Software...

    • narcis.nl
    • data.mendeley.com
    Updated Aug 10, 2020
    Cite
    Nikolov, I (via Mendeley Data) (2020). GGG-BenchmarkSfM: Dataset for Benchmarking Close-range SfM Software Performance under Varying Capturing Conditions [Dataset]. http://doi.org/10.17632/bzxk2n78s9.3
    Explore at:
    Dataset updated
    Aug 10, 2020
    Dataset provided by
    Data Archiving and Networked Services (DANS)
    Authors
    Nikolov, I (via Mendeley Data)
    Description

    The proposed dataset aims to benchmark the performance of SfM software under varying conditions - different environments, different lighting, image positions, camera setups, etc. Images of six objects with varying shapes, sizes, surface textures, and materials are provided. The dataset is divided into two main parts, together with ReadMe files: - Objects and environments data - images of each object, from both indoor and outdoor environments. - Capturing setups data - images of one of the objects captured with different setups: with and without a turntable, with one or multiple light sources, and with different numbers of images.

    All images were captured using a Canon 6D DSLR camera and contain EXIF data with the camera parameters used. A high-resolution ground-truth scan of each object is provided for verifying the accuracy of the SfM reconstructions.
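    Since the EXIF block carries the capture parameters, a short Pillow sketch suffices to read them (the file name below is a placeholder, not an actual file from the dataset):

        from PIL import Image, ExifTags

        img = Image.open("object1_indoor_001.jpg")  # hypothetical file name
        for tag_id, value in img.getexif().items():
            print(ExifTags.TAGS.get(tag_id, tag_id), value)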
