5 datasets found
  1. Database of Uniaxial Cyclic and Tensile Coupon Tests for Structural Metallic Materials

    • data.niaid.nih.gov
    • zenodo.org
    Updated Dec 24, 2022
    Cite
    Lignos, Dimitrios G. (2022). Database of Uniaxial Cyclic and Tensile Coupon Tests for Structural Metallic Materials [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6965146
    Dataset provided by
    de Castro e Sousa, Albano
    Hartloper, Alexander R.
    Lignos, Dimitrios G.
    Ozden, Selimcan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    Description

    Database of Uniaxial Cyclic and Tensile Coupon Tests for Structural Metallic Materials

    Background

    This dataset contains data from monotonic and cyclic loading experiments on structural metallic materials. The materials are primarily structural steels; one iron-based shape memory alloy is also included. Summary files provide an overview of the database, and the data from each individual experiment are included as well.

    The files included in the database are outlined below and the format of the files is briefly described. Additional information regarding the formatting can be found through the post-processing library (https://github.com/ahartloper/rlmtp/tree/master/protocols).

    Usage

    The data is licensed under the Creative Commons Attribution 4.0 International license.

    If you have used our data and are publishing your work, we ask that you please reference both:

    • this database, through its DOI, and
    • any publication that is associated with the experiments. See the Overall_Summary and Database_References files for the associated publication references.

    Included Files

    Overall_Summary_2022-08-25_v1-0-0.csv: summarises the specimen information for all experiments in the database.

    Summarized_Mechanical_Props_Campaign_2022-08-25_v1-0-0.csv: summarises the average initial yield stress and average initial elastic modulus per campaign.

    Unreduced_Data-#_v1-0-0.zip: contains the original (not downsampled) data

    Where # is one of: 1, 2, 3, 4, 5, 6. The unreduced data is broken into separate archives because of upload limitations to Zenodo. Together they provide all the experimental data.

    We recommend that you un-zip all the archives and place them in a single "Unreduced_Data" directory, similar to the "Clean_Data" directory.
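
    A minimal sketch of that step in Python, assuming the six archives were downloaded to the current working directory:

    import glob
    import zipfile

    # Extract each Unreduced_Data-#_v1-0-0.zip archive into one shared directory
    for archive in sorted(glob.glob("Unreduced_Data-*_v1-0-0.zip")):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall("Unreduced_Data")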

    The experimental data is provided through .csv files for each test that contain the processed data. The experiments are organised by experimental campaign and named by load protocol and specimen. A .pdf file accompanies each test showing the stress-strain graph.

    There is a "db_tag_clean_data_map.csv" file that maps the database summary to the unreduced data.

    The computed yield stresses and elastic moduli are stored in the "yield_stress" directory.

    Clean_Data_v1-0-0.zip: contains all the downsampled data

    The experimental data is provided through .csv files for each test that contain the processed data. The experiments are organised by experimental campaign and named by load protocol and specimen. A .pdf file accompanies each test showing the stress-strain graph.

    There is a "db_tag_clean_data_map.csv" file that maps the database summary to the clean data.

    The computed yield stresses and elastic moduli are stored in the "yield_stress" directory.

    Database_References_v1-0-0.bib

    Contains a bibtex reference for many of the experiments in the database. Corresponds to the "citekey" entry in the summary files.

    File Format: Downsampled Data

    These are the "LP_" .csv files in the "Clean_Data" archive; each contains the following columns:

    The header of the first column is empty: the first column corresponds to the index of the sample point in the original (unreduced) data

    Time[s]: time in seconds since the start of the test

    e_true: true strain

    Sigma_true: true stress in MPa

    (optional) Temperature[C]: the surface temperature in degC

    These data files can be easily loaded using the pandas library in Python through:

    import pandas
    data = pandas.read_csv(data_file, index_col=0)

    The data is formatted so it can be used directly in RESSPyLab (https://github.com/AlbanoCastroSousa/RESSPyLab). Note that the column names "e_true" and "Sigma_true" were kept for backwards compatibility reasons with RESSPyLab.
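
    For example, a minimal sketch that plots one test's stress-strain curve (the file path is a placeholder, and matplotlib is assumed to be available):

    import pandas
    import matplotlib.pyplot as plt

    data_file = "Clean_Data/LP_example.csv"  # placeholder path to one "LP_" .csv file
    data = pandas.read_csv(data_file, index_col=0)
    plt.plot(data["e_true"], data["Sigma_true"])
    plt.xlabel("True strain, e_true")
    plt.ylabel("True stress, Sigma_true [MPa]")
    plt.show()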

    File Format: Unreduced Data

    These are the "LP_" .csv files in the "Unreduced_Data" archives; each contains the following columns:

    The first column is the index of each data point

    S/No: sample number recorded by the DAQ

    System Date: Date and time of sample

    Time[s]: time in seconds since the start of the test

    C_1_Force[kN]: load cell force

    C_1_Déform1[mm]: extensometer displacement

    C_1_Déplacement[mm]: cross-head displacement

    Eng_Stress[MPa]: engineering stress

    Eng_Strain[]: engineering strain

    e_true: true strain

    Sigma_true: true stress in MPa

    (optional) Temperature[C]: specimen surface temperature in degC

    The data can be loaded and used similarly to the downsampled data.

    File Format: Overall_Summary

    The overall summary file provides data on all the test specimens in the database (a loading sketch follows the column list below). The columns include:

    hidden_index: internal reference ID

    grade: material grade

    spec: specifications for the material

    source: base material for the test specimen

    id: internal name for the specimen

    lp: load protocol

    size: type of specimen (M8, M12, M20)

    gage_length_mm_: unreduced section length in mm

    avg_reduced_dia_mm_: average measured diameter for the reduced section in mm

    avg_fractured_dia_top_mm_: average measured diameter of the top fracture surface in mm

    avg_fractured_dia_bot_mm_: average measured diameter of the bottom fracture surface in mm

    fy_n_mpa_: nominal yield stress

    fu_n_mpa_: nominal ultimate stress

    t_a_deg_c_: ambient temperature in degC

    date: date of test

    investigator: person(s) who conducted the test

    location: laboratory where test was conducted

    machine: setup used to conduct test

    pid_force_k_p, pid_force_t_i, pid_force_t_d: PID parameters for force control

    pid_disp_k_p, pid_disp_t_i, pid_disp_t_d: PID parameters for displacement control

    pid_extenso_k_p, pid_extenso_t_i, pid_extenso_t_d: PID parameters for extensometer control

    citekey: reference corresponding to the Database_References.bib file

    yield_stress_mpa_: computed yield stress in MPa

    elastic_modulus_mpa_: computed elastic modulus in MPa

    fracture_strain: computed average true strain across the fracture surface

    c,si,mn,p,s,n,cu,mo,ni,cr,v,nb,ti,al,b,zr,sn,ca,h,fe: chemical compositions in units of %mass

    file: file name of corresponding clean (downsampled) stress-strain data
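
    As a sketch, the summary can be loaded with pandas and used to locate each specimen's downsampled stress-strain file through the file column (the csv is assumed to sit in the current working directory):

    import pandas as pd

    summary = pd.read_csv("Overall_Summary_2022-08-25_v1-0-0.csv")
    print(summary[["grade", "lp", "yield_stress_mpa_"]].head())
    first_file = summary["file"].iloc[0]  # clean-data file for the first specimen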

    File Format: Summarized_Mechanical_Props_Campaign

    Meant to be loaded in Python as a pandas DataFrame with multi-indexing (a usage sketch follows the field list below), e.g.,

    import pandas as pd

    date, version = '2022-08-25_', 'v1-0-0'  # matches the file shipped with this release
    tab1 = pd.read_csv('Summarized_Mechanical_Props_Campaign_' + date + version + '.csv', index_col=[0, 1, 2, 3], skipinitialspace=True, header=[0, 1], keep_default_na=False, na_values='')

    citekey: reference in "Campaign_References.bib".

    Grade: material grade.

    Spec.: specifications (e.g., J2+N).

    Yield Stress [MPa]: initial yield stress in MPa

    size, count, mean, coefvar: number of experiments in campaign, number of experiments in mean, mean value for campaign, coefficient of variation for campaign

    Elastic Modulus [MPa]: initial elastic modulus in MPa

    size, count, mean, coefvar: number of experiments in campaign, number of experiments in mean, mean value for campaign, coefficient of variation for campaign
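
    With the two-level column header loaded as above, the campaign statistics can be accessed by level; a small sketch:

    # Mean initial yield stress per campaign (second-level column "mean")
    mean_fy = tab1["Yield Stress [MPa]"]["mean"]
    # Coefficient of variation of the elastic modulus per campaign
    cv_E = tab1["Elastic Modulus [MPa]"]["coefvar"]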

    Caveats

    The specimens in the following directories were tested before the protocol was established. Therefore, only the true stress-strain data is available for each:

    A500

    A992_Gr50

    BCP325

    BCR295

    HYP400

    S460NL

    S690QL/25mm

    S355J2_Plates/S355J2_N_25mm and S355J2_N_50mm

  2. iSCAPE Outdoor Sensor Deployment Data

    • data.subak.org
    • zenodo.org
    csv
    Updated Feb 16, 2023
    Cite
    IAAC-Fablab BCN (2023). iSCAPE Outdoor Sensor Deployment Data [Dataset]. https://data.subak.org/dataset/iscape-outdoor-sensor-deployment-data
    Dataset provided by
    IAAC-Fablab BCN
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Dataset Description

    This dataset contains all the deployment data collected with low-cost sensors during the iSCAPE project. The dataset is divided into a series of deployments, each of them described in a yaml file with the test name. Each csv file contains the time-series data of one experiment, and the yaml files contain the lists of devices used in each test. The tests are described in the comment of the yaml file and are meant to be self-explanatory. Two different types of tests are presented here:

    1. Intervention monitoring tests (Guildford and Surrey). Two intervention monitoring campaigns were conducted at the Surrey (UoS) Living Lab and Hasselt (UH) Living Lab sites. The intervention in Surrey aimed at characterising the behaviour of green infrastructure and its effect on pollutant dispersion next to traffic. Two sets of two stations were delivered and deployed: one set in the vicinity of Stoke Park, and the other in the vicinity of Sutherland Memorial Park (both in Guildford, UK). In the case of Hasselt, two Living Lab Stations were deployed. The first was used to assess pollutant concentrations at the Basisschool Kuringen in Hasselt. The other station was deployed near the University of Hasselt.
    2. Sensor calibration tests (CSIC, Dublin and Bologna). The tests conducted in Bologna (by UNIBO, 2018), Dublin (by UCD, 2019) and Barcelona (by IAAC, 2019) were intended as an assessment of the sensor technology in an outdoor environment scenario, co-locating the iSCAPE LLSs with reference instrumentation.

    A complete description of these datasets and the result of their analysis is shown in D7.8 of iSCAPE which can be found in this url: https://www.iscapeproject.eu/results/.

    Sensors

    The sensors used are referred to herein as the Citizen Kit or Smart Citizen Kit, and the Living Lab Station or Smart Citizen Station. These are a set of modular hardware components that feature a selection of low-cost sensors for environmental monitoring, listed below. The Smart Citizen Station expands the capabilities of the Smart Citizen Kit, aiming to measure pollutants with more advanced sensors. The hardware is licensed under the CERN Open Hardware License V1.2 and is fully described in the HardwareX Open Access publication: https://doi.org/10.1016/j.ohx.2019.e00070. The sensor documentation can be found at https://docs.smartcitizen.me and with this DOI at Zenodo: https://doi.org/10.5281/zenodo.2555029.

    In the list below, the sensors of the Citizen Kit are detailed, together with their [CHANNELS] in the csv files linked above.

    • Air temperature (ºC): Sensirion SHT-31 [TEMP]
    • Relative humidity (%rh): Sensirion SHT-31 [HUM]
    • Noise level (dBA): InvenSense ICS-43432 [NOISE_A]
    • Ambient light (lux): Rohm BH1721FVC [LIGHT]
    • Barometric pressure (kPa): NXP MPL3115A2 [PRESS]
    • Particulate matter PM 1 / 2.5 / 10 (µg/m3): Plantower PMS5003 [EXT_PM_1, EXT_PM_25, EXT_PM_10]

    In the list below, the sensors of the Living Lab Station are detailed, together with their [CHANNELS] in the csv files linked above.

    • Air temperature (ºC): Sensirion SHT-31 [TEMP]
    • Relative humidity (%rh): Sensirion SHT-31 [HUM]
    • Noise level (dBA): InvenSense ICS-43432 [NOISE_A]
    • Ambient light (lux): Rohm BH1721FVC [LIGHT]
    • Barometric pressure and AMSL (Pa and metres): NXP MPL3115A2 [PRESS]
    • Carbon monoxide (µg/m3, periodic baseline calibration required): SGX MICS-4514 [NA]
    • Nitrogen dioxide (µg/m3, periodic baseline calibration required): SGX MICS-4514 [NA]
    • Carbon monoxide (ppm): Alphasense CO-B4 [GB_1W, GB_1A; final calculated value CO_DELTAS_OVL_X-XX-XX (all the same)]
    • Nitrogen dioxide (ppb): Alphasense NO2-B43F [GB_2W, GB_2A; final calculated value NO2_DELTAS_OVL_0-30-50 or NO2_DELTAS_OVL_0-5-50]
    • Ozone (ppb): Alphasense OX-B431 [GB_3W, GB_3A; final value O3_DELTAS_OVL_0-30-50 or O3_DELTAS_OVL_0-5-50]
    • Gases board temperature (ºC): Sensirion SHT-31 [GB_TEMP] or [EXT_TEMP]
    • Gases board relative humidity (%rh): Sensirion SHT-31 [GB_HUM] or [EXT_HUM]
    • PM 1 (µg/m3): Plantower PMS5003 [EXT_PM_1], or [EXT_PM_A_1] and [EXT_PM_B_1] for each PM sensor in the case of the Living Lab Station
    • PM 2.5 (µg/m3): Plantower PMS5003 [EXT_PM_25], or [EXT_PM_A_25] and [EXT_PM_B_25] for each PM sensor in the case of the Living Lab Station
    • PM 10 (µg/m3): Plantower PMS5003 [EXT_PM_10], or [EXT_PM_A_10] and [EXT_PM_B_10] for each PM sensor in the case of the Living Lab Station
    • PN, 0.3 µm to 0.5 µm particle size (#/l): Plantower PMS5003 [EXT_PN_03], or [EXT_PN_A_03] and [EXT_PN_B_03] for each PM sensor in the case of the Living Lab Station
    • PN, 0.5 µm to 1 µm particle size (#/l): Plantower PMS5003 [EXT_PN_05], or [EXT_PN_A_05] and [EXT_PN_B_05] for each PM sensor in the case of the Living Lab Station
    • PN, 1 µm to 2.5 µm particle size (#/l): Plantower PMS5003 [EXT_PN_1], or [EXT_PN_A_1] and [EXT_PN_B_1] for each PM sensor in the case of the Living Lab Station
    • PN, 2.5 µm to 5 µm particle size (#/l): Plantower PMS5003 [EXT_PN_25], or [EXT_PN_A_25] and [EXT_PN_B_25] for each PM sensor in the case of the Living Lab Station
    • PN, 5 µm to 10 µm particle size (#/l): Plantower PMS5003 [EXT_PN_5], or [EXT_PN_A_5] and [EXT_PN_B_5] for each PM sensor in the case of the Living Lab Station
    • PN, above 10 µm particle size (#/l): Plantower PMS5003 [EXT_PN_10], or [EXT_PN_A_10] and [EXT_PN_B_10] for each PM sensor in the case of the Living Lab Station

    The files with the _processed suffix are processed files which (see the sketch after this list):

    1. Resample the data using pandas resampling with mean() - reference here

    2. Clean NaNs and erroneous readings.

    3. Add calculations for the electrochemical sensors based on this methodology
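
    A minimal sketch of steps 1 and 2, assuming a raw csv with a datetime index (the file name and the 1-minute target frequency are placeholders; each device's original frequency is given in its yaml descriptor):

    import pandas as pd

    raw = pd.read_csv("some_device.csv", index_col=0, parse_dates=True)  # placeholder name
    processed = raw.resample("1Min").mean()  # step 1: resample with mean()
    processed = processed.dropna()           # step 2: drop NaN readings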

    How to find the data

    Each yaml file contains the description of a test. Each test comprises recordings of several devices in the same location and during the same period. Each yaml file contains the following fields (a loading sketch follows the list):

    • author: who has been in charge of performing the test (internal reference - not relevant)
    • comment: describing in general terms what was done in the test, and with what purpose
    • commit: the firmware commit (in the case of Smart Citizen devices) with which the test was performed, for development purposes only
    • devices: a descriptor containing different fields for traceability (below)
    • id: the test name
    • project: the project within which the test was performed; in this case it is always iscape
    • report: if there is any report analysing the test
    • type_test: indoor test, outdoor test or other.
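
    A minimal sketch of reading one of these descriptors with PyYAML (the file name is a placeholder):

    import yaml

    with open("test_name.yaml") as f:  # placeholder file name
        test = yaml.safe_load(f)
    print(test["id"], test["type_test"])
    print(test["comment"])
    devices = test["devices"]  # per-device descriptors, described below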

    Description of devices entry

    For each device that was used in the test, two generic types are used:

    • low cost sensors (type: STATION or KIT)
    • high end sensors (type: REFERENCE)

    For low cost Smart Citizen sensors, the fields are:

    • alphasense: electrochemical sensors device ids, by pollutant (for manufacturer calibration) and slots in which they were placed
    • device_id: device id in Smartcitizen API
    • fileNameInfo: not used
    • fileNameProc: processed file name (only if source = csv is specified), e.g. 2019-03_EXT_UCD_URBAN_BACKGROUND_API_CITY_COUNCIL_REF.csv
    • fileNameRaw: (only if source = csv is used) raw file name
    • frequency: original recording frequency
    • location: for timezone correction only, not accurate
    • max_date: last recording date
    • min_date: first recording date
    • name: self-explanatory
    • pm_sensor: if there was a pm sensor connected (all of them are PMS5003 if no sensor is specified)
    • source: api or csv
    • type: STATION (KIT + Alphasense + PM board with two PMS5003) or KIT
    • version: smartcitizen hardware version

    For high end sensors, the fields are (a parsing sketch follows the list):

    • channels: which channels the device was recording, for internal conversion
      • names: which are the columns in the csv file
      • pollutants: which pollutants do they respectively refer to
      • units: the units of these pollutants
    • equipment: the brand of the analyser
    • fileNameProc: same as above
    • fileNameRaw: same as above
    • index: format in which the timeindex is done, for parsing purposes
      • format: (example '%Y-%m-%d %H:%M:%S')
      • frequency: frequency at which the device was recorded
      • name: column name
    • location: same as above
    • name: name of the device
    • type: REFERENCE (always for these devices)
    • source: csv
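
    As a sketch, the index fields above can drive the parsing of a reference csv (the file name, column name and format below are example values standing in for the ones read from the yaml descriptor):

    import pandas as pd

    index_name = "Time"                 # from the index > name field (example value)
    index_format = "%Y-%m-%d %H:%M:%S"  # from the index > format field (example above)

    ref = pd.read_csv("reference_device.csv")  # placeholder file name
    ref[index_name] = pd.to_datetime(ref[index_name], format=index_format)
    ref = ref.set_index(index_name)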

    iSCAPE Dataset Reference Numbers:

    The datasets here presented are related to the following iSCAPE dataset reference numbers:

    • DS_TS_049
    • DS_TS_050
    • DS_TS_051
    • DS_TS_052
    • DS_TS_053
    • DS_TS_055
    • DS_TS_056
    • DS_TS_057
    • DS_TS_058
  3. Data from: BSRN solar radiation data for the testing, validation and benchmarking of solar irradiance components separation models

    • zenodo.org
    • investigacion.ujaen.es
    • +1more
    bin
    Updated Feb 11, 2024
    Cite
    Jose A Ruiz-Arias (2024). BSRN solar radiation data for the testing, validation and benchmarking of solar irradiance components separation models [Dataset]. http://doi.org/10.5281/zenodo.10593079
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jose A Ruiz-Arias
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    Description

    The dataset is an excerpt of the validation dataset used in:

    Ruiz-Arias JA, Gueymard CA. Review and performance benchmarking of 1-min solar irradiance components separation methods: The critical role of dynamically-constrained sky conditions. Submitted for publication to Renewable and Sustainable Energy Reviews.

    and it is ready to use in the Python package splitting_models developed during that research. See the documentation in the Python package for usage details. Below, there is a detailed description of the dataset.

    The data is in a single parquet file that contains 1-min time series of solar geometry, clear-sky solar irradiance simulations, solar irradiance observations and CAELUS sky types for 5 BSRN sites, one per primary Köppen-Geiger climate, namely: Minamitorishima (mnm), JP, for equatorial climate; Alice Springs (asp), AU, for dry climate; Carpentras (car), FR, for temperate climate; Bondville (bon), US, for continental climate; and Sonnblick (son), AT, for cold/polar/snow climate. It includes one calendar year per site. The BSRN data is publicly available. See download instructions in https://bsrn.awi.de/data.

    The specific variables included in the dataset are:

    • climate: primary Köppen-Geiger climate. Values are: A (equatorial), B (dry), C (temperate), D (continental) and E (polar/snow).
    • longitude: longitude, in degrees east.
    • latitude: latitude, in degrees north.
    • sza: solar zenith angle, in degrees.
    • eth: extraterrestrial solar irradiance (i.e., top of atmosphere solar irradiance), in W/m2.
    • ghics: clear-sky global solar irradiance, in W/m2. It is evaluated with the SPARTA clear-sky model and MERRA-2 clear-sky atmosphere.
    • difcs: clear-sky diffuse solar irradiance, in W/m2. It is evaluated with the SPARTA clear-sky model and MERRA-2 clear-sky atmosphere.
    • ghicda: clean-and-dry clear-sky global solar irradiance, in W/m2. It is evaluated with the SPARTA clear-sky model and MERRA-2 clear-sky atmosphere, prescribing zero aerosols and zero precipitable water.
    • ghi: observed global horizontal irradiance, in W/m2.
    • dif: observed diffuse irradiance, in W/m2.
    • sky_type: CAELUS sky type. Values are: 1 (unknown), 2 (overcast), 3 (thick clouds), 4 (scattered clouds), 5 (thin clouds), 6 (cloudless) and 7 (cloud enhancement).

    The dataset can be easily loaded into a Python pandas DataFrame as follows (the file path is a placeholder for the downloaded parquet file):

    import pandas as pd

    data = pd.read_parquet("path/to/dataset.parquet")
    The dataframe has a multi-index with two levels: times_utc and site. The former are the UTC timestamps at the center of each 1-min interval. The latter is each site's label.
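
    For example, a small sketch that selects one site and one sky type from that multi-index:

    # All samples for Carpentras (site label "car")
    car = data.xs("car", level="site")

    # Keep cloudless conditions only (CAELUS sky type 6)
    cloudless = car[car["sky_type"] == 6]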

  4. A Replication Dataset for Fundamental Frequency Estimation

    • zenodo.org
    • live.european-language-grid.eu
    • +1more
    bin
    Updated Jun 29, 2020
    Cite
    Bastian Bechtold (2020). A Replication Dataset for Fundamental Frequency Estimation [Dataset]. http://doi.org/10.5281/zenodo.3904389
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Bastian Bechtold
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/

    Description

    Part of the dissertation Pitch of Voiced Speech in the Short-Time Fourier Transform: Algorithms, Ground Truths, and Evaluation Methods.
    © 2020, Bastian Bechtold. All rights reserved.

    Estimating the fundamental frequency of speech remains an active area of research, with varied applications in speech recognition, speaker identification, and speech compression. A vast number of algorithms for estimating this quantity have been proposed over the years, and a number of speech and noise corpora have been developed for evaluating their performance. The present dataset contains estimated fundamental frequency tracks for 25 algorithms on six speech corpora and two noise corpora, at nine signal-to-noise ratios between -20 and 20 dB SNR, as well as an additional evaluation of synthetic harmonic tone complexes in white noise.

    The dataset also contains pre-calculated performance measures, both novel and traditional, in reference to each speech corpus’ ground truth, the algorithms’ own clean-speech estimates, and our own consensus truth. It can thus serve as the basis for a comparison study, to replicate existing studies from a larger dataset, or as a reference for developing new fundamental frequency estimation algorithms. All source code and data are available for download and are entirely reproducible, albeit requiring about one year of processor time.

    Included Code and Data

    • ground truth data.zip is a JBOF dataset of fundamental frequency estimates and ground truths of all speech files in the following corpora:
      • CMU-ARCTIC (consensus truth) [1]
      • FDA (corpus truth and consensus truth) [2]
      • KEELE (corpus truth and consensus truth) [3]
      • MOCHA-TIMIT (consensus truth) [4]
      • PTDB-TUG (corpus truth and consensus truth) [5]
      • TIMIT (consensus truth) [6]
    • noisy speech data.zip is a JBOF dataset of fundamental frequency estimates of speech files mixed with noise from the following corpora:
      • NOISEX [7]
      • QUT-NOISE-TIMIT [8]
    • synthetic speech data.zip is a JBOF dataset of fundamental frequency estimates of synthetic harmonic tone complexes in white noise.
    • noisy_speech.pkl and synthetic_speech.pkl are pickled Pandas dataframes of performance metrics derived from the above data for the following list of fundamental frequency estimation algorithms:
    • noisy speech evaluation.py and synthetic speech evaluation.py are Python programs to calculate the above Pandas dataframes from the above JBOF datasets. They calculate the following performance measures:
      • Gross Pitch Error (GPE), the percentage of pitches where the estimated pitch deviates from the true pitch by more than 20%.
      • Fine Pitch Error (FPE), the mean error of grossly correct estimates.
      • High/Low Octave Pitch Error (OPE), the percentage of pitches that are GPEs and happen to lie at an integer multiple of the true pitch.
      • Gross Remaining Error (GRE), the percentage of pitches that are GPEs but not OPEs.
      • Fine Remaining Bias (FRB), the median error of GREs.
      • True Positive Rate (TPR), the percentage of true positive voicing estimates.
      • False Positive Rate (FPR), the percentage of false positive voicing estimates.
      • False Negative Rate (FNR), the percentage of false negative voicing estimates.
      • F₁, the harmonic mean of precision and recall of the voicing decision.
    • Pipfile is a pipenv-compatible pipfile for installing all prerequisites necessary for running the above Python programs.

    The Python programs take about an hour to compute on a fast 2019 computer, and require at least 32 GB of memory.
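
    As an illustration of the first measure, a minimal sketch of GPE under the stated 20% threshold (the array names and the voiced-frame convention are assumptions, not the dataset's own code):

    import numpy as np

    def gross_pitch_error(true_f0, est_f0):
        # Consider only frames the ground truth marks as voiced (f0 > 0)
        voiced = true_f0 > 0
        deviation = np.abs(est_f0[voiced] - true_f0[voiced]) / true_f0[voiced]
        # GPE: share of voiced frames where the estimate is off by more than 20%
        return np.mean(deviation > 0.2)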

    References:

    1. John Kominek and Alan W Black. CMU ARCTIC database for speech synthesis, 2003.
    2. Paul C Bagshaw, Steven Hiller, and Mervyn A Jack. Enhanced Pitch Tracking and the Processing of F0 Contours for Computer Aided Intonation Teaching. In EUROSPEECH, 1993.
    3. F Plante, Georg F Meyer, and William A Ainsworth. A Pitch Extraction Reference Database. In Fourth European Conference on Speech Communication and Technology, pages 837–840, Madrid, Spain, 1995.
    4. Alan Wrench. MOCHA MultiCHannel Articulatory database: English, November 1999.
    5. Gregor Pirker, Michael Wohlmayr, Stefan Petrik, and Franz Pernkopf. A Pitch Tracking Corpus with Evaluation on Multipitch Tracking Scenario. page 4, 2011.
    6. John S. Garofolo, Lori F. Lamel, William M. Fisher, Jonathan G. Fiscus, David S. Pallett, Nancy L. Dahlgren, and Victor Zue. TIMIT Acoustic-Phonetic Continuous Speech Corpus, 1993.
    7. Andrew Varga and Herman J.M. Steeneken. Assessment for automatic speech recognition: II. NOISEX-92: A database and an experiment to study the effect of additive noise on speech recognition systems. Speech Communication, 12(3):247–251, July 1993.
    8. David B. Dean, Sridha Sridharan, Robert J. Vogt, and Michael W. Mason. The QUT-NOISE-TIMIT corpus for the evaluation of voice activity detection algorithms. Proceedings of Interspeech 2010, 2010.
    9. Man Mohan Sondhi. New methods of pitch extraction. Audio and Electroacoustics, IEEE Transactions on, 16(2):262–266, 1968.
    10. Myron J. Ross, Harry L. Shaffer, Asaf Cohen, Richard Freudberg, and Harold J. Manley. Average magnitude difference function pitch extractor. Acoustics, Speech and Signal Processing, IEEE Transactions on, 22(5):353–362, 1974.
    11. Na Yang, He Ba, Weiyang Cai, Ilker Demirkol, and Wendi Heinzelman. BaNa: A Noise Resilient Fundamental Frequency Detection Algorithm for Speech and Music. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(12):1833–1848, December 2014.
    12. Michael Noll. Cepstrum Pitch Determination. The Journal of the Acoustical Society of America, 41(2):293–309, 1967.
    13. Jong Wook Kim, Justin Salamon, Peter Li, and Juan Pablo Bello. CREPE: A Convolutional Representation for Pitch Estimation. arXiv:1802.06182 [cs, eess, stat], February 2018. arXiv: 1802.06182.
    14. Masanori Morise, Fumiya Yokomori, and Kenji Ozawa. WORLD: A Vocoder-Based High-Quality Speech Synthesis System for Real-Time Applications. IEICE Transactions on Information and Systems, E99.D(7):1877–1884, 2016.
    15. Kun Han and DeLiang Wang. Neural Network Based Pitch Tracking in Very Noisy Speech. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(12):2158–2168, December 2014.
    16. Pegah Ghahremani, Bagher BabaAli, Daniel Povey, Korbinian Riedhammer, Jan Trmal, and Sanjeev Khudanpur. A pitch extraction algorithm tuned for automatic speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 2494–2498. IEEE, 2014.
    17. Lee Ngee Tan and Abeer Alwan. Multi-band summary correlogram-based pitch detection for noisy speech. Speech Communication, 55(7-8):841–856, September 2013.
    18. Jesper Kjær Nielsen, Tobias Lindstrøm Jensen, Jesper Rindom Jensen, Mads Græsbøll Christensen, and Søren Holdt Jensen. Fast fundamental frequency estimation: Making a statistically efficient estimator computationally efficient. Signal Processing, 135:188–197, 2017.

  5. A subsection of England and Wales EPC households, joined with PPD data, used for simulation modelling

    • data.niaid.nih.gov
    • zenodo.org
    Updated Nov 15, 2022
    Cite
    Jenkinson, Ryan (2022). A subsection of England and Wales EPC households, joined with PPD data, used for simulation modelling [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7322966
    Dataset provided by
    Chan, Stephanie
    Lopez-Garcia, Daniel
    Jenkinson, Ryan
    Phillips, Tom
    License

    Open Government Licence 3.0: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/

    Area covered
    England
    Description

    If you want to give feedback on this dataset, or wish to request it in another form (e.g. csv), please fill out this survey here. We are a not-for-profit research organisation keen to see how others use our open models and tools, so all feedback is appreciated! It's a short form that takes 5 minutes to complete.

    Important Note: Before downloading this dataset, please read the License and Software Attribution section at the bottom.

    This dataset aligns with the work published in Centre for Net Zero's report "Hitting the Target". In this work, we simulate a range of interventions to model the situations in which we believe the UK will meet its 600,000 heat pump installation per year target by 2028. For full modelling assumptions and findings, read our report on our website.

    The code for running our simulation is open source here.

    This dataset contains over 9 million households that have been address-matched between Energy Performance Certificate (EPC) data and Price Paid Data (PPD). The code for our address matching is here. Since these source datasets are released under the Open Government Licence (OGL), this dataset is too. We model specific columns from various datasets, as set out in the methodology section of our report, to simplify and clean up this dataset for academic use. License information is also available in the appendix of our report above.

    The EPC data loaders can be found here (the data is here) and the rest of the schemas and data download locations can be found here.

    Note that this dataset is not regularly maintained or updated. It is correct as of January 2022. The data was curated and tested using dbt via this Github repository and would be simple to rerun on the latest data.

    The schema / data dictionary for this data can be found here.

    Our recommended way of loading this data is in Python. After downloading all "parts" of the dataset to a folder, you can run:

    import pandas as pd

    data = pd.read_parquet("path/to/data/folder/")

    Licenses and software attribution:

    For EPC, PPD and UK House Price Index data:

    For the EPC data, we are permitted to republish it provided we mention that all researchers who download this dataset must follow these copyright restrictions. We do not explicitly release any Royal Mail address data; instead we use these fields to generate a pseudonymised "address_cluster_id", which reflects a unique combination of the address lines and postcodes, as well as other metadata. Under ICO and GDPR guidelines this still counts as personal data, but we have taken measures to pseudonymise as much as possible to fulfil our obligations as a data processor. You must read this carefully before downloading the data, and ensure that you are using it for the research purposes set out in this copyright notice.

    Contains HM Land Registry data © Crown copyright and database right 2021. This data is licensed under the Open Government Licence v3.0.

    Contains OS data © Crown copyright and database right 2022.

    Contains Office for National Statistics data licensed under the Open Government Licence v.3.0.

    The OGL v3.0 license states that we are free to:

    • copy, publish, distribute and transmit the Information;
    • adapt the Information;
    • exploit the Information commercially and non-commercially, for example by combining it with other Information, or by including it in your own product or application.

    However, where we do any of the above, we must:

    • acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence.

    You can see more information here.

    For XOServe Off Gas Postcodes:

    This dataset has been released openly for all uses here.

    For the address matching:

    GNU Parallel: O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014

