100+ datasets found
  1. Data from: Calculated specific conductance using PHREEQCI

    • catalog.data.gov
    • data.usgs.gov
    Updated Oct 30, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Calculated specific conductance using PHREEQCI [Dataset]. https://catalog.data.gov/dataset/salinity-and-total-dissolved-solid-determinations-using-phreeqci
    Dataset updated
    Oct 30, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    PHREEQCI is a widely used geochemical computer program that can be used to calculate the chemical speciation and specific conductance of a natural water sample from its chemical composition (Charlton and Parkhurst, 2002; Parkhurst and Appelo, 1999). The specific conductance of a natural water calculated with PHREEQCI (Appelo, 2010) is reliable for pH greater than 4 and temperatures less than 35 °C (McCleskey and others, 2012b). An alternative method for calculating the specific conductance of natural waters is accurate over a large range of ionic strength (0.0004–0.7 mol/kg), pH (1–10), temperature (0–95 °C), and specific conductance (30–70,000 μS/cm) (McCleskey and others, 2012a). PHREEQCI input files for calculating the specific conductance of natural waters using the method described by McCleskey and others (2012a) have been created and are presented in this ScienceBase software release. The input files also incorporate three commonly used temperature compensation factors that can be used to determine the specific conductance at 25 °C: the constant factor (0.019), the non-linear factor (ISO 7888), and the temperature compensation factor described by McCleskey (2013), which is the most accurate for acidic waters (pH < 4). The specific conductance imbalance (SCI), which can be used along with charge balance as a quality-control check (McCleskey and others, 2012a), is also calculated:

    SCI (%) = 100 × (SC25,calculated − SC25,measured) / SC25,measured

    where SC25,calculated is the calculated specific conductance at 25 °C and SC25,measured is the measured specific conductance at 25 °C. Finally, the transport number (t), the relative contribution of a given ion to the overall electrical conductivity, is calculated for 30 ions. Transport numbers are useful for interpreting specific conductance data and for identifying the ions that contribute substantially to the specific conductance.

    References Cited

    Appelo, C.A.J., 2017, Specific conductance: how to calculate, to use, and the pitfalls, http://www.hydrochemistry.eu/exmpls/sc.html
    Ball, J.W., and Nordstrom, D.K., 1991, User's manual for WATEQ4F, with revised thermodynamic data base and test cases for calculating speciation of major, trace, and redox elements in natural waters: U.S. Geological Survey Open-File Report 91-0183, 193 p.
    Charlton, S.R., and Parkhurst, D.L., 2002, PhreeqcI--A graphical user interface to the geochemical model PHREEQC: U.S. Geological Survey Fact Sheet FS-031-02, 2 p.
    McCleskey, R.B., Nordstrom, D.K., Ryan, J.N., and Ball, J.W., 2012a, A new method of calculating electrical conductivity with applications to natural waters: Geochimica et Cosmochimica Acta, v. 77, p. 369-382. http://www.sciencedirect.com/science/article/pii/S0016703711006181
    McCleskey, R.B., Nordstrom, D.K., and Ryan, J.N., 2012b, Comparison of electrical conductivity calculation methods for natural waters: Limnology and Oceanography: Methods, v. 10, p. 952-967. http://aslo.org/lomethods/free/2012/0952.html
    McCleskey, R.B., 2013, New method for electrical conductivity temperature compensation: Environmental Science & Technology, v. 47, p. 9874-9881. http://dx.doi.org/10.1021/es402188r
    Parkhurst, D.L., and Appelo, C.A.J., 1999, User's guide to PHREEQC (Version 2)--A computer program for speciation, batch-reaction, one-dimensional transport, and inverse geochemical calculations: U.S. Geological Survey Water-Resources Investigations Report 99-4259, 312 p.
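    A quick worked illustration of the SCI check in R, with invented numbers (the release itself ships PHREEQCI input files, not R code):

    sc25_calc <- 1250   # calculated specific conductance at 25 °C, μS/cm (invented)
    sc25_meas <- 1190   # measured specific conductance at 25 °C, μS/cm (invented)
    sci <- 100 * (sc25_calc - sc25_meas) / sc25_meas
    sci                 # about +5 percent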

  2. Data for Calculating Efficient Outdoor Water Uses

    • data.cnra.ca.gov
    • data.ca.gov
    • +2more
    csv, xls, xlsx
    Updated Nov 3, 2025
    Cite
    California Department of Water Resources (2025). Data for Calculating Efficient Outdoor Water Uses [Dataset]. https://data.cnra.ca.gov/dataset/dwr-urban-water-use-objective-data
    Available download formats: csv(30313), csv(31935), xls(53207), xls(67217), csv(27393), xlsx(34948), xls(67784), csv(31020), csv(27585), xlsx(40203), csv(25852), xlsx(50988), xlsx(36455), xls(52009), csv(27362), csv(43749)
    Dataset updated
    Nov 3, 2025
    Dataset authored and provided by
    California Department of Water Resources (http://www.water.ca.gov/)
    License

    U.S. Government Works (https://www.usa.gov/government-works)
    License information was derived automatically

    Description

    October 31, 2025 (Final DWR Data)

    The 2018 Legislation required DWR to provide or otherwise identify data regarding the unique local conditions to support the calculation of an urban water use objective (CWC 10609(b)(2)(C)). The urban water use objective (UWUO) is an estimate of aggregate efficient water use for the previous year based on adopted water use efficiency standards and local service area characteristics for that year.

    UWUO is calculated as the sum of efficient indoor residential water use, efficient outdoor residential water use, efficient outdoor irrigation of landscape areas with dedicated irrigation meters for Commercial, Industrial, and Institutional (CII) water use, efficient water losses, and estimated water use in accordance with variances, as appropriate. Details of the UWUO calculations can be obtained from DWR’s Recommendations for Guidelines and Methodologies for Calculating Urban Water Use Objective (https://water.ca.gov/-/media/DWR-Website/Web-Pages/Programs/Water-Use-And-Efficiency/2018-Water-Conservation-Legislation/Performance-Measures/UWUO_GM_WUES-DWR-2021-01B_COMPLETE.pdf).

    The datasets provided in the links below enable urban retail water suppliers to calculate efficient outdoor water uses (both residential and CII), agricultural variances, variances for significant uses of water for dust control for horse corrals, and temporary provisions for water use for existing pools (as stated in the Water Boards’ draft regulation). DWR will provide technical assistance for estimating the remaining UWUO components, as needed. Data for calculating outdoor water uses include the following (a simplified calculation sketch follows the list):

    • Reference evapotranspiration (ETo) – ETo is evaporation from plant and soil surfaces plus transpiration through the leaves of the standardized grass surfaces over which weather stations stand. Standardization of the surfaces is required because evapotranspiration (ET) depends on combinations of several factors, making it impractical to take measurements under all sets of conditions. Plant factors, known as crop coefficients (Kc) or landscape coefficients (KL), are used to convert ETo to actual water use by a specific crop or plant. The ETo data that DWR provides to urban retail water suppliers for UWUO calculation purposes are derived from the California Irrigation Management Information System (CIMIS) program (https://cimis.water.ca.gov/). CIMIS is a network of over 150 automated weather stations throughout the state that measure weather data used to estimate ETo. CIMIS also provides daily maps of ETo on a 2-km grid using the Spatial CIMIS modeling approach, which couples satellite data with point measurements. The ETo value provided below for each urban retail water supplier is an area-weighted average of the Spatial CIMIS ETo.

    • Effective precipitation (Peff) – Peff is the portion of total precipitation that becomes available for plant growth. Peff is affected by soil type, slope, land cover type, and the intensity and duration of rainfall. DWR uses a soil water balance model, known as Cal-SIMETAW, to estimate daily Peff on a 4-km grid, and an area-weighted average value is calculated at the service area level. Cal-SIMETAW was developed by UC Davis and DWR and is widely used to quantify agricultural, and to some extent urban, water uses for the publication of DWR’s Water Plan Update. Peff from Cal-SIMETAW is capped at 25% of total precipitation to account for potential uncertainties in its estimation. Daily Peff at each grid point is aggregated to produce a weighted average annual or seasonal Peff at the service area level. The total precipitation that Cal-SIMETAW uses to estimate Peff comes from the Parameter-elevation Regressions on Independent Slopes Model (PRISM), a climate mapping model developed by the PRISM Climate Group at Oregon State University.

    • Residential Landscape Area Measurement (LAM) – The 2018 Legislation required DWR to provide each urban retail water supplier with data regarding the area of residential irrigable lands in a manner that can reasonably be applied to the standards (CWC 10609.6(b)). DWR delivered the LAM data to all retail water suppliers, and a tabular summary of selected data types will be provided here. The data summary in this file contains irrigable-irrigated (II), irrigable-not-irrigated (INI), and not-irrigable (NI) irrigation status classes, as well as horse corral areas (HCL_area), agricultural areas (Ag_area), and pool areas (Pool_area) for all retail suppliers.
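    A simplified sketch, in R, of the area-weighted ETo and capped Peff calculations described above (all grid values and weights are invented; DWR's actual processing uses Spatial CIMIS and Cal-SIMETAW):

    eto_grid <- c(48.2, 50.1, 46.9)       # annual ETo at grid cells overlapping a service area, inches (invented)
    area_wt  <- c(0.5, 0.3, 0.2)          # fraction of the service area in each grid cell
    eto_sa   <- sum(eto_grid * area_wt)   # area-weighted ETo for the supplier
    peff_raw <- 6.8                       # modeled effective precipitation, inches (invented)
    precip   <- 20.0                      # total precipitation, inches (invented)
    peff     <- min(peff_raw, 0.25 * precip)   # Peff is capped at 25% of total precipitation
    c(eto_sa, peff)                       # 48.51 inches ETo; Peff capped at 5 inches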

  3. Data from: Best Management Practices Statistical Estimator (BMPSE) Version 1.2.0

    • catalog.data.gov
    • data.usgs.gov
    Updated Nov 27, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Best Management Practices Statistical Estimator (BMPSE) Version 1.2.0 [Dataset]. https://catalog.data.gov/dataset/best-management-practices-statistical-estimator-bmpse-version-1-2-0
    Dataset updated
    Nov 27, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    The Best Management Practices Statistical Estimator (BMPSE) version 1.2.0 was developed by the U.S. Geological Survey (USGS), in cooperation with the Federal Highway Administration (FHWA) Office of Project Delivery and Environmental Review, to provide planning-level information about the performance of structural best management practices for decision makers, planners, and highway engineers to assess and mitigate possible adverse effects of highway and urban runoff on the Nation's receiving waters (Granato 2013, 2014; Granato and others, 2021). The BMPSE was assembled by using a Microsoft Access® database application to facilitate calculation of BMP performance statistics. Granato (2014) developed quantitative methods to estimate values of the trapezoidal-distribution statistics, correlation coefficients, and the minimum irreducible concentration (MIC) from available data. Granato (2014) developed the BMPSE to hold and process data from the International Stormwater Best Management Practices Database (BMPDB, www.bmpdatabase.org). Version 1.0 of the BMPSE contained a subset of the data from the 2012 version of the BMPDB; the current version of the BMPSE (1.2.0) contains a subset of the data from the December 2019 version of the BMPDB. Selected data from the BMPDB were screened for import into the BMPSE in consultation with Jane Clary, the data manager for the BMPDB. Modifications included identifying water quality constituents, making measurement units consistent, identifying paired inflow and outflow values, and converting BMPDB water quality values set as half the detection limit back to the detection limit. Total polycyclic aromatic hydrocarbon (PAH) values were added to the BMPSE from BMPDB data; they were calculated from individual PAH measurements at sites with enough data to calculate totals. The BMPSE tool can sort and rank the data, calculate plotting positions, calculate initial estimates, and calculate potential correlations to facilitate the distribution-fitting process (Granato, 2014). For water-quality ratio analysis, the BMPSE generates the input files and the list of filenames for each constituent within the Graphical User Interface (GUI). The BMPSE calculates the Spearman's rho (ρ) and Kendall's tau (τ) correlation coefficients with their respective 95-percent confidence limits and the probability that each correlation coefficient value is not significantly different from zero by using standard methods (Granato, 2014). If the 95-percent confidence limit values are of the same sign, then the correlation coefficient is statistically different from zero. For hydrograph extension, the BMPSE calculates ρ and τ between the inflow volume and the hydrograph-extension values (Granato, 2014). For volume reduction, the BMPSE calculates ρ and τ between the inflow volume and the ratio of outflow to inflow volumes (Granato, 2014). For water-quality treatment, the BMPSE calculates ρ and τ between the inflow concentrations and the ratio of outflow to inflow concentrations (Granato, 2014; 2020). The BMPSE also calculates ρ between the inflow and the outflow concentrations when a water-quality treatment analysis is done. The current version (1.2.0) of the BMPSE also has the option to calculate urban-runoff quality statistics from inflows to BMPs by using computer code developed for the Highway Runoff Database (Granato and Cazenas, 2009; Granato, 2019).

    References Cited

    Granato, G.E., 2013, Stochastic empirical loading and dilution model (SELDM) version 1.0.0: U.S. Geological Survey Techniques and Methods, book 4, chap. C3, 112 p., CD-ROM. https://pubs.usgs.gov/tm/04/c03
    Granato, G.E., 2014, Statistics for stochastic modeling of volume reduction, hydrograph extension, and water-quality treatment by structural stormwater runoff best management practices (BMPs): U.S. Geological Survey Scientific Investigations Report 2014–5037, 37 p. http://dx.doi.org/10.3133/sir20145037
    Granato, G.E., 2019, Highway-Runoff Database (HRDB) Version 1.1.0: U.S. Geological Survey data release. https://doi.org/10.5066/P94VL32J
    Granato, G.E., and Cazenas, P.A., 2009, Highway-Runoff Database (HRDB Version 1.0)--A data warehouse and preprocessor for the stochastic empirical loading and dilution model: Washington, D.C., U.S. Department of Transportation, Federal Highway Administration, FHWA-HEP-09-004, 57 p. https://pubs.usgs.gov/sir/2009/5269/disc_content_100a_web/FHWA-HEP-09-004.pdf
    Granato, G.E., Spaetzel, A.B., and Medalie, L., 2021, Statistical methods for simulating structural stormwater runoff best management practices (BMPs) with the stochastic empirical loading and dilution model (SELDM): U.S. Geological Survey Scientific Investigations Report 2020–5136, 41 p. https://doi.org/10.3133/sir20205136
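    A rough sketch of the water-quality treatment correlation analysis in R, using cor.test and a Fisher-z approximation for the 95-percent limits (all concentrations invented; the BMPSE's own implementation follows Granato, 2014, and may differ):

    inflow  <- c(120, 85, 240, 60, 150, 95, 310, 45)   # inflow concentrations (invented)
    outflow <- c(60, 50, 90, 45, 80, 55, 110, 40)      # paired outflow concentrations (invented)
    ratio <- outflow / inflow                          # ratio of outflow to inflow concentrations
    spearman <- cor.test(inflow, ratio, method = "spearman")
    kendall  <- cor.test(inflow, ratio, method = "kendall")
    # Approximate 95-percent confidence limits for rho via the Fisher z transform
    n <- length(inflow)
    z <- atanh(spearman$estimate)
    limits <- tanh(z + c(-1, 1) * qnorm(0.975) / sqrt(n - 3))
    limits   # if both limits share a sign, rho differs significantly from zero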

  4. Using Descriptive Statistics to Analyse Data in R

    • kaggle.com
    zip
    Updated May 9, 2024
    Cite
    Enrico68 (2024). Using Descriptive Statistics to Analyse Data in R [Dataset]. https://www.kaggle.com/datasets/enrico68/using-descriptive-statistics-to-analyse-data-in-r
    Available download formats: zip (105561 bytes)
    Dataset updated
    May 9, 2024
    Authors
    Enrico68
    Description

    • Load and view a real-world dataset in RStudio

    • Calculate “Measure of Frequency” metrics

    • Calculate “Measure of Central Tendency” metrics

    • Calculate “Measure of Dispersion” metrics

    • Use R’s in-built functions for additional data quality metrics

    • Create a custom R function to calculate descriptive statistics on any given dataset
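    A minimal sketch of what those steps look like in R, using the built-in mtcars data as a stand-in for the course dataset:

    data(mtcars)
    x <- mtcars$mpg
    table(cut(x, breaks = 5))         # measure of frequency
    mean(x); median(x)                # measures of central tendency
    var(x); sd(x); IQR(x); range(x)   # measures of dispersion
    summary(mtcars); sum(is.na(mtcars))   # built-in data quality checks
    # Custom descriptive-statistics function for any numeric vector
    describe <- function(v) c(n = length(v), missing = sum(is.na(v)),
                              mean = mean(v, na.rm = TRUE), median = median(v, na.rm = TRUE),
                              sd = sd(v, na.rm = TRUE), min = min(v, na.rm = TRUE),
                              max = max(v, na.rm = TRUE))
    describe(x)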

  5. Theory versus Data: How to Calculate R0?

    • plos.figshare.com
    pdf
    Updated Jun 1, 2023
    Cite
    Romulus Breban; Raffaele Vardavas; Sally Blower (2023). Theory versus Data: How to Calculate R0? [Dataset]. http://doi.org/10.1371/journal.pone.0000282
    Available download formats: pdf
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Romulus Breban; Raffaele Vardavas; Sally Blower
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    To predict the potential severity of outbreaks of infectious diseases such as SARS, HIV, TB and smallpox, a summary parameter, the basic reproduction number R0, is generally calculated from a population-level model. R0 specifies the average number of secondary infections caused by one infected individual during his/her entire infectious period at the start of an outbreak. R0 is used to assess the severity of the outbreak, as well as the strength of the medical and/or behavioral interventions necessary for control. Conventionally, it is assumed that if R0>1 the outbreak generates an epidemic, and if R0<1 the outbreak dies out.
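    For the textbook SIR model (a far simpler setting than the models compared in this paper), R0 reduces to the transmission rate divided by the recovery rate; a one-line illustration in R with assumed rates:

    beta  <- 0.3    # transmission rate per day (assumed)
    gamma <- 0.1    # recovery rate per day, i.e. a 10-day infectious period (assumed)
    R0 <- beta / gamma
    R0              # 3: each case infects three others in a fully susceptible population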

  6. Calculate Wind Fetch

    • data.csiro.au
    Updated Jul 30, 2021
    + more versions
    Cite
    Blake Seers (2021). Calculate Wind Fetch [Dataset]. https://data.csiro.au/collection/csiro:51733
    Dataset updated
    Jul 30, 2021
    Dataset provided by
    CSIRO (http://www.csiro.au/)
    Authors
    Blake Seers
    License

    https://data.csiro.au/dap/ws/v2/licences/1161

    Dataset funded by
    CSIRO (http://www.csiro.au/)
    Description

    Wind fetch is an important measurement in coastal applications. It is the unobstructed length of water over which wind from a certain direction can blow. The longer the fetch from a given direction, the more energy is imparted onto the surface of the water, resulting in a larger sea state; a site with a larger fetch is therefore more exposed to wind and more likely to experience larger sea states. This application calculates wind fetch for any site around the globe. Lineage: This shiny application uses the windfetch R package.
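    A toy illustration of the fetch idea in R: march from a site across a land/water grid along one bearing until land is reached (this is the underlying geometry only, not the windfetch package's API; the map, site, and cell size are invented):

    water <- matrix(TRUE, 50, 50)
    water[, 40:50] <- FALSE                  # land occupies the eastern columns (toy map)
    fetch_km <- function(grid, row, col, drow, dcol, cell_km = 1) {
      steps <- 0
      repeat {
        row <- row + drow; col <- col + dcol
        if (row < 1 || row > nrow(grid) || col < 1 || col > ncol(grid) || !grid[row, col]) break
        steps <- steps + 1
      }
      steps * cell_km
    }
    fetch_km(water, 25, 25, 0, 1)            # eastward fetch from the site at (25, 25): 14 km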

  7. Data from: Simpson's paradox and calculation of number needed to treat from meta-analysis

    • catalog.data.gov
    • odgavaprod.ogopendata.com
    • +1more
    Updated Sep 7, 2025
    Cite
    National Institutes of Health (2025). Simpson's paradox and calculation of number needed to treat from meta-analysis [Dataset]. https://catalog.data.gov/dataset/simpsons-paradox-and-calculation-of-number-needed-to-treat-from-meta-analysis
    Dataset updated
    Sep 7, 2025
    Dataset provided by
    National Institutes of Health
    Description

    Background

    Calculation of numbers needed to treat (NNT) is more complex from meta-analysis than from single trials. Treating the data as if it all came from one trial may lead to misleading results when the trial arms are imbalanced.

    Discussion

    An example is shown from a published Cochrane review in which the benefit of nursing intervention for smoking cessation is shown by formal meta-analysis of the individual trial results. However, if these patients were added together as if they all came from one trial, the direction of the effect appears to be reversed (due to Simpson's paradox). Whilst NNT from meta-analysis can be calculated from pooled Risk Differences, this is unlikely to be a stable method unless the event rates in the control groups are very similar. Since in practice event rates vary considerably, the use of a relative measure, such as the Odds Ratio or Relative Risk, is advocated. These can be applied to different levels of baseline risk to generate a risk-specific NNT for the treatment.

    Summary

    The method used to calculate NNT from meta-analysis should be clearly stated, and adding the patients from separate trials as if they all came from one trial should be avoided.
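    A short R sketch of the advocated approach: apply a pooled odds ratio to a chosen baseline risk to obtain a risk-specific NNT (all numbers invented):

    or  <- 1.5     # pooled odds ratio in favour of the intervention (invented)
    cer <- 0.10    # baseline (control) event rate, e.g. spontaneous quit rate (invented)
    eer <- or * cer / (1 - cer + or * cer)   # event rate implied by the OR at this baseline
    nnt <- 1 / (eer - cer)                   # the event (quitting) is desirable here
    round(nnt)     # about 23 patients treated per additional quitter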

  8. Super_store//BiStartX

    • kaggle.com
    zip
    Updated Mar 19, 2025
    Cite
    Christopher Kehinde (2025). Super_store//BiStartX [Dataset]. https://www.kaggle.com/datasets/christopherkehinde/super-storebistartx/discussion
    Available download formats: zip (1080881 bytes)
    Dataset updated
    Mar 19, 2025
    Authors
    Christopher Kehinde
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    The dataset includes the following relevant columns for sales analysis:

    • Order Date (for monthly analysis)
    • Sales (for revenue calculation)
    • Order ID (to determine the number of orders and average order size)

    I then processed the data to:

    • Convert dates into a proper format.
    • Aggregate sales data monthly.
    • Calculate total revenue and average order size per month.
    • Visualize trends.

    Here are the key sales metrics computed:

    • Total Revenue: sum of sales per month
    • Total Orders: number of unique orders per month
    • Total Items Sold: number of individual items sold
    • Average Order Size: revenue per order
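    A minimal sketch of that monthly aggregation in base R (column names and values assumed from the description above):

    sales <- data.frame(
      order_date = as.Date(c("2017-01-05", "2017-01-20", "2017-02-11")),
      order_id   = c("A1", "A1", "B2"),
      sales      = c(100, 50, 200)
    )
    sales$month <- format(sales$order_date, "%Y-%m")
    revenue   <- tapply(sales$sales, sales$month, sum)                              # total revenue per month
    orders    <- tapply(sales$order_id, sales$month, function(x) length(unique(x))) # unique orders per month
    avg_order <- revenue / orders                                                   # average order size
    cbind(revenue, orders, avg_order)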

  9. F

    Dataset for dissipation calculation

    • data.uni-hannover.de
    Updated Dec 12, 2024
    + more versions
    Cite
    Hannoversches Zentrum für Optische Technologien (2024). Dataset for dissipation calculation [Dataset]. https://data.uni-hannover.de/dataset/dataset-for-dissipation-calculation
    Dataset updated
    Dec 12, 2024
    Dataset authored and provided by
    Hannoversches Zentrum für Optische Technologien
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the data to calculate the spatial distribution of the dissipation as well as the absorption efficiencies of both Gold and Silicon designs, as presented in the article "Time-domain topology optimization of power dissipation in dispersive dielectric and plasmonic nanostructures". This includes the electric field distribution in 3D for multiple wavelengths (netCDF), the final density (netCDF), the design (STL) and material and simulation parameters (JSON) used in the optimization. The evaluation of this data can be performed using the code published on https://github.com/JoGed/dissipation-calculation

  10. Superstore Sales Analysis

    • kaggle.com
    zip
    Updated Oct 21, 2023
    Cite
    Ali Reda Elblgihy (2023). Superstore Sales Analysis [Dataset]. https://www.kaggle.com/datasets/aliredaelblgihy/superstore-sales-analysis/versions/1
    Available download formats: zip (3009057 bytes)
    Dataset updated
    Oct 21, 2023
    Authors
    Ali Reda Elblgihy
    Description

    Analyzing sales data is essential for any business looking to make informed decisions and optimize its operations. In this project, we will utilize Microsoft Excel and Power Query to conduct a comprehensive analysis of Superstore sales data. Our primary objectives will be to establish meaningful connections between various data sheets, ensure data quality, and calculate critical metrics such as the Cost of Goods Sold (COGS) and discount values. Below are the key steps and elements of this analysis:

    1- Data Import and Transformation:

    • Gather and import relevant sales data from various sources into Excel.
    • Utilize Power Query to clean, transform, and structure the data for analysis.
    • Merge and link different data sheets to create a cohesive dataset, ensuring that all data fields are connected logically.

    2- Data Quality Assessment:

    • Perform data quality checks to identify and address issues like missing values, duplicates, outliers, and data inconsistencies.
    • Standardize data formats and ensure that all data is in a consistent, usable state.

    3- Calculating COGS:

    • Determine the Cost of Goods Sold (COGS) for each product sold by considering factors like purchase price, shipping costs, and any additional expenses.
    • Apply appropriate formulas and calculations to determine COGS accurately.

    4- Discount Analysis:

    • Analyze the discount values offered on products to understand their impact on sales and profitability.
    • Calculate the average discount percentage, identify trends, and visualize the data using charts or graphs.

    5- Sales Metrics:

    • Calculate and analyze various sales metrics, such as total revenue, profit margins, and sales growth.
    • Utilize Excel functions to compute these metrics and create visuals for better insights.

    6- Visualization:

    • Create visualizations, such as charts, graphs, and pivot tables, to present the data in an understandable and actionable format.
    • Visual representations can help identify trends, outliers, and patterns in the data.

    7- Report Generation:

    • Compile the findings and insights into a well-structured report or dashboard, making it easy for stakeholders to understand and make informed decisions.

    Throughout this analysis, the goal is to provide a clear and comprehensive understanding of the Superstore's sales performance. By using Excel and Power Query, we can efficiently manage and analyze the data, ensuring that the insights gained contribute to the store's growth and success.
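    The project itself works in Excel and Power Query; for readers following along in code, a rough R equivalent of steps 3 to 5 (all column names and figures are invented):

    orders <- data.frame(
      sales     = c(120, 80, 200),
      quantity  = c(2, 1, 4),
      unit_cost = c(40, 50, 30),
      shipping  = c(5, 4, 10),
      discount  = c(0.10, 0.00, 0.20)
    )
    orders$cogs   <- orders$unit_cost * orders$quantity + orders$shipping   # step 3: COGS per order line
    orders$profit <- orders$sales - orders$cogs                             # input to step 5
    mean(orders$discount)                    # step 4: average discount rate
    sum(orders$sales)                        # step 5: total revenue
    sum(orders$profit) / sum(orders$sales)   # step 5: overall profit margin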

  11. Data from: Mass Spectrometry Adduct Calculator

    • acs.figshare.com
    zip
    Updated May 30, 2023
    Cite
    Madison R. Blumer; Christine H. Chang; Evangelina Brayfindley; Jamie R. Nunez; Sean M. Colby; Ryan S. Renslow; Thomas O. Metz (2023). Mass Spectrometry Adduct Calculator [Dataset]. http://doi.org/10.1021/acs.jcim.1c00579.s001
    Available download formats: zip
    Dataset updated
    May 30, 2023
    Dataset provided by
    ACS Publications
    Authors
    Madison R. Blumer; Christine H. Chang; Evangelina Brayfindley; Jamie R. Nunez; Sean M. Colby; Ryan S. Renslow; Thomas O. Metz
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    We describe the Mass Spectrometry Adduct Calculator (MSAC), an automated Python tool to calculate the adduct ion masses of a parent molecule. Here, adduct refers to a version of a parent molecule [M] that has gained or lost atoms and electrons, resulting in a charged ion, for example, [M + H]+. MSAC includes a database of 147 potential adducts and adduct/neutral loss combinations and their mass-to-charge ratios (m/z) as extracted from the NIST/EPA/NIH Mass Spectral Library (NIST17), the Global Natural Products Social Molecular Networking Public Spectral Libraries (GNPS), and MassBank of North America (MoNA). The calculator relies on user-selected subsets of the combined database to calculate expected m/z for adducts of molecules supplied as formulas. This tool is intended to help researchers create identification libraries to collect evidence for the presence of molecules in mass spectrometry data. While the included adduct database focuses on adducts typically detected during liquid chromatography–mass spectrometry analyses, users may supply their own lists of adducts and charge states for calculating expected m/z. We also analyzed statistics on adducts from spectra contained in the three selected mass spectral libraries. MSAC is freely available at https://github.com/pnnl/MSAC.
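    The underlying arithmetic is simple; a sketch in R of the general adduct formula, m/z = (multiplier × M + adduct mass) / |charge| (this mirrors the formula, not MSAC's own Python code):

    M <- 180.06339                     # monoisotopic mass of glucose, Da
    proton <- 1.007276                 # proton mass, Da
    mz_MH  <- (M + proton) / 1         # [M+H]+  : about 181.0707
    mz_2MH <- (2 * M + proton) / 1     # [2M+H]+ : about 361.1341
    mz_MmH <- (M - proton) / 1         # [M-H]-  : about 179.0561
    c(mz_MH, mz_2MH, mz_MmH)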

  12. Research data supporting 'How to Calculate Embodied Carbon'

    • repository.cam.ac.uk
    txt, xlsx
    Updated Jun 16, 2020
    Cite
    Orr, John; Gibbons, Orlando; Arnold, Will (2020). Research data supporting 'How to Calculate Embodied Carbon' [Dataset]. http://doi.org/10.17863/CAM.53699
    Available download formats: txt (3519 bytes), xlsx (191936 bytes)
    Dataset updated
    Jun 16, 2020
    Dataset provided by
    Apollo
    University of Cambridge
    Authors
    Orr, John; Gibbons, Orlando; Arnold, Will
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Excel spreadsheet for example calculation shown in the publication. See the Readme.txt file for a detailed description.

  13. Data Used to Calculate Flow Logs for Wells in the Wilson Creek and Crafton Subareas, Yucaipa, California, 2004

    • data.usgs.gov
    • s.cnmilf.com
    • +1more
    Updated May 13, 2024
    + more versions
    Cite
    Gregory Mendez; Alex Fiore (2024). Data Used to Calculate Flow Logs for Wells in the Wilson Creek and Crafton Subareas, Yucaipa, California, 2004 [Dataset]. http://doi.org/10.5066/P9FU6LA0
    Dataset updated
    May 13, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Authors
    Gregory Mendez; Alex Fiore
    License

    U.S. Government Works (https://www.usa.gov/government-works)
    License information was derived automatically

    Time period covered
    Apr 21, 2004 - Apr 23, 2004
    Area covered
    Yucaipa, California
    Description

    This dataset contains velocity and flow log data collected in 2004 for two Yucaipa Valley Water District (YVWD) public-supply wells, YVWD 55 and YVWD 56. Data were collected using the tracer-pulse method described in Izbicki and others (1999), in which a pulse of a rhodamine dye tracer is injected to a known depth in the well and the travel time of the tracer to a detector on the surface is measured. Velocity and cumulative flow are calculated from the dye-arrival times using methods described by Izbicki and others (1999). Flow for well YVWD 55 was calculated using the pump within the screened interval, which captured flow from above and below the pump intake. Flow for well YVWD 56 was calculated using the pump above the screened interval and all flow was upward. Included in this release are two tables with dye-arrival times, two tables with velocity- and cumulative-flow log calculations, and a table of relevant well properties. These data support the interpretations and conclusions ...
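    In its simplest form, the tracer-pulse arithmetic divides the injection depth by the dye travel time; a toy illustration in R with invented numbers (the full method of Izbicki and others, 1999, is more involved):

    depth_m  <- 50        # depth of dye injection below the surface detector, m (invented)
    travel_s <- 250       # dye travel time to the detector, s (invented)
    velocity <- depth_m / travel_s           # average upward velocity, m/s
    radius_m <- 0.2                          # well radius, m (invented)
    flow_m3s <- velocity * pi * radius_m^2   # cumulative flow past the injection depth
    flow_m3s                                 # about 0.025 m^3/s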

  14. The raw data used to calculate the values in Table 1.

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    • +1more
    Updated Oct 9, 2014
    Cite
    Delgado, Francisco Feijó; Manalis, Scott R.; Higgins, John M.; Malka, Roy (2014). The raw data used to calculate the values in Table 1. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001193999
    Dataset updated
    Oct 9, 2014
    Authors
    Delgado, Francisco Feijó; Manalis, Scott R.; Higgins, John M.; Malka, Roy
    Description

    The raw data used to calculate the values in Table 1.

  15. Geodata for noise calculations

    • data.europa.eu
    Updated Oct 14, 2025
    Cite
    (2025). Geodata for noise calculations [Dataset]. https://data.europa.eu/data/datasets/50e32ee9-c5b3-4c26-b7c9-4fcbcda41396/
    Dataset updated
    Oct 14, 2025
    Description

    The data product Geodata for noise calculations is a processed compilation of basic data from different producers for use in noise calculation programs. It includes datasets with buildings, elevation data, preschools, hard surfaces and road surfaces; noise calculations in general also use track data, road data, noise protection data and traffic data. The data is aggregated and homogenized. The product is an extract defined by the boundary of an investigation area, i.e. its geographical coverage is determined by the needs of a particular noise investigation, and the datasets are generated on ordering in connection with such an investigation. In a noise calculation for a road or railway section, Geodata needs to be supplemented with one of the data products Road data for noise calculations or Railway data for noise calculations. The purpose of the data product is to provide standardised data for noise calculations, which in turn enables more efficient and safer handling of information in noise investigations. In the long term, a standardised basis is expected to increase comparability between different noise investigations. Road, rail and traffic data describe the noise source, while buildings, topography and hard surfaces are used to calculate how the sound propagates in the surroundings.

  16. Calculation of evaporation data for three typical inland lakes on the Tibet Plateau based on Water Balance Formula

    • data.tpdc.ac.cn
    • tpdc.ac.cn
    zip
    Updated Jan 30, 2024
    Cite
    Weiyao MA; Weiqiang* MA; Ling BAI; Jianan HE; Lele SHI; Longtengfei MA; Yaoming MA (2024). Calculation of evaporation data for three typical inland lakes on the Tibet Plateau based on Water Balance Formula [Dataset]. http://doi.org/10.11888/Terre.tpdc.301032
    Available download formats: zip
    Dataset updated
    Jan 30, 2024
    Dataset provided by
    TPDC
    Authors
    Weiyao MA; Weiqiang* MA; Ling BAI; Jianan HE; Lele SHI; Longtengfei MA; Yaoming MA
    Description

    1) Data content

    Ice-free evaporation data of three typical mesoscale inland lakes (Bamu Co, Langa Co and Longmu Co) from 2019 to 2023. Lake locations: Bamu Co (90.59°E, 31.29°N), Langa Co (81.24°E, 30.72°N), Longmu Co (80.47°E, 34.60°N). Time resolution: 1 day; 1 month. Unit: mm.

    2) Data calculation method

    Water balance method, with the formula:

    E = P + R − ΔV (1)

    where E is evaporation, P is precipitation, R is runoff recharge, and ΔV is the change in lake water volume. All inputs were calculated from in-situ observations and satellite remote sensing data: P uses lakeside rain-gauge data and GPM data; R uses radar current meters at major runoff locations, periodic manual flow measurements, and the ERA-GloFAS dataset; ΔV uses water-level changes from an automatic water-level gauge 1 m from the lake, together with monthly area changes derived from Sentinel-2 data. ΔV is calculated as:

    ΔV = (1/3)(H2 − H1)(A1 + A2 + √(A1 × A2)) (2)

    ΔV = (H2 − H1)A (3)

    where H1, H2, A1 and A2 are the water levels and lake areas in different periods. Formula (2) is used for monthly changes in water volume. For daily changes, formula (3) is used with the lake area A of the current month, because the daily change in lake area is small enough to be ignored.

    3) Data quality description

    The water balance method requires many meteorological and hydrological observation parameters, and the observation of runoff recharge in particular involves many uncertainties. This dataset does not account for groundwater recharge, and surface runoff recharge is also difficult to observe. The dataset obtained by this method therefore needs to be updated as observation data accumulate, so as to obtain progressively more accurate values.

    4) Data application achievements and prospects

    Surface evaporation is an important link in the water cycle and an important subject in hydrology. The advantage of using the water balance method to calculate evaporation is that it can be applied under any weather conditions, without the many restrictions encountered in micrometeorology. With sufficient, reliable data, the water balance method can produce evaporation estimates of high precision. Accurate evaporation derived from observed data is an important link in studying lake water variation; by obtaining the evaporation of three lakes in different climate zones, the variation of lake-surface evaporation across climate zones can be better explored. See the file for the specific data content.
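    A worked example of formulas (1) to (3) in R, with invented numbers (the dataset's own values come from the observations described above):

    H1 <- 4520.10; H2 <- 4520.13   # lake water level at start/end of month, m (invented)
    A1 <- 180e6;   A2 <- 182e6     # lake area at start/end of month, m^2 (invented)
    dV <- (1/3) * (H2 - H1) * (A1 + A2 + sqrt(A1 * A2))   # formula (2): volume change, m^3
    A_mean <- (A1 + A2) / 2
    P <- 0.045                     # precipitation as depth over the lake, m (invented)
    R_vol <- 2.0e6                 # runoff term R as a volume, m^3 (invented)
    E <- P + R_vol / A_mean - dV / A_mean   # formula (1), expressed as a depth, m
    E * 1000                       # evaporation for the month, about 26 mm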

  17. Intermediate data for TE calculation

    • zenodo.org
    bin, csv
    Updated May 9, 2025
    Cite
    Yue Liu; Yue Liu (2025). Intermediate data for TE calculation [Dataset]. http://doi.org/10.5281/zenodo.10373032
    Available download formats: csv, bin
    Dataset updated
    May 9, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Yue Liu; Yue Liu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset includes intermediate data from RiboBase that generates translation efficiency (TE). The code to generate the files can be found at https://github.com/CenikLab/TE_model.

    We uploaded demo HeLa .ribo files, but due to the large storage requirements of the full dataset, we recommend contacting Dr. Can Cenik directly to request access to the complete version of RiboBase if you need the original data.

    The detailed explanation for each file:

    human_flatten_ribo_clr.rda: ribosome profiling clr normalized data with GEO GSM ids in columns and genes in rows in human.

    human_flatten_rna_clr.rda: matched RNA-seq clr normalized data with GEO GSM ids in columns and genes in rows in human.

    human_flatten_te_clr.rda: TE clr data with GEO GSM ids in columns and genes in rows in human.

    human_TE_cellline_all_plain.csv: TE clr data with genes in rows and cell lines in columns in human.

    human_RNA_rho_new.rda: matched RNA-seq proportional similarity data as genes by genes matrix in human.

    human_TE_rho.rda: TE proportional similarity data as genes by genes matrix in human.

    mouse_flatten_ribo_clr.rda: ribosome profiling clr normalized data with GEO GSM ids in columns and genes in rows in mouse.

    mouse_flatten_rna_clr.rda: matched RNA-seq clr normalized data with GEO GSM ids in columns and genes in rows in mouse.

    mouse_flatten_te_clr.rda: TE clr data with GEO GSM ids in columns and genes in rows in mouse.

    mouse_TE_cellline_all_plain.csv: TE clr data with genes in rows and cell lines in columns in mouse.

    mouse_RNA_rho_new.rda: matched RNA-seq proportional similarity data as genes by genes matrix in mouse.

    mouse_TE_rho.rda: TE proportional similarity data as genes by genes matrix in mouse.

    All the data passed quality control. There are 1054 human samples and 835 mouse samples:
    * coverage > 0.1 X
    * CDS percentage > 70%
    * R2 between RNA and RIBO >= 0.188 (remove outliers)

    All ribosome profiling data here are non-deduplicated, winsorized data paired with deduplicated, non-winsorized RNA-seq data (although the files are named "flatten", that refers only to the naming format).

    ####code
    If you need to read .rda data, load it in R with load("rdaname.rda").

    If you need to calculate proportional similarity from clr data:

    library(propr)
    # lr2rho() is an internal propr helper that computes the rho proportionality
    # matrix from clr-transformed data
    human_TE_homo_rho <- propr:::lr2rho(as.matrix(clr_data))
    # carry the gene labels over from the clr matrix to the rho matrix
    rownames(human_TE_homo_rho) <- colnames(human_TE_homo_rho) <- rownames(clr_data)

  18. Data Set for the Development and Testing of the MC23 Nonclassical-Energy...

    • zenodo.org
    xz
    Updated Sep 20, 2024
    Cite
    Jie J. Bao; Jie J. Bao; Dayou Zhang; Dayou Zhang; Shaoting Zhang; Laura Gagliardi; Laura Gagliardi; Donald G. Truhlar; Donald G. Truhlar; Shaoting Zhang (2024). Data Set for the Development and Testing of the MC23 Nonclassical-Energy Functional [Dataset]. http://doi.org/10.5281/zenodo.10724676
    Available download formats: xz
    Dataset updated
    Sep 20, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jie J. Bao; Jie J. Bao; Dayou Zhang; Dayou Zhang; Shaoting Zhang; Laura Gagliardi; Laura Gagliardi; Donald G. Truhlar; Donald G. Truhlar; Shaoting Zhang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Sep 20, 2024
    Description

    This dataset contains files used to train and test the Multi-Configuration 23 (MC23) functional and to compare the results to other methods. It includes files to carry out electronic structure calculations. These include molecular geometries in xyz format, OpenMolcas input files for CASSCF calculations, converged CASSCF natural orbitals, OpenMolcas basis set files, and Gaussian 16 formatted checkpoint files for KS-DFT calculations. It also includes data used for data processing such as stoichiometries, absolute energies, and reference energies.

    Each file in this dataset is a .tar.xz archive. One can extract them by the following command:

    tar -xJf name_of_archive.tar.xz

    Below is a description of the content of each archive.

    gaussian_16_fchk.tar.xz contains Gaussian 16 formatted checkpoint files for all KS-DFT calculations used in this work. The files in the archive are named as functional/database/system.fchk

    openmolcas_basis_set.tar.xz contains OpenMolcas basis set files used for multireference calculations. To reproduce the results in this work, the basis set files should be placed in the “basis_library” directory in the OpenMolcas installation location.

    openmolcas_wave_function.tar.xz contains files needed by OpenMolcas to reproduce the CASSCF wave function used in this work. The files in the archive are named database/system.*.

    • The file system.xyz contains the Cartesian coordinates. Note that for Data Set 2, the coordinates are in the input files system.inp.
    • The file system.inp contains the OpenMolcas input file to perform CASSCF calculations.
    • The files system.RasOrb, system.rasscf.h5, and system.rasscf.molden contain the converged CASSCF natural orbitals.

    gaussian_16_stoichiometry_energy.tar.xz and openmolcas_stoichiometry_energy.tar.xz contain files used for data processing.

    • Files with names like database.ref contain information used to calculate the final energies and errors. They are tab-delimited files. Each row represents an energy difference (e.g. atomization energy, barrier height, etc.). The first column contains the name of the energy difference (note: spaces may be present in this column). This is followed by the file names of each electronic structure calculation and the stoichiometries used to calculate the energy difference from the absolute energies. Each name or stoichiometry occupies one column. The second from the last column contains the reference value in kcal/mol. The reference values contain spin–orbit coupling. The last column contains the factor by which the final energy should be divided. This factor usually equals 1, but it can be greater than 1 for databases calculating atomization energies per bond or per atom. (A worked sketch follows this list.)
    • Files with names like method.elist contain the absolute energies of each electronic structure calculation. They are tab-delimited files. Each row represents an electronic structure calculation, and each row always contains two columns. The first column is the file name of the calculation in the format database/system. The second column is the absolute energy in atomic units extracted from the output file of electronic structure programs.
    • The file named SOC.dat contains the spin–orbit coupling term in kcal/mol to be added to each electronic structure calculation prior to calculating energy differences. It has the same format as files with names like method.elist.
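    To make the .ref/.elist pairing concrete, a small R sketch (every file name, energy, stoichiometry, and reference value below is invented; 1 hartree = 627.5095 kcal/mol):

    elist <- c("DB/H2" = -1.170, "DB/H" = -0.500)   # file name -> absolute energy, a.u. (invented)
    files  <- c("DB/H2", "DB/H")                    # calculation names from one .ref row
    stoich <- c(-1, 2)                              # 2*E(H) - E(H2): an atomization energy
    ref_kcal <- 109.5                               # reference value, kcal/mol (invented)
    divisor  <- 1                                   # last column: per-bond/per-atom divisor
    calc_kcal <- sum(stoich * elist[files]) * 627.5095 / divisor
    calc_kcal - ref_kcal                            # signed error against the reference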

    The database names in the directory names use a slightly different convention than the ones in the article describing MC23. A prefix DS2_ or DS3_ is used to indicate the data set to which a database belongs, and the number of data points is removed from the database name. For example, the MR-MGN-BE8 database from Data Set 2 has a file name DS2_MR-MGN-BE.

  19. Data from USGS National Water Quality Laboratory methods used to calculate and compare detection limits estimated using single- and multi-concentration spike-based and blank-based procedures

    • catalog.data.gov
    • data.usgs.gov
    • +1more
    Updated Nov 26, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Data from USGS National Water Quality Laboratory methods used to calculate and compare detection limits estimated using single- and multi-concentration spike-based and blank-based procedures [Dataset]. https://catalog.data.gov/dataset/data-from-usgs-national-water-quality-laboratory-methods-used-to-calculate-and-compare-det
    Dataset updated
    Nov 26, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    This dataset provides the expected and determined concentrations of selected inorganic and organic analytes for spiked reagent-water samples (calibration standards and limit of quantitation standards) that were used to calculate detection limits by using the United States Environmental Protection Agency’s (USEPA) Method Detection Limit (MDL) version 1.11 or 2.0 procedures, ASTM International’s Within-Laboratory Critical Level standard procedure D7783-13, and, for five pharmaceutical compounds, by USEPA’s Lowest Concentration Minimum Reporting Level procedure. Also provided are determined concentration data for reagent-water laboratory blank samples, classified as either instrument blank or set blank samples, and reagent-water blind-blank samples submitted by the USGS Quality System Branch, that were used to calculate blank-based detection limits by using the USEPA MDL version 2.0 procedure or procedures described in National Water Quality Laboratory Technical Memorandum 2016.02, http://wwwnwql.cr.usgs.gov/tech_memos/nwql.2016-02.pdf. The determined detection limits are provided and compared in the related external publication at https://doi.org/10.1016/j.talanta.2021.122139.
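    For orientation, the spike-based USEPA MDL computes the detection limit as the standard deviation of replicate spiked determinations multiplied by the one-sided 99-percent Student's t value; a sketch in R with invented determinations:

    spikes <- c(0.52, 0.48, 0.55, 0.50, 0.47, 0.53, 0.49)   # determined concentrations, µg/L (invented)
    n <- length(spikes)
    mdl <- sd(spikes) * qt(0.99, df = n - 1)   # Student's t at 99 percent, n - 1 degrees of freedom
    mdl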

  20. Medicare Data to Calculate Your Primary Service Areas

    • data.wu.ac.at
    application/unknown
    Updated Apr 4, 2018
    Cite
    U.S. Department of Health & Human Services (2018). Medicare Data to Calculate Your Primary Service Areas [Dataset]. https://data.wu.ac.at/odso/data_gov/YmYwN2MwMzgtZDcwMy00MTk0LWIwZjQtMDBlOTkxMTE4ZGFi
    Available download formats: application/unknown
    Dataset updated
    Apr 4, 2018
    Dataset provided by
    United States Department of Health and Human Services (http://www.hhs.gov/)
    Description

    The following data is being made available to applicants to the Medicare Shared Savings Program (Shared Savings Program), in order to allow them to calculate their share of services in each applicable Primary Service Area (PSA).
