PHREEQCI is a widely used geochemical computer program that can calculate the chemical speciation and specific conductance of a natural water sample from its chemical composition (Charlton and Parkhurst, 2002; Parkhurst and Appelo, 1999). The specific conductance of a natural water calculated with PHREEQCI (Appelo, 2010) is reliable for pH greater than 4 and temperatures less than 35 °C (McCleskey and others, 2012b). An alternative method for calculating the specific conductance of natural waters is accurate over a large range of ionic strength (0.0004–0.7 mol/kg), pH (1–10), temperature (0–95 °C), and specific conductance (30–70,000 μS/cm) (McCleskey and others, 2012a). PHREEQCI input files for calculating the specific conductance of natural waters using the method described by McCleskey and others (2012a) have been created and are presented in this ScienceBase software release. The input files also incorporate three commonly used temperature compensation factors that can be used to determine the specific conductance at 25 °C: the constant factor (0.019), the non-linear factor (ISO 7888), and the temperature compensation factor described by McCleskey (2013), which is the most accurate for acidic waters (pH < 4). The specific conductance imbalance (SCI), which can be used along with charge balance as a quality-control check (McCleskey and others, 2012a), is also calculated: SCI (%) = 100 × (SC25 calculated − SC25 measured) / (SC25 measured), where SC25 calculated is the calculated specific conductance at 25 °C and SC25 measured is the measured specific conductance at 25 °C. Finally, the transport number (t), the relative contribution of a given ion to the overall electrical conductivity, is calculated for 30 ions. Transport numbers are useful for interpreting specific conductance data and identifying the ions that substantially contribute to the specific conductance.
References Cited
Appelo, C.A.J., 2017, Specific conductance: how to calculate, to use, and the pitfalls: http://www.hydrochemistry.eu/exmpls/sc.html
Ball, J.W., and Nordstrom, D.K., 1991, User's manual for WATEQ4F, with revised thermodynamic data base and test cases for calculating speciation of major, trace, and redox elements in natural waters: U.S. Geological Survey Open-File Report 91-0183, 193 p.
Charlton, S.R., and Parkhurst, D.L., 2002, PhreeqcI--A graphical user interface to the geochemical model PHREEQC: U.S. Geological Survey Fact Sheet FS-031-02, 2 p.
McCleskey, R.B., Nordstrom, D.K., Ryan, J.N., and Ball, J.W., 2012a, A new method of calculating electrical conductivity with applications to natural waters: Geochimica et Cosmochimica Acta, v. 77, p. 369–382, http://www.sciencedirect.com/science/article/pii/S0016703711006181
McCleskey, R.B., Nordstrom, D.K., and Ryan, J.N., 2012b, Comparison of electrical conductivity calculation methods for natural waters: Limnology and Oceanography: Methods, v. 10, p. 952–967, http://aslo.org/lomethods/free/2012/0952.html
McCleskey, R.B., 2013, New method for electrical conductivity temperature compensation: Environmental Science & Technology, v. 47, p. 9874–9881, http://dx.doi.org/10.1021/es402188r
Parkhurst, D.L., and Appelo, C.A.J., 1999, User's guide to PHREEQC (Version 2)--a computer program for speciation, batch-reaction, one-dimensional transport, and inverse geochemical calculations: U.S. Geological Survey Water-Resources Investigations Report 99-4259, 312 p.
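The SCI formula given above is simple enough to sketch as a small Python helper (the function and variable names are illustrative, not part of the PHREEQCI release):

```python
def specific_conductance_imbalance(sc25_calculated, sc25_measured):
    """Specific conductance imbalance (SCI), in percent.

    SCI (%) = 100 * (SC25 calculated - SC25 measured) / SC25 measured
    Values near zero indicate good agreement; McCleskey and others (2012a)
    use SCI alongside charge balance as a quality-control check.
    """
    return 100.0 * (sc25_calculated - sc25_measured) / sc25_measured

# Example: a calculated SC of 940 uS/cm against a measured 1000 uS/cm
# gives an SCI of -6 percent (calculated value is 6 percent low).
print(specific_conductance_imbalance(940.0, 1000.0))
```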
License: U.S. Government Works, https://www.usa.gov/government-works
License information was derived automatically
October 31, 2025 (Final DWR Data)
The 2018 Legislation required DWR to provide or otherwise identify data regarding the unique local conditions to support the calculation of an urban water use objective (CWC 10609.(b)(2)(C)). The urban water use objective (UWUO) is an estimate of aggregate efficient water use for the previous year based on adopted water use efficiency standards and local service area characteristics for that year.
UWUO is calculated as the sum of efficient indoor residential water use, efficient outdoor residential water use, efficient outdoor irrigation of landscape areas with dedicated irrigation meter for Commercial, Industrial, and Institutional (CII) water use, efficient water losses, and an estimated water use in accordance with variances, as appropriate. Details of urban water use objective calculations can be obtained from DWR’s Recommendations for Guidelines and Methodologies document (Recommendations for Guidelines and Methodologies for Calculating Urban Water Use Objective - https://water.ca.gov/-/media/DWR-Website/Web-Pages/Programs/Water-Use-And-Efficiency/2018-Water-Conservation-Legislation/Performance-Measures/UWUO_GM_WUES-DWR-2021-01B_COMPLETE.pdf).
The datasets provided in the links below enable urban retail water suppliers to calculate efficient outdoor water uses (both residential and CII), agricultural variances, variances for significant uses of water for dust control in horse corrals, and temporary provisions for water use for existing pools (as stated in the Water Boards’ draft regulation). DWR will provide technical assistance for estimating the remaining UWUO components, as needed. Data for calculating outdoor water uses include:
• Reference evapotranspiration (ETo) – ETo is evaporation from plant and soil surfaces plus transpiration through the leaves of a standardized grass surface over which weather stations stand. Standardization of the surface is required because evapotranspiration (ET) depends on combinations of several factors, making it impractical to take measurements under all sets of conditions. Plant factors, known as crop coefficients (Kc) or landscape coefficients (KL), are used to convert ETo to actual water use by a specific crop or plant. The ETo data that DWR provides to urban retail water suppliers for urban water use objective calculations are derived from the California Irrigation Management Information System (CIMIS) program (https://cimis.water.ca.gov/). CIMIS is a network of over 150 automated weather stations throughout the state that measure the weather data used to estimate ETo. CIMIS also provides daily maps of ETo on a 2-km grid using the Spatial CIMIS modeling approach, which couples satellite data with point measurements. The ETo value provided below for each urban retail water supplier is an area-weighted average of the Spatial CIMIS ETo.
• Effective precipitation (Peff) – Peff is the portion of total precipitation that becomes available for plant growth. Peff is affected by soil type, slope, land cover type, and the intensity and duration of rainfall. DWR uses a soil water balance model, known as Cal-SIMETAW, to estimate daily Peff on a 4-km grid, and an area-weighted average value is calculated at the service area level. Cal-SIMETAW was developed by UC Davis and DWR and is widely used to quantify agricultural, and to some extent urban, water uses for DWR’s Water Plan Update. Peff from Cal-SIMETAW is capped at 25% of total precipitation to account for potential uncertainties in its estimation. Daily Peff at each grid point is aggregated to produce weighted average annual or seasonal Peff at the service area level. The total precipitation that Cal-SIMETAW uses to estimate Peff comes from the Parameter-elevation Regressions on Independent Slopes Model (PRISM), a climate mapping model developed by the PRISM Climate Group at Oregon State University.
• Residential Landscape Area Measurement (LAM) – The 2018 Legislation required DWR to provide each urban retail water supplier with data regarding the area of residential irrigable lands in a manner that can reasonably be applied to the standards (CWC 10609.6.(b)). DWR delivered the LAM data to all retail water suppliers, and a tabular summary of selected data types will be provided here. The data summary that is provided in this file contains irrigable-irrigated (II), irrigable-not-irrigated (INI), and not irrigable (NI) irrigation status classes, as well as horse corral areas (HCL_area), agricultural areas (Ag_area), and pool areas (Pool_area) for all retail suppliers.
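Both the ETo and Peff values described above are area-weighted averages of gridded values over a supplier's service area. A minimal sketch of that aggregation (the function name and sample numbers are illustrative, not DWR's implementation):

```python
def area_weighted_average(values, areas):
    """Area-weighted average of gridded values (e.g., ETo or Peff)
    over the grid cells intersecting a service area.

    values: per-cell values
    areas:  area of each cell's overlap with the service area
            (any consistent unit)
    """
    if len(values) != len(areas) or not values:
        raise ValueError("values and areas must be equal-length and non-empty")
    return sum(v * a for v, a in zip(values, areas)) / sum(areas)

# Three hypothetical grid cells overlapping a service area:
eto_cells = [52.0, 54.0, 50.0]   # annual ETo per cell, inches
overlap = [1.0, 2.0, 1.0]        # overlap area per cell, km^2
print(area_weighted_average(eto_cells, overlap))  # -> 52.5
```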
The Best Management Practices Statistical Estimator (BMPSE) version 1.2.0 was developed by the U.S. Geological Survey (USGS), in cooperation with the Federal Highway Administration (FHWA) Office of Project Delivery and Environmental Review, to provide planning-level information about the performance of structural best management practices for decision makers, planners, and highway engineers to assess and mitigate possible adverse effects of highway and urban runoff on the Nation's receiving waters (Granato, 2013, 2014; Granato and others, 2021). The BMPSE was assembled by using a Microsoft Access® database application to facilitate calculation of BMP performance statistics. Granato (2014) developed quantitative methods to estimate values of the trapezoidal-distribution statistics, correlation coefficients, and the minimum irreducible concentration (MIC) from available data. Granato (2014) developed the BMPSE to hold and process data from the International Stormwater Best Management Practices Database (BMPDB, www.bmpdatabase.org). Version 1.0 of the BMPSE contained a subset of the data from the 2012 version of the BMPDB; the current version of the BMPSE (1.2.0) contains a subset of the data from the December 2019 version of the BMPDB. Selected data from the BMPDB were screened for import into the BMPSE in consultation with Jane Clary, the data manager for the BMPDB. Modifications included identifying water quality constituents, making measurement units consistent, identifying paired inflow and outflow values, and converting BMPDB water quality values set as half the detection limit back to the detection limit. Total polycyclic aromatic hydrocarbon (PAH) values were added to the BMPSE from BMPDB data; they were calculated from individual PAH measurements at sites with enough data to calculate totals.
The BMPSE tool can sort and rank the data, calculate plotting positions, calculate initial estimates, and calculate potential correlations to facilitate the distribution-fitting process (Granato, 2014). For water-quality ratio analysis, the BMPSE generates the input files and the list of filenames for each constituent within the graphical user interface (GUI). The BMPSE calculates the Spearman's rho (ρ) and Kendall's tau (τ) correlation coefficients with their respective 95-percent confidence limits and the probability that each correlation coefficient value is not significantly different from zero by using standard methods (Granato, 2014). If the 95-percent confidence limit values are of the same sign, then the correlation coefficient is statistically different from zero. For hydrograph extension, the BMPSE calculates ρ and τ between the inflow volume and the hydrograph-extension values (Granato, 2014). For volume reduction, the BMPSE calculates ρ and τ between the inflow volume and the ratio of outflow to inflow volumes (Granato, 2014). For water-quality treatment, the BMPSE calculates ρ and τ between the inflow concentrations and the ratio of outflow to inflow concentrations (Granato, 2014, 2020). The BMPSE also calculates ρ between the inflow and the outflow concentrations when a water-quality treatment analysis is done. The current version (1.2.0) of the BMPSE also has the option to calculate urban-runoff quality statistics from inflows to BMPs by using computer code developed for the Highway Runoff Database (Granato and Cazenas, 2009; Granato, 2019).
References Cited
Granato, G.E., 2013, Stochastic empirical loading and dilution model (SELDM) version 1.0.0: U.S. Geological Survey Techniques and Methods, book 4, chap. C3, 112 p., CD-ROM, https://pubs.usgs.gov/tm/04/c03
Granato, G.E., 2014, Statistics for stochastic modeling of volume reduction, hydrograph extension, and water-quality treatment by structural stormwater runoff best management practices (BMPs): U.S. Geological Survey Scientific Investigations Report 2014–5037, 37 p., http://dx.doi.org/10.3133/sir20145037
Granato, G.E., 2019, Highway-Runoff Database (HRDB) Version 1.1.0: U.S. Geological Survey data release, https://doi.org/10.5066/P94VL32J
Granato, G.E., and Cazenas, P.A., 2009, Highway-Runoff Database (HRDB Version 1.0)--A data warehouse and preprocessor for the stochastic empirical loading and dilution model: Washington, D.C., U.S. Department of Transportation, Federal Highway Administration, FHWA-HEP-09-004, 57 p., https://pubs.usgs.gov/sir/2009/5269/disc_content_100a_web/FHWA-HEP-09-004.pdf
Granato, G.E., Spaetzel, A.B., and Medalie, L., 2021, Statistical methods for simulating structural stormwater runoff best management practices (BMPs) with the stochastic empirical loading and dilution model (SELDM): U.S. Geological Survey Scientific Investigations Report 2020–5136, 41 p., https://doi.org/10.3133/sir20205136
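The same-sign confidence-limit test described above can be sketched for Spearman's rho using the Fisher z-transform; the variance factor 1.06/(n-3) follows a standard approximation for Spearman's rho. This is an illustration of the test, not the BMPSE's code, and the rank computation below assumes no tied values:

```python
import math
import numpy as np

def spearman_rho_ci(x, y, z_crit=1.96):
    """Spearman's rho with approximate 95-percent confidence limits.

    Ranks the data (no tie handling), computes Pearson correlation of
    the ranks, then builds a confidence interval on the Fisher z scale
    with variance ~1.06/(n-3). The correlation is taken as significantly
    different from zero when both limits share the same sign.
    """
    x, y = np.asarray(x), np.asarray(y)
    rx = np.argsort(np.argsort(x)) + 1   # ranks of x (assumes no ties)
    ry = np.argsort(np.argsort(y)) + 1
    rho = np.corrcoef(rx, ry)[0, 1]
    half = z_crit * math.sqrt(1.06 / (len(x) - 3))
    z = math.atanh(rho)
    lo, hi = math.tanh(z - half), math.tanh(z + half)
    return rho, lo, hi, (lo > 0) == (hi > 0)

x = list(range(1, 11))
y = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
rho, lo, hi, significant = spearman_rho_ci(x, y)
print(round(rho, 3), significant)  # -> 0.939 True
```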
Load and view a real-world dataset in RStudio
• Calculate “Measure of Frequency” metrics
• Calculate “Measure of Central Tendency” metrics
• Calculate “Measure of Dispersion” metrics
• Use R’s in-built functions for additional data quality metrics
• Create a custom R function to calculate descriptive statistics on any given dataset
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
To predict the potential severity of outbreaks of infectious diseases such as SARS, HIV, TB and smallpox, a summary parameter, the basic reproduction number R0, is generally calculated from a population-level model. R0 specifies the average number of secondary infections caused by one infected individual during his/her entire infectious period at the start of an outbreak. R0 is used to assess the severity of the outbreak, as well as the strength of the medical and/or behavioral interventions necessary for control. Conventionally, it is assumed that if R0 > 1 the outbreak generates an epidemic, and if R0 < 1 the outbreak dies out.
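For the simplest compartmental (SIR) population-level model, R0 reduces to the ratio of the transmission rate to the recovery rate. A hedged sketch of that textbook case (the parameter values are illustrative only):

```python
def basic_reproduction_number(beta, gamma):
    """R0 for a basic SIR model in a fully susceptible population.

    beta:  transmission rate (new infections per infectious person per day)
    gamma: recovery rate (1 / mean infectious period in days)
    """
    return beta / gamma

def herd_immunity_threshold(r0):
    """Fraction that must be immune for the effective reproduction
    number to drop below 1 (meaningful only when r0 > 1)."""
    return 1.0 - 1.0 / r0

r0 = basic_reproduction_number(beta=0.4, gamma=0.2)
print(r0)                           # -> 2.0: each case infects two others
print(herd_immunity_threshold(r0))  # -> 0.5
```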
License: https://data.csiro.au/dap/ws/v2/licences/1161
Wind fetch is an important measurement in coastal applications: it is the unobstructed length of water over which wind from a given direction can blow. The longer the fetch from a given direction, the more energy is imparted onto the water surface and the larger the resulting sea state, so sites with long fetches are more exposed to wind-driven waves. This application calculates wind fetch for any site around the globe. Lineage: This shiny application uses the windfetch R package.
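The underlying idea can be sketched in a few lines: march outward from the site along a compass bearing until land is hit or a cap is reached. Everything below (the toy is_land predicate, the step size, the cap) is illustrative and is not how the windfetch package is implemented:

```python
import math

def fetch_km(lat, lon, bearing_deg, is_land, step_km=1.0, max_km=300.0):
    """Approximate wind fetch from (lat, lon) along a compass bearing.

    Steps outward in step_km increments on an equirectangular
    approximation (adequate for short distances away from the poles)
    and returns the distance at which is_land(lat, lon) first becomes
    True, capped at max_km for open water.
    """
    b = math.radians(bearing_deg)
    d = 0.0
    while d < max_km:
        d += step_km
        lat += (step_km * math.cos(b)) / 111.32
        lon += (step_km * math.sin(b)) / (111.32 * math.cos(math.radians(lat)))
        if is_land(lat, lon):
            return d
    return max_km

# Toy coastline: land everywhere east of the prime meridian.
coast = lambda lat, lon: lon > 0.0
# Site 0.5 degrees west of the coast, wind blowing from the west:
print(fetch_km(0.0, -0.5, 90.0, coast))  # -> 56.0
```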
Background Calculation of numbers needed to treat (NNT) is more complex from meta-analysis than from single trials. Treating the data as if it all came from one trial may lead to misleading results when the trial arms are imbalanced.
Discussion An example is shown from a published Cochrane review in which the benefit of nursing intervention for smoking cessation is shown by formal meta-analysis of the individual trial results. However, if these patients were added together as if they all came from one trial, the direction of the effect appears to be reversed (due to Simpson's paradox). Whilst NNT from meta-analysis can be calculated from pooled Risk Differences, this is unlikely to be a stable method unless the event rates in the control groups are very similar. Since in practice event rates vary considerably, the use of a relative measure, such as an Odds Ratio or Relative Risk, is advocated. These can be applied to different levels of baseline risk to generate a risk-specific NNT for the treatment.
Summary The method used to calculate NNT from meta-analysis should be clearly stated, and adding the patients from separate trials as if they all came from one trial should be avoided.
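The risk-specific NNT described above follows from standard formulas: the relative measure converts a baseline (control-group) risk into a treated-group risk, and NNT is the reciprocal of the absolute risk reduction. A minimal sketch (illustrative, not code from the review):

```python
def nnt_from_rr(baseline_risk, relative_risk):
    """Risk-specific NNT from a pooled relative risk (RR < 1 = benefit).

    ARR = baseline_risk * (1 - RR); NNT = 1 / ARR.
    In practice NNT is conventionally rounded up to a whole person.
    """
    arr = baseline_risk * (1.0 - relative_risk)
    return 1.0 / arr

def nnt_from_or(baseline_risk, odds_ratio):
    """Risk-specific NNT from a pooled odds ratio, converting the
    control-group risk to a treated-group risk via the odds scale."""
    odds_c = baseline_risk / (1.0 - baseline_risk)
    risk_t = (odds_ratio * odds_c) / (1.0 + odds_ratio * odds_c)
    return 1.0 / (baseline_risk - risk_t)

# The same RR = 0.75 applied to two baseline risks gives very different NNTs:
print(round(nnt_from_rr(0.20, 0.75)))  # -> 20
print(round(nnt_from_rr(0.05, 0.75)))  # -> 80
```

This is exactly why the abstract argues for a relative measure: the pooled RR or OR is relatively stable across trials, while the NNT it implies changes with each population's baseline risk.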
License: CC0 1.0, https://creativecommons.org/publicdomain/zero/1.0/
The dataset includes the following relevant columns for sales analysis: Order Date (for monthly analysis) Sales (for revenue calculation) Order ID (to determine the number of orders and average order size) I then processed the data to: Convert dates into a proper format. Aggregate sales data monthly. Calculate total revenue and average order size per month. Visualize trends. Here are the key sales metrics computed: Total Revenue: Sum of sales per month Total Orders: Number of unique orders per month Total Items Sold: Number of individual items sold Average Order Size: Revenue per order
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This dataset contains the data to calculate the spatial distribution of the dissipation as well as the absorption efficiencies of both Gold and Silicon designs, as presented in the article "Time-domain topology optimization of power dissipation in dispersive dielectric and plasmonic nanostructures". This includes the electric field distribution in 3D for multiple wavelengths (netCDF), the final density (netCDF), the design (STL) and material and simulation parameters (JSON) used in the optimization. The evaluation of this data can be performed using the code published on https://github.com/JoGed/dissipation-calculation
Analyzing sales data is essential for any business looking to make informed decisions and optimize its operations. In this project, we will utilize Microsoft Excel and Power Query to conduct a comprehensive analysis of Superstore sales data. Our primary objectives will be to establish meaningful connections between various data sheets, ensure data quality, and calculate critical metrics such as the Cost of Goods Sold (COGS) and discount values. Below are the key steps and elements of this analysis:
1- Data Import and Transformation:
2- Data Quality Assessment:
3- Calculating COGS:
4- Discount Analysis:
5- Sales Metrics:
6- Visualization:
7- Report Generation:
Throughout this analysis, the goal is to provide a clear and comprehensive understanding of the Superstore's sales performance. By using Excel and Power Query, we can efficiently manage and analyze the data, ensuring that the insights gained contribute to the store's growth and success.
License: Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
We describe the Mass Spectrometry Adduct Calculator (MSAC), an automated Python tool to calculate the adduct ion masses of a parent molecule. Here, adduct refers to a version of a parent molecule [M] that is charged due to addition or loss of atoms and electrons resulting in a charged ion, for example, [M + H]+. MSAC includes a database of 147 potential adducts and adduct/neutral loss combinations and their mass-to-charge ratios (m/z) as extracted from the NIST/EPA/NIH Mass Spectral Library (NIST17), Global Natural Products Social Molecular Networking Public Spectral Libraries (GNPS), and MassBank of North America (MoNA). The calculator relies on user-selected subsets of the combined database to calculate expected m/z for adducts of molecules supplied as formulas. This tool is intended to help researchers create identification libraries to collect evidence for the presence of molecules in mass spectrometry data. While the included adduct database focuses on adducts typically detected during liquid chromatography–mass spectrometry analyses, users may supply their own lists of adducts and charge states for calculating expected m/z. We also analyzed statistics on adducts from spectra contained in the three selected mass spectral libraries. MSAC is freely available at https://github.com/pnnl/MSAC.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Excel spreadsheet for example calculation shown in the publication. See the Readme.txt file for a detailed description.
License: U.S. Government Works, https://www.usa.gov/government-works
This dataset contains velocity and flow log data collected in 2004 for two Yucaipa Valley Water District (YVWD) public-supply wells, YVWD 55 and YVWD 56. Data were collected using the tracer-pulse method described in Izbicki and others (1999), in which a pulse of a rhodamine dye tracer is injected to a known depth in the well and the travel time of the tracer to a detector on the surface is measured. Velocity and cumulative flow are calculated from the dye-arrival times using methods described by Izbicki and others (1999). Flow for well YVWD 55 was calculated using the pump within the screened interval, which captured flow from above and below the pump intake. Flow for well YVWD 56 was calculated using the pump above the screened interval and all flow was upward. Included in this release are two tables with dye-arrival times, two tables with velocity- and cumulative-flow log calculations, and a table of relevant well properties. These data support the interpretations and conclusio ...
The raw data used to calculate the values in Table 1.
The data product Geographical data for noise calculations is a compilation of datasets for use in noise calculation programs, comprising buildings, elevation data, preschools, hard surfaces and road surfaces. Noise calculations in general also use track data, road data, elevation data, noise protection data and traffic data. The data are aggregated and homogenized. The data product is an extract defined by the boundary of an investigation area; its geographical coverage is determined by the current need in a particular noise investigation, and the datasets are generated on ordering in connection with that investigation. For a noise calculation covering a road or railway section, this product needs to be supplemented with one of the data products Road data for noise calculations or Railway data for noise calculations. The purpose of the data product is to provide standardised data for noise calculations, which in turn gives more efficient and safer handling of information in noise investigations; in the long term, a standardised basis is expected to increase comparability between different noise investigations. Road, rail and traffic data describe the noise source, while buildings, topography and hard surfaces are used to calculate how the sound propagates in the surroundings.
1) Data content
Ice-free evaporation data of three typical mesoscale inland lakes (Bamu Co, Langa Co and Longmu Co) from 2019 to 2023. Lake locations: Bamu Co (90.59°E, 31.29°N), Langa Co (81.24°E, 30.72°N), Longmu Co (80.47°E, 34.60°N). Time resolution: 1 day; 1 month. Unit: mm.
2) Data calculation method
Water balance method. The calculation formula is:
E = P + R − ΔV (1)
where E is evaporation, P is precipitation, R is runoff discharge, and ΔV is the change in lake water volume. All terms were calculated from in-situ observations and satellite remote sensing data: P uses lakeside rain gauge data and GPM data; R uses radar current meters at major runoff locations, periodic manual flow measurements, and the ERA-GloFAS dataset; ΔV is obtained from water level changes recorded by an automatic water level meter 1 m from the lake, combined with monthly area changes derived from Sentinel-2 data:
ΔV = (1/3)(H2 − H1)(A1 + A2 + √(A1 × A2)) (2)
ΔV = (H2 − H1) × A (3)
where H1, H2, A1 and A2 are the water levels and lake areas in different periods. Formula (2) is used for monthly changes in water volume; for daily changes, formula (3) is used with the lake area A of the current month, because the daily change in lake area is small and can be ignored.
3) Data quality description
The water balance method requires a large number of meteorological and hydrological observation parameters; the observation of runoff recharge in particular involves many uncertainties. This dataset does not account for groundwater recharge, and surface runoff recharge is also difficult to observe. The dataset therefore needs to be updated as more observation data become available, to obtain values that are as accurate as possible.
4) Data application achievements and prospects
Surface evaporation is an important link in the water cycle and an important subject in hydrology. The advantage of the water balance method is that it can be applied under any weather conditions, without the restrictions that apply to micrometeorological methods; given sufficient and reliable data, it can produce evaporation estimates with high precision. Accurate evaporation derived from observed data is an important element in studying lake water variation. By obtaining the evaporation of three lakes in different climate zones, the variation of lake surface evaporation across climate zones can be better explored. See the file for specific data content.
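The water-balance relations described above translate directly into code; a minimal sketch (function names are illustrative; units must simply be consistent across terms):

```python
import math

def volume_change_frustum(h1, h2, a1, a2):
    """Eq. (2): monthly lake volume change, treating the water body
    between two levels as a conical frustum between areas a1 and a2."""
    return (h2 - h1) * (a1 + a2 + math.sqrt(a1 * a2)) / 3.0

def volume_change_fixed_area(h1, h2, a):
    """Eq. (3): daily volume change with the lake area held constant."""
    return (h2 - h1) * a

def evaporation(p, r, dv):
    """Eq. (1): E = P + R - dV, all terms in water-depth equivalent (mm)."""
    return p + r - dv

# With equal areas, eq. (2) reduces to eq. (3), as expected:
print(volume_change_frustum(0.0, 2.0, 5.0, 5.0))    # -> 10.0
print(volume_change_fixed_area(0.0, 2.0, 5.0))      # -> 10.0
```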
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This dataset includes intermediate data from RiboBase that generates translation efficiency (TE). The code to generate the files can be found at https://github.com/CenikLab/TE_model.
We uploaded demo HeLa .ribo files, but due to the large storage requirements of the full dataset, I recommend contacting Dr. Can Cenik directly to request access to the complete version of RiboBase if you need the original data.
The detailed explanation for each file:
human_flatten_ribo_clr.rda: ribosome profiling clr normalized data with GEO GSM ids in columns and genes in rows in human.
human_flatten_rna_clr.rda: matched RNA-seq clr normalized data with GEO GSM ids in columns and genes in rows in human.
human_flatten_te_clr.rda: TE clr data with GEO GSM ids in columns and genes in rows in human.
human_TE_cellline_all_plain.csv: TE clr data with genes in rows and cell lines in columns in human.
human_RNA_rho_new.rda: matched RNA-seq proportional similarity data as genes by genes matrix in human.
human_TE_rho.rda: TE proportional similarity data as genes by genes matrix in human.
mouse_flatten_ribo_clr.rda: ribosome profiling clr normalized data with GEO GSM ids in columns and genes in rows in mouse.
mouse_flatten_rna_clr.rda: matched RNA-seq clr normalized data with GEO GSM ids in columns and genes in rows in mouse.
mouse_flatten_te_clr.rda: TE clr data with GEO GSM ids in columns and genes in rows in mouse.
mouse_TE_cellline_all_plain.csv: TE clr data with genes in rows and cell lines in columns in mouse.
mouse_RNA_rho_new.rda: matched RNA-seq proportional similarity data as genes by genes matrix in mouse.
mouse_TE_rho.rda: TE proportional similarity data as genes by genes matrix in mouse.
All the data passed quality control. There are 1054 human samples and 835 mouse samples:
* coverage > 0.1 X
* CDS percentage > 70%
* R2 between RNA and RIBO >= 0.188 (remove outliers)
All ribosome profiling data here are non-deduplicated, winsorized data paired with deduplicated, non-winsorized RNA-seq data (although the files are named "flatten", this refers only to the naming format).
####Code
To read .rda data in R, use load("rdaname.rda").
To calculate proportional similarity from clr data:
library(propr)  # proportionality analysis package
# rho (proportional similarity) computed from the clr-transformed matrix
human_TE_homo_rho <- propr:::lr2rho(as.matrix(clr_data))
# carry the gene names over to the square rho matrix
rownames(human_TE_homo_rho) <- colnames(human_TE_homo_rho) <- rownames(clr_data)
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This dataset contains files used to train and test the Multi-Configuration 23 (MC23) functional and to compare the results to other methods. It includes files to carry out electronic structure calculations. These include molecular geometries in xyz format, OpenMolcas input files for CASSCF calculations, converged CASSCF natural orbitals, OpenMolcas basis set files, and Gaussian 16 formatted checkpoint files for KS-DFT calculations. It also includes data used for data processing such as stoichiometries, absolute energies, and reference energies.
Each file in this dataset is a .tar.xz archive. One can extract them by the following command:
tar -xJf name_of_archive.tar.xz
Below is a description of the content of each archive.
gaussian_16_fchk.tar.xz contains Gaussian 16 formatted checkpoint files for all KS-DFT calculations used in this work. The files in the archive are named as functional/database/system.fchk
openmolcas_basis_set.tar.xz contains OpenMolcas basis set files used for multireference calculations. To reproduce the results in this work, the basis set files should be placed in the “basis_library” directory in the OpenMolcas installation location.
openmolcas_wave_function.tar.xz contains files needed by OpenMolcas to reproduce the CASSCF wave function used in this work. The files in the archive are named database/system.*.
gaussian_16_stoichiometry_energy.tar.xz and openmolcas_stoichiometry_energy.tar.xz contain files used for data processing.
The database names in the directory names use a slightly different convention than the ones in the article describing MC23. A prefix DS2_ or DS3_ is used to indicate the data set to which a database belongs, and the number of data points is removed from the database name. For example, the MR-MGN-BE8 database from Data Set 2 has a file name DS2_MR-MGN-BE.
This dataset provides the expected and determined concentrations of selected inorganic and organic analytes for spiked reagent-water samples (calibration standards and limit of quantitation standards) that were used to calculate detection limits by using the United States Environmental Protection Agency’s (USEPA) Method Detection Limit (MDL) version 1.11 or 2.0 procedures, ASTM International’s Within-Laboratory Critical Level standard procedure D7783-13, and, for five pharmaceutical compounds, by USEPA’s Lowest Concentration Minimum Reporting Level procedure. Also provided are determined concentration data for reagent-water laboratory blank samples, classified as either instrument blank or set blank samples, and reagent-water blind-blank samples submitted by the USGS Quality System Branch, that were used to calculate blank-based detection limits by using the USEPA MDL version 2.0 procedure or procedures described in National Water Quality Laboratory Technical Memorandum 2016.02, http://wwwnwql.cr.usgs.gov/tech_memos/nwql.2016-02.pdf. The determined detection limits are provided and compared in the related external publication at https://doi.org/10.1016/j.talanta.2021.122139.
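The spike-based core of the USEPA MDL procedure is the replicate standard deviation scaled by the one-sided 99th-percentile Student's t value for n-1 degrees of freedom. A hedged sketch of that single step (not the full MDL 1.11/2.0 procedure, which adds blank-based limits and other checks; the spike values are made up):

```python
from statistics import stdev

# One-sided 99th-percentile Student's t values by degrees of freedom
# (standard table values for common replicate counts).
T99 = {4: 3.747, 5: 3.365, 6: 3.143, 7: 2.998}

def spike_based_mdl(replicate_concentrations):
    """Spike-based MDL: MDL_s = t(n-1, 0.99) * s, where s is the
    standard deviation of n replicate spiked-sample results."""
    n = len(replicate_concentrations)
    return T99[n - 1] * stdev(replicate_concentrations)

# Seven replicate spike results (hypothetical values, ug/L):
spikes = [1.9, 2.1, 2.0, 2.2, 1.8, 2.0, 2.1]
print(round(spike_based_mdl(spikes), 3))
```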
The following data is being made available to applicants to the Medicare Shared Savings Program (Shared Savings Program), in order to allow them to calculate their share of services in each applicable Primary Service Area (PSA).