Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The first column of the table provides the names of the methods used to combine p-values investigated in our study. The second column lists the reference number (Ref) cited in this paper for the publication corresponding to the method. The third column provides the equation number of the method's distribution function used to compute the combined p-value. The fourth column indicates whether a method's equation can accommodate (acc.) weights when combining p-values. The fifth column gives the normalization (nor.) procedure used to normalize the weights. Finally, the last column indicates whether a method can account for correlation (corr.) between p-values.
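As a generic illustration of how the weight and normalization columns interact (this sketches one common weighted combination method, Stouffer's Z, and is not a reproduction of any specific row of the table): the weights are normalized so their squares sum to one, and the combined p-value then follows from a standard normal distribution.

```python
import numpy as np
from scipy.stats import norm

# Weighted Stouffer combination: Z = sum(w_i * Phi^-1(1 - p_i)) with weights
# normalized so that sum(w_i**2) = 1, giving Z ~ N(0, 1) under the null.
def stouffer_weighted(p, w):
    p, w = np.asarray(p, float), np.asarray(w, float)
    w = w / np.sqrt(np.sum(w**2))   # weight normalization step
    z = np.sum(w * norm.isf(p))     # Phi^-1(1 - p_i) via the inverse survival function
    return norm.sf(z)               # combined p-value

print(stouffer_weighted([0.01, 0.20, 0.40], [2.0, 1.0, 1.0]))
```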
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset was conceived, designed and is maintained by Xiaoke WANG, Zhiyun OUYANG and Yunjian LUO. To develop China's normalized tree biomass equation dataset, we carried out an extensive survey and critical review of the literature (from 1978 to 2013) on biomass equations developed in China. The dataset consists of 5924 biomass equations for nearly 200 species (Equation sheet) and their associated background information (General sheet), with sound geographical, climatic and forest-vegetation coverage across China. The dataset is freely available for non-commercial scientific applications, provided it is appropriately cited. For further information, please read our Earth System Science Data article (https://doi.org/10.5194/essd-2019-1), or feel free to contact the authors.
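As a hypothetical illustration of how an entry from the Equation sheet is typically applied, many tree biomass equations take the allometric form B = a * D^b; the coefficients below are invented for demonstration and are not taken from the dataset.

```python
# Hypothetical allometric biomass equation B = a * D^b, where B is component
# biomass (kg) and D is diameter at breast height (cm). The coefficients a and
# b are illustrative placeholders, not values from the dataset.
def biomass_kg(dbh_cm, a=0.05, b=2.5):
    return a * dbh_cm ** b

print(biomass_kg(20.0))  # ~89 kg with the made-up coefficients
```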
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Left ventricular mass normalization for body size is recommended, but a question remains: what is the best body size variable for this normalization (body surface area, height, or lean body mass computed from a predictive equation)? Since body surface area and computed lean body mass are derivatives of body mass, normalizing for them may result in underestimation of left ventricular mass in overweight children. The aim of this study is to indicate which of the body size variables normalizes left ventricular mass without underestimating it in overweight children.
Methods: Left ventricular mass assessed by echocardiography, height and body mass were collected for 464 healthy boys, 5-18 years old. Lean body mass and body surface area were calculated. Left ventricular mass z-scores, computed from reference data developed for height, body surface area and lean body mass, were compared between overweight and non-overweight children. The next step was a comparison of paired samples of expected left ventricular mass, estimated for each normalizing variable from two allometric equations: the first developed for overweight children, the second for children of normal body mass.
Results: The mean left ventricular mass z-score is higher in overweight than in non-overweight children for normative data based on height (0.36 vs. 0.00) and lower for normative data based on body surface area (-0.64 vs. 0.00). Left ventricular mass estimated by normalizing for height, based on the equation for overweight children, is higher in overweight children (128.12 vs. 118.40); however, masses estimated by normalizing for body surface area and lean body mass, based on the equations for overweight children, are lower in overweight children (109.71 vs. 122.08 and 118.46 vs. 120.56, respectively).
Conclusion: Normalization for body surface area and for computed lean body mass, but not for height, underestimates left ventricular mass in overweight children.
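A minimal sketch of the normalization logic described above, with hypothetical coefficients (not the study's fitted values): expected left ventricular mass is predicted from an allometric equation LVM = a * X^b for a body size variable X, and the z-score compares observed to expected mass.

```python
# Hedged sketch: allometric normalization of left ventricular mass (LVM).
# a, b and sd are hypothetical reference values, not the study's estimates.
def lvm_zscore(lvm_observed, x, a, b, sd):
    expected = a * x ** b              # expected LVM for body-size variable x
    return (lvm_observed - expected) / sd

# e.g. height-based normalization (height in m) with made-up coefficients:
print(lvm_zscore(130.0, x=1.60, a=35.0, b=2.7, sd=20.0))
```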
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This source code was published as supporting material for the article: Ralph G. Andrzejak, Anaïs Espinoso, Eduardo García-Portugués, Arthur Pewsey, Jacopo Epifanio, Marc G. Leguia, Kaspar Schindler; High expectations on phase locking: Better quantifying the concentration of circular data. Chaos (2023); 33 (9): 091106. https://doi.org/10.1063/5.0166468
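For context, a minimal sketch of the classical quantity whose estimation the article re-examines: the mean resultant length R of circular data, which is the phase-locking value when applied to phase differences. This is the textbook estimator, not the article's improved variant.

```python
import numpy as np

# Mean resultant length R = |(1/N) * sum_k exp(i*theta_k)| for angles theta_k:
# R -> 1 for tightly concentrated circular data, R -> 0 for uniform spread.
def mean_resultant_length(theta):
    return np.abs(np.mean(np.exp(1j * np.asarray(theta, float))))

rng = np.random.default_rng(0)
print(mean_resultant_length(rng.uniform(0, 2 * np.pi, 1000)))  # ~0 (uniform)
print(mean_resultant_length(rng.normal(0.0, 0.1, 1000)))       # ~1 (concentrated)
```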
https://www.elsevier.com/about/policies/open-access-licenses/elsevier-user-license/cpc-license/
Abstract The Poincaré code is a Maple project package that aims to gather significant computer algebra normal form (and subsequent reduction) methods for handling nonlinear ordinary differential equations. As a first version, a set of fourteen easy-to-use Maple commands is introduced for symbolic creation of (improved variants of Poincaré’s) normal forms as well as their associated normalizing transformations. The software is the implementation by the authors of carefully studied and followed up sele...
Title of program: POINCARÉ Catalogue Id: AEPJ_v1_0
Nature of problem: Computing structure-preserving normal forms near the origin for nonlinear vector fields.
Versions of this program held in the CPC repository in Mendeley Data: AEPJ_v1_0; POINCARÉ; 10.1016/j.cpc.2013.04.003
This program has been imported from the CPC Program Library held at Queen's University Belfast (1969-2018)
Attribution-NonCommercial 2.0 (CC BY-NC 2.0): https://creativecommons.org/licenses/by-nc/2.0/
License information was derived automatically
International Journal of Social Studies and Public Policy
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Normalized lift (L_N) equations for different lift-generating systems.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
All three types of SIF-driven T models integrate canopy conductance (gc) with the Penman-Monteith model, differing in how gc is derived: from a SIFobs-driven semi-mechanistic equation, a SIFsunlit- and SIFshaded-driven semi-mechanistic equation, or a SIFsunlit- and SIFshaded-driven machine learning model (a generic sketch of this Penman-Monteith coupling follows the model list below).
A simplified SIF-gc equation differs from the full SIF-gc equation in the treatment of some parameters, as shown in https://doi.org/10.1016/j.rse.2024.114586.
In this dataset, the temporal resolution is 1 day and the spatial resolution is 0.2 degrees.
BL: SIFobs driven semi-mechanistic model
TL: SIFsunlit and SIFshaded driven semi-mechanistic model
hybrid models: SIFsunlit and SIFshaded driven machine learning model.
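A minimal sketch of the Penman-Monteith coupling referenced above, with gc taken as given (e.g. from one of the SIF-driven estimates); the form and parameter values are the generic textbook ones, not the paper's calibration.

```python
# Generic Penman-Monteith latent heat flux (W m-2) with explicit canopy
# conductance gc; all values below are illustrative, not from the dataset.
def penman_monteith_le(rn, g, vpd, ga, gc,
                       delta=145.0,    # slope of sat. vapour pressure curve (Pa K-1, ~20 C)
                       gamma=66.0,     # psychrometric constant (Pa K-1)
                       rho_a=1.2,      # air density (kg m-3)
                       cp=1013.0):     # specific heat of air (J kg-1 K-1)
    # rn, g: net radiation and ground heat flux (W m-2); vpd (Pa);
    # ga, gc: aerodynamic and canopy conductance (m s-1)
    return (delta * (rn - g) + rho_a * cp * vpd * ga) / (
        delta + gamma * (1.0 + ga / gc))

print(penman_monteith_le(rn=400.0, g=40.0, vpd=1500.0, ga=0.02, gc=0.008))
```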
This dataset is an annual time series of Landsat Analysis Ready Data (ARD)-derived Normalized Difference Water Index (NDWI) computed from Landsat 5 Thematic Mapper (TM) and Landsat 8 Operational Land Imager (OLI). To ensure a consistent dataset, Landsat 7 has not been used because the Scan Line Corrector (SLC) failure creates gaps in the data. NDWI quantifies plant water content by measuring the difference between the Near-Infrared (NIR) and Short-Wave Infrared (SWIR) (or Green) channels using this generic formula: NDWI = (NIR - SWIR) / (NIR + SWIR). For Landsat sensors, this corresponds to the following bands: Landsat 5, NDWI = (Band 4 - Band 2) / (Band 4 + Band 2); Landsat 8, NDWI = (Band 5 - Band 3) / (Band 5 + Band 3). NDWI values range from -1 to +1. NDWI is a good proxy for plant water stress and is therefore useful for drought monitoring and early warning. NDWI is sometimes also referred to as the Normalized Difference Moisture Index (NDMI). The standard deviation is also provided for each time step. Data format: GeoTIFF. This dataset has been generated with the Swiss Data Cube (http://www.swissdatacube.ch).
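A minimal sketch of the generic NDWI formula above; the band arrays are assumed to be reflectances already read from the GeoTIFFs.

```python
import numpy as np

# NDWI = (NIR - SWIR) / (NIR + SWIR); the Green band can be substituted for
# SWIR as in the band pairings listed above. Zero-sum pixels become NaN.
def ndwi(nir, swir):
    nir, swir = np.asarray(nir, float), np.asarray(swir, float)
    denom = nir + swir
    out = np.full(denom.shape, np.nan)
    np.divide(nir - swir, denom, out=out, where=denom != 0)
    return out  # values in [-1, +1]
```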
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This paper describes an algorithm to assist in relative quantitation of peptide post-translational modifications using stable isotope labeling by amino acids in cell culture (SILAC). The described algorithm first determines the normalization factor and then calculates SILAC ratios for a list of target peptide masses using precursor ion abundances. Four yeast histone mutants were used to demonstrate the effectiveness of this approach for quantitation of peptide post-translational modification changes. The details of the algorithm's approach for normalization and peptide ratio calculation are described. The examples demonstrate the robustness of the approach as well as its utility to rapidly determine changes in peptide post-translational modifications within a protein.
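A hedged sketch of the two-step scheme described above (a generic median-based variant, not necessarily the paper's exact procedure): a global normalization factor is derived from heavy/light precursor abundance ratios, then normalized SILAC ratios are reported for the target peptides.

```python
import numpy as np

# Generic SILAC normalization sketch: assumes most peptides are unchanged, so
# the median raw ratio serves as the global normalization factor.
def normalized_silac_ratios(light_abundance, heavy_abundance):
    light = np.asarray(light_abundance, float)
    heavy = np.asarray(heavy_abundance, float)
    raw = heavy / light                  # raw heavy/light ratio per peptide
    factor = np.median(raw)              # normalization factor
    return raw / factor                  # normalized ratios

print(normalized_silac_ratios([1e6, 2e6, 5e5], [1.1e6, 2.2e6, 2.0e6]))
```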
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
All three types of SIF-driven T models integrate canopy conductance (gc) with the Penman-Monteith model, differing in how gc is derived: from a SIFobs-driven semi-mechanistic equation, a SIFsunlit- and SIFshaded-driven semi-mechanistic equation, or a SIFsunlit- and SIFshaded-driven machine learning model.
BL: SIFobs driven semi-mechanistic model
TL: SIFsunlit and SIFshaded driven semi-mechanistic model
hybrid models: SIFsunlit and SIFshaded driven machine learning model.
https://www.elsevier.com/about/policies/open-access-licenses/elsevier-user-license/cpc-license/
Title of program: MORSEFNS Catalogue Id: AAGM_v1_0
Nature of problem: To produce the normalised eigenfunctions of the Schrödinger equation with the Morse potential (a numerical sketch follows this entry).
Versions of this program held in the CPC repository in Mendeley Data: AAGM_v1_0; MORSEFNS; 10.1016/0010-4655(72)90017-3
This program has been imported from the CPC Program Library held at Queen's University Belfast (1969-2019)
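The MORSEFNS source itself is held in the CPC repository; as a rough modern counterpart under stated assumptions (units with hbar^2/2m = 1/2 and illustrative Morse parameters, not the program's method or values), the same problem can be sketched numerically:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Finite-difference sketch: bound states of the 1-D Schrodinger equation with a
# Morse potential V(r) = D*(1 - exp(-a*(r - re)))**2, eigenfunctions normalised
# so that the integral of |psi|^2 dr equals 1. Parameters are illustrative.
D, a, re = 5.0, 1.0, 1.0
r = np.linspace(0.05, 15.0, 4000)
h = r[1] - r[0]
V = D * (1.0 - np.exp(-a * (r - re))) ** 2

# H psi = E psi with the kinetic term -(1/2) d^2/dr^2 discretised on the grid
diag = 1.0 / h**2 + V
off = -0.5 * np.ones(len(r) - 1) / h**2
E, psi = eigh_tridiagonal(diag, off, select='v', select_range=(0.0, D))

psi /= np.sqrt((psi ** 2).sum(axis=0) * h)   # grid normalisation of each state
print("bound-state energies:", E)
```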
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dissolved organic matter molecular analyses were performed on a Solarix FT-ICR-MS equipped with a 15 Tesla superconducting magnet (Bruker Daltonics) using an electrospray ionization source (Bruker Apollo II) in negative ion mode. Molecular formula calculation for all samples was performed using a MATLAB (2010) routine that searches, with an error of < 0.5 ppm, for all potential combinations of the elements C, H and O (unrestricted) with N ≤ 4, S ≤ 2 and P ≤ 1. The heteroatom combinations NSP, N2S, N3S, N4S, N2P, N3P, N4P, NS2, N2S2, N3S2, N4S2 and S2P were not allowed. Mass peak intensities are normalized relative to the total of all molecular formulae in each sample according to previously published rules (Rossel et al., 2015; doi:10.1016/j.marchem.2015.07.002). The final dataset contained 7400 molecular formulae.
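A hedged sketch of the formula-search idea (a generic brute force under the element limits quoted above, not the authors' MATLAB routine, which additionally excludes the heteroatom combinations listed):

```python
# Monoisotopic masses for a generic CHNOSP formula search.
MONO = {'C': 12.0, 'H': 1.0078250319, 'N': 14.0030740052,
        'O': 15.9949146221, 'S': 31.97207069, 'P': 30.97376151}

def assign_formulas(neutral_mass, ppm=0.5, max_n=4, max_s=2, max_p=1):
    """Return (C, H, N, O, S, P) tuples matching neutral_mass within ppm."""
    tol = neutral_mass * ppm * 1e-6
    hits = []
    for n in range(max_n + 1):
        for s in range(max_s + 1):
            for p in range(max_p + 1):
                base = n * MONO['N'] + s * MONO['S'] + p * MONO['P']
                for c in range(1, max(0, int((neutral_mass - base) / MONO['C'])) + 1):
                    rem = neutral_mass - base - c * MONO['C']
                    for o in range(max(0, int(rem / MONO['O'])) + 1):
                        rest = rem - o * MONO['O']
                        h = round(rest / MONO['H'])
                        if h >= 1 and abs(rest - h * MONO['H']) <= tol:
                            hits.append((c, h, n, o, s, p))
    return hits

print(assign_formulas(180.0634))  # ~C6H12O6; expect [(6, 12, 0, 6, 0, 0)]
```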
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper introduces the integration of the Cobb-Douglas (CD) utility model with quantum computation, utilizing the Clairaut-type differential formula. We propose a novel economic-physical model using envelope theory to establish a direct link with quantum entanglement, thereby defining emergent probabilities in the optimal utility function for two goods within a given expenditure limit. The study illuminates the interaction between the CD model and quantum computation, emphasizing the elucidation of system entropy and the role of Clairaut differential equations in understanding the utility's optimal envelopes and intercepts. Innovative algorithms utilizing the 2D-Clairaut differential equation are introduced for the quantum formulation of the CD function, showcasing accurate representation in quantum circuits for one and two qubits. Our empirical findings, validated through IBM-Q computer simulations, align with analytical predictions, demonstrating the robustness of our approach. This methodology articulates the utility-budget relationship within the CD function through a clear model based on envelope representation and canonical line equations, where normalized intercepts signify probabilities. The efficiency and precision of our results, especially in modeling one- and two-qubit quantum entanglement within econometrics, surpass those of IBM-Q simulations, which require extensive iterations for comparable accuracy. This underscores our method's effectiveness in merging economic models with quantum computation.
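For the classical backbone of the model (without the quantum-circuit layer), the optimal utility under a budget has a closed form; a minimal sketch showing how the normalized exponents behave like probabilities summing to one:

```python
# Cobb-Douglas U(x, y) = x**alpha * y**(1 - alpha) maximized under the budget
# px*x + py*y = M gives the textbook demands x* = alpha*M/px, y* = (1-alpha)*M/py.
# alpha and 1 - alpha sum to 1, the probability-like structure the paper
# exploits; the values below are illustrative.
alpha, M, px, py = 0.6, 100.0, 2.0, 5.0
x_star = alpha * M / px            # 30.0
y_star = (1 - alpha) * M / py      # 8.0
u_star = x_star ** alpha * y_star ** (1 - alpha)
print(x_star, y_star, u_star)
```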
The following data entry sheets are designed to quantify salinity-normalized seawater total alkalinity anomalies (∆nTA) from inputs of offshore and coral reef total alkalinity (TA) and salinity (S) data, while taking into account the various sources of uncertainty associated with these normalizations and calculations, to estimate the calcification dissolution potential (CDP) for each reef observation (for details see Courtney et al., 2021). Only cells blocked in white should be modified on the "Data Entry" sheet; all cells blocked in gray are locked to protect the formulas from being modified. Data for at least one offshore TA and S sample and one coral reef TA and S sample must be entered to display the ∆nTA and CDP for the given reef system. To leverage all available data, the spreadsheet averages all offshore TA and S data when calculating ∆nTA. Additionally, the spreadsheets allow the reference S to be set to the mean offshore or mean coral reef S, and ∆nTA is calculated for a range of freshwater TA endmembers, including the option of a user-defined value. ∆nTA is calculated as per the equations in Courtney et al. (2021). The CDP summary page also provides a number of summary graphs to visualize (1) whether there are apparent relationships between coral reef TA and S, (2) how the ∆nTA of the inputted data compares to global coral reef ∆TA data from Cyronak et al. (2018), (3) how the ∆nTA data varies spatially across the reef locations, and (4) how well the ∆nTA data covers a complete diel cycle. For further details on the uncertainties associated with the salinity normalization of coral reef data and the relevant equations, please see the following publication: Courtney TA, Cyronak T, Griffin AJ, Andersson AJ (2021) Implications of salinity normalization of seawater total alkalinity in coral reef metabolism studies. PLOS One 16(12): e0261210. https://doi.org/10.1371/journal.pone.0261210 Please cite as: Courtney TA & Andersson AJ (2022) Calcification Dissolution Potential Tool for Excel: Version 1. https://doi.org/10.5281/zenodo.7051628
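A minimal sketch of the core normalization under simplifying assumptions (the standard salinity normalization with a zero-TA freshwater endmember; the spreadsheet additionally propagates uncertainties and supports nonzero, user-defined endmembers):

```python
import numpy as np

# nTA = TA * S_ref / S (zero freshwater TA endmember assumed here);
# dnTA = nTA_reef - mean(nTA_offshore). See Courtney et al. (2021) for the
# full treatment used by the tool.
def delta_nta(ta_reef, s_reef, ta_offshore, s_offshore, s_ref=None):
    ta_off = np.asarray(ta_offshore, float)
    s_off = np.asarray(s_offshore, float)
    if s_ref is None:
        s_ref = s_off.mean()                 # reference S: mean offshore S
    nta_reef = np.asarray(ta_reef, float) * s_ref / np.asarray(s_reef, float)
    nta_off_mean = (ta_off * s_ref / s_off).mean()
    return nta_reef - nta_off_mean

print(delta_nta(ta_reef=2250.0, s_reef=35.2,
                ta_offshore=[2300.0, 2305.0], s_offshore=[35.0, 35.1]))
```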
We characterize the textural and geochemical features of ocean crustal zircon recovered from plagiogranite, evolved gabbro, and metamorphosed ultramafic host-rocks collected along present-day slow and ultraslow spreading mid-ocean ridges (MORs). The geochemistry of 267 zircon grains was measured by sensitive high-resolution ion microprobe-reverse geometry at the USGS-Stanford Ion Microprobe facility. Three types of zircon are recognized based on texture and geochemistry. Most ocean crustal zircons resemble young magmatic zircon from other crustal settings, occurring as pristine, colorless euhedral (Type 1) or subhedral to anhedral (Type 2) grains. In these grains, Hf and most trace elements vary systematically with Ti, typically becoming enriched with falling Ti-in-zircon temperature. Ti-in-zircon temperatures range from 1,040 to 660°C (corrected for aTiO2 ~ 0.7, aSiO2 ~ 1.0, pressure ~ 2 kbar); intra-sample variation is typically ~60-15°C. Decreasing Ti correlates with enrichment in Hf to ~2 wt%, while additional Hf-enrichment occurs at relatively constant temperature. Trends between Ti and U, Y, REE, and Eu/Eu* exhibit a similar inflection, which may denote the onset of eutectic crystallization; the inflection is well-defined by zircons from plagiogranite and implies solidus temperatures of ~680-740°C. A third type of zircon is defined as being porous and colored with chaotic CL zoning, and occurs in ~25% of rock samples studied. These features, along with high measured La, Cl, S, Ca, and Fe, and low (Sm/La)N ratios are suggestive of interaction with aqueous fluids. Non-porous, luminescent CL overgrowth rims on porous grains record uniform temperatures averaging 615 ± 26°C (2SD, n = 7), implying zircon formation below the wet-granite solidus and under water-saturated conditions. Zircon geochemistry reflects, in part, source region; elevated HREE coupled with low U concentrations allow effective discrimination of ~80% of zircon formed at modern MORs from zircon in continental crust. The geochemistry and textural observations reported here serve as an important database for comparison with detrital, xenocrystic, and metamorphosed mafic rock-hosted zircon populations to evaluate provenance.
This visualization product displays the percentage of marine macro-litter (> 2.5 cm) material categories per beach per year, based on non-MSFD monitoring surveys, research and cleaning operations.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol, in order to follow OSPAR's approach of no longer including these data in the monitoring;
- Selection of surveys from non-MSFD monitoring, cleaning and research operations;
- Exclusion of beaches without coordinates;
- Exclusion of surveys without an associated length;
- Removal of some litter types, such as organic litter, small fragments (paraffin and wax; items > 2.5 cm) and pollutants. The list of selected items is attached to this metadata. This list was created using the EU Marine Beach Litter Baselines and the EU Threshold Value for Macro Litter on Coastlines from the JRC (these two documents are attached to this metadata);
- Exclusion of the "faeces" category: more exactly, the items for dog excrement in bags from the OSPAR (item code: 121) and ITA (item code: IT59) reference lists;
- Normalization of survey lengths to 100 m and 1 survey/year: in some cases the survey length was not 100 m, so to be able to compare litter abundance across beaches a normalization is applied using this formula: number of items (normalized per 100 m) = number of litter items x (100 / survey length). This normalized number of items is then summed to obtain the total normalized number of litter items for each survey.
To calculate the percentage for each material category, the following formula is applied: Material (%) = (∑ number of items (normalized per 100 m) of a material category) x 100 / (∑ number of items (normalized per 100 m) of all categories). A sketch of both steps follows.
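A minimal sketch of the two steps described above (length normalization, then material percentages); the record layout is hypothetical, not the MLDB schema.

```python
# Step 1: normalize item counts to a 100 m survey length.
def items_per_100m(n_items, survey_length_m):
    return n_items * (100.0 / survey_length_m)

# Step 2: percentage of each material category over all normalized counts.
def material_percentages(records):
    # records: iterable of (material_category, n_items, survey_length_m)
    totals = {}
    for material, n, length in records:
        totals[material] = totals.get(material, 0.0) + items_per_100m(n, length)
    grand_total = sum(totals.values())
    return {m: 100.0 * v / grand_total for m, v in totals.items()}

print(material_percentages([("plastic", 120, 80.0), ("metal", 10, 80.0),
                            ("glass", 30, 150.0)]))
```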
The material categories differ between reference lists (OSPAR, TSG_ML, UNEP, UNEP_MARLIN). In order to apply a common procedure for all the surveys, the material categories have been harmonized.
More information is available in the attached documents.
Warning: the absence of data on the map doesn't necessarily mean that they don't exist, but that no information has been entered in the Marine Litter Database for this area.
https://data.gov.tw/license
Background: To determine the appropriate time of concomitant chemotherapy administration after antiangiogenic treatment, we investigated the timing and effect of bevacizumab administration on vascular normalization of metastatic brain tumors in breast cancer patients.
Methods: Eight patients who participated in a phase II trial for breast cancer-induced refractory brain metastases were enrolled and subjected to 4 dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) examinations that evaluated Peak, Slope, iAUC60, and Ktrans before and after treatment. The treatment comprised bevacizumab on Day 1, etoposide on Days 2-4, and cisplatin on Day 2 in a 21-day cycle for a maximum of 6 cycles. DCE-MRI was performed before treatment and at 1 h, 24 h, and 21 days after bevacizumab administration.
Results: Values of the 4 DCE-MRI parameters were reduced after bevacizumab administration. Compared with baseline values, the mean reductions at 1 and 24 h were −12.8 and −24.7% for Peak, −46.6 and −65.8% for Slope, −27.9 and −55.5% for iAUC60, and −46.6 and −63.9% for Ktrans, respectively (all P < .05). The differences between the 1 and 24 h mean reductions were significant (all P < .05) for all the parameters. Generalized estimating equation linear regression analyses of the 4 DCE-MRI parameters revealed that vascular normalization peaked 24 h after bevacizumab administration.
Conclusion: Bevacizumab induced vascular normalization of brain metastases in humans at 1 and 24 h after administration, and the effect was significantly greater at 24 h than at 1 h.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Stable isotope data analyzed in the manuscript. (XLSX)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In the Binding-Diffusion* equation (14), c_k represents the concentration of the ligand-receptor complex in the k-th image, c_k^M is the mobile fraction of c, β is the immobile fraction, and τ is the time needed for each image scan. Finally, g1 and g2 are the bleaching functions for bound (c) and free (u) proteins.