Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The first column of the table gives the names of the methods used to combine p-values investigated in our study. The second column lists the reference number (Ref) cited in this paper for the publication corresponding to each method. The third column gives the equation number of the method's distribution function used to compute the combined p-value. The fourth column indicates whether a method's equation can accommodate (acc.) weights when combining p-values. The fifth column gives the normalization (nor.) procedure used to normalize the weights. Finally, the last column indicates a method's capability to account for correlation (corr.) between p-values.
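As an illustration of the kind of combiner the table catalogues, the sketch below implements Fisher's method, a classic way to combine independent p-values; it is offered as a generic example, not as any specific method from the table.

```python
import math

def fisher_combine(pvalues):
    """Combine independent p-values with Fisher's method.

    The statistic X = -2 * sum(ln p_i) follows a chi-square
    distribution with 2k degrees of freedom under the global null.
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # Chi-square survival function for even df = 2k has a closed form:
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total
```

Note that the basic Fisher statistic accommodates neither weights nor correlation between p-values; the weighted and correlation-aware variants surveyed in the table modify the statistic or its null distribution.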
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset was conceived, designed and is maintained by Xiaoke WANG, Zhiyun OUYANG and Yunjian LUO. To develop China's normalized tree biomass equation dataset, we carried out an extensive survey and critical review of the literature (1978 to 2013) on biomass equations developed in China. The dataset consists of 5924 biomass equations for nearly 200 species (Equation sheet) and their associated background information (General sheet), providing sound geographical, climatic and forest vegetation coverage across China. The dataset is freely available for non-commercial scientific applications, provided it is appropriately cited. For further information, please read our Earth System Science Data article (https://doi.org/10.5194/essd-2019-1), or feel free to contact the authors.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Left ventricular mass normalization for body size is recommended, but a question remains: which body size variable is best for this normalization, body surface area, height, or lean body mass computed from a predictive equation? Since body surface area and computed lean body mass are derivatives of body mass, normalizing for them may result in underestimation of left ventricular mass in overweight children. The aim of this study is to indicate which of the body size variables normalizes left ventricular mass without underestimating it in overweight children.

Methods: Left ventricular mass assessed by echocardiography, height and body mass were collected for 464 healthy boys, 5-18 years old. Lean body mass and body surface area were calculated. Left ventricular mass z-scores, computed from reference data developed for height, body surface area and lean body mass, were compared between overweight and non-overweight children. The next step was a comparison of paired samples of expected left ventricular mass, estimated for each normalizing variable from two allometric equations: the first developed for overweight children, the second for children of normal body mass.

Results: The mean left ventricular mass z-score is higher in overweight children than in non-overweight children for normative data based on height (0.36 vs. 0.00) and lower for normative data based on body surface area (-0.64 vs. 0.00). Left ventricular mass estimated normalizing for height, based on the equation for overweight children, is higher in overweight children (128.12 vs. 118.40); however, masses estimated normalizing for body surface area and lean body mass, based on equations for overweight children, are lower in overweight children (109.71 vs. 122.08 and 118.46 vs. 120.56, respectively).

Conclusion: Normalization for body surface area and for computed lean body mass, but not for height, underestimates left ventricular mass in overweight children.
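The allometric normalization underlying such reference equations can be sketched as follows. This is a minimal illustration assuming a simple power-law model LVM = a * X**b fitted by least squares on log-transformed data; the study's actual reference equations and z-score construction are more elaborate.

```python
import math

def fit_allometric(x, y):
    """Fit y = a * x**b by least squares on log-transformed data.

    x: body-size variable (e.g. height), y: left ventricular mass.
    Illustrative only, not the study's published equations.
    """
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

def expected_lvm(a, b, body_size):
    """Expected left ventricular mass for a given body-size value."""
    return a * body_size ** b
```

Comparing expected values from an equation fitted in the overweight subgroup against one fitted in the normal-weight subgroup, as the Methods describe, then reveals whether a given normalizing variable systematically under- or overestimates LVM.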
Attribution-NonCommercial 2.0 (CC BY-NC 2.0)https://creativecommons.org/licenses/by-nc/2.0/
License information was derived automatically
International Journal of Social Studies and Public Policy
https://www.elsevier.com/about/policies/open-access-licenses/elsevier-user-license/cpc-license/
Abstract: The Poincaré code is a Maple project package that aims to gather significant computer algebra normal form (and subsequent reduction) methods for handling nonlinear ordinary differential equations. As a first version, a set of fourteen easy-to-use Maple commands is introduced for symbolic creation of (improved variants of Poincaré's) normal forms as well as their associated normalizing transformations. The software is the implementation by the authors of carefully studied and followed up sele...
Title of program: POINCARÉ Catalogue Id: AEPJ_v1_0
Nature of problem: Computing structure-preserving normal forms near the origin for nonlinear vector fields.
Versions of this program held in the CPC repository in Mendeley Data AEPJ_v1_0; POINCARÉ; 10.1016/j.cpc.2013.04.003
This program has been imported from the CPC Program Library held at Queen's University Belfast (1969-2018)
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Normalized lift (LN) equations for different lift-generating systems.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
All three types of SIF-driven T models integrate canopy conductance (gc) with the Penman-Monteith model, differing in how gc is derived: from a SIFobs driven semi-mechanistic equation, a SIFsunlit and SIFshaded driven semi-mechanistic equation, and a SIFsunlit and SIFshaded driven machine learning model.
The difference between the simplified SIF-gc equation and the full SIF-gc equation lies in the treatment of some parameters; see https://doi.org/10.1016/j.rse.2024.114586.
In this dataset, the temporal resolution is 1 day, and the spatial resolution is 0.2 degree.
BL: SIFobs driven semi-mechanistic model
TL: SIFsunlit and SIFshaded driven semi-mechanistic model
hybrid models: SIFsunlit and SIFshaded driven machine learning model.
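The Penman-Monteith coupling described above can be sketched as follows. The default coefficient values are illustrative textbook numbers (assumed, not taken from this dataset), and the SIF-to-gc derivation step is omitted.

```python
def penman_monteith(rn, vpd, ga, gc, delta=0.145, gamma=0.066,
                    rho=1.2, cp=1013.0):
    """Latent heat flux (W m-2) from the Penman-Monteith equation.

    rn: available energy (W m-2); vpd: vapor pressure deficit (kPa);
    ga, gc: aerodynamic and canopy conductance (m s-1);
    delta, gamma: slope of the saturation vapor pressure curve and
    psychrometric constant (kPa K-1); rho, cp: air density (kg m-3)
    and specific heat (J kg-1 K-1). Defaults are illustrative only.
    """
    return (delta * rn + rho * cp * vpd * ga) / \
           (delta + gamma * (1.0 + ga / gc))
```

Whatever model supplies gc (semi-mechanistic SIF equation or machine learning), a larger canopy conductance shrinks the denominator and so raises the estimated transpiration flux.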
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is an annual time series of Landsat Analysis Ready Data (ARD)-derived Normalized Difference Water Index (NDWI) computed from Landsat 5 Thematic Mapper (TM) and Landsat 8 Operational Land Imager (OLI). To ensure a consistent dataset, Landsat 7 has not been used because the Scan Line Corrector (SLC) failure creates gaps in the data.

NDWI quantifies plant water content by measuring the difference between the Near-Infrared (NIR) and Short-Wave Infrared (SWIR) (or Green) channels using this generic formula: (NIR - SWIR) / (NIR + SWIR). For Landsat sensors, this corresponds to the following bands: Landsat 5, NDWI = (Band 4 - Band 2) / (Band 4 + Band 2); Landsat 8, NDWI = (Band 5 - Band 3) / (Band 5 + Band 3). NDWI values range from -1 to +1. NDWI is a good proxy for plant water stress and is therefore useful for drought monitoring and early warning. NDWI is sometimes also referred to as the Normalized Difference Moisture Index (NDMI).

The Standard Deviation is also provided for each time step. Data format: GeoTIFF. This dataset has been generated with the Swiss Data Cube (http://www.swissdatacube.ch).
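The generic normalized-difference formula above can be computed with a short sketch like this; mapping sensor bands to arrays is left to the caller.

```python
import numpy as np

def ndwi(nir, other):
    """Normalized difference index (NIR - X) / (NIR + X).

    'other' is the SWIR (or Green) channel, per the generic formula
    in the dataset description. Inputs are reflectance arrays or
    scalars; the result lies in [-1, 1] for positive reflectances.
    """
    nir = np.asarray(nir, dtype=float)
    other = np.asarray(other, dtype=float)
    return (nir - other) / (nir + other)
```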
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Stable isotope data analyzed in the manuscript. (XLSX)
We characterize the textural and geochemical features of ocean crustal zircon recovered from plagiogranite, evolved gabbro, and metamorphosed ultramafic host-rocks collected along present-day slow and ultraslow spreading mid-ocean ridges (MORs). The geochemistry of 267 zircon grains was measured by sensitive high-resolution ion microprobe-reverse geometry at the USGS-Stanford Ion Microprobe facility. Three types of zircon are recognized based on texture and geochemistry. Most ocean crustal zircons resemble young magmatic zircon from other crustal settings, occurring as pristine, colorless euhedral (Type 1) or subhedral to anhedral (Type 2) grains. In these grains, Hf and most trace elements vary systematically with Ti, typically becoming enriched with falling Ti-in-zircon temperature. Ti-in-zircon temperatures range from 1,040 to 660°C (corrected for aTiO2 ≈ 0.7, aSiO2 ≈ 1.0, pressure ≈ 2 kbar); intra-sample variation is typically ~15-60°C. Decreasing Ti correlates with enrichment in Hf to ~2 wt%, while additional Hf-enrichment occurs at relatively constant temperature. Trends between Ti and U, Y, REE, and Eu/Eu* exhibit a similar inflection, which may denote the onset of eutectic crystallization; the inflection is well-defined by zircons from plagiogranite and implies solidus temperatures of ~680-740°C. A third type of zircon is defined as being porous and colored with chaotic CL zoning, and occurs in ~25% of rock samples studied. These features, along with high measured La, Cl, S, Ca, and Fe, and low (Sm/La)N ratios are suggestive of interaction with aqueous fluids. Non-porous, luminescent CL overgrowth rims on porous grains record uniform temperatures averaging 615 ± 26°C (2SD, n = 7), implying zircon formation below the wet-granite solidus and under water-saturated conditions.
Zircon geochemistry reflects, in part, source region; elevated HREE coupled with low U concentrations allow effective discrimination of ~80% of zircon formed at modern MORs from zircon in continental crust. The geochemistry and textural observations reported here serve as an important database for comparison with detrital, xenocrystic, and metamorphosed mafic rock-hosted zircon populations to evaluate provenance.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper introduces the integration of the Cobb-Douglas (CD) utility model with quantum computation, utilizing the Clairaut-type differential formula. We propose a novel economic-physical model using envelope theory to establish a direct link with quantum entanglement, thereby defining emergent probabilities in the optimal utility function for two goods within a given expenditure limit. The study illuminates the interaction between the CD model and quantum computation, emphasizing the elucidation of system entropy and the role of Clairaut differential equations in understanding the utility's optimal envelopes and intercepts. Innovative algorithms utilizing the 2D-Clairaut differential equation are introduced for the quantum formulation of the CD function, showcasing accurate representation in quantum circuits for one and two qubits. Our empirical findings, validated through IBM-Q computer simulations, align with analytical predictions, demonstrating the robustness of our approach. This methodology articulates the utility-budget relationship within the CD function through a clear model based on envelope representation and canonical line equations, where normalized intercepts signify probabilities. The efficiency and precision of our results, especially in modeling one- and two-qubit quantum entanglement within econometrics, surpass those of IBM-Q simulations, which require extensive iterations for comparable accuracy. This underscores our method's effectiveness in merging economic models with quantum computation.
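For context, the classical Cobb-Douglas optimum that the paper builds on has a well-known closed form. The sketch below shows only that classical result (exponents summing to one, two goods, a linear budget), not the paper's quantum-circuit or Clairaut-equation formulation.

```python
def cobb_douglas_optimum(alpha, m, p1, p2):
    """Utility-maximizing bundle for U = x**alpha * y**(1 - alpha)
    subject to the budget p1*x + p2*y = m.

    Closed form: x* = alpha*m/p1, y* = (1 - alpha)*m/p2, so the
    exponents alpha and 1 - alpha act as expenditure shares; these
    normalized shares are what the paper reinterprets as probabilities.
    """
    return alpha * m / p1, (1.0 - alpha) * m / p2
```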
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In Binding Diffusion* (14), ck represents the concentration of the ligand-receptor complex in the kth image, ckM is the mobile fraction of c, β is the immobile fraction, and τ is the time needed for each image scan. Finally, g1 and g2 are the bleaching functions for bound (c) and free (u) proteins.
The following data entry sheets are designed to quantify salinity-normalized seawater total alkalinity anomalies (∆nTA) from inputs of offshore and coral reef total alkalinity (TA) and salinity (S) data, while taking into account the various sources of uncertainty associated with these normalizations and calculations, to estimate the calcification dissolution potential (CDP) for each reef observation (for details see Courtney et al., 2021). Only cells blocked in white should be modified on the "Data Entry" sheet; all cells blocked in gray are locked to protect the formulas from being modified. Data for at least one offshore TA and S sample and one coral reef TA and S sample must be entered to display the ∆nTA and CDP for the given reef system. The equations herein average all offshore TA and S data to calculate the ∆nTA, in order to leverage all available data. Additionally, the spreadsheets allow the reference S to be set to the mean offshore or mean coral reef S, and ∆nTA is calculated for a range of freshwater TA endmembers, including the option of a user-defined value. ∆nTA is calculated as per the following equations from Courtney et al. (2021). The CDP summary page also provides a number of summary graphs to visualize (1) whether there are apparent relationships between coral reef TA and S, (2) how the ∆nTA of the inputted data compares to global coral reef ∆TA data from Cyronak et al. (2018), (3) how the ∆nTA data varies spatially across the reef locations, and (4) how well the ∆nTA data covers a complete diel cycle. For further details on the uncertainties associated with the salinity normalization of coral reef data and relevant equations, please see the following publication: Courtney TA, Cyronak T, Griffin AJ, Andersson AJ (2021) Implications of salinity normalization of seawater total alkalinity in coral reef metabolism studies. PLOS One 16(12): e0261210.
https://doi.org/10.1371/journal.pone.0261210 Please cite as: Courtney TA & Andersson AJ (2022) Calcification Dissolution Potential Tool for Excel: Version 1. https://doi.org/10.5281/zenodo.7051628
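A minimal sketch of the conventional salinity normalization (nTA = TA × S_ref / S) behind these sheets is shown below. The spreadsheet's freshwater-endmember options and uncertainty handling from Courtney et al. (2021) are not reproduced here.

```python
def delta_nta(ta_reef, s_reef, ta_offshore, s_offshore, s_ref):
    """Salinity-normalized TA anomaly for one reef observation.

    Applies the simple normalization nTA = TA * S_ref / S to the reef
    sample and to each offshore sample, averages the offshore values
    (as the sheets do), and returns the difference. Freshwater TA
    endmember corrections are omitted in this sketch.
    """
    n_reef = ta_reef * s_ref / s_reef
    n_off = [ta * s_ref / s for ta, s in zip(ta_offshore, s_offshore)]
    return n_reef - sum(n_off) / len(n_off)
```

A negative ∆nTA (reef TA below the offshore mean after normalization) is the signature of net calcification drawing down alkalinity on the reef.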
This visualization product displays marine macro-litter (> 2.5 cm) material category percentages per beach per year from non-MSFD monitoring surveys, research and cleaning operations.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol, following OSPAR's approach of no longer including these data in its monitoring;
- Selection of surveys from non-MSFD monitoring, cleaning and research operations;
- Exclusion of beaches without coordinates;
- Exclusion of surveys without an associated survey length;
- Removal of some litter types such as organic litter, small fragments (paraffin and wax; items > 2.5 cm) and pollutants. The list of selected items is attached to this metadata. This list was created using the EU Marine Beach Litter Baselines and the EU Threshold Value for Macro Litter on Coastlines from the JRC (these two documents are attached to this metadata);
- Exclusion of the "faeces" category: more precisely, the items for dog excrement in bags from the OSPAR (item code: 121) and ITA (item code: IT59) reference lists;
- Normalization of survey lengths to 100 m and 1 survey/year: in some cases the survey length was not 100 m, so in order to compare litter abundance across beaches a normalization is applied using this formula: Number of items (normalized to 100 m) = number of items × (100 / survey length). The normalized numbers of items are then summed to obtain the total normalized number of litter items for each survey.
To calculate the percentage for each material category, the following formula is applied: Material (%) = (∑ number of items (normalized to 100 m) of the material category) × 100 / (∑ number of items (normalized to 100 m) of all categories).
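The two normalization formulas above can be sketched as:

```python
def normalize_counts(counts, survey_length_m):
    """Normalize per-item counts to a 100 m survey length:
    items_100m = items * (100 / survey length)."""
    return {item: n * 100.0 / survey_length_m for item, n in counts.items()}

def material_percentages(normalized_by_material):
    """Material (%) = material total * 100 / grand total, where the
    inputs are counts already normalized to 100 m and summed per
    material category."""
    total = sum(normalized_by_material.values())
    return {m: v * 100.0 / total for m, v in normalized_by_material.items()}
```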
The material categories differ between reference lists (OSPAR, TSG_ML, UNEP, UNEP_MARLIN). In order to apply a common procedure for all the surveys, the material categories have been harmonized.
More information is available in the attached documents.
Warning: the absence of data on the map doesn't necessarily mean that they don't exist, but that no information has been entered in the Marine Litter Database for this area.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The primary aim of this study is to explore the influence of social media on university students’ revisit intention in sports tourism, using the Expectation-Confirmation Model and the Uses and Gratifications Theory. A structured questionnaire was distributed to a random sample of 435 students from three universities in Hubei Province to measure their self-reported responses across six constructs: perceived usefulness, information quality, perceived enjoyment, electronic word-of-mouth (eWOM), satisfaction, and revisit intention. Employing a hybrid approach of Structural Equation Modeling (SEM) and Artificial Neural Networks (ANN), the study examines the non-compensatory and non-linear relationships between the predictive factors and university students’ revisit intention in sports tourism. The results indicate that information quality, perceived enjoyment, satisfaction, and eWOM are significant direct predictors of revisit intention in sports tourism. In contrast, the direct influence of perceived usefulness on revisit intention is insignificant. The ANN analysis revealed the normalized importance ranking of the predictors as follows: eWOM, information quality, satisfaction, perceived enjoyment, and perceived usefulness. This study not only provides new insights into the existing literature on the impact of social media on students’ tourism behavior but also serves as a valuable reference for future research on tourism behavior.
With the Monte Carlo code ACAT, we have calculated sputtering yields of fifteen fusion-relevant mono-atomic materials (Be, B, C, Al, Si, Ti, Cr, Fe, Co, Ni, Cu, Zr, Mo, W, Re) under obliquely incident light ions (H+, D+, T+, He+) at incident energies of 50 eV to 10 keV. An improved formula for the incident-angle dependence of the normalized sputtering yield has been fitted to the ACAT data, normalized by the normal-incidence data, to derive the best-fit values of the three physical variables included in the formula versus incident energy. We then found suitable functions of incident energy that fit these values most closely. The average relative difference between the normalized ACAT data and the formula with these functions is less than 10% in most cases, and less than 20% for the rest, at the incident energies considered, for all combinations of the projectiles and the target materials. We have also compared the calculated data and the formula with available normalized experimental data at given incident energies. The best-fit values of the parameters included in the functions are tabulated for all combinations for use. Keywords: sputtering, erosion, plasma-material interactions, first wall materials, fitting formula, Monte Carlo method, binary collision approximation, computer simulation.
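The paper's improved three-parameter formula and its fitted coefficients are given in its tables and are not reproduced here. As an illustration of this class of fits, the sketch below implements the widely used Yamamura-type angular dependence with two parameters (f and the optimum angle), which is an assumption for illustration, not the paper's formula.

```python
import math

def angular_yield_ratio(theta_deg, f, theta_opt_deg):
    """Yamamura-type angular dependence of the normalized sputtering
    yield: Y(theta)/Y(0) = (1/cos theta)**f
                           * exp(-f * cos(theta_opt) * (1/cos theta - 1)).

    theta_deg: incidence angle from the surface normal (degrees);
    f, theta_opt_deg: fit parameters (illustrative, energy-dependent
    in practice). Returns 1.0 at normal incidence by construction.
    """
    inv_cos = 1.0 / math.cos(math.radians(theta_deg))
    sigma = f * math.cos(math.radians(theta_opt_deg))
    return inv_cos ** f * math.exp(-sigma * (inv_cos - 1.0))
```

For typical parameter values the ratio rises above 1 at oblique incidence before collapsing near grazing angles, which is the qualitative shape such formulas are fitted to reproduce.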
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Nomenclature and symbols.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
For each explanatory variable, the expected LVM values are calculated twice: first from a predictive equation developed for the OVER subgroup, then from a predictive equation developed for the NORM subgroup. To compare these, differences between the paired expected LVM values were calculated, and linear regression coefficients for the relationship between the differences and BMI z-score were tested.