Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Left ventricular mass normalization for body size is recommended, but a question remains: what is the best body size variable for this normalization: body surface area, height or lean body mass computed based on a predictive equation? Since body surface area and computed lean body mass are derivatives of body mass, normalizing for them may result in underestimation of left ventricular mass in overweight children. The aim of this study is to indicate which of the body size variables normalize left ventricular mass without underestimating it in overweight children.
Methods: Left ventricular mass assessed by echocardiography, height and body mass were collected for 464 healthy boys, 5–18 years old. Lean body mass and body surface area were calculated. Left ventricular mass z-scores computed based on reference data, developed for height, body surface area and lean body mass, were compared between overweight and non-overweight children. The next step was a comparison of paired samples of expected left ventricular mass, estimated for each normalizing variable based on two allometric equations: the first developed for overweight children, the second for children of normal body mass.
Results: The mean of left ventricular mass z-scores is higher in overweight children compared to non-overweight children for normative data based on height (0.36 vs. 0.00) and lower for normative data based on body surface area (-0.64 vs. 0.00). Left ventricular mass estimated normalizing for height, based on the equation for overweight children, is higher in overweight children (128.12 vs. 118.40); however, masses estimated normalizing for body surface area and lean body mass, based on equations for overweight children, are lower in overweight children (109.71 vs. 122.08 and 118.46 vs. 120.56, respectively).
Conclusion: Normalization for body surface area and for computed lean body mass, but not for height, underestimates left ventricular mass in overweight children.
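The allometric normalization used here can be illustrated with a generic sketch. This is not the study's fitted model: the coefficients, the hypothetical data frame d, and the column names (lvm, height) are assumptions, and in practice the study's own reference equations would be used.

# Generic allometric normalization sketch (hypothetical data and names).
# Fit log(LVM) = log(a) + b*log(X) on a reference (non-overweight) sample,
# then express each child's LVM as a z-score of the log-scale residuals.
fit_allometric <- function(lvm, x) {
  stopifnot(all(lvm > 0), all(x > 0))
  m <- lm(log(lvm) ~ log(x))          # log-log (allometric) regression
  list(a = exp(coef(m)[1]), b = coef(m)[2], sd_resid = sd(residuals(m)))
}
lvm_zscore <- function(lvm, x, ref) {
  expected_log <- log(ref$a) + ref$b * log(x)   # expected LVM for body size x
  (log(lvm) - expected_log) / ref$sd_resid      # z-score on the log scale
}
# Usage with a hypothetical reference sample d (columns lvm, height):
# ref <- fit_allometric(d$lvm, d$height)
# z   <- lvm_zscore(child_lvm, child_height, ref)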
Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
All three types of SIF-driven transpiration (T) models integrate canopy conductance (gc) with the Penman-Monteith model; they differ in how gc is derived: from a SIFobs-driven semi-mechanistic equation, from a SIFsunlit- and SIFshaded-driven semi-mechanistic equation, or from a SIFsunlit- and SIFshaded-driven machine learning model.
The simplified SIF-gc equation differs from the SIF-gc equation in how some parameters are treated; see https://doi.org/10.1016/j.rse.2024.114586 for details.
In this dataset, the temporal resolution is 1 day and the spatial resolution is 0.2 degrees. The model abbreviations are listed below, followed by a minimal sketch of the Penman-Monteith combination.
BL: SIFobs driven semi-mechanistic model
TL: SIFsunlit and SIFshaded driven semi-mechanistic model
hybrid models: SIFsunlit and SIFshaded driven machine learning model.
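As a rough illustration of how a canopy conductance estimate plugs into the Penman-Monteith model, here is a minimal sketch of the standard big-leaf form. It is not this dataset's implementation: the variable names, constants, and the aerodynamic conductance ga are assumptions, and gc would come from one of the three SIF-driven approaches listed above.

# Minimal Penman-Monteith sketch (big-leaf form); all inputs are assumptions.
# rn: available energy (W m-2), vpd: vapour pressure deficit (kPa),
# ta: air temperature (degC), ga/gc: aerodynamic/canopy conductance (m s-1).
penman_monteith_T <- function(rn, vpd, ta, ga, gc,
                              gamma = 0.066,   # psychrometric constant (kPa K-1)
                              rho_a = 1.2,     # air density (kg m-3)
                              cp    = 1013) {  # specific heat of air (J kg-1 K-1)
  es    <- 0.6108 * exp(17.27 * ta / (ta + 237.3))   # saturation vapour pressure (kPa)
  delta <- 4098 * es / (ta + 237.3)^2                # slope of the es curve (kPa K-1)
  # latent heat flux (W m-2); transpiration follows after dividing by lambda
  (delta * rn + rho_a * cp * vpd * ga) / (delta + gamma * (1 + ga / gc))
}
# e.g. penman_monteith_T(rn = 400, vpd = 1.2, ta = 25, ga = 0.02, gc = 0.01)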
Gamification is a strategy to stimulate the social and human factors (SHF) that influence software development productivity. Software development teams must improve their productivity to face the challenges of software development organizations, yet productivity analysis has traditionally included only technical factors. The literature shows the importance of SHFs for productivity, and gamification elements can help enhance such factors to improve performance. Thus, to design strategies that enhance a specific SHF, it is essential to identify how gamification elements are related to these factors. The objective of this research is to determine the relationship between gamification elements and the SHFs that influence the productivity of software development teams. The research included the design of a scoring template to collect data from experts, and the importance of each relationship was calculated using the Simple Additive Weighting (SAW) method, a tool framed in decision theory, based on three criteria: cumulative score, matches in inclusion, and values. The resulting importance values serve as a reference in designing gamification strategies that promote improved productivity, extending the path toward analyzing the effect of gamification on software development productivity and facilitating the design and implementation of such strategies.
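To make the scoring step concrete, here is a minimal sketch of Simple Additive Weighting on a hypothetical decision matrix. The element/factor pairs, criterion scores, and weights below are illustrative assumptions, not the values collected from the study's experts.

# Simple Additive Weighting (SAW) sketch with illustrative numbers.
# Rows: gamification element / SHF pairs; columns: the three criteria.
scores <- matrix(c(8, 5, 7,
                   6, 9, 4,
                   7, 7, 6),
                 nrow = 3, byrow = TRUE,
                 dimnames = list(c("points-motivation", "badges-engagement", "leaderboard-collaboration"),
                                 c("cumulative_score", "matches_in_inclusion", "values")))
weights <- c(0.5, 0.3, 0.2)   # assumed criterion weights, summing to 1
# Benefit-type criteria: normalize each column by its maximum, then weight and sum.
normalized <- sweep(scores, 2, apply(scores, 2, max), "/")
saw_score  <- as.vector(normalized %*% weights)
names(saw_score) <- rownames(scores)
sort(saw_score, decreasing = TRUE)   # ranking of relationships by importance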
Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Nomenclature and symbols.
Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In Binding Diffusion* (14), c_k represents the concentration of the ligand-receptor complex in the kth image, c_k^M is the mobile fraction of c, β is the immobile fraction, and τ is the time needed for each image scan. Finally, g1 and g2 are the bleaching functions for bound (c) and free (u) proteins.
This dataset is an annual time series of Landsat Analysis Ready Data (ARD)-derived Normalized Difference Water Index (NDWI) computed from Landsat 5 Thematic Mapper (TM) and Landsat 8 Operational Land Imager (OLI). To ensure a consistent dataset, Landsat 7 has not been used because the Scan Line Corrector (SLC) failure creates gaps in the data. NDWI quantifies plant water content by measuring the difference between the Near-Infrared (NIR) and Short-Wave Infrared (SWIR) (or Green) channels using the generic formula (NIR - SWIR) / (NIR + SWIR). For the Landsat sensors used here, this corresponds to the following bands: Landsat 5, NDWI = (Band 4 - Band 2) / (Band 4 + Band 2); Landsat 8, NDWI = (Band 5 - Band 3) / (Band 5 + Band 3). NDWI values range from -1 to +1. NDWI is a good proxy for plant water stress and is therefore useful for drought monitoring and early warning. NDWI is sometimes also referred to as the Normalized Difference Moisture Index (NDMI). The standard deviation is also provided for each time step. Data format: GeoTiff. This dataset has been generated with the Swiss Data Cube (http://www.swissdatacube.ch).
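A minimal sketch of the band arithmetic is given below, using the Green-band variant reported for this dataset. The input names (nir, green, the per-year lists) are placeholders for reflectance matrices already read from the GeoTIFFs; this is an illustration, not the Swiss Data Cube processing chain.

# NDWI sketch for the Green-band variant used here: (NIR - Green) / (NIR + Green).
# nir and green are placeholder matrices of surface reflectance
# (Landsat 5: Band 4 and Band 2; Landsat 8: Band 5 and Band 3).
ndwi <- function(nir, green) {
  stopifnot(all(dim(nir) == dim(green)))
  out <- (nir - green) / (nir + green)
  out[!is.finite(out)] <- NA   # guard against division by zero
  out                          # values range from -1 to +1
}
# Aggregating a list of NDWI layers (e.g. scenes within one time step) into a per-pixel SD:
# ndwi_layers <- lapply(seq_along(nir_list), function(i) ndwi(nir_list[[i]], green_list[[i]]))
# ndwi_sd     <- apply(simplify2array(ndwi_layers), c(1, 2), sd, na.rm = TRUE)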
https://www.elsevier.com/about/policies/open-access-licenses/elsevier-user-license/cpc-license/
Abstract: The Poincaré code is a Maple project package that aims to gather significant computer algebra normal form (and subsequent reduction) methods for handling nonlinear ordinary differential equations. As a first version, a set of fourteen easy-to-use Maple commands is introduced for symbolic creation of (improved variants of Poincaré's) normal forms as well as their associated normalizing transformations. The software is the implementation by the authors of carefully studied and followed up sele...
Title of program: POINCARÉ
Catalogue Id: AEPJ_v1_0
Nature of problem: Computing structure-preserving normal forms near the origin for nonlinear vector fields.
Versions of this program held in the CPC repository in Mendeley Data AEPJ_v1_0; POINCARÉ; 10.1016/j.cpc.2013.04.003
This program has been imported from the CPC Program Library held at Queen's University Belfast (1969-2018)
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
A ZIP file of computer code and output used in the numerical calculations for On The Finite-Size Lyapunov Exponent For The Schrodinger Operator With Skew-Shift Potential by Paul Michael Kielstra and Marius Lemm. The ZIP decompresses to about 26GB, containing multiple files:
201x201 bad set grid.txt: A list of 201x201=40401 evenly spaced points on [0, 1]x[0, 1], each written in the form (x, y) and followed by 30000 values of E which are probably bad for that point. This gives a total of 40401x30001=1212070401 lines.
Upper bounds.txt: individual upper bounds for equation (9) calculated at various points. The bound in this equation in the published paper is the worst of these.
E=0/N/2001x2001 grid.tsv: A tab-separated values file of 2001x2001=4004001 evenly spaced points on [0, 1]x[0, 1], with headers:
X: The x-coordinate of the point represented by the line in question.
Y: The y-coordinate.
Exact_x, Exact_y: The x- and y-coordinates to the maximum precision the computer used, for cases where, for instance, the x-coordinate is defined to be 0.5 but is actually stored as 0.5000000000000001 in memory.
Matrix: The matrix generated at this point, modulo a certain normalization (see below).
Result: The log of the norm of the matrix. This has been corrected for the normalization -- it is calculated as if the matrix had never been normalized.
Normalizationcount: The actual matrix generated is too large to store in memory, so the value in the Matrix column is the true matrix multiplied by Normalizer^Normalizationcount. We used a normalizer of 0.01. A generic sketch of this bookkeeping appears after the file list.
This file was calculated with the values E=0, N=30000, lambda=1/2. The header line means that this file contains 4004001+1=4004002 lines in total.
E=0/N/2001x2001 random grid.tsv: As with the 2001x2001 grid.tsv file, but missing the exact_x and exact_y coordinates. Instead, the x and y values are both exact and randomly chosen. The lines in the file are in no particular order. This file contains the data for the Monte Carlo approximation used in the paper.
E=0/2N/2001x2001 grid.tsv: As with its counterpart in the folder labeled N, but calculated with N=60000 instead.
E=-2.495: As with its counterpart E=0, but everything is calculated with E=-2.495123260049612 (which we round to -2.49512326 in the paper). This folder also contains no random or Monte Carlo calculations.
Code/Multiplier.m: MATLAB code to generate the skew matrix at a given point.
Code/Iterator.m: MATLAB code to iterate over a series of points and call Multiplier at each.
Code/Striper.m: MATLAB code to split up the input space into a series of stripes and call Iterator on exactly one of them. We performed our calculations in parallel, each job consisting of calling Striper on a different stripe number.
Code/Badfinder.m: MATLAB code to take a point and output a series of E-values for which that point is in the bad set.
Code/BadSetIterator.m: As with Iterator.m, but calls Badfinder.
Code/BadSetStriper.m: As with Striper.m, but calls BadSetIterator. (The function in this file is also called Striper.)
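The Normalizationcount bookkeeping referenced above can be illustrated with a generic sketch of a renormalized matrix product. This is not the authors' MATLAB code: the matrix generator make_matrix, the threshold, and the 2x2 size are placeholders, and only the rescale-and-correct pattern is the point.

# Generic sketch of the renormalization bookkeeping described for the grid files.
# The running product is rescaled by `normalizer` whenever it grows large, and the
# final log-norm is corrected so it refers to the product as if never normalized.
log_norm_product <- function(make_matrix, n_steps, normalizer = 0.01, threshold = 1e100) {
  P <- diag(2)
  count <- 0
  for (k in seq_len(n_steps)) {
    P <- make_matrix(k) %*% P               # make_matrix(k): placeholder generator
    if (norm(P, type = "2") > threshold) {  # keep entries representable
      P <- P * normalizer
      count <- count + 1
    }
  }
  list(matrix = P, normalizationcount = count,
       result = log(norm(P, type = "2")) - count * log(normalizer))
}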
With the Monte Carlo code ACAT, we have calculated the sputtering yield of fifteen fusion-relevant mono-atomic materials (Be, B, C, Al, Si, Ti, Cr, Fe, Co, Ni, Cu, Zr, Mo, W, Re) under obliquely incident light ions (H+, D+, T+, He+) at incident energies of 50 eV to 10 keV. An improved formula for the dependence of the normalized sputtering yield on incident angle has been fitted to the ACAT data, normalized by the normal-incidence data, to derive the best-fit values of the three physical variables included in the formula as functions of incident energy. We then found suitable functions of incident energy that fit these values most closely. The average relative difference between the normalized ACAT data and the formula with these functions is less than 10% in most cases, and less than 20% for the rest, at the incident energies considered, for all combinations of the projectiles and target materials. We have also compared the calculated data and the formula with the available normalized experimental data at given incident energies. The best-fit values of the parameters included in the functions are tabulated for all combinations for use.
Keywords: Sputtering, Erosion, Plasma-material interactions, First wall materials, Fitting formula, Monte Carlo method, binary collision approximation, computer simulation.
Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset includes intermediate data from RiboBase that generates translation efficiency (TE). The code to generate the files can be found at https://github.com/CenikLab/TE_model.
We uploaded demo HeLa .ribo files, but due to the large storage requirements of the full dataset, we recommend contacting Dr. Can Cenik directly to request access to the complete version of RiboBase if you need the original data.
The detailed explanation for each file:
human_flatten_ribo_clr.rda: ribosome profiling clr normalized data with GEO GSM ids in columns and genes in rows in human.
human_flatten_rna_clr.rda: matched RNA-seq clr normalized data with GEO GSM ids in columns and genes in rows in human.
human_flatten_te_clr.rda: TE clr data with GEO GSM ids in columns and genes in rows in human.
human_TE_cellline_all_plain.csv: TE clr data with genes in rows and cell lines in columns in human.
human_RNA_rho_new.rda: matched RNA-seq proportional similarity data as genes by genes matrix in human.
human_TE_rho.rda: TE proportional similarity data as genes by genes matrix in human.
mouse_flatten_ribo_clr.rda: ribosome profiling clr normalized data with GEO GSM ids in columns and genes in rows in mouse.
mouse_flatten_rna_clr.rda: matched RNA-seq clr normalized data with GEO GSM ids in columns and genes in rows in mouse.
mouse_flatten_te_clr.rda: TE clr data with GEO GSM ids in columns and genes in rows in mouse.
mouse_TE_cellline_all_plain.csv: TE clr data with genes in rows and cell lines in columns in mouse.
mouse_RNA_rho_new.rda: matched RNA-seq proportional similarity data as genes by genes matrix in mouse.
mouse_TE_rho.rda: TE proportional similarity data as genes by genes matrix in mouse.
All the data passed quality control. There are 1054 human samples and 835 mouse samples:
* coverage > 0.1 X
* CDS percentage > 70%
* R2 between RNA and RIBO >= 0.188 (remove outliers)
All ribosome profiling data here are non-deduplicated, winsorized data paired with deduplicated, non-winsorized RNA-seq data (even though the file names contain "flatten", this is just the naming format).
Code:
If you need to read the .rda data, use load("rdaname.rda") in R.
If you need to calculate proportional similarity from clr data:
library(propr)
# compute the proportionality metric rho from the clr-transformed matrix
human_TE_homo_rho <- propr:::lr2rho(as.matrix(clr_data))
# label the resulting genes-by-genes matrix with the row names of clr_data
rownames(human_TE_homo_rho) <- colnames(human_TE_homo_rho) <- rownames(clr_data)
Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The primary aim of this study is to explore the influence of social media on university students' revisit intention in sports tourism, using the Expectation-Confirmation Model and the Uses and Gratifications Theory. A structured questionnaire was distributed to a random sample of 435 students from three universities in Hubei Province to measure their self-reported responses across six constructs: perceived usefulness, information quality, perceived enjoyment, electronic word-of-mouth (eWOM), satisfaction, and revisit intention. Employing a hybrid approach of Structural Equation Modeling (SEM) and Artificial Neural Networks (ANN), the study examines the non-compensatory and non-linear relationships between the predictive factors and university students' revisit intention in sports tourism. The results indicate that information quality, perceived enjoyment, satisfaction, and eWOM are significant direct predictors of revisit intention in sports tourism. In contrast, the direct influence of perceived usefulness on revisit intention is insignificant. ANN analysis revealed the normalized importance ranking of the predictors as follows: eWOM, information quality, satisfaction, perceived enjoyment, and perceived usefulness. This study not only provides new insights into the existing literature on the impact of social media on students' tourism behavior but also serves as a valuable reference for future research on tourism behavior.