Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The first column of the table provides the names of the methods for combining p-values investigated in our study. The second column lists the reference number (Ref) cited in this paper for the publication corresponding to each method. The third column provides the equation number of the method's distribution function used to compute the combined p-value. The fourth column indicates whether a method's equation can accommodate (acc.) weights when combining p-values. The fifth column gives the normalization (nor.) procedure used to normalize the weights. Finally, the last column indicates whether a method can account for correlation (corr.) between p-values.
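To make the table's vocabulary concrete, here is a minimal R sketch of one widely used weighted combination method (the weighted Stouffer Z-method); this is a generic illustration, not an implementation of any particular row of the table:
# Weighted Stouffer method: Z = sum(w_i * qnorm(1 - p_i)) / sqrt(sum(w_i^2))
combine_pvalues_stouffer <- function(p, w = rep(1, length(p))) {
  z <- sum(w * qnorm(1 - p)) / sqrt(sum(w^2))
  1 - pnorm(z)  # combined p-value
}
combine_pvalues_stouffer(c(0.01, 0.20, 0.03), w = c(2, 1, 1))  # hypothetical inputs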
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
EMG data were normalized using a max-min strategy. For comparison across all subjects, ʃIEMG values were normalized with the following formula, which scales all ʃIEMG values into the range -1 to +1: ʃIEMG_N = ʃIEMG_i / ʃIEMG_MAX.
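A minimal R sketch of this normalization, with hypothetical ʃIEMG values:
iemg <- c(0.42, 0.87, 1.31, 0.95)   # hypothetical ʃIEMG values per trial
iemg_norm <- iemg / max(iemg)       # ʃIEMG_N = ʃIEMG_i / ʃIEMG_MAX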
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Shown are mean ± SD. Values were extrapolated from a smaller temperature range, using linear or exponential fits (details in Materials and Methods and Table S1). 1: As defined by the onset time of the impulse response, K1. 2: Characteristic time constant, defined as τ = (f3dB)^-1. 3: Information transfer rate (Shannon's formula). 4: Information transfer rate (triple extrapolation method).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Left ventricular mass normalization for body size is recommended, but a question remains: what is the best body size variable for this normalization: body surface area, height, or lean body mass computed from a predictive equation? Since body surface area and computed lean body mass are derivatives of body mass, normalizing for them may result in underestimation of left ventricular mass in overweight children. The aim of this study is to indicate which of the body size variables normalizes left ventricular mass without underestimating it in overweight children.
Methods: Left ventricular mass assessed by echocardiography, height and body mass were collected for 464 healthy boys, 5-18 years old. Lean body mass and body surface area were calculated. Left ventricular mass z-scores, computed from reference data developed for height, body surface area and lean body mass, were compared between overweight and non-overweight children. The next step was a comparison of paired samples of expected left ventricular mass, estimated for each normalizing variable from two allometric equations: the first developed for overweight children, the second for children of normal body mass.
Results: The mean left ventricular mass z-score is higher in overweight children than in non-overweight children for normative data based on height (0.36 vs. 0.00) and lower for normative data based on body surface area (-0.64 vs. 0.00). Left ventricular mass estimated by normalizing for height, based on the equation for overweight children, is higher in overweight children (128.12 vs. 118.40); however, masses estimated by normalizing for body surface area and lean body mass, based on the equations for overweight children, are lower in overweight children (109.71 vs. 122.08 and 118.46 vs. 120.56, respectively).
Conclusion: Normalization for body surface area and for computed lean body mass, but not for height, underestimates left ventricular mass in overweight children.
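For orientation only, a minimal R sketch of allometric normalization and z-scoring of the kind described above; the coefficients and values are hypothetical placeholders, not the study's fitted equations:
a <- 20; b <- 2.7                                  # hypothetical allometric coefficients
height  <- c(1.42, 1.65, 1.78)                     # body size variable (m)
lvm_obs <- c(95, 128, 150)                         # observed left ventricular mass (g)
lvm_exp <- a * height^b                            # expected LVM from LVM = a * x^b
z <- (lvm_obs - lvm_exp) / sd(lvm_obs - lvm_exp)   # simplified z-score vs. reference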
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset was conceived, designed and is maintained by Xiaoke WANG, Zhiyun OUYANG and Yunjian LUO. To develop this dataset of normalized tree biomass equations for China, we carried out an extensive survey and critical review of the literature (1978 to 2013) on biomass equations developed in China. The dataset consists of 5924 biomass equations for nearly 200 species (Equation sheet) and their associated background information (General sheet), providing sound geographical, climatic and forest vegetation coverage across China. The dataset is freely available for non-commercial scientific applications, provided it is appropriately cited. For further information, please read our Earth System Science Data article (https://doi.org/10.5194/essd-2019-1), or feel free to contact the authors.
This dataset is a seasonal time-series of Landsat Analysis Ready Data (ARD)-derived Normalized Difference Water Index (NDWI) computed from Landsat 5 Thematic Mapper (TM) and Landsat 8 Operational Land Imager (OLI). To ensure a consistent dataset, Landsat 7 has not been used because the Scan Line Corrector (SLC) failure creates gaps in the data.
NDWI quantifies plant water content by measuring the difference between the Near-Infrared (NIR) and Short-Wave Infrared (SWIR) (or Green) channels using this generic formula: NDWI = (NIR - SWIR) / (NIR + SWIR). For Landsat sensors, this corresponds to the following bands:
Landsat 5: NDWI = (Band 4 - Band 2) / (Band 4 + Band 2).
Landsat 8: NDWI = (Band 5 - Band 3) / (Band 5 + Band 3).
NDWI values range from -1 to +1. NDWI is a good proxy for plant water stress and is therefore useful for drought monitoring and early warning. NDWI is sometimes also referred to as the Normalized Difference Moisture Index (NDMI). Standard deviation is provided in a separate dataset for each time step.
Seasons: Spring: March-April-May (_MAM); Summer: June-July-August (_JJA); Autumn: September-October-November (_SON); Winter: December-January-February (_DJF).
Data format: GeoTiff.
This dataset has been generated with the Swiss Data Cube (http://www.swissdatacube.ch).
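A minimal R sketch of computing NDWI from the Landsat 8 bands named above, using the terra package; the file names are placeholders:
library(terra)                          # raster I/O and map algebra
b3 <- rast("landsat8_band3.tif")        # Green band (placeholder file name)
b5 <- rast("landsat8_band5.tif")        # NIR band (placeholder file name)
ndwi <- (b5 - b3) / (b5 + b3)           # Landsat 8 formula from the description; values in [-1, +1]
writeRaster(ndwi, "ndwi_L8.tif", overwrite = TRUE)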
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This source code was published as supporting material for the article: Ralph G. Andrzejak, Anaïs Espinoso, Eduardo García-Portugués, Arthur Pewsey, Jacopo Epifanio, Marc G. Leguia, Kaspar Schindler; High expectations on phase locking: Better quantifying the concentration of circular data. Chaos (2023); 33 (9): 091106. https://doi.org/10.1063/5.0166468
With the Monte Carlo code ACAT, we have calculated the sputtering yields of fifteen fusion-relevant mono-atomic materials (Be, B, C, Al, Si, Ti, Cr, Fe, Co, Ni, Cu, Zr, Mo, W, Re) under obliquely incident light ions (H+, D+, T+, He+) at incident energies of 50 eV to 10 keV. An improved formula for the dependence of the normalized sputtering yield on incident angle has been fitted to the ACAT data, normalized by the normal-incidence data, to derive the best-fit values of the three physical variables included in the formula as functions of incident energy. We then found suitable functions of incident energy that fit these values most closely. The average relative difference between the normalized ACAT data and the formula with these functions is less than 10% in most cases, and less than 20% for the rest, at the incident energies considered, for all combinations of the projectiles and target materials. We have also compared the calculated data and the formula with available normalized experimental data for given incident energies. The best-fit values of the parameters included in the functions are tabulated for all of the combinations for use. / Keywords: Sputtering, Erosion, Plasma-material interactions, First wall materials, Fitting formula, Monte-Carlo method, binary collision approximation, computer simulation
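As an illustration of this kind of fit, a minimal R sketch using nls() with the classic Yamamura angular-dependence form as a stand-in; the dataset's improved three-parameter formula is defined in the accompanying paper, and the data below are synthetic:
set.seed(1)
theta <- c(0, 15, 30, 45, 60, 75) * pi / 180                 # incidence angles (rad)
x <- 1 / cos(theta)
y <- x^1.8 * exp(1.8 * cos(pi / 3) * (1 - x)) + rnorm(length(x), 0, 0.01)  # synthetic Y(theta)/Y(0)
fit <- nls(y ~ (1 / cos(theta))^f * exp(f * cos(t_opt) * (1 - 1 / cos(theta))),
           start = list(f = 1.5, t_opt = 1.2))
coef(fit)   # best-fit parameters, analogous to the tabulated values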
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Conserved miRNAs expressed in the drought-sensitive and -tolerant tomato genotypes. a, TPM: transcripts per million, computed with the normalization formula: normalized expression = (actual miRNA count / total count of mapped reads) × 1,000,000. b, * and ** indicate a significant difference after drought stress. *: q-value ≤ 0.05 and |log2(SD/SCK)| ≥ 1. **: q-value ≤ 0.01 and |log2(SD/SCK)| ≥ 1. c, * and ** indicate a significant difference after drought stress. *: q-value ≤ 0.05 and |log2(TD/TCK)| ≥ 1. **: q-value ≤ 0.01 and |log2(TD/TCK)| ≥ 1. (XLSX 27 kb)
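A minimal R sketch of the stated normalization formula, with hypothetical counts:
mirna_counts <- c(miR156 = 1520, miR159 = 87, miR166 = 43301)  # hypothetical miRNA counts
total_mapped <- 2.6e7                                          # hypothetical total mapped reads
tpm <- mirna_counts / total_mapped * 1e6                       # normalized expression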
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dissolved organic matter molecular analyses were performed on a Solarix FT-ICR-MS equipped with a 15 Tesla superconducting magnet (Bruker Daltonics) using an electrospray ionization source (Bruker Apollo II) in negative ion mode. Molecular formula calculation for all samples was performed using a Matlab (2010) routine that searches, with an error of < 0.5 ppm, for all potential combinations of the elements C, O and H (unlimited) with N ≤ 4, S ≤ 2 and P ≤ 1. The element combinations NSP, N2S, N3S, N4S, N2P, N3P, N4P, NS2, N2S2, N3S2, N4S2 and S2P were not allowed. Mass peak intensities were normalized to the total intensity of molecular formulas in each sample according to previously published rules (Rossel et al., 2015; doi:10.1016/j.marchem.2015.07.002). The final data contained 7400 molecular formulae.
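For illustration, a brute-force R sketch of this kind of formula search (not the authors' Matlab routine); the target mass is hypothetical and the element limits mirror the description:
masses <- c(C = 12, H = 1.0078250319, O = 15.9949146221,     # exact monoisotopic masses
            N = 14.0030740052, S = 31.97207069, P = 30.97376151)
target <- 325.1879                                           # hypothetical neutral mass
grid <- expand.grid(C = 0:30, H = 0:60, O = 0:15, N = 0:4, S = 0:2, P = 0:1)
calc <- as.matrix(grid) %*% masses[names(grid)]              # mass of each candidate formula
ppm  <- abs(calc[, 1] - target) / target * 1e6               # mass error in ppm
grid[ppm < 0.5, ]                                            # candidates within 0.5 ppm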
Attribution-NonCommercial 2.0 (CC BY-NC 2.0): https://creativecommons.org/licenses/by-nc/2.0/
License information was derived automatically
International Journal of Social Studies and Public Policy
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset includes intermediate data from RiboBase that is used to generate translation efficiency (TE). The code to generate the files can be found at https://github.com/CenikLab/TE_model.
We uploaded demo HeLa .ribo files; due to the large storage requirements of the full dataset, we recommend contacting Dr. Can Cenik directly to request access to the complete version of RiboBase if you need the original data.
The detailed explanation for each file:
human_flatten_ribo_clr.rda: ribosome profiling clr normalized data with GEO GSM ids in columns and genes in rows in human.
human_flatten_rna_clr.rda: matched RNA-seq clr normalized data with GEO GSM ids in columns and genes in rows in human.
human_flatten_te_clr.rda: TE clr data with GEO GSM ids in columns and genes in rows in human.
human_TE_cellline_all_plain.csv: TE clr data with genes in rows and cell lines in columns in human.
human_RNA_rho_new.rda: matched RNA-seq proportional similarity data as genes by genes matrix in human.
human_TE_rho.rda: TE proportional similarity data as genes by genes matrix in human.
mouse_flatten_ribo_clr.rda: ribosome profiling clr normalized data with GEO GSM ids in columns and genes in rows in mouse.
mouse_flatten_rna_clr.rda: matched RNA-seq clr normalized data with GEO GSM ids in columns and genes in rows in mouse.
mouse_flatten_te_clr.rda: TE clr data with GEO GSM ids in columns and genes in rows in mouse.
mouse_TE_cellline_all_plain.csv: TE clr data with genes in rows and cell lines in columns in mouse.
mouse_RNA_rho_new.rda: matched RNA-seq proportional similarity data as genes by genes matrix in mouse.
mouse_TE_rho.rda: TE proportional similarity data as genes by genes matrix in mouse.
All of the data passed quality control. There are 1054 human samples and 835 mouse samples, satisfying:
* coverage > 0.1 X
* CDS percentage > 70%
* R2 between RNA and RIBO >= 0.188 (remove outliers)
All ribosome profiling data here are non-deduplicated, winsorized data paired with deduplicated, non-winsorized RNA-seq data (the "flatten" in the file names refers only to the naming format).
#### Code
To read .rda files in R, use load("rdaname.rda").
To calculate proportional similarity from clr data:
library(propr)  # provides proportionality metrics for compositional data
# lr2rho() is an internal propr routine that computes the rho proportionality matrix from clr data
human_TE_homo_rho <- propr:::lr2rho(as.matrix(clr_data))
# carry gene names over to the resulting genes-by-genes matrix
rownames(human_TE_homo_rho) <- colnames(human_TE_homo_rho) <- rownames(clr_data)
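For reference, a minimal R sketch of how a clr (centered log-ratio) transform is commonly computed for a genes-by-samples count matrix; this illustrates the general technique, not necessarily the exact RiboBase procedure:
# clr(x) = log(x) - mean(log(x)) within each sample (column); assumes strictly positive values
clr_transform <- function(counts) {
  apply(counts, 2, function(x) log(x) - mean(log(x)))
}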
Whether intrinsic molecular properties or extrinsic factors such as environmental conditions control the decomposition of natural organic matter across soil, marine and freshwater systems has been subject to debate. Comprehensive evaluations of the controls that molecular structure exerts on organic matter's persistence in the environment have been precluded by organic matter's extreme complexity. Here we examine dissolved organic matter from 109 Swedish lakes using ultrahigh-resolution mass spectrometry and optical spectroscopy to investigate the constraints on its persistence in the environment. We find that degradation processes preferentially remove oxidized, aromatic compounds, whereas reduced, aliphatic and N-containing compounds are either resistant to degradation or tightly cycled and thus persist in aquatic systems. The patterns we observe for individual molecules are consistent with our measurements of emergent bulk characteristics of organic matter at wide geographic and temporal scales, as reflected by optical properties. We conclude that intrinsic molecular properties are an important control of overall organic matter reactivity.
This visualization product displays the cigarette related items abundance of marine macro-litter (> 2.5cm) per beach per year from non-MSFD monitoring surveys, research & cleaning operations without UNEP-MARLIN data.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB).
The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
Exclusion of the OSPAR 1000 protocol: to follow the approach of OSPAR, which no longer includes these data in its monitoring;
Selection of surveys from non-MSFD monitoring, cleaning and research operations;
Exclusion of beaches without coordinates;
Selection of cigarette related items only. The list of selected items is attached to this metadata. This list was created using EU Marine Beach Litter Baselines, the European Threshold Value for Macro Litter on Coastlines and the Joint list of litter categories for marine macro-litter monitoring from JRC (these three documents are attached to this metadata);
Exclusion of surveys without associated length;
Exclusion of surveys referring to the UNEP-MARLIN list: the UNEP-MARLIN protocol differs from the other types of monitoring in that cigarette butts are surveyed in a 10 m square. To avoid comparing abundances from very different protocols, the choice has been made to distinguish the cigarette related items results associated with the UNEP-MARLIN list from the others in two maps;
Normalization of survey lengths to 100 m and 1 survey/year: in some cases, the survey length was not 100 m, so in order to compare the abundance of litter from different beaches, a normalization is applied using this formula:
Number of cigarette related items of the survey (normalized by 100 m) = Number of cigarette related items of the survey x (100 / survey length)
Then, this normalized number of cigarette related items is summed to obtain the total normalized number of cigarette related items for each survey. Finally, the median abundance of cigarette related items for each beach and year is calculated from these normalized abundances of cigarette related items per survey.
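A minimal R sketch of this normalization and aggregation, with hypothetical survey records:
surveys <- data.frame(
  beach  = c("A", "A", "B"),
  year   = c(2019, 2019, 2019),
  items  = c(12, 30, 7),     # cigarette related items counted in each survey
  length = c(50, 100, 200)   # survey length (m)
)
surveys$items_100m <- surveys$items * (100 / surveys$length)        # normalize to 100 m
aggregate(items_100m ~ beach + year, data = surveys, FUN = median)  # median per beach and year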
Percentiles 50, 75, 95 & 99 have been calculated taking into account cigarette related items from other sources data (excluding UNEP-MARLIN protocol) for all years.
More information is available in the attached documents.
Warning: the absence of data on the map does not necessarily mean that they do not exist, but that no information has been entered in the Marine Litter Database for this area.
This visualization product displays marine macro-litter (> 2.5cm) material categories percentages per beach per year from non-MSFD monitoring surveys, research & cleaning operations.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
Exclusion of the OSPAR 1000 protocol: to follow the approach of OSPAR, which no longer includes these data in its monitoring;
Selection of surveys from non-MSFD monitoring, cleaning and research operations;
Exclusion of beaches without coordinates;
Exclusion of surveys without associated length;
Removal of some litter types, such as organic litter, small fragments (paraffin and wax; items > 2.5cm) and pollutants. The list of selected items is attached to this metadata. This list was created using the EU Marine Beach Litter Baselines, the European Threshold Value for Macro Litter on Coastlines and the Joint list of litter categories for marine macro-litter monitoring from JRC (these three documents are attached to this metadata);
Exclusion of the "faeces" category: this concerns the items for dog excrement in bags from the OSPAR (item code: 121) and ITA (item code: IT59) reference lists;
Normalization of survey lengths to 100 m and 1 survey/year: in some cases, the survey length was not 100 m, so in order to compare the abundance of litter from different beaches, a normalization is applied using this formula:
Number of items (normalized by 100 m) = Number of items x (100 / survey length)
Then, this normalized number of items is summed to obtain the total normalized number of litter items for each survey.
To calculate the percentage of each material category, the formula applied is: Material (%) = (∑ number of items (normalized to 100 m) of each material category) × 100 / (∑ number of items (normalized to 100 m) of all categories)
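A minimal R sketch of this percentage calculation, with hypothetical normalized counts:
items_100m   <- c(plastic = 84, metal = 6, glass = 10)  # normalized items per material category
material_pct <- items_100m * 100 / sum(items_100m)      # Material (%)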
The material categories differ between reference lists (OSPAR, TSG-ML, UNEP, UNEP-MARLIN, JLIST). In order to apply a common procedure for all the surveys, the material categories have been harmonized.
More information is available in the attached documents.
Warning: the absence of data on the map does not necessarily mean that they do not exist, but that no information has been entered in the Marine Litter Database for this area.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
All three types of SIF-driven T models integrate canopy conductance (gc) with the Penman-Monteith model, differing in how gc is derived: from a SIFobs-driven semi-mechanistic equation, a SIFsunlit- and SIFshaded-driven semi-mechanistic equation, or a SIFsunlit- and SIFshaded-driven machine learning model.
The difference between the simplified SIF-gc equation and the SIF-gc equation lies in the treatment of some parameters; see https://doi.org/10.1016/j.rse.2024.114586 for details.
In this dataset, the temporal resolution is 1 day, and the spatial resolution is 0.2 degree.
BL: SIFobs-driven semi-mechanistic model
TL: SIFsunlit- and SIFshaded-driven semi-mechanistic model
Hybrid: SIFsunlit- and SIFshaded-driven machine learning model
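For orientation, a minimal R sketch of how canopy conductance gc enters the Penman-Monteith equation in its generic textbook form; constants and variable names are illustrative, not the dataset's exact implementation:
# Penman-Monteith latent heat flux (W m-2):
# LE = (Delta * Rn + rho_a * cp * VPD * ga) / (Delta + gamma * (1 + ga / gc))
penman_monteith <- function(Rn, Delta, VPD, ga, gc,
                            rho_a = 1.2, cp = 1013, gamma = 66) {
  # Rn: available energy (W m-2); Delta: slope of saturation vapour pressure curve (Pa K-1)
  # VPD: vapour pressure deficit (Pa); ga, gc: aerodynamic and canopy conductance (m s-1)
  (Delta * Rn + rho_a * cp * VPD * ga) / (Delta + gamma * (1 + ga / gc))
}
penman_monteith(Rn = 400, Delta = 145, VPD = 1200, ga = 0.02, gc = 0.008)  # illustrative values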
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The IBEM dataset consists of 600 documents with a total number of 8272 pages, containing 29603 isolated and 137089 embedded Mathematical Expressions (MEs). The objective of the IBEM dataset is to facilitate the indexing and searching of MEs in massive collections of STEM documents. The dataset was built by parsing the LaTeX source files of documents from the KDD Cup Collection. Several experiments can be carried out with the IBEM dataset ground-truth (GT): ME detection and extraction, ME recognition, etc.
The dataset consists of the following files:
The dataset is partitioned into various sets as provided for the ICDAR 2021 Competition on Mathematical Formula Detection. The ground-truth related to this competition, which is included in this dataset version, can also be found here. More information about the competition can be found in the following paper:
D. Anitei, J.A. Sánchez, J.M. Fuentes, R. Paredes, and J.M. Benedí. ICDAR 2021 Competition on Mathematical Formula Detection. In ICDAR, pages 783–795, 2021.
For ME recognition tasks, we recommend rendering the “latex_expand” version of the formulae to create standalone expressions that have the same visual representation as the MEs found in the original documents (see the attached Python script “extract_GT.py”). Extracting MEs from the documents based on coordinates is more complex, as special care is needed to concatenate the fragments of split expressions. Baseline results for ME recognition tasks will soon be made available.
https://www.elsevier.com/about/policies/open-access-licenses/elsevier-user-license/cpc-license/
Abstract: The Poincaré code is a Maple project package that aims to gather significant computer algebra normal form (and subsequent reduction) methods for handling nonlinear ordinary differential equations. As a first version, a set of fourteen easy-to-use Maple commands is introduced for symbolic creation of (improved variants of Poincaré’s) normal forms as well as their associated normalizing transformations. The software is the implementation by the authors of carefully studied and followed up sele...
Title of program: POINCARÉ
Catalogue Id: AEPJ_v1_0
Nature of problem Computing structure-preserving normal forms near the origin for nonlinear vector fields.
Versions of this program held in the CPC repository in Mendeley Data AEPJ_v1_0; POINCARÉ; 10.1016/j.cpc.2013.04.003
This program has been imported from the CPC Program Library held at Queen's University Belfast (1969-2018)
This visualization product displays the total abundance of marine macro-litter (> 2.5cm) per beach per year from Marine Strategy Framework Directive (MSFD) monitoring surveys.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
Exclusion of the OSPAR 1000 protocol: to follow the approach of OSPAR, which no longer includes these data in its monitoring;
Selection of MSFD surveys only (exclusion of other monitoring, cleaning and research operations);
Exclusion of beaches without coordinates;
Removal of some categories and litter types, such as organic litter, small fragments (paraffin and wax; items > 2.5cm) and pollutants. The list of selected items is attached to this metadata. This list was created using the EU Marine Beach Litter Baselines and the EU Threshold Value for Macro Litter on Coastlines from JRC (these two documents are attached to this metadata);
Normalization of survey lengths to 100 m and 1 survey/year: in some cases, the survey length was not exactly 100 m, so in order to compare the abundance of litter from different beaches, a normalization is applied using this formula:
Number of items (normalized by 100 m) = Number of items x (100 / survey length)
Then, this normalized number of items is summed to obtain the total normalized number of litter items for each survey. Finally, the median abundance for each beach and year is calculated from these normalized abundances per survey. Sometimes the survey length was null or equal to 0. Assuming that the MSFD protocol has been applied, the length has been set to 100 m in these cases.
Percentiles 50, 75, 95 & 99 have been calculated taking into account MSFD data for all years.
More information is available in the attached documents.
Warning: the absence of data on the map doesn't necessarily mean that it doesn't exist, but that no information has been entered in the Marine Litter Database for this area.
This visualization product displays the cigarette related items abundance of marine macro-litter (> 2.5cm) per beach per year from Marine Strategy Framework Directive (MSFD) monitoring surveys without UNEP-MARLIN data.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
Exclusion of the OSPAR 1000 protocol: to follow the approach of OSPAR, which no longer includes these data in its monitoring;
Selection of MSFD surveys only (exclusion of other monitoring, cleaning and research operations);
Exclusion of beaches without coordinates;
Selection of cigarette related items only. The list of selected items is attached to this metadata. This list was created using the EU Marine Beach Litter Baselines and the EU Threshold Value for Macro Litter on Coastlines from JRC (these two documents are attached to this metadata);
Exclusion of surveys referring to the UNEP-MARLIN list: the UNEP-MARLIN protocol differs from the other types of monitoring in that cigarette butts are surveyed in a 10 m square. To avoid comparing abundances from very different protocols, the choice has been made to distinguish the cigarette related items results associated with the UNEP-MARLIN list from the others in two maps;
Normalization of survey lengths to 100 m and 1 survey/year: in some cases, the survey length was not exactly 100 m, so in order to compare the abundance of litter from different beaches, a normalization is applied using this formula:
Number of cigarette related items of the survey (normalized by 100 m) = Number of cigarette related items of the survey x (100 / survey length)
Then, this normalized number of cigarette related items is summed to obtain the total normalized number of cigarette related items for each survey. Finally, the median abundance of cigarette related items for each beach and year is calculated from these normalized abundances per survey. Sometimes the survey length was null or equal to 0. Assuming that the MSFD protocol has been applied, the length has been set to 100 m in these cases.
Percentiles 50, 75, 95 & 99 have been calculated taking into account cigarette related items from MSFD monitoring data (excluding UNEP-MARLIN protocol) for all years.
More information is available in the attached documents.
Warning: the absence of data on the map doesn't necessarily mean that they don't exist, but that no information has been entered in the Marine Litter Database for this area.