Gamification is a strategy to stimulate the social and human factors (SHFs) that influence software development productivity. Software development teams must improve their productivity to meet the challenges facing software development organizations, yet productivity analysis has traditionally included only technical factors. The literature shows the importance of SHFs for productivity, and gamification elements can help enhance these factors to improve performance. Thus, to design strategies that enhance a specific SHF, it is essential to identify how gamification elements relate to these factors. The objective of this research is to determine the relationship between gamification elements and the SHFs that influence the productivity of software development teams. The research included the design of a scoring template to collect data from experts. Importance was calculated using the Simple Additive Weighting (SAW) method, a tool framed in decision theory, considering three criteria: cumulative score, matches in inclusion, and values. The resulting importance relationships serve as reference values for designing gamification strategies that promote improved productivity. This extends the path toward analyzing the effect of gamification on software development productivity and facilitates the design and implementation of gamification strategies to improve it.
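A minimal sketch of how Simple Additive Weighting (SAW) ranks alternatives against several criteria, as mentioned above. The element names, expert scores, and criterion weights below are illustrative assumptions, not values from the study.

```python
# SAW sketch: normalize each criterion, weight it, and sum per alternative.
import numpy as np

# Rows: candidate relationships (gamification element -> SHF); columns: the three
# criteria named in the abstract (cumulative score, matches in inclusion, values).
scores = np.array([
    [42.0, 7.0, 3.5],   # e.g., points -> motivation (hypothetical)
    [35.0, 9.0, 4.0],   # e.g., badges -> recognition (hypothetical)
    [28.0, 5.0, 2.5],   # e.g., leaderboards -> competitiveness (hypothetical)
])
weights = np.array([0.5, 0.3, 0.2])  # assumed criterion weights; must sum to 1

# Benefit criteria: normalize each column by its maximum, then take the weighted sum.
normalized = scores / scores.max(axis=0)
importance = normalized @ weights

ranking = np.argsort(importance)[::-1]
for rank, idx in enumerate(ranking, start=1):
    print(f"rank {rank}: alternative {idx}, importance = {importance[idx]:.3f}")
```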
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This paper describes an algorithm to assist in the relative quantitation of peptide post-translational modifications using stable isotope labeling by amino acids in cell culture (SILAC). The algorithm first determines a normalization factor and then calculates SILAC ratios for a list of target peptide masses using precursor ion abundances. Four yeast histone mutants were used to demonstrate the effectiveness of this approach for quantifying changes in peptide post-translational modifications. The details of the algorithm's approach to normalization and peptide ratio calculation are described. The examples demonstrate the robustness of the approach as well as its utility for rapidly determining changes in peptide post-translational modifications within a protein.
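A minimal sketch of SILAC-style relative quantitation in the spirit of the description above: derive a global normalization factor from precursor ion abundances, then compute normalized heavy/light ratios for target peptides. This is a generic illustration, not the paper's algorithm; the abundances are hypothetical.

```python
import numpy as np

# Heavy and light precursor ion abundances for all quantified peptides (hypothetical).
light = np.array([1.2e6, 8.5e5, 2.3e6, 4.1e5, 9.9e5])
heavy = np.array([1.5e6, 7.9e5, 2.0e6, 5.0e5, 1.1e6])

# Normalization factor: median heavy/light ratio across all peptides, so that the
# bulk (unmodified) proteome centers on a ratio of 1.
norm_factor = np.median(heavy / light)

# Normalized ratios for a list of target peptides (e.g., modified histone peptides).
target_idx = [2, 3]
for i in target_idx:
    ratio = (heavy[i] / light[i]) / norm_factor
    print(f"peptide {i}: normalized SILAC ratio = {ratio:.2f}")
```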
Calculations of precision on raw data and on normalized data.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A two-tailed Mann-Whitney U test was used to calculate p-values. Statistical power was calculated for a Student's t-test using the statistical parameters of the log2-transformed expression data. Sample size (n) is the number per group needed to obtain a power of at least 0.8. Nppb: group 2 vs. group 3; Vcam1: group 1 vs. group 3.
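A minimal sketch of the two analyses described above: a two-tailed Mann-Whitney U test for the p-value, and a power/sample-size calculation for a two-sample t-test on log2-transformed expression values. The expression values below are hypothetical.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.power import TTestIndPower

group_a = np.log2(np.array([12.0, 15.5, 9.8, 14.2, 11.7, 13.3]))
group_b = np.log2(np.array([22.1, 18.4, 25.0, 19.9, 27.3, 21.6]))

# Two-tailed Mann-Whitney U test.
u_stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")

# Effect size (Cohen's d) from the log2 data, then the per-group n needed for
# power >= 0.8 at alpha = 0.05 in a two-sample t-test.
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
effect_size = abs(group_a.mean() - group_b.mean()) / pooled_sd
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                           alpha=0.05, power=0.8)

print(f"U = {u_stat:.1f}, p = {p_value:.4f}, n per group >= {np.ceil(n_per_group):.0f}")
```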
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This updated version includes a Python script (glucose_analysis.py) that performs statistical evaluation of the glucose normalization process described in the associated thesis. The script supports key analyses, including normality assessment (Shapiro–Wilk test), variance homogeneity (Levene’s test), mean comparison (ANOVA), effect size estimation (Cohen’s d), and calculation of confidence intervals for the mean difference. These results validate the impact of Min-Max normalization on clinical data structure and usability within CDSS workflows. The script is designed to be reproducible and complements the processed dataset already included in this repository.
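A minimal sketch of the statistical checks the description attributes to glucose_analysis.py: Min-Max normalization, Shapiro-Wilk normality test, Levene's test, one-way ANOVA, Cohen's d, and a 95% confidence interval for the mean difference. The glucose values are hypothetical, and the script's actual implementation may differ.

```python
import numpy as np
from scipy import stats

raw = np.array([92.0, 110.0, 135.0, 150.0, 99.0, 123.0, 141.0, 160.0])
normalized = (raw - raw.min()) / (raw.max() - raw.min())  # Min-Max to [0, 1]

group1, group2 = raw[:4], raw[4:]

print("Shapiro-Wilk:", stats.shapiro(raw))
print("Levene:", stats.levene(group1, group2))
print("ANOVA:", stats.f_oneway(group1, group2))

# Cohen's d with a pooled standard deviation.
pooled_sd = np.sqrt((group1.var(ddof=1) + group2.var(ddof=1)) / 2)
d = (group2.mean() - group1.mean()) / pooled_sd

# 95% confidence interval for the mean difference (pooled standard error).
diff = group2.mean() - group1.mean()
se = pooled_sd * np.sqrt(1 / len(group1) + 1 / len(group2))
dof = len(group1) + len(group2) - 2
t_crit = stats.t.ppf(0.975, dof)
print(f"Cohen's d = {d:.2f}, 95% CI = ({diff - t_crit * se:.2f}, {diff + t_crit * se:.2f})")
```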
The values in this raster are unit-less scores ranging from 0 to 1 that represent normalized dollars per acre damage claims from antelope on Wyoming lands. This raster is one of 9 inputs used to calculate the "Normalized Importance Index."
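A minimal sketch of producing unit-less 0-1 scores from dollars-per-acre damage claims via min-max normalization; the claim values are hypothetical, and the actual raster processing is not reproduced here.

```python
import numpy as np

damage_per_acre = np.array([0.0, 12.5, 48.0, 130.0, 275.0])  # hypothetical $/acre
scores = (damage_per_acre - damage_per_acre.min()) / (
    damage_per_acre.max() - damage_per_acre.min()
)
print(scores)  # values in [0, 1], suitable as one input to a composite index
```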
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Previous resting-state functional magnetic resonance imaging (rs-fMRI) studies frequently applied spatial normalization to fMRI time series before calculating temporal features (referred to here as "Prenorm"). We hypothesized that calculating rs-fMRI features such as functional connectivity (FC), regional homogeneity (ReHo), or amplitude of low-frequency fluctuation (ALFF) in individual space, before spatial normalization (referred to as "Postnorm"), can be an improvement that avoids artifacts and increases the reliability of results. We utilized two datasets: (1) simulated images in which the temporal signal-to-noise ratio (tSNR) is held constant and (2) an empirical fMRI dataset of 50 healthy young subjects. For the simulated images, the tSNR was constant as generated in individual space but increased after Prenorm, and intersubject variability in tSNR was induced; in contrast, tSNR remained constant after Postnorm. Consistently, for the empirical images, higher tSNR, ReHo, and FC (default mode network, seed in precuneus) and lower ALFF were found after Prenorm compared to Postnorm. The coefficient of variability of tSNR and ALFF was higher after Prenorm than after Postnorm. Moreover, a significant correlation was found between simulated tSNR after Prenorm and empirical tSNR, ALFF, and ReHo after Prenorm, indicating algorithmic variation in empirical rs-fMRI features. Furthermore, compared to Prenorm, ALFF and ReHo showed higher intraclass correlation coefficients between two serial scans after Postnorm. Our results indicate that Prenorm may induce algorithmic intersubject variability in tSNR and reduce its reliability, which also significantly affects ALFF and ReHo. We suggest using Postnorm instead of Prenorm in future rs-fMRI studies using ALFF/ReHo.
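A minimal sketch of the temporal signal-to-noise ratio (tSNR) used above: the voxel-wise temporal mean divided by the temporal standard deviation of an fMRI time series. The array is synthetic; computing tSNR before spatial normalization (the "Postnorm" ordering) means operating on data still laid out in individual space.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 4D fMRI data: x, y, z, time.
bold = 1000.0 + rng.normal(scale=20.0, size=(4, 4, 4, 200))

tsnr = bold.mean(axis=-1) / bold.std(axis=-1, ddof=1)
print(f"median tSNR = {np.median(tsnr):.1f}")
```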
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
A ZIP file of computer code and output used in the numerical calculations for "On The Finite-Size Lyapunov Exponent For The Schrodinger Operator With Skew-Shift Potential" by Paul Michael Kielstra and Marius Lemm. The ZIP decompresses to about 26GB, containing multiple files:
201x201 bad set grid.txt: A list of 201x201=40401 evenly spaced points on [0, 1]x[0, 1], each written in the form (x, y) and followed by 30000 values of E which are probably bad for that point. This gives a total of 40401x30001=1212070401 lines.
Upper bounds.txt: individual upper bounds for equation (9) calculated at various points. The bound in this equation in the published paper is the worst of these.
E=0/N/2001x2001 grid.tsv: A tab-separated values file of 2001x2001=4004001 evenly spaced points on [0, 1]x[0, 1], with headers:
X: The x-coordinate of the point represented by the line in question.
Y: The y-coordinate.
Exact_x, Exact_y: The x- and y-coordinates to the maximum precision the computer used, included in case, for instance, the x-coordinate is defined to be 0.5 but is actually 0.5000000000000001 in memory.
Matrix: The matrix generated at this point, modulo a certain normalization (see below).
Result: The log of the norm of the matrix. This has been corrected for the normalization -- it is calculated as if the matrix had never been normalized.
Normalizationcount: The actual matrix generated is too large to store in memory, so the matrix we store and output is (Matrix)x(Normalizer^Normalizationcount). We used a normalizer of 0.01. (A sketch of recovering the unnormalized log-norm from these columns appears after this file listing.)
This file was calculated with the values E=0, N=30000, lambda=1/2. The header line means that this file contains 4004001+1=4004002 lines in total.
E=0/N/2001x2001 random grid.tsv: As with the 2001x2001 grid.tsv file, but missing the exact_x and exact_y coordinates. Instead, the x and y values are both exact and randomly chosen. The lines in the file are in no particular order. This file contains the data for the Monte Carlo approximation used in the paper.
E=0/2N/2001x2001 grid.tsv: As with its counterpart in the folder labeled N, but calculated with N=60000 instead.
E=-2.495: As with its counterpart E=0, but everything is calculated with E=-2.495123260049612 (which we round to -2.49512326 in the paper). This folder also contains no random or Monte Carlo calculations.
Code/Multiplier.m: MATLAB code to generate the skew matrix at a given point.
Code/Iterator.m: MATLAB code to iterate over a series of points and call Multiplier at each.
Code/Striper.m: MATLAB code to split up the input space into a series of stripes and call Iterator on exactly one of them. We performed our calculations in parallel, each job consisting of calling Striper on a different stripe number.
Code/Badfinder.m: MATLAB code to take a point and output a series of E-values for which that point is in the bad set.
Code/BadSetIterator.m: As with Iterator.m, but calls Badfinder.
Code/BadSetStriper.m: As with Striper.m, but calls BadSetIterator. (The function in this file is also called Striper.)
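A minimal Python sketch (the repository's own code is MATLAB) of the normalization bookkeeping described for the grid files, assuming the stored matrix equals the true product scaled by normalizer^Normalizationcount. Whenever the running product grows too large, it is rescaled by the normalizer (0.01) and the count is incremented; the log-norm is then recovered as if no rescaling had happened. The transfer matrices here are random stand-ins, not the skew-shift matrices from Multiplier.m, and the iteration count and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
normalizer = 0.01
threshold = 1e12  # rescale once the norm exceeds this (illustrative choice)

product = np.eye(2)
normalization_count = 0
for _ in range(500):
    # Stand-in for one transfer matrix; chosen so the product actually grows.
    step = 1.2 * np.eye(2) + 0.1 * rng.normal(size=(2, 2))
    product = step @ product
    if np.linalg.norm(product) > threshold:
        product *= normalizer
        normalization_count += 1

# "Result" as described: the log of the norm of the true, never-normalized product.
result = np.log(np.linalg.norm(product)) - normalization_count * np.log(normalizer)
print(result, normalization_count)
```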
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset includes information about the level of impact of Connect SoCal 2024 on transit travel distances and travel times in each Transportation Analysis Zone (TAZ) of the SCAG region, based on 2050 estimates from SCAG's Travel Demand Model. This dataset was prepared to share more information from the maps in the Connect SoCal 2024 Equity Analysis Technical Report. The development of this layer for the Equity Data Hub involved consolidating information, which led to a minor refinement in the normalization calculation: Baseline population is used to normalize Baseline PHT/PMT and Plan population to normalize Plan PHT/PMT. In the Equity Analysis Technical Report, only Plan population is used to normalize the change in transit PHT. This minor change does not affect the conclusions of the report. For more details on the methodology, please see the methodology section(s) of the Equity Analysis Technical Report: https://scag.ca.gov/sites/main/files/file-attachments/23-2987-tr-equity-analysis-final-040424.pdf?1712261887. For more details about SCAG's models, or to request model data, please see SCAG's website: https://scag.ca.gov/data-services-requests
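A minimal sketch of the normalization refinement described above: Baseline person hours traveled (PHT) is divided by Baseline population and Plan PHT by Plan population before taking the difference. All numbers are hypothetical TAZ values.

```python
baseline_pht, baseline_pop = 1250.0, 4800
plan_pht, plan_pop = 1180.0, 5100

per_capita_change = plan_pht / plan_pop - baseline_pht / baseline_pop
print(f"change in per-capita transit PHT: {per_capita_change:.4f} hours/person")
```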
IMPORTANT! PLEASE READ DISCLAIMER BEFORE USING DATA. This dataset backcasts estimated modeled savings for a subset of 2007-2012 completed projects in the Home Performance with ENERGY STAR® Program against normalized savings calculated by an open source energy efficiency meter available at https://www.openee.io/. The open source code uses utility-grade metered consumption to weather-normalize the pre- and post-retrofit consumption data using standard methods with no discretionary independent variables. The open source energy efficiency meter allows private companies, utilities, and regulators to calculate energy savings from energy efficiency retrofits with increased confidence and replicability of results. This dataset is intended to lay a foundation for future innovation and deployment of the open source energy efficiency meter across the residential energy sector, and to help inform stakeholders interested in pay-for-performance programs, where providers are paid for realizing measurable weather-normalized results. To download the open source code, please visit the website at https://github.com/openeemeter/eemeter/releases
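A minimal sketch of the general idea behind weather-normalized savings: regress metered consumption against heating degree days (HDD) for the pre- and post-retrofit periods, then evaluate both fitted models against the same "normal-year" weather. This is a simplified illustration, not the OpenEEmeter implementation or its API; all numbers are hypothetical.

```python
import numpy as np

# Monthly HDD and metered consumption (kWh) for one home, pre- and post-retrofit.
hdd_pre = np.array([820, 700, 540, 300, 120, 30, 10, 20, 90, 310, 560, 780])
kwh_pre = np.array([1450, 1300, 1050, 700, 430, 300, 280, 290, 400, 720, 1080, 1400])
hdd_post = np.array([790, 730, 500, 280, 140, 25, 15, 30, 80, 330, 540, 810])
kwh_post = np.array([1180, 1120, 880, 600, 410, 300, 290, 300, 380, 640, 900, 1190])

def fit_degree_day_model(hdd, kwh):
    # Ordinary least squares: kwh = baseload + slope * HDD.
    X = np.column_stack([np.ones(len(hdd)), hdd.astype(float)])
    coef, *_ = np.linalg.lstsq(X, kwh.astype(float), rcond=None)
    return coef  # [baseload, slope]

pre_coef = fit_degree_day_model(hdd_pre, kwh_pre)
post_coef = fit_degree_day_model(hdd_post, kwh_post)

# Evaluate both models on the same normal-year HDD to get weather-normalized savings.
hdd_normal = np.array([800, 710, 520, 290, 130, 28, 12, 25, 85, 320, 550, 795])
X_normal = np.column_stack([np.ones(len(hdd_normal)), hdd_normal.astype(float)])
savings = (X_normal @ pre_coef - X_normal @ post_coef).sum()
print(f"weather-normalized annual savings = {savings:.0f} kWh")
```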
DISCLAIMER: Normalized savings using the open source OEE meter. Several data elements, including Evaluated Annual Electric Savings (kWh), Evaluated Annual Gas Savings (MMBtu), Pre-retrofit Baseline Electric (kWh), Pre-retrofit Baseline Gas (MMBtu), Post-retrofit Usage Electric (kWh), and Post-retrofit Usage Gas (MMBtu), are direct outputs from the open source OEE meter.
Home Performance with ENERGY STAR® Estimated Savings. Several data elements, including Estimated Annual kWh Savings, Estimated Annual MMBtu Savings, and Estimated First Year Energy Savings, represent contractor-reported savings derived from energy modeling software calculations, not actual realized energy savings. The accuracy of the Estimated Annual kWh Savings and Estimated Annual MMBtu Savings for projects has been evaluated by an independent third party. The results of the Home Performance with ENERGY STAR impact analysis indicate that, on average, actual savings amount to 35 percent of the Estimated Annual kWh Savings and 65 percent of the Estimated Annual MMBtu Savings. For more information, please refer to the Evaluation Report published on NYSERDA’s website at: http://www.nyserda.ny.gov/-/media/Files/Publications/PPSER/Program-Evaluation/2012ContractorReports/2012-HPwES-Impact-Report-with-Appendices.pdf.
This dataset includes the following data points for a subset of projects completed in 2007-2012: Contractor ID, Project County, Project City, Project ZIP, Climate Zone, Weather Station, Weather Station-Normalization, Project Completion Date, Customer Type, Size of Home, Volume of Home, Number of Units, Year Home Built, Total Project Cost, Contractor Incentive, Total Incentives, Amount Financed through Program, Estimated Annual kWh Savings, Estimated Annual MMBtu Savings, Estimated First Year Energy Savings, Evaluated Annual Electric Savings (kWh), Evaluated Annual Gas Savings (MMBtu), Pre-retrofit Baseline Electric (kWh), Pre-retrofit Baseline Gas (MMBtu), Post-retrofit Usage Electric (kWh), Post-retrofit Usage Gas (MMBtu), Central Hudson, Consolidated Edison, LIPA, National Grid, National Fuel Gas, New York State Electric and Gas, Orange and Rockland, Rochester Gas and Electric.
How does your organization use this dataset? What other NYSERDA or energy-related datasets would you like to see on Open NY? Let us know by emailing OpenNY@nyserda.ny.gov.
Fold changes were calculated as mean condition B/mean condition A, after mTIC normalization.
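A minimal sketch of the fold-change calculation: mean of condition B divided by mean of condition A, computed after normalization. The mTIC normalization itself is only approximated here by scaling each sample to the mean total ion count, and the intensities are hypothetical.

```python
import numpy as np

# Rows: metabolites; columns: samples (first 3 = condition A, last 3 = condition B).
intensities = np.array([
    [1200.0, 1100.0, 1300.0, 2500.0, 2300.0, 2600.0],
    [800.0,  900.0,  850.0,  780.0,  820.0,  810.0],
])
totals = intensities.sum(axis=0)
normalized = intensities / totals * totals.mean()  # total-ion-count style scaling

fold_change = normalized[:, 3:].mean(axis=1) / normalized[:, :3].mean(axis=1)
print(fold_change)
```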
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the features and probabilities of ten different functions. Each data set is saved as numpy arrays. (A sampling sketch for the Arc data set appears after this list.)
The data set Arc corresponds to a two-dimensional random sample drawn from a random vector $$X=(X_1,X_2)$$ with probability density function $$f(x_1,x_2)=\mathcal{N}(x_2|0,4)\,\mathcal{N}(x_1|0.25x_2^2,1)$$, where $$\mathcal{N}(u|\mu,\sigma^2)$$ denotes the density function of a normal distribution with mean $$\mu$$ and variance $$\sigma^2$$. Papamakarios (2017) used this data set to evaluate his neural density estimation methods.
The data set Potential 1 corresponds to a two-dimensional random sample drawn from a random vector $$X=(X_1,X_2)$$ with probability density proportional to $$\exp\{-U(x_1,x_2)\}$$, where the potential is $$U(x_1,x_2)=\frac{1}{2}\left(\frac{||x||-2}{0.4}\right)^2 - \ln\left(\exp\left\{-\frac{1}{2}\left[\frac{x_1-2}{0.6}\right]^2\right\}+\exp\left\{-\frac{1}{2}\left[\frac{x_1+2}{0.6}\right]^2\right\}\right)$$, with a normalizing constant of approximately 6.52 calculated by Monte Carlo integration.
The data set Potential 2 corresponds to a two-dimensional random sample drawn from a random vector $$X=(X_1,X_2)$$ with probability density proportional to $$\exp\{-U(x_1,x_2)\}$$, where $$U(x_1,x_2)=\frac{1}{2}\left[\frac{x_2-w_1(x)}{0.4}\right]^2$$ and $$w_1(x)=\sin\left(\frac{2\pi x_1}{4}\right)$$, with a normalizing constant of approximately 8 calculated by Monte Carlo integration.
The data set Potential 3 corresponds to a two-dimensional random sample drawn from a random vector $$X=(X_1,X_2)$$ with probability density proportional to $$\exp\{-U(x_1,x_2)\}$$, where $$U(x_1,x_2)=-\ln\left(\exp\left\{-\frac{1}{2}\left[\frac{x_2-w_1(x)}{0.35}\right]^2\right\}+\exp\left\{-\frac{1}{2}\left[\frac{x_2-w_1(x)+w_2(x)}{0.35}\right]^2\right\}\right)$$, where $$w_1(x)=\sin\left(\frac{2\pi x_1}{4}\right)$$ and $$w_2(x)=3\exp\left\{-\frac{1}{2}\left[\frac{x_1-1}{0.6}\right]^2\right\}$$, with a normalizing constant of approximately 13.9 calculated by Monte Carlo integration.
The data set Potential 4 corresponds to a two-dimensional random sample drawn from a random vector $$X=(X_1,X_2)$$ with probability density proportional to $$\exp\{-U(x_1,x_2)\}$$, where $$U(x_1,x_2)=-\ln\left(\exp\left\{-\frac{1}{2}\left[\frac{x_2-w_1(x)}{0.4}\right]^2\right\}+\exp\left\{-\frac{1}{2}\left[\frac{x_2-w_1(x)+w_3(x)}{0.35}\right]^2\right\}\right)$$, where $$w_1(x)=\sin\left(\frac{2\pi x_1}{4}\right)$$, $$w_3(x)=3\sigma\left(\left[\frac{x_1-1}{0.3}\right]^2\right)$$, and $$\sigma(x)=\frac{1}{1+\exp(x)}$$, with a normalizing constant of approximately 13.9 calculated by Monte Carlo integration.
The data set 2D mixture corresponds to a two-dimensional random sample drawn from the random vector $$X=(X_1,X_2)$$ with probability density function $$f(x)=\frac{1}{2}\mathcal{N}(x|\mu_1,\Sigma_1)+\frac{1}{2}\mathcal{N}(x|\mu_2,\Sigma_2)$$, with means and covariance matrices $$\mu_1=[1,-1]^T$$, $$\mu_2=[-2,2]^T$$, $$\Sigma_1=\left[\begin{array}{cc} 1 & 0 \\ 0 & 2 \end{array}\right]$$, and $$\Sigma_2=\left[\begin{array}{cc} 2 & 0 \\ 0 & 1 \end{array}\right]$$.
The data set 10D-mixture corresponds to a 10-dimensional random sample drawn from the random vector $$X=(X_1,\cdots,X_{10})$$ following a mixture of four diagonal normal densities $$\mathcal{N}(X_i|\mu_i,\sigma_i)$$, where each $$\mu_i$$ is drawn uniformly in the interval $$[-0.5,0.5]$$ and each $$\sigma_i$$ is drawn uniformly in the interval $$[-0.01,0.5]$$. Each of the four components has the same probability of being drawn, $$1/4$$.
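A minimal sketch of sampling the Arc data set as defined above: draw $$x_2$$ from $$\mathcal{N}(0,4)$$ and $$x_1$$ from $$\mathcal{N}(0.25x_2^2,1)$$. The sample size, seed, and output filename are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10000
x2 = rng.normal(loc=0.0, scale=2.0, size=n)   # variance 4 -> standard deviation 2
x1 = rng.normal(loc=0.25 * x2**2, scale=1.0)  # variance 1 -> standard deviation 1
arc = np.column_stack([x1, x2])
np.save("arc_sample.npy", arc)                # the data sets are stored as numpy arrays
```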
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This file contains the criterion values of the alternatives, expressed as Z-numbers. The Z-numbers are given by their membership functions rather than in linguistic form. The values are in their initial, non-normalized form.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Technical biases are introduced in omics data sets during data generation and interfere with the ability to study biological mechanisms. Several normalization approaches have been proposed to minimize the effects of such biases, but fluctuations in the electrospray current during liquid chromatography–mass spectrometry gradients cause local and sample-specific bias not considered by most approaches. Here we introduce a software named NormalyzerDE that includes a generic retention time (RT)-segmented approach compatible with a wide range of global normalization approaches to reduce the effects of time-resolved bias. The software offers straightforward access to multiple normalization methods, allows for data set evaluation and normalization quality assessment as well as subsequent or independent differential expression analysis using the empirical Bayes Limma approach. When evaluated on two spike-in data sets the RT-segmented approaches outperformed conventional approaches by detecting more peptides (8–36%) without loss of precision. Furthermore, differential expression analysis using the Limma approach consistently increased recall (2–35%) compared to analysis of variance. The combination of RT-normalization and Limma was in one case able to distinguish 108% (2597 vs 1249) more spike-in peptides compared to traditional approaches. NormalyzerDE provides widely usable tools for performing normalization and evaluating the outcome and makes calculation of subsequent differential expression statistics straightforward. The program is available as a web server at http://quantitativeproteomics.org/normalyzerde.
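A minimal sketch of the general idea of retention-time (RT)-segmented normalization: split peptides into RT bins and apply a global method (here, median centering of log-intensities) within each bin, so that time-resolved bias is corrected locally. This illustrates the concept only; it is not NormalyzerDE's implementation, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n_peptides, n_samples = 500, 6
rt = rng.uniform(0, 120, size=n_peptides)                  # retention times (minutes)
log_int = rng.normal(20, 2, size=(n_peptides, n_samples))
# Inject a sample-specific, RT-dependent bias (e.g., electrospray fluctuations).
log_int += 0.02 * rt[:, None] * rng.normal(0, 1, size=(1, n_samples))

n_bins = 8
bins = np.digitize(rt, np.linspace(0, 120, n_bins + 1)[1:-1])
normalized = log_int.copy()
for b in range(n_bins):
    mask = bins == b
    if mask.any():
        # Median-center each sample within this RT segment.
        medians = np.median(log_int[mask], axis=0)
        normalized[mask] = log_int[mask] - medians + medians.mean()

# After normalization, per-sample medians agree closely within each segment.
print(np.abs(np.median(normalized, axis=0) - np.median(normalized)).max())
```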
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Surface reflectance is a critical physical variable that affects the energy budget in land-atmosphere interactions, feature recognition and classification, and climate change research. This dataset uses a relative radiometric normalization method, taking Landsat-8 Operational Land Imager (OLI) surface reflectance products as the reference images to normalize cloud-free GF-1 satellite WFV sensor images of Shandong Province in 2018. Relative radiometric normalization processing mainly includes atmospheric correction, image resampling, image registration, masking, extraction of no-change pixels, and calculation of normalization coefficients. After relative radiometric normalization, for the no-change pixels of each GF-1 WFV image and its reference image, R² is above 0.7295 and RMSE is below 0.0172. The surface reflectance accuracy of the GF-1 WFV images is improved, so they can be used together with Landsat data to support quantitative remote sensing inversion. This dataset is in GeoTIFF format, and the spatial resolution of the imagery is 16 m.
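A minimal sketch of deriving and applying relative radiometric normalization coefficients: fit a linear gain/offset between the no-change pixels of a subject band (GF-1 WFV) and the corresponding reference band (Landsat-8 OLI), then apply it to the whole band. The reflectance values are synthetic, and the full workflow (atmospheric correction, resampling, registration, masking) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
reference = rng.uniform(0.02, 0.45, size=5000)                  # reference reflectance
subject = 0.9 * reference + 0.01 + rng.normal(0, 0.01, 5000)    # subject band, with bias

# Normalization coefficients (gain, offset) from the no-change pixels.
gain, offset = np.polyfit(subject, reference, deg=1)
normalized = gain * subject + offset

residual = normalized - reference
r2 = 1 - residual.var() / reference.var()
rmse = np.sqrt((residual ** 2).mean())
print(f"gain={gain:.3f}, offset={offset:.4f}, R2={r2:.4f}, RMSE={rmse:.4f}")
```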
This metadata record describes 99 streamflow (referred to as flow) metrics calculated using the observed flow records at 1851 streamflow gauges across the conterminous United States from 1950 to 2018. These metrics are often used as dependent variables in statistical models to make predictions of flow metrics at ungaged locations. Specifically, this record describes (1) the U.S. Geological Survey streamgauge identification number, (2) the 1-, 7-, and 30-day consecutive minimum flow normalized by drainage area, DA (Q1/DA, Q7/DA, and Q30/DA [cfs/sq km]), (3) the 1st, 10th, 25th, 50th, 75th, 90th, and 99th nonexceedence flows normalized by DA (P01/DA, P10/DA, P25/DA, P50/DA, P75/DA, P90/DA, P99/DA [cfs/sq km]), (4) the annual mean flows normalized by DA (Mean/DA [cfs/sq km]), (5) the coefficient of variation of the annual minimum and maximum flows (Vmin and Vmax [dimensionless]), the average annual duration of flow pulses less than P10 and greater than P90 (Dl and Dh [number of days]), (6) the average annual number of flow pulses less than P10 and greater than P90 (Fl and Fh [number of events]), (7) the average annual skew of daily flows (Skew [dimensionless]), (8) the number of days where flow is greater than the previous day divided by the total number of days (daily rises [dimensionless]), (9) the low- and high-flow timing metrics for winter, spring, summer, and fall (Winter_Tl, Spring_Tl, Summer_Tl, Fall_Tl, Winter_Th, Spring_Th, Summer_Th, and Fall_Th [dimensionless]), (10) the monthly nonexceedence flows normalized by DA (JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, and DEC P'X'/DA where the 'X'=10, 20, 50, 80, and 90 [cfs/sq km]), and (11) monthly mean flow normalized by DA (JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, and DEC mean/DA [cfs/sq km]). For more details on the flow metrics related to (2) through (8) and (11), please see Eng, K., Grantham, T.E., Carlisle, D.M., and Wolock, D.M., 2017, Predictability and selection of hydrologic metrics in riverine ecohydrology: Freshwater Science, v. 36(4), p. 915-926 [Also available at https://doi.org/10.1086/694912]. For more details on (9), please see Eng, K., Carlisle, D.M., Grantham, T.E., Wolock, D.M., and Eng, R.L., 2019, Severity and extent of alterations to natural streamflow regimes based on hydrologic metrics in the conterminous United States, 1980-2014: U.S. Geological Survey Scientific Investigations Report 2019-5001, 25 p. [Also available at https://doi.org/10.3133/sir20195001]. For (10), all daily flow values for the month of interest across all years are ranked in descending order, and the flow values associated with 10, 20, 50, 80, and 90 percent of all flow values are assigned as the monthly percent values. The data are in a tab-delimited text format.
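A minimal sketch of two of the calculations described above: (a) a drainage-area-normalized low-flow metric (Q7/DA), and (b) a monthly nonexceedence flow approximated by a percentile of all daily flows for that month across years. The daily flow series and drainage area are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
dates = pd.date_range("1950-01-01", "2018-12-31", freq="D")
flow_cfs = pd.Series(np.exp(rng.normal(3.0, 0.8, len(dates))), index=dates)
drainage_area_sqkm = 520.0

# (a) Q7/DA: annual minimum 7-day consecutive mean flow, averaged over years,
# divided by drainage area.
q7 = flow_cfs.rolling(7).mean().groupby(flow_cfs.index.year).min()
q7_da = q7.mean() / drainage_area_sqkm

# (b) JAN P10/DA: the 10th percentile of all January daily flows, per unit area.
jan = flow_cfs[flow_cfs.index.month == 1]
jan_p10_da = np.percentile(jan, 10) / drainage_area_sqkm

print(f"Q7/DA = {q7_da:.4f} cfs/sq km, JAN P10/DA = {jan_p10_da:.4f} cfs/sq km")
```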
We characterize the textural and geochemical features of ocean crustal zircon recovered from plagiogranite, evolved gabbro, and metamorphosed ultramafic host-rocks collected along present-day slow and ultraslow spreading mid-ocean ridges (MORs). The geochemistry of 267 zircon grains was measured by sensitive high-resolution ion microprobe-reverse geometry at the USGS-Stanford Ion Microprobe facility. Three types of zircon are recognized based on texture and geochemistry. Most ocean crustal zircons resemble young magmatic zircon from other crustal settings, occurring as pristine, colorless euhedral (Type 1) or subhedral to anhedral (Type 2) grains. In these grains, Hf and most trace elements vary systematically with Ti, typically becoming enriched with falling Ti-in-zircon temperature. Ti-in-zircon temperatures range from 1,040 to 660°C (corrected for a TiO2 ~ 0.7, a SiO2 ~ 1.0, pressure ~ 2 kbar); intra-sample variation is typically ~60-15°C. Decreasing Ti correlates with enrichment in Hf to ~2 wt%, while additional Hf-enrichment occurs at relatively constant temperature. Trends between Ti and U, Y, REE, and Eu/Eu* exhibit a similar inflection, which may denote the onset of eutectic crystallization; the inflection is well-defined by zircons from plagiogranite and implies solidus temperatures of ~680-740°C. A third type of zircon is defined as being porous and colored with chaotic CL zoning, and occurs in ~25% of rock samples studied. These features, along with high measured La, Cl, S, Ca, and Fe, and low (Sm/La)N ratios are suggestive of interaction with aqueous fluids. Non-porous, luminescent CL overgrowth rims on porous grains record uniform temperatures averaging 615 ± 26°C (2SD, n = 7), implying zircon formation below the wet-granite solidus and under water-saturated conditions. Zircon geochemistry reflects, in part, source region; elevated HREE coupled with low U concentrations allow effective discrimination of ~80% of zircon formed at modern MORs from zircon in continental crust. The geochemistry and textural observations reported here serve as an important database for comparison with detrital, xenocrystic, and metamorphosed mafic rock-hosted zircon populations to evaluate provenance.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset was originated, conceived, and designed by, and is maintained by, Xiaoke WANG, Zhiyun OUYANG and Yunjian LUO. To develop China's normalized tree biomass equation dataset, we carried out an extensive survey and critical review of the literature (from 1978 to 2013) on biomass equations developed in China. It consists of 5924 biomass equations for nearly 200 species (Equation sheet) and their associated background information (General sheet), showing sound geographical, climatic, and forest vegetation coverage across China. The dataset is freely available for non-commercial scientific applications, provided it is appropriately cited. For further information, please read our Earth System Science Data article (https://doi.org/10.5194/essd-2019-1), or feel free to contact the authors.
This dataset is an annual time series of Landsat Analysis Ready Data (ARD)-derived Normalized Difference Water Index (NDWI) computed from Landsat 5 Thematic Mapper (TM) and Landsat 8 Operational Land Imager (OLI). To ensure a consistent dataset, Landsat 7 has not been used because the Scan Line Corrector (SLC) failure creates gaps in the data. NDWI quantifies plant water content by measuring the difference between the Near-Infrared (NIR) and Short Wave Infrared (SWIR) (or Green) channels using this generic formula: (NIR - SWIR) / (NIR + SWIR). For Landsat sensors, this corresponds to the following bands: Landsat 5, NDWI = (Band 4 – Band 2) / (Band 4 + Band 2); Landsat 8, NDWI = (Band 5 – Band 3) / (Band 5 + Band 3). NDWI values range from -1 to +1. NDWI is a good proxy for plant water stress and therefore useful for drought monitoring and early warning. NDWI is sometimes also referred to as the Normalized Difference Moisture Index (NDMI). Standard deviation is also provided for each time step. Data format: GeoTIFF. This dataset has been generated with the Swiss Data Cube (http://www.swissdatacube.ch).
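A minimal sketch of the NDWI computation as defined above, using the Landsat 8 band numbering given in the description (Band 5 and Band 3). The arrays are synthetic stand-ins for surface-reflectance rasters; reading the actual GeoTIFFs is out of scope here.

```python
import numpy as np

rng = np.random.default_rng(5)
band5 = rng.uniform(0.05, 0.5, size=(100, 100)).astype(np.float32)  # Landsat 8 Band 5
band3 = rng.uniform(0.02, 0.3, size=(100, 100)).astype(np.float32)  # Landsat 8 Band 3

ndwi = (band5 - band3) / (band5 + band3)   # values fall in [-1, 1]
print(float(ndwi.min()), float(ndwi.max()))
```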
With the Monte Carlo code ACAT, we have calculated the sputtering yields of fifteen fusion-relevant mono-atomic materials (Be, B, C, Al, Si, Ti, Cr, Fe, Co, Ni, Cu, Zr, Mo, W, Re) under obliquely incident light ions (H+, D+, T+, He+) at incident energies of 50 eV to 10 keV. An improved formula for the dependence of normalized sputtering yield on incident angle has been fitted to the ACAT data, normalized by the normal-incidence data, to derive the best-fit values of the three physical variables included in the formula vs. incident energy. We then found suitable functions of incident energy that fit these values most closely. The average relative difference between the normalized ACAT data and the formula with these functions is less than 10% in most cases, and less than 20% for the rest, at the incident energies considered for all combinations of the projectiles and target materials. We have also compared the calculated data and the formula with available normalized experimental data for given incident energies. The best-fit values of the parameters included in the functions are tabulated for all of the combinations for use. Keywords: sputtering, erosion, plasma-material interactions, first wall materials, fitting formula, Monte Carlo method, binary collision approximation, computer simulation.