Gamification is a strategy to stimulate the social and human factors (SHFs) that influence software development productivity. Software development teams must improve their productivity to meet the demands of software development organizations, yet productivity analysis has traditionally included only technical factors. The literature shows the importance of SHFs for productivity, and gamification elements can enhance these factors to improve performance. To design strategies that strengthen a specific SHF, it is therefore essential to identify how gamification elements are related to these factors. The objective of this research is to determine the relationship between gamification elements and the SHFs that influence the productivity of software development teams. The research included the design of a scoring template to collect data from experts. Importance was calculated using the Simple Additive Weighting (SAW) method, a tool framed in decision theory, considering three criteria: cumulative score, matches in inclusion, and values. The resulting importance relationships serve as reference values for designing gamification strategies that promote improved productivity. This extends the path toward analyzing the effect of gamification on software development productivity and facilitates the design and implementation of gamification strategies to improve it.
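As a rough illustration of the Simple Additive Weighting step, the sketch below scores gamification elements against the three criteria named above (cumulative score, matches in inclusion, values). The element names, weights, raw scores, and the min-max normalization convention are all illustrative assumptions, not the authors' actual template or data.

```python
import numpy as np

# Hypothetical decision matrix: rows = gamification elements, columns = criteria
# (cumulative score, matches in inclusion, values). All figures are illustrative.
elements = ["points", "badges", "leaderboards"]
scores = np.array([
    [8.0, 5.0, 7.0],
    [6.0, 7.0, 6.0],
    [9.0, 4.0, 5.0],
])
weights = np.array([0.5, 0.3, 0.2])  # assumed criterion weights, summing to 1

# Min-max normalization per criterion (all criteria treated as benefit criteria)
col_min = scores.min(axis=0)
col_max = scores.max(axis=0)
normalized = (scores - col_min) / (col_max - col_min)

# Simple Additive Weighting: weighted sum of normalized scores
saw = normalized @ weights
for name, value in sorted(zip(elements, saw), key=lambda t: -t[1]):
    print(f"{name}: {value:.3f}")
```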
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Left ventricular mass normalization for body size is recommended, but a question remains: what is the best body size variable for this normalization, body surface area, height, or lean body mass computed from a predictive equation? Since body surface area and computed lean body mass are derivatives of body mass, normalizing for them may result in underestimation of left ventricular mass in overweight children. The aim of this study is to indicate which of the body size variables normalizes left ventricular mass without underestimating it in overweight children.
Methods: Left ventricular mass assessed by echocardiography, height and body mass were collected for 464 healthy boys, 5–18 years old. Lean body mass and body surface area were calculated. Left ventricular mass z-scores, computed based on reference data developed for height, body surface area and lean body mass, were compared between overweight and non-overweight children. The next step was a comparison of paired samples of expected left ventricular mass, estimated for each normalizing variable based on two allometric equations: the first developed for overweight children, the second for children of normal body mass.
Results: The mean left ventricular mass z-score is higher in overweight children compared to non-overweight children for normative data based on height (0.36 vs. 0.00) and lower for normative data based on body surface area (-0.64 vs. 0.00). Left ventricular mass estimated by normalizing for height, based on the equation for overweight children, is higher in overweight children (128.12 vs. 118.40); however, masses estimated by normalizing for body surface area and lean body mass, based on equations for overweight children, are lower in overweight children (109.71 vs. 122.08 and 118.46 vs. 120.56, respectively).
Conclusion: Normalization for body surface area and for computed lean body mass, but not for height, underestimates left ventricular mass in overweight children.
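For readers who want to see the shape of such a normalization, the sketch below computes an expected left ventricular mass from a generic allometric model LVM = a * X^b and the corresponding z-score on the log scale. The coefficient, exponent, residual SD, and the log-scale convention are placeholders and assumptions, not the study's fitted values.

```python
import numpy as np

def expected_lvm(x, a, b):
    """Allometric prediction LVM = a * x**b for a body size variable x
    (height, body surface area, or computed lean body mass)."""
    return a * x ** b

def lvm_zscore(lvm_observed, x, a, b, log_sd):
    """z-score of observed LVM relative to the allometric expectation,
    computed on the log scale (an assumed convention)."""
    return (np.log(lvm_observed) - np.log(expected_lvm(x, a, b))) / log_sd

# Illustrative numbers only: height-based normalization for one boy
print(lvm_zscore(lvm_observed=120.0, x=1.55, a=38.0, b=2.7, log_sd=0.18))
```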
The Supporting Information includes derivations of equations for analytic approximations to the biphasic response function in terms of model sigmoid equations (Appendix A). In addition, transformation equations are given for parameter values that enforce a normalization between sigmoid and biphasic concentration-response functions (Appendix B). Fig A illustrates the sigmoid-like components of the positive and negative affectors composing the biphasic response function. Fig B illustrates the relative error between the sigmoid-like approximations for the left- and right-hand sides of the biphasic response and the full biphasic response. Fig C conceptualizes the ad hoc normalization method. Fig D illustrates how the sigmoid and biphasic response functions could be compared. Table A provides parameter values for the plots shown in Fig B. (PDF)
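To make the construction concrete, here is a minimal sketch of a biphasic concentration-response built from an ascending and a descending Hill-type sigmoid, in the spirit of the positive and negative affectors described above. The multiplicative composition and all parameter values are illustrative assumptions, not the equations from Appendix A.

```python
import numpy as np

def hill(c, ec50, n):
    """Ascending Hill-type sigmoid in concentration c (rises from 0 to 1)."""
    return c**n / (ec50**n + c**n)

def biphasic(c, emax, ec50_up, n_up, ic50_down, n_down):
    """Biphasic response: a stimulatory sigmoid multiplied by an
    inhibitory sigmoid, so the response rises and then falls with c.
    This particular composition is assumed for illustration only."""
    activation = hill(c, ec50_up, n_up)
    inhibition = 1.0 - hill(c, ic50_down, n_down)
    return emax * activation * inhibition

c = np.logspace(-3, 3, 7)  # concentrations spanning six decades
print(biphasic(c, emax=1.0, ec50_up=0.1, n_up=1.5, ic50_down=10.0, n_down=2.0))
```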
This dataset is an annual time series of Landsat Analysis Ready Data (ARD)-derived Normalized Difference Water Index (NDWI) computed from Landsat 5 Thematic Mapper (TM) and Landsat 8 Operational Land Imager (OLI). To ensure a consistent dataset, Landsat 7 has not been used because the Scan Line Corrector (SLC) failure creates gaps in the data. NDWI quantifies plant water content by measuring the difference between the Near-Infrared (NIR) and Short Wave Infrared (SWIR) (or Green) channels using this generic formula: (NIR - SWIR) / (NIR + SWIR). For Landsat sensors, this corresponds to the following bands: Landsat 5, NDWI = (Band 4 – Band 2) / (Band 4 + Band 2); Landsat 8, NDWI = (Band 5 – Band 3) / (Band 5 + Band 3). NDWI values range from -1 to +1. NDWI is a good proxy for plant water stress and is therefore useful for drought monitoring and early warning. NDWI is sometimes also referred to as the Normalized Difference Moisture Index (NDMI). Standard deviation is also provided for each time step. Data format: GeoTiff. This dataset has been generated with the Swiss Data Cube (http://www.swissdatacube.ch).
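A minimal sketch of the per-pixel index computation described above, assuming the two bands are already loaded as floating-point reflectance arrays; the array names, toy values, and the epsilon guard against division by zero are illustrative.

```python
import numpy as np

def ndwi(nir: np.ndarray, second_band: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Normalized Difference Water Index: (NIR - X) / (NIR + X),
    where X is the SWIR band (or the Green band, as in the Landsat
    band pairs listed above). Inputs are reflectance arrays."""
    nir = nir.astype(np.float64)
    second_band = second_band.astype(np.float64)
    return (nir - second_band) / (nir + second_band + eps)

# Toy 2x2 reflectance values, illustrative only
nir = np.array([[0.40, 0.35], [0.30, 0.25]])
green = np.array([[0.10, 0.15], [0.20, 0.30]])
print(ndwi(nir, green))  # values fall in [-1, 1]
```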
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset was conceived, designed and is maintained by Xiaoke WANG, Zhiyun OUYANG and Yunjian LUO. To develop China's normalized tree biomass equation dataset, we carried out an extensive survey and critical review of the literature (from 1978 to 2013) on biomass equations conducted in China. The dataset consists of 5924 biomass equations for nearly 200 species (Equation sheet) and their associated background information (General sheet), showing sound geographical, climatic and forest vegetation coverage across China. The dataset is freely available for non-commercial scientific applications, provided it is appropriately cited. For further information, please read our Earth System Science Data article (https://doi.org/10.5194/essd-2019-1), or feel free to contact the authors.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Previous resting-state functional magnetic resonance imaging (rs-fMRI) studies have frequently applied spatial normalization to fMRI time series before calculating temporal features (referred to here as "Prenorm"). We hypothesized that calculating rs-fMRI features, for example functional connectivity (FC), regional homogeneity (ReHo), or amplitude of low-frequency fluctuation (ALFF), in individual space before spatial normalization (referred to as "Postnorm") can be an improvement that avoids artifacts and increases the reliability of the results. We utilized two datasets: (1) simulated images where the temporal signal-to-noise ratio (tSNR) is kept constant and (2) an empirical fMRI dataset of 50 healthy young subjects. For the simulated images, tSNR was constant as generated in individual space but increased after Prenorm, and intersubject variability of tSNR was induced; in contrast, tSNR remained constant after Postnorm. Consistently, for the empirical images, higher tSNR, ReHo, and FC (default mode network, seed in precuneus) and lower ALFF were found after Prenorm compared to Postnorm. The coefficient of variability of tSNR and ALFF was higher after Prenorm than after Postnorm. Moreover, a significant correlation was found between simulated tSNR after Prenorm and empirical tSNR, ALFF, and ReHo after Prenorm, indicating algorithmic variation in empirical rs-fMRI features. Furthermore, compared to Prenorm, ALFF and ReHo showed higher intraclass correlation coefficients between two serial scans after Postnorm. Our results indicate that Prenorm may induce algorithmic intersubject variability in tSNR and reduce its reliability, which also significantly affects ALFF and ReHo. We suggest using Postnorm instead of Prenorm for future rs-fMRI studies using ALFF/ReHo.
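As a point of reference, a voxel-wise tSNR map is typically the temporal mean divided by the temporal standard deviation. The sketch below shows that computation plus a coefficient-of-variability summary across subjects; the array shapes, random toy data, and function names are assumptions for illustration, not the study's pipeline.

```python
import numpy as np

def tsnr(timeseries: np.ndarray) -> np.ndarray:
    """Voxel-wise temporal SNR: mean over time divided by SD over time.
    `timeseries` is assumed to have shape (x, y, z, t)."""
    mean = timeseries.mean(axis=-1)
    sd = timeseries.std(axis=-1)
    return np.where(sd > 0, mean / sd, 0.0)

def coefficient_of_variability(values_per_subject: np.ndarray) -> float:
    """Across-subject variability of a summary measure (e.g., mean tSNR)."""
    return values_per_subject.std() / values_per_subject.mean()

# Toy data: 5 subjects, tiny 4x4x4 volumes, 100 time points (random, illustrative)
rng = np.random.default_rng(0)
subject_means = np.array([tsnr(rng.normal(100, 5, (4, 4, 4, 100))).mean()
                          for _ in range(5)])
print(coefficient_of_variability(subject_means))
```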
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This paper describes an algorithm to assist in the relative quantitation of peptide post-translational modifications using stable isotope labeling by amino acids in cell culture (SILAC). The described algorithm first determines a normalization factor and then calculates SILAC ratios for a list of target peptide masses using precursor ion abundances. Four yeast histone mutants were used to demonstrate the effectiveness of this approach for quantitating changes in peptide post-translational modifications. The details of the algorithm's approach to normalization and peptide ratio calculation are described. The examples demonstrate the robustness of the approach as well as its utility for rapidly determining changes in peptide post-translational modifications within a protein.
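The paper's exact normalization procedure is not reproduced here, so the sketch below shows one common convention as an assumption: a global normalization factor taken as the median heavy/light ratio of unmodified peptides, which is then applied to each target peptide's precursor-ion abundance ratio.

```python
import statistics

def normalization_factor(unmodified_ratios):
    """Global factor: median heavy/light ratio of unmodified peptides
    (an assumed convention, not necessarily the paper's)."""
    return statistics.median(unmodified_ratios)

def normalized_silac_ratio(heavy_abundance, light_abundance, factor):
    """Heavy/light precursor abundance ratio divided by the global factor."""
    return (heavy_abundance / light_abundance) / factor

# Illustrative numbers only
factor = normalization_factor([1.10, 0.95, 1.05, 1.00, 0.98])
print(normalized_silac_ratio(heavy_abundance=2.4e6, light_abundance=1.1e6, factor=factor))
```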
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Shown are mean ± SD. Values were extrapolated from a smaller temperature range, using linear or exponential fits (details in Materials and Methods and Table S1).
1: As defined by the onset time of the impulse response, K1.
2: Characteristic time constant, defined as τ = (f_3dB)^(-1).
3: Information transfer rate (Shannon's formula).
4: Information transfer rate (triple extrapolation method).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
All three types of SIF-driven T models integrate canopy conductance (gc) with the Penman-Monteith model; they differ in how gc is derived: from a SIFobs-driven semi-mechanistic equation, from a SIFsunlit- and SIFshaded-driven semi-mechanistic equation, or from a SIFsunlit- and SIFshaded-driven machine learning model.
The difference between the simplified SIF-gc equation and the SIF-gc equation lies in the treatment of some parameters and is described in https://doi.org/10.1016/j.rse.2024.114586.
In this dataset, the temporal resolution is 1 day and the spatial resolution is 0.2 degrees.
BL: SIFobs-driven semi-mechanistic model
TL: SIFsunlit- and SIFshaded-driven semi-mechanistic model
Hybrid models: SIFsunlit- and SIFshaded-driven machine learning model
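For orientation, the Penman-Monteith step common to all three models above could look like the sketch below, which converts a given canopy conductance gc into a latent heat flux. The meteorological inputs, unit choices, and constants are illustrative assumptions, and the SIF-to-gc step itself is not reproduced here.

```python
def penman_monteith_latent_heat(rn, g, delta, vpd, ga, gc,
                                rho_a=1.2, cp=1013.0, gamma=66.0):
    """Latent heat flux (W m-2) from the Penman-Monteith equation.

    rn, g  : net radiation and ground heat flux (W m-2)
    delta  : slope of the saturation vapour pressure curve (Pa K-1)
    vpd    : vapour pressure deficit (Pa)
    ga, gc : aerodynamic and canopy conductance (m s-1)
    rho_a, cp, gamma : air density (kg m-3), specific heat of air (J kg-1 K-1),
                       psychrometric constant (Pa K-1); typical values assumed.
    """
    numerator = delta * (rn - g) + rho_a * cp * vpd * ga
    denominator = delta + gamma * (1.0 + ga / gc)
    return numerator / denominator

# Illustrative midday values
le = penman_monteith_latent_heat(rn=500.0, g=50.0, delta=145.0,
                                 vpd=1500.0, ga=0.02, gc=0.01)
print(f"Latent heat flux: {le:.1f} W m-2")  # divide by ~2.45e6 J kg-1 for kg m-2 s-1
```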
This visualization product displays marine macro-litter (> 2.5cm) material category percentages per beach per year from non-MSFD monitoring surveys, research & cleaning operations.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol: in order to follow the approach of OSPAR, which no longer includes these data in the monitoring;
- Selection of surveys from non-MSFD monitoring, cleaning and research operations;
- Exclusion of beaches without coordinates;
- Exclusion of surveys without associated length;
- Some litter types like organic litter, small fragments (paraffin and wax; items > 2.5cm) and pollutants have been removed. The list of selected items is attached to this metadata. This list was created using EU Marine Beach Litter Baselines, the European Threshold Value for Macro Litter on Coastlines and the Joint list of litter categories for marine macro-litter monitoring from JRC (these three documents are attached to this metadata);
- Exclusion of the "feaces" category: it concerns more exactly the items of dog excrements in bags of the OSPAR (item code: 121) and ITA (item code: IT59) reference lists;
- Normalization of survey lengths to 100m & 1 survey / year: in some cases, the survey length was not 100m, so in order to be able to compare the abundance of litter from different beaches a normalization is applied using this formula: Number of items (normalized by 100 m) = Number of litter items x (100 / survey length). This normalized number of items is then summed to obtain the total normalized number of litter for each survey.
To calculate the percentage for each material category, the formula applied is: Material (%) = (∑number of items (normalized at 100 m) of each material category)*100 / (∑number of items (normalized at 100 m) of all categories)
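A compact sketch of the two formulas above (per-survey normalization to 100 m, then material-category percentages), using pandas; the column names and sample rows are assumptions, not the MLDB schema.

```python
import pandas as pd

# Illustrative survey records; the columns are assumptions, not the MLDB schema.
df = pd.DataFrame({
    "survey_id": [1, 1, 2, 2],
    "material": ["plastic", "metal", "plastic", "glass"],
    "item_count": [40, 10, 30, 5],
    "survey_length_m": [50, 50, 200, 200],
})

# Normalization to a 100 m reference length, per the formula above
df["items_per_100m"] = df["item_count"] * (100.0 / df["survey_length_m"])

# Material percentages across all normalized items
per_material = df.groupby("material")["items_per_100m"].sum()
material_pct = per_material * 100.0 / per_material.sum()
print(material_pct.round(1))
```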
The material categories differ between reference lists (OSPAR, TSG-ML, UNEP, UNEP-MARLIN, JLIST). In order to apply a common procedure for all the surveys, the material categories have been harmonized.
More information is available in the attached documents.
Warning: the absence of data on the map does not necessarily mean that they do not exist, but that no information has been entered in the Marine Litter Database for this area.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dissolved organic matter molecular analyses were performed on a Solarix FT-ICR-MS equipped with a 15 Tesla superconducting magnet (Bruker Daltonics) using an electrospray ionization source (Bruker Apollo II) in negative ion mode. Molecular formula calculation for all samples was performed using a Matlab (2010) routine that searches, with an error of < 0.5 ppm, for all potential combinations of the elements C, H and O (unrestricted) with N ≤ 4, S ≤ 2 and P ≤ 1. Combinations of the elements NSP, N2S, N3S, N4S, N2P, N3P, N4P, NS2, N2S2, N3S2, N4S2 and S2P were not allowed. Mass peak intensities are normalized relative to the total of all molecular formulae in each sample according to previously published rules (Rossel et al., 2015; doi:10.1016/j.marchem.2015.07.002). The final data contained 7400 molecular formulae.
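As a rough illustration of the assignment step, the sketch below brute-forces elemental combinations within the stated heteroatom limits and keeps those whose exact mass matches a measured neutral mass within 0.5 ppm. The monoisotopic masses are standard, but the search bounds on C, H and O are simplified and chemical filters (valence, DBE, forbidden element combinations) are omitted, so this is not the authors' Matlab routine.

```python
from itertools import product

# Monoisotopic masses (u) of the allowed elements
MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915,
        "N": 14.003074, "S": 31.972071, "P": 30.973762}

def assign_formulas(neutral_mass, ppm_tol=0.5, max_c=40, max_h=80, max_o=25):
    """Return (formula, error_ppm) pairs within the mass tolerance.
    Bounds on C, H, O are illustrative; N<=4, S<=2, P<=1 as stated above."""
    hits = []
    for c, h, o, n, s, p in product(range(1, max_c + 1), range(1, max_h + 1),
                                    range(0, max_o + 1), range(0, 5),
                                    range(0, 3), range(0, 2)):
        m = (c * MASS["C"] + h * MASS["H"] + o * MASS["O"]
             + n * MASS["N"] + s * MASS["S"] + p * MASS["P"])
        err_ppm = (m - neutral_mass) / neutral_mass * 1e6
        if abs(err_ppm) <= ppm_tol:
            hits.append((f"C{c}H{h}O{o}N{n}S{s}P{p}", round(err_ppm, 3)))
    return hits

# Illustrative neutral mass (exact mass of C10H12O5)
print(assign_formulas(212.068475))
```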
This visualization product displays the abundance of cigarette-related items of marine macro-litter (> 2.5cm) per beach per year from non-MSFD monitoring surveys, research & cleaning operations, without UNEP-MARLIN data.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB).
The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
Exclusion of the OSPAR 1000 protocol: in order to follow the approach of OSPAR, which no longer includes these data in the monitoring;
Selection of surveys from non-MSFD monitoring, cleaning and research operations;
Exclusion of beaches without coordinates;
Selection of cigarette related items only. The list of selected items is attached to this metadata. This list was created using EU Marine Beach Litter Baselines, the European Threshold Value for Macro Litter on Coastlines and the Joint list of litter categories for marine macro-litter monitoring from JRC (these three documents are attached to this metadata);
Exclusion of surveys without associated length;
Exclusion of surveys referring to the UNEP-MARLIN list: the UNEP-MARLIN protocol differs from the other types of monitoring in that cigarette butts are surveyed in a 10m square. To avoid comparing abundances from very different protocols, the choice has been made to distinguish in two maps the cigarette related items results associated with the UNEP-MARLIN list from the others;
Normalization of survey lengths to 100m & 1 survey / year: in some cases, the survey length was not 100m, so in order to be able to compare the abundance of litter from different beaches a normalization is applied using this formula:
Number of cigarette related items of the survey (normalized by 100 m) = Number of cigarette related items of the survey x (100 / survey length)
Then, this normalized number of cigarette related items is summed to obtain the total normalized number of cigarette related items for each survey. Finally, the median abundance of cigarette related items for each beach and year is calculated from these normalized abundances of cigarette related items per survey.
Percentiles 50, 75, 95 & 99 have been calculated taking into account cigarette related items from other sources data (excluding UNEP-MARLIN protocol) for all years.
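The normalization, per-beach-per-year median, and percentile steps described above could be sketched as follows with pandas; the column names and sample values are assumptions, not the MLDB schema.

```python
import pandas as pd

# Illustrative survey-level records; the columns are assumed, not the MLDB schema.
surveys = pd.DataFrame({
    "beach": ["A", "A", "A", "B", "B"],
    "year": [2019, 2019, 2020, 2019, 2020],
    "cigarette_items": [30, 50, 20, 5, 8],
    "survey_length_m": [50, 100, 100, 100, 200],
})

# Normalize each survey to a 100 m reference length
surveys["items_per_100m"] = surveys["cigarette_items"] * (100.0 / surveys["survey_length_m"])

# Median abundance per beach and year
median_abundance = surveys.groupby(["beach", "year"])["items_per_100m"].median()
print(median_abundance)

# Percentiles over all normalized survey abundances (all beaches, all years)
print(surveys["items_per_100m"].quantile([0.50, 0.75, 0.95, 0.99]))
```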
More information is available in the attached documents.
Warning: the absence of data on the map does not necessarily mean that they do not exist, but that no information has been entered in the Marine Litter Database for this area.
This visualization product displays marine macro-litter (> 2.5cm) material category percentages per beach per year from the Marine Strategy Framework Directive (MSFD) monitoring surveys.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol: in order to follow the approach of OSPAR, which no longer includes these data in the monitoring;
- Selection of MSFD surveys only (exclusion of other monitoring, cleaning and research operations);
- Exclusion of beaches without coordinates;
- Some litter types like organic litter, small fragments (paraffin and wax; items > 2.5cm) and pollutants have been removed. The list of selected items is attached to this metadata. This list was created using EU Marine Beach Litter Baselines, the European Threshold Value for Macro Litter on Coastlines and the Joint list of litter categories for marine macro-litter monitoring from JRC (these three documents are attached to this metadata);
- Exclusion of the "feaces" category: it concerns more exactly the items of dog excrements in bags of the OSPAR (item code: 121) and ITA (item code: IT59) reference lists;
- Normalization of survey lengths to 100m & 1 survey / year: in some cases, the survey length was not exactly 100m, so in order to be able to compare the abundance of litter from different beaches a normalization is applied using this formula: Number of items (normalized by 100 m) = Number of litter items x (100 / survey length). This normalized number of items is then summed to obtain the total normalized number of litter for each survey. Sometimes the survey length was null or equal to 0; assuming that the MSFD protocol has been applied, the length has been set at 100m in these cases.
To calculate the percentage for each material category, the formula applied is: Material (%) = (∑number of items (normalized at 100 m) of each material category)*100 / (∑number of items (normalized at 100 m) of all categories)
The material categories differ between reference lists (OSPAR, ITA, TSG-ML, UNEP, UNEP-MARLIN, JLIST). In order to apply a common procedure for all the surveys, the material categories have been harmonized.
More information is available in the attached documents.
Warning: the absence of data on the map does not necessarily mean that they do not exist, but that no information has been entered in the Marine Litter Database for this area.
This visualization product displays marine macro-litter (> 2.5cm) material category percentages per beach per year from Marine Strategy Framework Directive (MSFD) monitoring surveys.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol: in order to follow the approach of OSPAR, which no longer includes these data in the monitoring;
- Selection of MSFD surveys only (exclusion of other monitoring, cleaning and research operations);
- Exclusion of beaches without coordinates;
- Some litter types like organic litter, small fragments (paraffin and wax; items > 2.5cm) and pollutants have been removed. The list of selected items is attached to this metadata. This list was created using EU Marine Beach Litter Baselines and EU Threshold Value for Macro Litter on Coastlines from JRC (these two documents are attached to this metadata);
- Exclusion of the "feaces" category: it concerns more exactly the items of dog excrements in bags of the OSPAR (item code: 121) and ITA (item code: IT59) reference lists;
- Normalization of survey lengths to 100m & 1 survey / year: in some cases, the survey length was not exactly 100m, so in order to be able to compare the abundance of litter from different beaches a normalization is applied using this formula: Number of items (normalized by 100 m) = Number of litter items x (100 / survey length). This normalized number of items is then summed to obtain the total normalized number of litter for each survey. Sometimes the survey length was null or equal to 0; assuming that the MSFD protocol has been applied, the length has been set at 100m in these cases.
To calculate percentages for each material category, the formula applied is: Material (%) = (∑number of items (normalized at 100 m) of each material category)*100 / (∑number of items (normalized at 100 m) of all categories)
The material categories differ between reference lists (OSPAR, ITA, TSG_ML, UNEP, UNEP_MARLIN). In order to apply a common procedure for all the surveys, the material categories have been harmonized.
More information is available in the attached documents.
Warning: the absence of data on the map doesn't necessarily mean that they don't exist, but that no information has been entered in the Marine Litter Database for this area.
Attribution-NonCommercial 2.0 (CC BY-NC 2.0): https://creativecommons.org/licenses/by-nc/2.0/
License information was derived automatically
International Journal of Social Studies and Public Policy
https://www.elsevier.com/about/policies/open-access-licenses/elsevier-user-license/cpc-license/
Abstract The Poincaré code is a Maple project package that aims to gather significant computer algebra normal form (and subsequent reduction) methods for handling nonlinear ordinary differential equations. As a first version, a set of fourteen easy-to-use Maple commands is introduced for symbolic creation of (improved variants of Poincaré’s) normal forms as well as their associated normalizing transformations. The software is the implementation by the authors of carefully studied and followed up sele...
Title of program: POINCARÉ
Catalogue Id: AEPJ_v1_0
Nature of problem Computing structure-preserving normal forms near the origin for nonlinear vector fields.
Versions of this program held in the CPC repository in Mendeley Data AEPJ_v1_0; POINCARÉ; 10.1016/j.cpc.2013.04.003
This program has been imported from the CPC Program Library held at Queen's University Belfast (1969-2018)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This visualization product displays the total abundance of marine macro-litter (> 2.5cm) per beach per year from non-MSFD monitoring surveys, research & cleaning operations.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB).
The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol: in order to follow the approach of OSPAR, which no longer includes these data in the monitoring;
- Selection of surveys from non-MSFD monitoring, cleaning and research operations;
- Exclusion of beaches without coordinates;
- Some categories & some litter types like organic litter, small fragments (paraffin and wax; items > 2.5cm) and pollutants have been removed. The list of selected items is attached to this metadata. This list was created using EU Marine Beach Litter Baselines and EU Threshold Value for Macro Litter on Coastlines from JRC (these two documents are attached to this metadata).
- Exclusion of surveys without associated length;
- Normalization of survey lengths to 100m & 1 survey / year: in some cases, the survey length was not 100m, so in order to be able to compare the abundance of litter from different beaches a normalization is applied using this formula:
Number of items (normalized by 100 m) = Number of litter items x (100 / survey length)
Then, this normalized number of items is summed to obtain the total normalized number of litter for each survey. Finally, the median abundance for each beach and year is calculated from these normalized abundances per survey.
Percentiles 50, 75, 95 & 99 have been calculated taking into account other sources data for all years.
More information is available in the attached documents.
Warning: the absence of data on the map doesn't necessarily mean that they don't exist, but that no information has been entered in the Marine Litter Database for this area.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is about: Assigned molecular formulae and normalized intensities of measured samples from the Guaymas Basin. Please consult parent dataset @ https://doi.org/10.1594/PANGAEA.966710 for more information. Sample code is the same as in the parameter file.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This visualization product displays marine litter material category percentages per year per beach from research & cleaning operations.
EMODnet Chemistry included the gathering of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol;
- Separation of monitoring surveys from research & cleaning operations;
- Exclusion of beaches without coordinates;
- Normalization of survey lengths and survey numbers per year;
- Removal of some categories & some litter types.
To calculate percentages, the formula applied is: Material (%) = (total number of items (normalized at 100 m) of each material category) / (total number of items (normalized at 100 m) of all categories) * 100
The material categories differ between reference lists. In order to apply a common procedure for all the surveys, the material categories have been harmonized. Eleven material categories have been taken into account for this product; information on data processing and calculation is detailed in the attached document (p. 14).
The Normalized Difference Vegetation Index (NDVI) is based on MODIS satellite data. The NDVI is based on the 8-day maximum value composite MOD09Q1 (v006) reflectance products. The spatial resolution is 231 m. The NDVI is masked to the highest quality standards using the provided quality layers. Missing pixel values in the time series are linearly interpolated. Non-vegetated areas are masked using the MODIS land cover product layer MCD12Q1 FAO-Land Cover Classification System 1 (LCCS1). The final product is regridded to the LAEA projection (EPSG:3035). The NDVI is calculated using the formula NDVI = (NIR - Red) / (NIR + Red). The NDVI expresses the vitality of vegetation. The data is provided as 8-day measures. The time series starts in 2001. NDVI values range from -1 to 1, where high values correspond to healthy vegetation.
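The gap filling described above could look roughly like the sketch below, which linearly interpolates missing values along the time axis for one pixel's NDVI series. The toy values are made up for illustration and the actual MODIS quality layers are not used here.

```python
import numpy as np

def fill_gaps_linear(series: np.ndarray) -> np.ndarray:
    """Linearly interpolate NaN gaps in a 1-D NDVI time series
    (one pixel, one value per 8-day composite)."""
    t = np.arange(series.size)
    valid = ~np.isnan(series)
    return np.interp(t, t[valid], series[valid])

# Toy series with two masked (low-quality) composites set to NaN
ndvi = np.array([0.62, 0.65, np.nan, 0.70, np.nan, 0.74])
print(fill_gaps_linear(ndvi).round(3))
```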