Gamification is a strategy to stimulate the social and human factors (SHFs) that influence software development productivity. Software development teams must improve their productivity to face the challenges of software development organizations, yet productivity analysis has traditionally included only technical factors. The literature shows the importance of SHFs for productivity, and gamification elements can help enhance these factors to improve performance. Thus, to design strategies that enhance a specific SHF, it is essential to identify how gamification elements are related to these factors. The objective of this research is to determine the relationship between gamification elements and the SHFs that influence the productivity of software development teams. The research included the design of a scoring template to collect data from experts. The importance of each relationship was calculated using the Simple Additive Weighting (SAW) method, a tool framed in decision theory, considering three criteria: cumulative score, matches in inclusion, and values. The resulting importance relationships serve as reference values for designing gamification strategies that promote improved productivity, and they extend the path toward analyzing the effect of gamification on the productivity of software development.
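A minimal sketch of a Simple Additive Weighting calculation of the kind described above, assuming benefit-type criteria and illustrative weights; the study's actual scoring template, criterion values and weights are not reproduced here.

```python
def saw_scores(matrix, weights):
    """Simple Additive Weighting over benefit-type criteria.

    matrix: dict mapping alternative -> list of raw criterion values
            (e.g., cumulative score, matches in inclusion, values).
    weights: list of criterion weights summing to 1.
    Returns a score per alternative; higher means more important.
    """
    n_criteria = len(weights)
    # Normalize each criterion by its maximum (benefit criteria)
    maxima = [max(values[j] for values in matrix.values()) for j in range(n_criteria)]
    return {
        alt: sum(w * v / m for w, v, m in zip(weights, values, maxima))
        for alt, values in matrix.items()
    }
```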
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Shown are mean ± SD. Values were extrapolated from a smaller temperature range using linear or exponential fits (details in Materials and Methods and Table S1). (1) As defined by the onset time of the impulse response, K1. (2) Characteristic time constant, defined as τ = 1/f_3dB. (3) Information transfer rate (Shannon's formula). (4) Information transfer rate (triple extrapolation method).
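For reference, the Shannon formula referred to in footnote 3 is conventionally written as the integral of the log signal-to-noise ratio over frequency (a standard form; the exact integration limits used in the original analysis are an assumption here):

```latex
R \;=\; \int_{0}^{\infty} \log_{2}\!\bigl(1 + \mathrm{SNR}(f)\bigr)\, df \qquad [\text{bits s}^{-1}]
```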
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This paper describes an algorithm to assist in the relative quantitation of peptide post-translational modifications using stable isotope labeling by amino acids in cell culture (SILAC). The algorithm first determines a normalization factor and then calculates SILAC ratios for a list of target peptide masses using precursor ion abundances. Four yeast histone mutants were used to demonstrate the effectiveness of this approach for quantifying changes in peptide post-translational modifications. The details of the algorithm's approach to normalization and peptide ratio calculation are described. The examples demonstrate the robustness of the approach as well as its utility for rapidly determining changes in peptide post-translational modifications within a protein.
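A minimal sketch of the described two-step procedure, assuming a median-based normalization factor over peptides observed in both SILAC channels (the paper's exact normalization is not detailed in this abstract); peptide masses and abundances are illustrative inputs.

```python
import statistics

def silac_ratios(light, heavy, target_masses):
    """Normalized heavy/light SILAC ratios for a list of target peptide masses.

    light, heavy: dicts mapping peptide mass -> precursor ion abundance.
    The normalization factor here is the median heavy/light ratio over all
    peptides seen in both channels (an assumption; the published algorithm
    may derive it differently).
    """
    shared = [m for m in light if m in heavy and light[m] > 0]
    norm_factor = statistics.median(heavy[m] / light[m] for m in shared)

    ratios = {}
    for mass in target_masses:
        if mass in light and mass in heavy and light[mass] > 0:
            ratios[mass] = (heavy[mass] / light[mass]) / norm_factor
    return ratios
```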
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset was conceived, designed and is maintained by Xiaoke WANG, Zhiyun OUYANG and Yunjian LUO. To develop this normalized tree biomass equation dataset for China, we carried out an extensive survey and critical review of the literature (from 1978 to 2013) on biomass equations conducted in China. It consists of 5924 biomass equations for nearly 200 species (Equation sheet) and their associated background information (General sheet), with sound geographical, climatic and forest vegetation coverage across China. The dataset is freely available for non-commercial scientific applications, provided it is appropriately cited. For further information, please read our Earth System Science Data article (https://doi.org/10.5194/essd-2019-1), or feel free to contact the authors.
This dataset is an annual time series of Landsat Analysis Ready Data (ARD)-derived Normalized Difference Water Index (NDWI) computed from Landsat 5 Thematic Mapper (TM) and Landsat 8 Operational Land Imager (OLI). To ensure a consistent dataset, Landsat 7 has not been used because the Scan Line Corrector (SLC) failure creates gaps in the data. NDWI quantifies plant water content by measuring the difference between the Near-Infrared (NIR) and Short-Wave Infrared (SWIR) (or Green) channels using this generic formula: (NIR - SWIR) / (NIR + SWIR). For Landsat sensors, this corresponds to the following bands: Landsat 5, NDWI = (Band 4 – Band 2) / (Band 4 + Band 2); Landsat 8, NDWI = (Band 5 – Band 3) / (Band 5 + Band 3). NDWI values range from -1 to +1. NDWI is a good proxy for plant water stress and is therefore useful for drought monitoring and early warning. NDWI is sometimes also referred to as the Normalized Difference Moisture Index (NDMI). Standard deviation is also provided for each time step. Data format: GeoTiff. This dataset has been generated with the Swiss Data Cube (http://www.swissdatacube.ch).
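A minimal sketch of the per-pixel NDWI calculation from the generic formula above, using numpy arrays for the NIR and Green bands (array names are illustrative):

```python
import numpy as np

def ndwi(nir, green):
    """NDWI = (NIR - Green) / (NIR + Green), computed per pixel.

    For Landsat 5 this corresponds to Bands 4 and 2, for Landsat 8 to
    Bands 5 and 3, as stated in the dataset description.
    """
    nir = nir.astype(np.float32)
    green = green.astype(np.float32)
    denom = nir + green
    with np.errstate(divide="ignore", invalid="ignore"):
        index = (nir - green) / denom
    # Pixels with zero denominator are returned as NaN
    return np.where(denom != 0, index, np.nan)
```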
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset includes information about the level of impact of Connect SoCal 2024 on transit travel distances and travel times in each Transportation Analysis Zone (TAZ) of the SCAG region, based on 2050 estimates from SCAG's Travel Demand Model. This dataset was prepared to share more information from the maps in the Connect SoCal 2024 Equity Analysis Technical Report. The development of this layer for the Equity Data Hub involved consolidating information, which led to a minor refinement in the normalization calculation: Baseline population is used to normalize Baseline PHT/PMT and Plan population to normalize Plan PHT/PMT. In the Equity Analysis Technical Report, only Plan population is used to normalize the change in Transit PHT. This minor change does not affect the conclusions of the report. For more details on the methodology, please see the methodology section(s) of the Equity Analysis Technical Report: https://scag.ca.gov/sites/main/files/file-attachments/23-2987-tr-equity-analysis-final-040424.pdf?1712261887 For more details about SCAG's models, or to request model data, please see SCAG's website: https://scag.ca.gov/data-services-requests
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
All three types of SIF-driven T models integrate canopy conductance (gc) with the Penman-Monteith model and differ in how gc is derived: from a SIFobs-driven semi-mechanistic equation, from a SIFsunlit- and SIFshaded-driven semi-mechanistic equation, or from a SIFsunlit- and SIFshaded-driven machine learning model.
The difference between the simplified SIF-gc equation and the SIF-gc equation lies in the treatment of some parameters and is described in https://doi.org/10.1016/j.rse.2024.114586.
In this dataset, the temporal resolution is 1 day and the spatial resolution is 0.2 degrees.
BL: SIFobs-driven semi-mechanistic model
TL: SIFsunlit- and SIFshaded-driven semi-mechanistic model
Hybrid: SIFsunlit- and SIFshaded-driven machine learning model
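For context, a standard Penman-Monteith form into which a canopy conductance gc enters is shown below; the exact parameterization used by these SIF-driven models is given in the linked paper.

```latex
\lambda E \;=\; \frac{\Delta\,(R_n - G) \;+\; \rho_a\, c_p\, D\, g_a}{\Delta \;+\; \gamma\,\bigl(1 + g_a / g_c\bigr)}
```

Here Δ is the slope of the saturation vapour pressure curve, Rn net radiation, G the ground heat flux, ρa air density, cp the specific heat of air, D the vapour pressure deficit, ga the aerodynamic conductance and γ the psychrometric constant.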
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Left ventricular mass normalization for body size is recommended, but a question remains: what is the best body size variable for this normalization—body surface area, height or lean body mass computed based on a predictive equation? Since body surface area and computed lean body mass are derivatives of body mass, normalizing for them may result in underestimation of left ventricular mass in overweight children. The aim of this study is to indicate which of the body size variables normalize left ventricular mass without underestimating it in overweight children. Methods: Left ventricular mass assessed by echocardiography, height and body mass were collected for 464 healthy boys, 5–18 years old. Lean body mass and body surface area were calculated. Left ventricular mass z-scores computed based on reference data, developed for height, body surface area and lean body mass, were compared between overweight and non-overweight children. The next step was a comparison of paired samples of expected left ventricular mass, estimated for each normalizing variable based on two allometric equations—the first developed for overweight children, the second for children of normal body mass. Results: The mean of left ventricular mass z-scores is higher in overweight children compared to non-overweight children for normative data based on height (0.36 vs. 0.00) and lower for normative data based on body surface area (-0.64 vs. 0.00). Left ventricular mass estimated normalizing for height, based on the equation for overweight children, is higher in overweight children (128.12 vs. 118.40); however, masses estimated normalizing for body surface area and lean body mass, based on equations for overweight children, are lower in overweight children (109.71 vs. 122.08 and 118.46 vs. 120.56, respectively). Conclusion: Normalization for body surface area and for computed lean body mass, but not for height, underestimates left ventricular mass in overweight children.
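A minimal sketch of the kind of normalization being compared, assuming a generic allometric reference of the form expected LVM = a · X^b with hypothetical coefficients and a simple residual z-score; the study's actual reference data and allometric equations are not reproduced here.

```python
def lvm_zscore(lvm_observed, body_size, a, b, sd_residual):
    """Z-score of observed LV mass against a hypothetical allometric reference.

    body_size: the normalizing variable (height, body surface area,
               or computed lean body mass).
    a, b: hypothetical allometric coefficients fitted on reference children.
    sd_residual: residual SD of LV mass around the reference curve.
    """
    lvm_expected = a * body_size ** b   # expected LV mass for this body size
    return (lvm_observed - lvm_expected) / sd_residual
```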
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Note: Nor_Reads: normalized reads, the results of Solexa sequencing; normalization formula: normalized expression = actual miRNA count / total count of clean reads × 1,000,000. F_change: fold change (log2 late lactation / peak lactation), the fold change of miRNAs between the two samples; "–" represents down-regulation in late lactation. P_Value: P values indicate the significance of differential miRNA expression between the two samples. Sig_level: significance level; "#" represents no significant difference, "*" represents a significant difference.
This visualization product displays marine macro-litter (> 2.5cm) material category percentages per beach per year from non-MSFD monitoring surveys, research & cleaning operations.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol, in order to follow the approach of OSPAR, which no longer includes these data in the monitoring;
- Selection of surveys from non-MSFD monitoring, cleaning and research operations;
- Exclusion of beaches without coordinates;
- Exclusion of surveys without an associated length;
- Removal of some litter types such as organic litter, small fragments (paraffin and wax; items > 2.5cm) and pollutants. The list of selected items is attached to this metadata. This list was created using the EU Marine Beach Litter Baselines, the European Threshold Value for Macro Litter on Coastlines and the Joint list of litter categories for marine macro-litter monitoring from JRC (these three documents are attached to this metadata);
- Exclusion of the "faeces" category: this concerns the items for dog excrement in bags of the OSPAR (item code: 121) and ITA (item code: IT59) reference lists;
- Normalization of survey lengths to 100 m & 1 survey / year: in some cases, the survey length was not 100 m, so in order to compare the abundance of litter from different beaches, a normalization is applied using this formula: Number of items (normalized to 100 m) = number of litter items x (100 / survey length). This normalized number of items is then summed to obtain the total normalized number of litter items for each survey.
To calculate the percentage for each material category, the formula applied is: Material (%) = (∑ number of items (normalized at 100 m) of each material category) * 100 / (∑ number of items (normalized at 100 m) of all categories)
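A minimal sketch of the length normalization and material percentage calculation described above (survey structure and field names are illustrative):

```python
def material_percentages(surveys):
    """Material category percentages from beach litter surveys.

    surveys: list of dicts such as
        {"length_m": 75, "counts": {"Plastic": 120, "Glass": 8}}
    Counts are first normalized to a 100 m survey length, then summed
    per material category and expressed as a percentage of all items.
    """
    totals = {}
    for survey in surveys:
        factor = 100.0 / survey["length_m"]
        for material, count in survey["counts"].items():
            totals[material] = totals.get(material, 0.0) + count * factor

    grand_total = sum(totals.values())
    if grand_total == 0:
        return {}
    return {m: 100.0 * n / grand_total for m, n in totals.items()}
```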
The material categories differ between reference lists (OSPAR, TSG-ML, UNEP, UNEP-MARLIN, JLIST). In order to apply a common procedure for all the surveys, the material categories have been harmonized.
More information is available in the attached documents.
Warning: the absence of data on the map does not necessarily mean that they do not exist, but that no information has been entered in the Marine Litter Database for this area.
This visualization product displays the abundance of cigarette related items of marine macro-litter (> 2.5cm) per beach per year from non-MSFD monitoring surveys, research & cleaning operations, without UNEP-MARLIN data.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB).
The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
Exclusion of the OSPAR 1000 protocol, in order to follow the approach of OSPAR, which no longer includes these data in the monitoring;
Selection of surveys from non-MSFD monitoring, cleaning and research operations;
Exclusion of beaches without coordinates;
Selection of cigarette related items only. The list of selected items is attached to this metadata. This list was created using EU Marine Beach Litter Baselines, the European Threshold Value for Macro Litter on Coastlines and the Joint list of litter categories for marine macro-litter monitoring from JRC (these three documents are attached to this metadata);
Exclusion of surveys without associated length;
Exclusion of surveys referring to the UNEP-MARLIN list: the UNEP-MARLIN protocol differs from the other types of monitoring in that cigarette butts are surveyed in a 10m square. To avoid comparing abundances from very different protocols, the choice has been made to distinguish in two maps the cigarette related items results associated with the UNEP-MARLIN list from the others;
Normalization of survey lengths to 100 m & 1 survey / year: in some cases, the survey length was not 100 m, so in order to compare the abundance of litter from different beaches, a normalization is applied using this formula:
Number of cigarette related items of the survey (normalized by 100 m) = Number of cigarette related items of the survey x (100 / survey length)
Then, this normalized number of cigarette related items is summed to obtain the total normalized number of cigarette related items for each survey. Finally, the median abundance of cigarette related items for each beach and year is calculated from these normalized abundances of cigarette related items per survey.
Percentiles 50, 75, 95 & 99 have been calculated taking into account cigarette related items from other source data (excluding the UNEP-MARLIN protocol) for all years.
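A minimal sketch of the per-survey normalization and per-beach, per-year median described above (field names are illustrative):

```python
import statistics

def beach_year_medians(surveys):
    """Median normalized abundance of cigarette related items per beach and year.

    surveys: list of dicts such as
        {"beach": "X", "year": 2019, "length_m": 80, "cig_items": 42}
    Each survey count is normalized to a 100 m length before taking the median.
    """
    normalized = {}
    for s in surveys:
        key = (s["beach"], s["year"])
        normalized.setdefault(key, []).append(s["cig_items"] * 100.0 / s["length_m"])
    return {key: statistics.median(values) for key, values in normalized.items()}
```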
More information is available in the attached documents.
Warning: the absence of data on the map does not necessarily mean that they do not exist, but that no information has been entered in the Marine Litter Database for this area.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dissolved organic matter molecular analyses were performed on a Solarix FT-ICR-MS equipped with a 15 Tesla superconducting magnet (Bruker Daltonics) using an electrospray ionization source (Bruker Apollo II) in negative ion mode. Molecular formula calculation for all samples was performed using a Matlab (2010) routine that searches, with an error of < 0.5 ppm, for all potential combinations of the elements C∞, O∞, H∞, N = 4, S = 2 and P = 1. The element combinations NSP, N2S, N3S, N4S, N2P, N3P, N4P, NS2, N2S2, N3S2, N4S2 and S2P were not allowed. Mass peak intensities were normalized to the summed intensities of all molecular formulas in each sample according to previously published rules (Rossel et al., 2015; doi:10.1016/j.marchem.2015.07.002). The final data contained 7400 molecular formulae.
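A minimal sketch of the intensity normalization described, assuming it means dividing each mass peak intensity by the summed intensity of all assigned molecular formulas in that sample:

```python
def relative_intensities(peak_intensities):
    """Normalize each mass peak intensity to the summed intensity of all
    assigned molecular formulas in the sample (relative abundance).

    peak_intensities: dict mapping molecular formula -> raw peak intensity.
    """
    total = sum(peak_intensities.values())
    return {formula: intensity / total for formula, intensity in peak_intensities.items()}
```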
With the Monte Carlo code ACAT, we have calculated the sputtering yields of fifteen fusion-relevant mono-atomic materials (Be, B, C, Al, Si, Ti, Cr, Fe, Co, Ni, Cu, Zr, Mo, W, Re) under obliquely incident light ions (H+, D+, T+, He+) at incident energies of 50 eV to 10 keV. An improved formula for the dependence of the normalized sputtering yield on incidence angle has been fitted to the ACAT data, normalized by the normal-incidence data, to derive the best-fit values of the three physical variables included in the formula versus incident energy. We then found suitable functions of incident energy that fit these values most closely. The average relative difference between the normalized ACAT data and the formula with these functions is less than 10 % in most cases, and less than 20 % for the rest, at the incident energies considered, for all of the combinations of projectiles and target materials. We have also compared the calculated data and the formula with available normalized experimental data at given incident energies. The best-fit values of the parameters included in the functions are tabulated for all of the combinations for use. / Keywords: Sputtering, Erosion, Plasma-material interactions, First wall materials, Fitting formula, Monte-Carlo method, binary collision approximation, computer simulation
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Previous resting-state functional magnetic resonance imaging (rs-fMRI) studies frequently applied spatial normalization to fMRI time series before calculating temporal features (here referred to as “Prenorm”). We hypothesized that calculating rs-fMRI features, for example functional connectivity (FC), regional homogeneity (ReHo), or amplitude of low-frequency fluctuation (ALFF), in individual space before spatial normalization (referred to as “Postnorm”) can be an improvement that avoids artifacts and increases the reliability of the results. We utilized two datasets: (1) simulated images where the temporal signal-to-noise ratio (tSNR) is kept constant and (2) an empirical fMRI dataset with 50 healthy young subjects. For the simulated images, tSNR was constant as generated in individual space but increased after Prenorm, and intersubject variability of tSNR was induced. In contrast, tSNR was kept constant after Postnorm. Consistently, for the empirical images, higher tSNR, ReHo, and FC (default mode network, seed in precuneus) and lower ALFF were found after Prenorm compared with Postnorm. The coefficient of variation of tSNR and ALFF was higher after Prenorm than after Postnorm. Moreover, a significant correlation was found between simulated tSNR after Prenorm and empirical tSNR, ALFF, and ReHo after Prenorm, indicating algorithmic variation in empirical rs-fMRI features. Furthermore, compared with Prenorm, ALFF and ReHo showed higher intraclass correlation coefficients between two serial scans after Postnorm. Our results indicate that Prenorm may induce algorithmic intersubject variability in tSNR and reduce its reliability, which also significantly affects ALFF and ReHo. We suggest using Postnorm instead of Prenorm in future rs-fMRI studies using ALFF/ReHo.
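A minimal sketch of the tSNR computation underlying these comparisons, using its standard definition (temporal mean divided by temporal standard deviation per voxel):

```python
import numpy as np

def tsnr(timeseries):
    """Temporal signal-to-noise ratio: temporal mean / temporal SD per voxel.

    timeseries: 4D array (x, y, z, t) of fMRI signal intensities.
    """
    mean = timeseries.mean(axis=-1)
    sd = timeseries.std(axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = mean / sd
    # Voxels with zero temporal variance are returned as 0
    return np.where(sd > 0, ratio, 0.0)
```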
Maize is a globally important food and feed crop, and low phosphate (Pi) supply in the soil frequently limits maize yield in many areas. MicroRNAs (miRNAs) play important roles in the development and adaptation of plants to the environment. In this study, the spatio-temporal miRNA transcript profiles of the maize inbred line Q319 root and leaf in response to low Pi were analyzed with high-throughput sequencing technologies, and the expression patterns of certain target genes were detected by real-time RT-PCR. Complex small RNA populations were detected after low-Pi culture and displayed different patterns in the root and leaf. miRNAs identified as responding to Pi deficiency can be grouped into ‘early’ miRNAs that respond rapidly, and often non-specifically, to Pi deficiency, and ‘late’ miRNAs that alter the morphology, physiology or metabolism of plants upon prolonged Pi deficiency. The miR827-Nitrogen limitation adaptation (NLA)-mediated post-transcriptional pathway was conserved in the response to Pi availability in maize, but the miR399-mediated post-transcriptional pathway differed from that in Arabidopsis. Abiotic stress-related miRNAs engaged in interactions between different signaling and/or metabolic pathways. Auxin-related miRNAs (zma-miR393, zma-miR160a/b/c, zma-miR160d/e/g, zma-miR167a/b/c/d and zma-miR164a/b/c/d/g) and their targets play important roles in promoting primary root growth, inhibiting lateral root development and retarding aboveground growth of maize when subjected to low Pi. The changes in expression of miRNAs and their target genes suggest that miRNA regulation constitutes an important mechanism in the adaptation of maize to a low-Pi environment; certain miRNAs participate in root architecture modification via the regulation of auxin signaling. A complex regulatory mechanism of miRNAs in response to a low-Pi environment exists in maize, revealing obvious differences from that in Arabidopsis. Maize (Zea mays L.) inbred line Q319 was used in this study. Seeds of the maize inbred line Q319 were surface sterilized and held at 28°C in darkness. Seedlings (4 days old) were transferred to a sufficient-phosphate (SP, 1,000 μM KH2PO4) solution (Ca(NO3)2.4H2O 2 mM, NH4NO3 1.25 mM, KCl 0.1 mM, K2SO4 0.65 mM, MgSO4 0.65 mM, H3BO3 10.0 mM, (NH4)6Mo7O24 0.5 mM, MnSO4 1.0 mM, CuSO4.5H2O 0.1 mM, ZnSO4.7H2O 1.0 mM, Fe-EDTA 0.1 mM) and allowed to grow for 4 days (plants with 2–3 leaves). After 2–3 days of re-culturing in SP nutrient solution, half of the seedlings were transplanted into a low-phosphate (LP) nutrient solution with the same composition as the SP solution, except that the 1 mM KH2PO4 was replaced with 5 μM KH2PO4 plus 1 mM KCl. The plants were grown under a 32°C/25°C (day/night) temperature regime at a photon flux density (PFD) of 700 μmol m-2 s-1 with a 14 h/10 h light/dark cycle in a greenhouse with approximately 65% relative humidity. Roots and leaves were harvested at 0, 1, 2, 4 and 8 days of LP treatment, together with plants cultured for 8 days in SP solution (as a normal growth control), for small RNA analysis. The samples were frozen in liquid nitrogen immediately and stored at -80°C until further analysis. Each biological replicate contained material from 15–20 plants. Total RNA was extracted as previously described in Molecular Cloning (Sambrook and Russell, 1989) and was then subjected to two additional chloroform washes prior to nucleic acid precipitation. The small RNA digital expression analysis based on HiSeq high-throughput sequencing uses sequencing by synthesis (SBS).
The 50 nt sequence tags from HiSeq sequencing first go through data cleaning, which removes low-quality tags and several kinds of contaminants. The length distribution of clean tags is then summarized. The standard bioinformatics analysis then annotates the clean tags into different categories and uses those that cannot be annotated to any category to predict novel miRNAs and base edits of potentially known miRNAs. Known miRNA expression is compared between the two samples to identify differentially expressed miRNAs. The procedure is as follows: (1) normalize the expression of miRNAs in the two samples (control and treatment) to transcripts per million (TPM), using the normalization formula: normalized expression = actual miRNA count / total count of clean reads × 1,000,000; (2) calculate the fold change and P-value from the normalized expression, then generate the log2-ratio plot and scatter plot.
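A minimal sketch of steps (1) and (2) above; the small pseudocount is an assumption of this sketch, added to handle miRNAs detected in only one sample.

```python
import math

def normalize_tpm(counts, total_clean_reads):
    """Normalized expression = actual miRNA count / total clean reads x 1,000,000."""
    return {mirna: c / total_clean_reads * 1_000_000 for mirna, c in counts.items()}

def log2_fold_change(norm_control, norm_treatment, pseudo=0.01):
    """log2(treatment / control) per miRNA.

    The pseudocount avoids division by zero for miRNAs absent in one sample
    (an assumption; the original pipeline may handle such cases differently).
    """
    mirnas = set(norm_control) | set(norm_treatment)
    return {
        m: math.log2((norm_treatment.get(m, 0) + pseudo) / (norm_control.get(m, 0) + pseudo))
        for m in mirnas
    }
```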
This visualization product displays marine macro-litter (> 2.5cm) material category percentages per beach per year from the Marine Strategy Framework Directive (MSFD) monitoring surveys.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol, in order to follow the approach of OSPAR, which no longer includes these data in the monitoring;
- Selection of MSFD surveys only (exclusion of other monitoring, cleaning and research operations);
- Exclusion of beaches without coordinates;
- Removal of some litter types such as organic litter, small fragments (paraffin and wax; items > 2.5cm) and pollutants. The list of selected items is attached to this metadata. This list was created using the EU Marine Beach Litter Baselines, the European Threshold Value for Macro Litter on Coastlines and the Joint list of litter categories for marine macro-litter monitoring from JRC (these three documents are attached to this metadata);
- Exclusion of the "faeces" category: this concerns the items for dog excrement in bags of the OSPAR (item code: 121) and ITA (item code: IT59) reference lists;
- Normalization of survey lengths to 100 m & 1 survey / year: in some cases, the survey length was not exactly 100 m, so in order to compare the abundance of litter from different beaches, a normalization is applied using this formula: Number of items (normalized to 100 m) = number of litter items x (100 / survey length). This normalized number of items is then summed to obtain the total normalized number of litter items for each survey. Sometimes the survey length was null or equal to 0; assuming that the MSFD protocol was applied, the length has been set to 100 m in these cases.
To calculate the percentage for each material category, the formula applied is: Material (%) = (∑ number of items (normalized at 100 m) of each material category) * 100 / (∑ number of items (normalized at 100 m) of all categories)
The material categories differ between reference lists (OSPAR, ITA, TSG-ML, UNEP, UNEP-MARLIN, JLIST). In order to apply a common procedure for all the surveys, the material categories have been harmonized.
More information is available in the attached documents.
Warning: the absence of data on the map does not necessarily mean that they do not exist, but that no information has been entered in the Marine Litter Database for this area.
This visualization product displays the abundance of cigarette related items of marine macro-litter (> 2.5cm) per beach per year from non-MSFD monitoring surveys, research & cleaning operations, without UNEP-MARLIN data.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol, in order to follow the approach of OSPAR, which no longer includes these data in the monitoring;
- Selection of surveys from non-MSFD monitoring, cleaning and research operations;
- Exclusion of beaches without coordinates;
- Selection of cigarette related items only. The list of selected items is attached to this metadata. This list was created using the EU Marine Beach Litter Baselines and the EU Threshold Value for Macro Litter on Coastlines from JRC (these two documents are attached to this metadata);
- Exclusion of surveys without an associated length;
- Exclusion of surveys referring to the UNEP-MARLIN list: the UNEP-MARLIN protocol differs from the other types of monitoring in that cigarette butts are surveyed in a 10 m square. To avoid comparing abundances from very different protocols, the choice was made to distinguish, in two maps, the cigarette related items results associated with the UNEP-MARLIN list from the others;
- Normalization of survey lengths to 100 m & 1 survey / year: in some cases, the survey length was not 100 m, so in order to compare the abundance of litter from different beaches, a normalization is applied using this formula: Number of cigarette related items of the survey (normalized to 100 m) = number of cigarette related items of the survey x (100 / survey length). This normalized number of cigarette related items is then summed to obtain the total normalized number for each survey. Finally, the median abundance of cigarette related items for each beach and year is calculated from these normalized abundances per survey.
Percentiles 50, 75, 95 & 99 have been calculated taking into account cigarette related items from other source data (excluding the UNEP-MARLIN protocol) for all years.
More information is available in the attached documents.
Warning: the absence of data on the map doesn't necessarily mean that they don't exist, but that no information has been entered in the Marine Litter Database for this area.
Attribution-NonCommercial 2.0 (CC BY-NC 2.0): https://creativecommons.org/licenses/by-nc/2.0/
License information was derived automatically
International Journal of Social Science and Policy,
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This visualization product displays the total abundance of marine macro-litter (> 2.5cm) per beach per year from non-MSFD monitoring surveys, research & cleaning operations.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB).
The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing was necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol, in order to follow the approach of OSPAR, which no longer includes these data in the monitoring;
- Selection of surveys from non-MSFD monitoring, cleaning and research operations;
- Exclusion of beaches without coordinates;
- Some categories & some litter types like organic litter, small fragments (paraffin and wax; items > 2.5cm) and pollutants have been removed. The list of selected items is attached to this metadata. This list was created using EU Marine Beach Litter Baselines and EU Threshold Value for Macro Litter on Coastlines from JRC (these two documents are attached to this metadata).
- Exclusion of surveys without associated length;
- Normalization of survey lengths to 100 m & 1 survey / year: in some cases, the survey length was not 100 m, so in order to compare the abundance of litter from different beaches, a normalization is applied using this formula:
Number of items (normalized to 100 m) = number of litter items x (100 / survey length)
Then, this normalized number of items is summed to obtain the total normalized number of litter for each survey. Finally, the median abundance for each beach and year is calculated from these normalized abundances per survey.
Percentiles 50, 75, 95 & 99 have been calculated taking into account other source data for all years.
More information is available in the attached documents.
Warning: the absence of data on the map doesn't necessarily mean that they don't exist, but that no information has been entered in the Marine Litter Database for this area.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This visualization product displays marine litter material category percentages per year per beach from research & cleaning operations. EMODnet Chemistry included the gathering of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale. Preliminary processing was necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol;
- Separation of monitoring surveys from research & cleaning operations;
- Exclusion of beaches with no coordinates;
- Normalization of survey lengths and survey numbers per year;
- Removal of some categories and litter types.
To calculate percentages, the formula applied is: Material (%) = (total number of items (normalized at 100 m) of each material category) / (total number of items (normalized at 100 m) of all categories) * 100. The material categories differ between reference lists. In order to apply a common procedure for all the surveys, the material categories have been harmonized. Eleven material categories have been taken into account for this product, and information on data processing and calculation is detailed in the attached document (p. 14).