Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Shown are mean ± SD. Values were extrapolated from a smaller temperature range, using linear or exponential fits (details in Materials and Methods and Table S1). 1: As defined by the onset time of the impulse response, K1. 2: Characteristic time constant, defined as τ = 1/f3dB. 3: Information transfer rate (Shannon's formula). 4: Information transfer rate (triple extrapolation method).
Gamification is a strategy to stimulate the social and human factors (SHFs) that influence software development productivity. Software development teams must improve their productivity to meet the challenges faced by software development organizations, yet productivity analysis has traditionally included only technical factors. The literature shows that SHFs matter for productivity, and gamification elements can enhance such factors to improve performance. Thus, to design strategies that enhance a specific SHF, it is essential to identify how gamification elements relate to these factors. The objective of this research is to determine the relationship between gamification elements and the SHFs that influence the productivity of software development teams. This research included the design of a scoring template to collect data from experts. Importance was calculated using the Simple Additive Weighting (SAW) method, a tool framed in decision theory, considering three criteria: cumulative score, matches in inclusion, and values. The resulting importance relationships serve as reference values for designing gamification strategies that promote improved productivity, extending the path toward analyzing the effect of gamification on software development productivity and facilitating the design and implementation of such strategies.
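As a rough illustration of the SAW step, the R sketch below ranks gamification elements by a weighted sum of normalized criterion scores; the element names, scores, and weights are hypothetical, since the paper's actual scoring template and weights are not reproduced here.

# Minimal SAW (Simple Additive Weighting) sketch with made-up data.
# Rows: gamification elements; columns: the three criteria named above.
scores <- data.frame(
  element    = c("points", "badges", "leaderboards"),
  cumulative = c(42, 35, 28),    # cumulative score (hypothetical)
  matches    = c(10, 8, 12),     # matches in inclusion (hypothetical)
  values     = c(3.5, 4.0, 2.5)  # values criterion (hypothetical)
)
weights <- c(cumulative = 0.5, matches = 0.3, values = 0.2)  # assumed weights

# Normalize each benefit criterion by its column maximum, then take the
# weighted sum, which is the core of SAW.
norm <- sweep(scores[, -1], 2, apply(scores[, -1], 2, max), "/")
scores$saw_score <- as.numeric(as.matrix(norm) %*% weights)
scores[order(-scores$saw_score), ]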
This dataset is an annual time series of Landsat Analysis Ready Data (ARD)-derived Normalized Difference Water Index (NDWI) computed from Landsat 5 Thematic Mapper (TM) and Landsat 8 Operational Land Imager (OLI). To ensure a consistent dataset, Landsat 7 has not been used because the Scan Line Corrector (SLC) failure creates gaps in the data. NDWI quantifies plant water content by measuring the difference between the Near-Infrared (NIR) and Short-Wave Infrared (SWIR) (or Green) channels using this generic formula: NDWI = (NIR - SWIR) / (NIR + SWIR). For Landsat sensors, this corresponds to the following bands: Landsat 5, NDWI = (Band 4 - Band 2) / (Band 4 + Band 2); Landsat 8, NDWI = (Band 5 - Band 3) / (Band 5 + Band 3). NDWI values range from -1 to +1. NDWI is a good proxy for plant water stress and is therefore useful for drought monitoring and early warning. NDWI is sometimes also referred to as the Normalized Difference Moisture Index (NDMI). The standard deviation is also provided for each time step. Data format: GeoTIFF. This dataset has been generated with the Swiss Data Cube (http://www.swissdatacube.ch).
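A minimal sketch of the band arithmetic in R, with small matrices standing in for the GeoTIFF rasters (reading the actual files would require a raster package such as terra; the reflectance values below are hypothetical):

# NDWI from two co-registered band arrays. For Landsat 8 the first band
# would be Band 5 (NIR) and the second Band 3 (Green).
nir   <- matrix(c(0.30, 0.28, 0.25, 0.31), nrow = 2)
green <- matrix(c(0.10, 0.12, 0.22, 0.09), nrow = 2)

ndwi <- (nir - green) / (nir + green)  # values fall in [-1, +1]
ndwi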
With the Monte Carlo code ACAT, we have calculated the sputtering yields of fifteen fusion-relevant mono-atomic materials (Be, B, C, Al, Si, Ti, Cr, Fe, Co, Ni, Cu, Zr, Mo, W, Re) under obliquely incident light ions (H+, D+, T+, He+) at incident energies of 50 eV to 10 keV. An improved formula for the dependence of the normalized sputtering yield on the incident angle has been fitted to the ACAT data, normalized by the normal-incidence data, to derive the best-fit values of the three physical variables included in the formula as functions of incident energy. We then found suitable functions of incident energy that fit these values most closely. The average relative difference between the normalized ACAT data and the formula with these functions is less than 10% in most cases, and less than 20% for the rest, at the incident energies considered, for all combinations of the projectiles and target materials. We have also compared the calculated data and the formula with available normalized experimental data at given incident energies. The best-fit values of the parameters included in the functions are tabulated for all combinations, for reference use. Keywords: sputtering, erosion, plasma-material interactions, first wall materials, fitting formula, Monte Carlo method, binary collision approximation, computer simulation
This visualization product displays marine macro-litter (> 2.5cm) material category percentages per beach per year from the Marine Strategy Framework Directive (MSFD) monitoring surveys.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing steps were necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol, in order to follow the approach of OSPAR, which no longer includes these data in its monitoring;
- Selection of MSFD surveys only (exclusion of other monitoring, cleaning and research operations);
- Exclusion of beaches without coordinates;
- Removal of some litter types such as organic litter, small fragments (paraffin and wax; items > 2.5cm) and pollutants. The list of selected items is attached to this metadata. This list was created using the EU Marine Beach Litter Baselines, the European Threshold Value for Macro Litter on Coastlines and the Joint list of litter categories for marine macro-litter monitoring from JRC (these three documents are attached to this metadata);
- Exclusion of the "faeces" category: more precisely, this concerns the items for dog excrement in bags from the OSPAR (item code: 121) and ITA (item code: IT59) reference lists;
- Normalization of survey lengths to 100 m and one survey per year: in some cases, the survey length was not exactly 100 m, so in order to compare the abundance of litter from different beaches a normalization is applied using this formula:
Number of items (normalized by 100 m) = Number of items of each litter type x (100 / survey length)
This normalized number of items is then summed to obtain the total normalized number of litter items for each survey. Sometimes the survey length was null or equal to 0; assuming that the MSFD protocol was applied, the length was set to 100 m in these cases.
To calculate the percentage for each material category, the formula applied is: Material (%) = (∑ number of items (normalized at 100 m) of each material category) x 100 / (∑ number of items (normalized at 100 m) of all categories)
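A minimal sketch of these two steps (normalization to 100 m, then material percentages) in R, on a hypothetical survey table whose column names are illustrative rather than the actual MLDB schema:

# Hypothetical survey records: one row per litter type per survey.
surveys <- data.frame(
  survey_id = c(1, 1, 2, 2),
  material  = c("plastic", "metal", "plastic", "glass"),
  n_items   = c(50, 5, 80, 10),
  length_m  = c(80, 80, 120, 120)  # actual surveyed beach length
)

# Step 1: normalize counts to a 100 m stretch.
surveys$n_norm <- surveys$n_items * (100 / surveys$length_m)

# Step 2: percentage of each material category over all categories.
totals <- tapply(surveys$n_norm, surveys$material, sum)
round(100 * totals / sum(totals), 1)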
The material categories differ between reference lists (OSPAR, ITA, TSG-ML, UNEP, UNEP-MARLIN, JLIST). In order to apply a common procedure for all the surveys, the material categories have been harmonized.
More information is available in the attached documents.
Warning: the absence of data on the map does not necessarily mean that they do not exist, but that no information has been entered in the Marine Litter Database for this area.
This visualization product displays marine macro-litter (> 2.5cm) material category percentages per beach per year from non-MSFD monitoring surveys, research and cleaning operations.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing steps were necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol, in order to follow the approach of OSPAR, which no longer includes these data in its monitoring;
- Selection of surveys from non-MSFD monitoring, cleaning and research operations;
- Exclusion of beaches without coordinates;
- Exclusion of surveys without an associated length;
- Removal of some litter types such as organic litter, small fragments (paraffin and wax; items > 2.5cm) and pollutants. The list of selected items is attached to this metadata. This list was created using the EU Marine Beach Litter Baselines, the European Threshold Value for Macro Litter on Coastlines and the Joint list of litter categories for marine macro-litter monitoring from JRC (these three documents are attached to this metadata);
- Exclusion of the "faeces" category: more precisely, this concerns the items for dog excrement in bags from the OSPAR (item code: 121) and ITA (item code: IT59) reference lists;
- Normalization of survey lengths to 100 m and one survey per year: in some cases, the survey length was not 100 m, so in order to compare the abundance of litter from different beaches a normalization is applied using this formula:
Number of items (normalized by 100 m) = Number of items of each litter type x (100 / survey length)
This normalized number of items is then summed to obtain the total normalized number of litter items for each survey.
To calculate the percentage for each material category, the formula applied is: Material (%) = (∑ number of items (normalized at 100 m) of each material category) x 100 / (∑ number of items (normalized at 100 m) of all categories)
The material categories differ between reference lists (OSPAR, TSG-ML, UNEP, UNEP-MARLIN, JLIST). In order to apply a common procedure for all the surveys, the material categories have been harmonized.
More information is available in the attached documents.
Warning: the absence of data on the map does not necessarily mean that they do not exist, but that no information has been entered in the Marine Litter Database for this area.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset includes intermediate data from RiboBase that generates translation efficiency (TE). The code to generate the files can be found at https://github.com/CenikLab/TE_model.
We have uploaded demo HeLa .ribo files; due to the large storage requirements of the full dataset, we recommend contacting Dr. Can Cenik directly to request access to the complete version of RiboBase if you need the original data.
A detailed explanation of each file:
human_flatten_ribo_clr.rda: ribosome profiling clr normalized data with GEO GSM ids in columns and genes in rows in human.
human_flatten_rna_clr.rda: matched RNA-seq clr normalized data with GEO GSM ids in columns and genes in rows in human.
human_flatten_te_clr.rda: TE clr data with GEO GSM ids in columns and genes in rows in human.
human_TE_cellline_all_plain.csv: TE clr data with genes in rows and cell lines in columns in human.
human_RNA_rho_new.rda: matched RNA-seq proportional similarity data as genes by genes matrix in human.
human_TE_rho.rda: TE proportional similarity data as genes by genes matrix in human.
mouse_flatten_ribo_clr.rda: ribosome profiling clr normalized data with GEO GSM ids in columns and genes in rows in mouse.
mouse_flatten_rna_clr.rda: matched RNA-seq clr normalized data with GEO GSM ids in columns and genes in rows in mouse.
mouse_flatten_te_clr.rda: TE clr data with GEO GSM ids in columns and genes in rows in mouse.
mouse_TE_cellline_all_plain.csv: TE clr data with genes in rows and cell lines in columns in mouse.
mouse_RNA_rho_new.rda: matched RNA-seq proportional similarity data as genes by genes matrix in mouse.
mouse_TE_rho.rda: TE proportional similarity data as genes by genes matrix in mouse.
All the data passed quality control. There are 1054 human samples and 835 mouse samples, each satisfying:
* coverage > 0.1 X
* CDS percentage > 70%
* R2 between RNA and RIBO >= 0.188 (to remove outliers)
All ribosome profiling data here are non-deduplicated, winsorized data, paired with deduplicated, non-winsorized RNA-seq data (although the files are named "flatten", this refers only to the naming convention).
#### Code
To read the .rda files, use load("rdaname.rda") in R.
To calculate proportional similarity from clr data:
# lr2rho() is an internal propr helper that computes the rho
# proportionality metric from a clr-transformed matrix.
library(propr)
human_TE_homo_rho <- propr:::lr2rho(as.matrix(clr_data))
# The result is a genes-by-genes matrix; label both dimensions
# with the gene names from clr_data.
rownames(human_TE_homo_rho) <- colnames(human_TE_homo_rho) <- rownames(clr_data)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This visualization product displays the total abundance of marine macro-litter (> 2.5cm) per beach per year from non-MSFD monitoring surveys, research & cleaning operations.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB).
The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing steps were necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol, in order to follow the approach of OSPAR, which no longer includes these data in its monitoring;
- Selection of surveys from non-MSFD monitoring, cleaning and research operations;
- Exclusion of beaches without coordinates;
- Removal of some categories and litter types such as organic litter, small fragments (paraffin and wax; items > 2.5cm) and pollutants. The list of selected items is attached to this metadata. This list was created using the EU Marine Beach Litter Baselines and the EU Threshold Value for Macro Litter on Coastlines from JRC (these two documents are attached to this metadata).
- Exclusion of surveys without associated length;
- Normalization of survey lengths to 100 m and one survey per year: in some cases, the survey length was not 100 m, so in order to compare the abundance of litter from different beaches a normalization is applied using this formula:
Number of items (normalized by 100 m) = Number of items of each litter type x (100 / survey length)
Then, this normalized number of items is summed to obtain the total normalized number of litter for each survey. Finally, the median abundance for each beach and year is calculated from these normalized abundances per survey.
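A small sketch of this aggregation in R (median of the normalized per-survey totals for each beach and year; the column names are hypothetical):

# Hypothetical per-survey totals, already normalized to 100 m.
totals <- data.frame(
  beach  = c("A", "A", "A", "B", "B"),
  year   = c(2019, 2019, 2019, 2019, 2019),
  n_norm = c(120, 150, 90, 40, 60)
)

# Median abundance per beach and year.
aggregate(n_norm ~ beach + year, data = totals, FUN = median)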
Percentiles 50, 75, 95 & 99 have been calculated taking into account data from other sources for all years.
More information is available in the attached documents.
Warning: the absence of data on the map doesn't necessarily mean that they don't exist, but that no information has been entered in the Marine Litter Database for this area.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dissolved organic matter molecular analyses were performed on a Solarix FT-ICR-MS equipped with a 15 Tesla superconducting magnet (Bruker Daltonik) using an electrospray ionization source (Bruker Apollo II) in negative ion mode. Molecular formula calculation for all samples was performed using a Matlab (2010) routine that searches, with an error of < 0.5 ppm, for all potential combinations of elements, with C, H and O unrestricted and N ≤ 4, S ≤ 2 and P ≤ 1. The element combinations NSP, N2S, N3S, N4S, N2P, N3P, N4P, NS2, N2S2, N3S2, N4S2 and S2P were not allowed. Mass peak intensities were normalized relative to the total of molecular formulae in each sample according to previously published rules (Rossel et al., 2015; doi:10.1016/j.marchem.2015.07.002). The final dataset contained 7400 molecular formulae.
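For illustration, the 0.5 ppm mass-accuracy criterion amounts to the check sketched below in R (the enumeration of candidate elemental combinations is omitted, and the masses are hypothetical):

# Keep a candidate formula only if its exact mass lies within 0.5 ppm
# of the measured mass (hypothetical values for a single peak).
measured    <- 369.17471   # measured m/z of a peak
theoretical <- 369.17455   # exact mass of a candidate formula
ppm_error   <- (measured - theoretical) / theoretical * 1e6
abs(ppm_error) < 0.5       # candidate accepted if TRUE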
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This visualization product displays the abundance of fishing- and aquaculture-related plastic items among marine macro-litter (> 2.5cm) per beach per year from the Marine Strategy Framework Directive (MSFD) monitoring surveys.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of beach litter have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB).
The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols and reference lists used on a European scale.
Preliminary processing steps were necessary to harmonize all the data:
- Exclusion of the OSPAR 1000 protocol, in order to follow the approach of OSPAR, which no longer includes these data in its monitoring;
- Selection of MSFD surveys only (exclusion of other monitoring, cleaning and research operations);
- Exclusion of beaches without coordinates;
- Selection of fishing and aquaculture related plastic items only. The list of selected items is attached to this metadata. This list was created using EU Marine Beach Litter Baselines and EU Threshold Value for Macro Litter on Coastlines from JRC (these two documents are attached to this metadata);
- Normalization of survey lengths to 100 m and one survey per year: in some cases, the survey length was not exactly 100 m, so in order to compare the abundance of litter from different beaches a normalization is applied using this formula:
Number of fishing & aquaculture related plastic items of the survey (normalized by 100 m) = Number of fishing & aquaculture related items of the survey x (100 / survey length)
Then, this normalized number of fishing & aquaculture related plastic items is summed to obtain the total normalized number of fishing & aquaculture related plastic items for each survey. Finally, the median abundance of fishing & aquaculture related plastic items for each beach and year is calculated from these normalized abundances of fishing & aquaculture related items per survey.
Sometimes the survey length was null or equal to 0. Assuming that the MSFD protocol has been applied, the length has been set at 100m in these cases.
Percentiles 50, 75, 95 & 99 have been calculated taking into account fishing & aquaculture related plastic items from MSFD data for all years.
More information is available in the attached documents.
Warning: the absence of data on the map doesn't necessarily mean that they don't exist, but that no information has been entered in the Marine Litter Database for this area.
This visualization product displays the density of plastic bags per trawl.
EMODnet Chemistry included the collection of marine litter in its 3rd phase. Since the beginning of 2018, data of seafloor litter collected by international fish-trawl surveys have been gathered and processed in the EMODnet Chemistry Marine Litter Database (MLDB). The harmonization of all the data has been the most challenging task considering the heterogeneity of the data sources, sampling protocols (OSPAR and MEDITS protocols) and reference lists used on a European scale. Moreover, within the same protocol, different gear types are deployed during fishing bottom trawl surveys.
In cases where the wingspread and/or number of items were unknown, data could not be used because these fields are needed to calculate the density. Data collected before 2011 are affected by this filter.
When the distance reported in the data was null, it was calculated from:
- the ground speed and the haul duration, using this formula: Distance (km) = Haul duration (h) x Ground speed (km/h);
- the trawl coordinates, if the ground speed and the haul duration were not filled in.
The swept area is calculated from the wingspread (which depends on the fishing gear type) and the distance trawled: Swept area (km²) = Distance (km) * Wingspread (km)
Densities have been calculated on each trawl and year using the following computation: Density of plastic bags (number of items per km²) = ∑Number of plastic bags related items / Swept area (km²)
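A compact sketch of the three formulas above in R, with hypothetical haul values:

# One hypothetical haul whose distance is reconstructed from speed and time.
ground_speed_kmh <- 5.6    # km/h
haul_duration_h  <- 0.5    # h
wingspread_km    <- 0.02   # km; depends on the fishing gear type
n_plastic_bags   <- 3

distance_km <- haul_duration_h * ground_speed_kmh  # 2.8 km
swept_area_km2 <- distance_km * wingspread_km      # 0.056 km^2
density <- n_plastic_bags / swept_area_km2         # ~54 items per km^2
density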
Percentiles 50, 75, 95 & 99 have been calculated taking into account data for all years.
The list of selected items for this product is attached to this metadata. Information on data processing and calculation is detailed in the attached methodology document.
Warning: the absence of data on the map doesn't necessarily mean that they don't exist, but that no information has been entered in the Marine Litter Database for this area.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Thalassemia is an inherited blood disorder and is among the five most prevalent birth-related complications, especially in Southeast Asia. Thalassemia is classified into two main types, alpha-thalassemia and beta-thalassemia, based on the reduced or absent production of the corresponding globin chains. Over the past couple of decades, researchers have increasingly focused on applying machine learning algorithms to medical data to identify hidden patterns that assist in the prediction and classification of diseases and patients. To effectively analyze more complex medical data, more robust machine learning models have been developed to address various health issues. Many researchers have employed different artificial intelligence-based algorithms, such as Random Forest, Decision Tree, Support Vector Machine, ensemble-based classifiers, and deep neural networks, to accurately detect carriers of beta-thalassemia by training on both diseased and normal test reports. While genetic testing is required for the most accurate diagnosis, a simple Complete Blood Count (CBC) report can be used to estimate the likelihood of being a beta-thalassemia carrier. Various models have successfully identified beta-thalassemia carriers using CBC data alone, but these models perform classification and prediction on normalized data: they achieve high accuracy, but at the cost of substantial changes to the dataset through class normalization. In this research, we have proposed a Dominance-based Rough Set Approach model that classifies patients without balancing the classes (Normal, Abnormal), and the model achieved good performance (91% accuracy). In terms of generalization, the proposed model obtained 89% accuracy on unseen data, comparable to or better than existing approaches.