License: CC0 1.0 (https://spdx.org/licenses/CC0-1.0.html)
Background
The Infinium EPIC array measures the methylation status of more than 850,000 CpG sites. The EPIC BeadChip uses two probe designs, Infinium Type I and Type II, which exhibit different technical characteristics that may confound analyses. Numerous normalization and pre-processing methods have been developed to reduce probe-type bias as well as other issues such as background and dye bias.
Methods
This study evaluates the performance of various normalization methods using 16 replicated samples and three metrics: absolute beta-value difference, overlap of non-replicated CpGs between replicate pairs, and effect on beta-value distributions. Additionally, we carried out Pearson’s correlation and intraclass correlation coefficient (ICC) analyses using both raw and SeSAMe 2 normalized data.
Results
The method we define as SeSAMe 2, which consists of the standard SeSAMe pipeline with an additional round of QC (pOOBAH masking), was found to be the best-performing normalization method, while quantile-based methods performed worst. Whole-array Pearson's correlations were high. However, in agreement with previous studies, a substantial proportion of the probes on the EPIC array showed poor reproducibility (ICC < 0.50). The majority of poorly performing probes have beta values close to either 0 or 1 and relatively low standard deviations. These results suggest that low probe reliability largely reflects limited biological variation rather than technical measurement variation. Importantly, normalizing the data with SeSAMe 2 dramatically improved ICC estimates, with the proportion of probes with ICC values > 0.50 increasing from 45.18% (raw data) to 61.35% (SeSAMe 2).
Methods
Study Participants and Samples
The whole blood samples were obtained from the Health, Well-being and Aging (Saúde, Bem-estar e Envelhecimento, SABE) study cohort. SABE is a cohort of census-drawn elderly people from the city of São Paulo, Brazil, followed up every five years since 2000, with DNA first collected in 2010. Samples from 24 elderly adults were collected at two time points for a total of 48 samples. The first time point is the 2010 collection wave, performed from 2010 to 2012, and the second time point was set in 2020 in a COVID-19 monitoring project (9 ± 0.71 years apart). The 24 individuals were 67.41 ± 5.52 years of age (mean ± standard deviation) at time point one and 76.41 ± 6.17 at time point two, and comprised 13 men and 11 women.
All individuals enrolled in the SABE cohort provided written consent, and the ethics protocols were approved by local and national institutional review boards (COEP/FSP/USP OF.COEP/23/10, CONEP 2044/2014, CEP HIAE 1263-10, and University of Toronto RIS 39685).
Blood Collection and Processing
Genomic DNA was extracted from whole peripheral blood samples collected in EDTA tubes. DNA extraction and purification followed the manufacturer's recommended protocols, using the Qiagen AutoPure LS kit with Gentra automated extraction (first time point) or manual extraction (second time point) due to discontinuation of the equipment, but using the same commercial reagents. DNA was quantified using a NanoDrop spectrophotometer and diluted to 50 ng/µL. To assess the reproducibility of the EPIC array, we also obtained technical replicates for 16 of the 48 samples, for a total of 64 samples submitted for further analyses. Whole-genome sequencing (WGS) data are also available for the samples described above.
Characterization of DNA Methylation using the EPIC array
Approximately 1,000 ng of human genomic DNA was used for bisulphite conversion. Methylation status was evaluated using the MethylationEPIC array at The Centre for Applied Genomics (TCAG, Hospital for Sick Children, Toronto, Ontario, Canada), following protocols recommended by Illumina (San Diego, California, USA).
Processing and Analysis of DNA Methylation Data
The R/Bioconductor packages Meffil (version 1.1.0), RnBeads (version 2.6.0), minfi (version 1.34.0), and wateRmelon (version 1.32.0) were used to import, process, and perform quality control (QC) analyses on the methylation data. Starting with the 64 samples, we first used Meffil to infer the sex of the samples and compared the inferred sex to the reported sex. Using the 59 SNP probes that are available as part of the EPIC array, we calculated concordance between the methylation intensities of the samples and the corresponding genotype calls extracted from their WGS data. We then performed comprehensive sample-level and probe-level QC using the RnBeads QC pipeline. Specifically, we (1) removed probes whose target sequences overlap a SNP at any base, (2) removed known cross-reactive probes, (3) used the iterative Greedycut algorithm to filter out samples and probes using a detection p-value threshold of 0.01, and (4) removed probes for which more than 5% of the samples had a missing value. Since RnBeads does not have a function to perform probe filtering based on bead number, we used the wateRmelon package to extract bead numbers from the IDAT files and calculated the proportion of samples with bead number < 3; probes with more than 5% of samples having a low bead number (< 3) were removed. For the comparison of normalization methods, we also computed detection p-values from the empirical distribution of out-of-band probes with the pOOBAH() function in the SeSAMe (version 1.14.2) R package, with a p-value threshold of 0.05 and the combine.neg parameter set to TRUE. Where pOOBAH filtering was carried out, it was done in parallel with the QC steps above, and the probes flagged by the two analyses were combined and removed from the data.
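As a rough illustration of the probe-level thresholds above (detection p-values, missing values, bead counts), the following NumPy sketch flags failing probes; it is a simplified, non-iterative stand-in for the actual RnBeads/wateRmelon steps (Greedycut, in particular, is iterative), and the variable names are illustrative.

import numpy as np

def failing_probes(det_pvals, betas, beads, p_thresh=0.01, max_frac=0.05, min_beads=3):
    # det_pvals, betas, beads: arrays of shape (n_probes, n_samples).
    # A probe fails if more than max_frac of samples have a missing
    # beta value, a detection p-value above p_thresh, or fewer than
    # min_beads beads backing the measurement.
    frac_missing = np.mean(np.isnan(betas), axis=1)
    frac_high_p = np.mean(det_pvals > p_thresh, axis=1)
    frac_low_beads = np.mean(beads < min_beads, axis=1)
    return (frac_missing > max_frac) | (frac_high_p > max_frac) | (frac_low_beads > max_frac)

# keep = ~failing_probes(det_pvals, betas, beads); betas_qc = betas[keep, :]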
Normalization Methods Evaluated
The normalization methods compared in this study were implemented using different R/Bioconductor packages and are summarized in Figure 1. All data were read into the R workspace as RGChannelSets using minfi's read.metharray.exp() function. One sample flagged during QC was removed, and further normalization steps were carried out on the remaining 63 samples. Prior to all normalizations with minfi, probes that did not pass QC were removed. Noob, SWAN, Quantile, Funnorm, and Illumina normalizations were implemented using minfi. BMIQ normalization was implemented with ChAMP (version 2.26.0), using as input the raw data produced by minfi's preprocessRaw() function. In the combination of Noob with BMIQ (Noob+BMIQ), BMIQ normalization was carried out using minfi's Noob-normalized data as input. Noob normalization was also implemented with SeSAMe, using nonlinear dye-bias correction. For SeSAMe normalization, two scenarios were tested; for both, the inputs were unmasked SigDFs converted from minfi's RGChannelSets. In the first, which we call "SeSAMe 1", SeSAMe's pOOBAH masking was not executed, and the only probes filtered out of the dataset prior to normalization were those that did not pass QC in the previous analyses. In the second scenario, which we call "SeSAMe 2", pOOBAH masking was carried out on the unfiltered dataset and masked probes were removed; this was followed by removal of any probes that did not pass the previous QC and had not already been removed by pOOBAH. SeSAMe 2 therefore has two rounds of probe removal. Noob normalization with nonlinear dye-bias correction was then carried out on the filtered dataset. Methods were compared by subsetting the 16 replicated samples and evaluating the effect each normalization method had on the absolute difference in beta values (|Δβ|) between replicate pairs.
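The replicate-based comparison can be summarized in a few lines: for each normalization method, compute the absolute difference in beta values between the members of each replicate pair. A minimal sketch, with illustrative variable names:

import numpy as np

def mean_abs_beta_diff(betas, rep_pairs):
    # betas: (n_probes, n_samples) matrix of normalized beta values.
    # rep_pairs: list of (i, j) column indices of technical replicate pairs.
    # Returns the per-probe mean absolute difference; smaller values mean
    # a normalization method leaves replicates more similar.
    diffs = [np.abs(betas[:, i] - betas[:, j]) for i, j in rep_pairs]
    return np.mean(diffs, axis=0)

# Example: whole-array summary score for two methods on the same pairs
# score_sesame2 = mean_abs_beta_diff(betas_sesame2, pairs).mean()
# score_quantile = mean_abs_beta_diff(betas_quantile, pairs).mean()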
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Normalization of RNA-Seq data has proven essential to ensure accurate inferences and replication of findings. Hence, various normalization methods have been proposed to address the technical artifacts that can be present in high-throughput sequencing transcriptomic studies. In this study, we set out to compare the widely used library size normalization methods (UQ, TMM, and RLE) and across-sample normalization methods (SVA, RUV, and PCA) for RNA-Seq data using publicly available data from The Cancer Genome Atlas (TCGA) cervical cancer (CESC) study. Additionally, an extensive simulation study was completed to compare the performance of the across-sample normalization methods in estimating technical artifacts. Lastly, we investigated the reduction in degrees of freedom in the normalized data and its impact on downstream differential expression analysis results. Based on this study, the TMM and RLE library size normalization methods give similar results for the CESC dataset. In addition, the simulated dataset results show that the SVA ("BE") method outperforms the other methods (SVA "Leek", PCA) by correctly estimating the number of latent artifacts. Moreover, ignoring the loss of degrees of freedom due to normalization results in inflated type I error rates. We recommend adjusting not only for library size differences but also assessing known and unknown technical artifacts in the data and, if needed, completing across-sample normalization. In addition, we suggest including the known and estimated latent artifacts in the design matrix to correctly account for the loss in degrees of freedom, as opposed to completing the analysis on the post-processed normalized data.
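The final recommendation, putting estimated latent artifacts into the design matrix rather than testing on residualized data, can be illustrated with a small simulated sketch using statsmodels; the single latent factor and effect sizes below are placeholders, not results from the study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40
group = np.repeat([0.0, 1.0], n // 2)            # condition of interest
sv = rng.normal(size=n)                          # one estimated latent artifact
y = 0.5 * group + 0.8 * sv + rng.normal(size=n)  # expression of one gene

# Recommended: the latent factor enters the design matrix, so the test
# for the group effect uses the correct residual degrees of freedom.
X = sm.add_constant(np.column_stack([group, sv]))
print(sm.OLS(y, X).fit().pvalues[1])

# Anti-pattern: regress out the artifact first, then test the "cleaned"
# residuals; the lost degree of freedom is ignored, inflating type I error.
resid = sm.OLS(y, sm.add_constant(sv)).fit().resid
print(sm.OLS(resid, sm.add_constant(group)).fit().pvalues[1])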
License: MIT (https://opensource.org/licenses/MIT)
This Hospital Management System project features a fully normalized relational database designed to manage hospital data including patients, doctors, appointments, diagnoses, medications, and billing. The schema applies database normalization (1NF, 2NF, 3NF) to reduce redundancy and maintain data integrity, providing an efficient, scalable structure for healthcare data management. Included are SQL scripts to create tables and insert sample data, making it a useful resource for learning practical database design and normalization in a healthcare context.
Table S1 and Figures S1–S6. Table S1. List of primers: forward and reverse primers used for qPCR. Figure S1. Changes in total and polyA+ RNA during development. a) Amount of total RNA per embryo at different developmental stages. b) Amount of polyA+ RNA per 100 embryos at different developmental stages. Vertical bars represent standard errors. Figure S2. The TMM scaling factor. a) The TMM scaling factor estimated using datasets 1 and 2; we observe very similar values. b) The TMM scaling factor obtained using the replicates in dataset 2; the TMM values are very reproducible. c) The TMM scaling factor when RNA-seq data based on total RNA were used. Figure S3. Comparison of scales. We either square-root transformed the scales or used them directly, and compared the normalized fold-changes to RT-qPCR results. a) Transcripts with dynamic change pre-ZGA. b) Transcripts with decreased abundance post-ZGA. c) Transcripts with increased expression post-ZGA. Vertical bars represent standard deviations. Figure S4. Comparison of RT-qPCR results depending on RNA template (total or polyA+ RNA) and primers (random or oligo(dT) primers) for setd3 (a), gtf2e2 (b), and yy1a (c). The increase pre-ZGA is dependent on template (setd3 and gtf2e2) and not primer type. Figure S5. Efficiency-calibrated fold-changes for a subset of transcripts. Vertical bars represent standard deviations. Figure S6. Comparison of normalization methods using dataset 2 for transcripts with decreased expression post-ZGA (a) and increased expression post-ZGA (b). Vertical bars represent standard deviations. (PDF)
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
FPKM-normalized data from whole-transcriptome sequencing of corpus luteum tissue from lactating Holstein cows in the following physiological states: late luteal phase (control), early regression, late regression, first month of pregnancy (day 20), and second month of pregnancy (day 55 ± 3 days).
License: MIT (https://opensource.org/licenses/MIT)
Dataset for human osteoarthritis (OA): microarray gene expression (Affymetrix GPL570).
Contains expression data for 7 healthy control (normal) tissue samples and 7 osteoarthritis patient tissue samples from synovial / joint tissue.
Pre-processed for normalization (background correction, log-transformation, normalization) to remove technical variation.
Suitable for downstream analyses: differential gene expression (normal vs OA), subtype- or phenotype-based classification, machine learning.
Can act as a validation dataset when combining with other GEO datasets to increase sample size or test reproducibility.
Useful for biomarker discovery, pathway enrichment analysis (e.g., GO, KEGG), immune infiltration analysis, and subtype analysis in osteoarthritis research.
License: CC0 1.0 (https://spdx.org/licenses/CC0-1.0.html)
Gamification is a strategy to stimulate the social and human factors (SHF) that influence software development productivity. Software development teams must improve their productivity to meet the challenges faced by software development organizations, yet productivity analysis has traditionally included only technical factors. The literature shows the importance of SHFs in productivity, and gamification elements can enhance such factors to improve performance. Thus, to design strategies that enhance a specific SHF, it is essential to identify how gamification elements relate to these factors. The objective of this research is to determine the relationship between gamification elements and the SHF that influence the productivity of software development teams. This research included the design of a scoring template to collect data from experts. Importance was calculated using the Simple Additive Weighting (SAW) method as a tool framed in decision theory, considering three criteria: cumulative score, matches in inclusion, and values. The resulting importance relationships serve as reference values for designing gamification strategies that promote improved productivity, extending the path toward analyzing the effect of gamification on the productivity of software development.
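For reference, the SAW step can be sketched as follows; the decision matrix, weights, and benefit designations below are placeholders, not the study's actual expert scores.

import numpy as np

def saw_scores(decision, weights, benefit):
    # decision: (n_alternatives, n_criteria) raw scores.
    # weights:  (n_criteria,) weights summing to 1.
    # benefit:  (n_criteria,) booleans; True means higher is better.
    d = decision.astype(float)
    normalized = np.where(benefit, d / d.max(axis=0), d.min(axis=0) / d)
    return normalized @ weights  # composite importance per alternative

# Placeholder example: three gamification elements scored on three criteria
decision = np.array([[8, 5, 7], [6, 9, 4], [7, 7, 6]])
weights = np.array([0.5, 0.3, 0.2])
print(saw_scores(decision, weights, np.array([True, True, True])))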
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Background
Gene expression analysis is an essential part of biological and medical investigations. Quantitative real-time PCR (qPCR) is characterized by excellent sensitivity, dynamic range, and reproducibility, and is still regarded as the gold standard for quantifying transcript abundance. Parallelization of qPCR, such as on the microfluidic TaqMan Fluidigm Biomark platform, enables evaluation of multiple transcripts in samples treated under various conditions. Despite advanced technologies, correct evaluation of the measurements remains challenging. The most widely used methods for evaluating or calculating gene expression data include geNorm and ΔΔCt. They rely on one or several stable reference genes (RGs) for normalization, thus potentially causing biased results. We therefore applied multivariable regression with a tailored error model to overcome the necessity of stable RGs.
Results
We developed an RG-independent data normalization approach based on a tailored linear error model for parallel qPCR data, called LEMming. It uses the assumption that the mean Ct values within samples of similarly treated groups are equal. Performance of LEMming was evaluated in three data sets with different stability patterns of RGs and compared to the results of geNorm normalization. Data set 1 showed that both methods give similar results if stable RGs are available. Data set 2 included RGs which are stable according to geNorm criteria but became differentially expressed in normalized data evaluated by a t-test: geNorm-normalized data showed an effect of a shifted mean per gene per condition, whereas LEMming-normalized data did not. Comparing the decrease in standard deviation from raw data to geNorm and to LEMming, the latter was superior. In data set 3, stable RGs were available according to geNorm's average expression stability and pairwise variation, but t-tests of raw data contradicted this; normalization with these RGs resulted in distorted data contradicting the literature, while LEMming-normalized data did not.
Conclusions
If RGs are coexpressed but are not independent of the experimental conditions, the stability criteria based on inter- and intragroup variation fail. The linear error model we developed, LEMming, overcomes the dependency on RGs for parallel qPCR measurements, besides resolving biases of both technical and biological nature in qPCR. However, to distinguish systematic errors per treated group from a global treatment effect, an additional measurement is needed; quantification of total cDNA content per sample helps to identify systematic errors.
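As a rough illustration of the core assumption (equal mean Ct across samples within a treatment group), the following sketch removes per-sample offsets; it is not the published LEMming implementation, which fits a full linear error model.

import numpy as np

def center_samples(ct, groups):
    # ct:     (n_genes, n_samples) Ct values, NaN for failed wells.
    # groups: (n_samples,) treatment-group label per sample.
    # Within each group, shift every sample so its mean Ct equals the
    # group mean, reflecting the equal-group-mean assumption.
    ct = ct.copy()
    sample_means = np.nanmean(ct, axis=0)
    for g in np.unique(groups):
        idx = groups == g
        ct[:, idx] -= sample_means[idx] - sample_means[idx].mean()
    return ct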
License: MIT (https://opensource.org/licenses/MIT)
Raw and preprocessed microarray expression data from the GSE65194 cohort.
Includes samples from triple-negative breast cancer (TNBC), other breast cancer subtypes, and normal breast tissues.
Expression profiles generated using the Affymetrix Human Genome U133 Plus 2.0 Array (GPL570) platform.
Provides normalized gene expression values suitable for downstream analyses such as differential expression, subtype classification, and clustering.
Supports the identification of differentially expressed genes (DEGs) between TNBC, non-TNBC subtypes, and normal tissue.
Useful for transcriptomic analyses in breast cancer research, including subtype analysis, biomarker discovery, and comparative studies.
According to our latest research, the global Tick Data Normalization market size reached USD 1.02 billion in 2024, reflecting robust expansion driven by the increasing complexity and volume of financial market data. The market is expected to grow at a CAGR of 13.1% during the forecast period, reaching approximately USD 2.70 billion by 2033. This growth is fueled by the rising adoption of algorithmic trading, regulatory demands for accurate and consistent data, and the proliferation of advanced analytics across financial institutions. As per our analysis, the market’s trajectory underscores the critical role of data normalization in ensuring data integrity and operational efficiency in global financial markets.
The primary growth driver for the tick data normalization market is the exponential surge in financial data generated by modern trading platforms and electronic exchanges. With the proliferation of high-frequency trading and the integration of diverse market data feeds, financial institutions face the challenge of processing vast amounts of tick-by-tick data from multiple sources, each with unique formats and structures. Tick data normalization solutions address this complexity by transforming disparate data streams into consistent, standardized formats, enabling seamless downstream processing for analytics, trading algorithms, and compliance reporting. This standardization is particularly vital in the context of regulatory mandates such as MiFID II and Dodd-Frank, which require accurate data lineage and auditability, further propelling market growth.
Another significant factor contributing to market expansion is the growing reliance on advanced analytics and artificial intelligence within the financial sector. As firms seek to extract actionable insights from historical and real-time tick data, the need for high-quality, normalized datasets becomes paramount. Data normalization not only enhances the accuracy and reliability of predictive models but also facilitates the integration of machine learning algorithms for tasks such as anomaly detection, risk assessment, and portfolio optimization. The increasing sophistication of trading strategies, coupled with the demand for rapid, data-driven decision-making, is expected to sustain robust demand for tick data normalization solutions across asset classes and geographies.
Furthermore, the transition to cloud-based infrastructure has transformed the operational landscape for banks, hedge funds, and asset managers. Cloud deployment offers scalability, flexibility, and cost-efficiency, enabling firms to manage large-scale tick data normalization workloads without the constraints of on-premises hardware. This shift is particularly relevant for smaller institutions and emerging markets, where cloud adoption lowers entry barriers and accelerates the deployment of advanced data management capabilities. At the same time, the availability of managed services and API-driven platforms is fostering innovation and expanding the addressable market, as organizations seek to outsource complex data normalization tasks to specialized vendors.
Regionally, North America continues to dominate the tick data normalization market, accounting for the largest share in terms of revenue and technology adoption. The presence of leading financial centers, advanced IT infrastructure, and a strong regulatory framework underpin the region’s leadership. Meanwhile, Asia Pacific is emerging as the fastest-growing market, driven by rapid digitalization of financial services, burgeoning capital markets, and increasing participation of retail and institutional investors. Europe also maintains a significant market presence, supported by stringent compliance requirements and a mature financial ecosystem. Latin America and the Middle East & Africa are witnessing steady growth, albeit from a lower base, as financial modernization initiatives gain momentum.
(1) qPCR Gene Expression Data
The THP-1 cell line was sub-cloned and one clone (#5) was selected for its ability to differentiate relatively homogeneously in response to phorbol 12-myristate-13-acetate (PMA) (Sigma). THP-1.5 was used for all subsequent experiments. THP-1.5 cells were cultured in RPMI, 10% FBS, penicillin/streptomycin, 10 mM HEPES, 1 mM sodium pyruvate, and 50 µM 2-mercaptoethanol. THP-1.5 cells were treated with 30 ng/ml PMA over a time course of 96 h. Total cell lysates were harvested in TRIzol reagent at 1, 2, 4, 6, 12, 24, 48, 72, and 96 hours, including an undifferentiated control. Undifferentiated cells were harvested in TRIzol reagent at the beginning of the LPS time course. One biological replicate was prepared for each time point. Total RNA was purified from TRIzol lysates according to the manufacturer's instructions. Gene-specific primer pairs were designed using Primer3 software, with an optimal primer size of 20 bases, an amplification size of 140 bp, and an annealing temperature of 60°C. Primer sequences were designed for 2,396 candidate genes, including four potential controls: GAPDH, beta-actin (ACTB), beta-2-microglobulin (B2M), and phosphoglycerate kinase 1 (PGK1). The RNA samples were reverse transcribed to produce cDNA and then subjected to quantitative PCR using SYBR Green (Molecular Probes) on the ABI Prism 7900HT system (Applied Biosystems, Foster City, CA, USA) with a 384-well amplification plate; genes for each sample were assayed in triplicate. Reactions were carried out in 20 µL volumes in 384-well plates; each reaction contained 0.5 U of HotStar Taq DNA polymerase (Qiagen) and the manufacturer's 1× amplification buffer adjusted to a final concentration of 1 mM MgCl2, 160 µM dNTPs, 1/38,000 SYBR Green I (Molecular Probes), 7% DMSO, 0.4% ROX Reference Dye (Invitrogen), 300 nM of each primer (forward and reverse), and 2 µL of 40-fold diluted first-strand cDNA synthesis reaction mixture (12.5 ng total RNA equivalent). Polymerase activation at 95°C for 15 min was followed by 40 cycles of 15 s at 94°C, 30 s at 60°C, and 30 s at 72°C. Dissociation curve analysis, which verifies that each PCR product is amplified from a single cDNA, was carried out in accordance with the manufacturer's protocol. Expression levels were reported as Ct values. The large number of genes assayed and the replicate measures required that samples be distributed across multiple amplification plates, with an average of twelve plates per sample. Because it was envisioned that GAPDH would serve as a single-gene normalization control, this gene was included on each plate. All primer pairs were run in triplicate. Raw qPCR expression measures were quantified using Applied Biosystems SDS software and reported as Ct values. The Ct value represents the number of cycles or rounds of amplification required for the fluorescence of a gene or primer pair to surpass an arbitrary threshold; its magnitude is inversely proportional to the expression level, so a gene expressed at a high level will have a low Ct value and vice versa. Replicate Ct values were combined by averaging, with additional quality-control constraints imposed by a standard filtering method developed by the RIKEN group for the preprocessing of their qPCR data. Briefly, this method entails:
1. Sort the triplicate Ct values in ascending order: Ct1, Ct2, Ct3. Calculate differences between consecutive Ct values: difference1 = Ct2 − Ct1 and difference2 = Ct3 − Ct2.
2. Four regions are defined (where Region4 overrides the other regions): Region1: difference ≤ 0.2; Region2: 0.2 < difference ≤ 1.0; Region3: 1.0 < difference; Region4: one of the Ct values in the difference calculation is 40. If difference1 and difference2 fall in the same region, the three replicate Ct values are averaged to give a final representative measure. If difference1 and difference2 are in different regions, the two replicate Ct values in the lower-numbered region are averaged instead.
This particular filtering method is specific to the data set used here and is not part of the normalization procedure itself; alternate methods of filtering can be applied if appropriate prior to normalization. Moreover, while the presentation in this manuscript has used Ct values as an example, any measure of transcript abundance, including those corrected for primer efficiency, can be used as input to our data-driven methods.
(2) Quantile Normalization Algorithm
Quantile normalization proceeds in two stages. First, if samples are distributed across multiple plates, normalization is applied to all of the genes assayed for each sample to remove plate-to-plate effects by enforcing the same quantile distribution on each plate. Then, an overall quantile normalization is applied between samples, ensuring that each sample has the same distribution of expression values as all of the other samples to be compared. A similar approach using quantile normalization has been previously described in the context of microarray normalization. Briefly, our method entails the following steps: i) qPCR data from a single RNA sample are stored in a matrix M of dimension k (maximum number of genes or primer pairs on a plate) rows by p (number of plates) columns. Plates with differing numbers of genes are made equivalent by padding plates with missing values to constrain M to a rectangular structure. ii) Each column is sorted into ascending order and stored in matrix M′. The sorted columns correspond to the quantile distribution of each plate. Missing values are placed at the end of each ordered column, and all calculations in quantile normalization are performed on non-missing values. iii) The average quantile distribution is calculated by taking the average of each row in M′. Each column in M′ is replaced by this average quantile distribution and rearranged to have the same ordering as the original row order in M. This gives the within-sample normalized data from one RNA sample. iv) Steps analogous to i–iii are repeated for each sample. Between-sample normalization is performed by storing the within-normalized data as a new matrix N of dimension k (total number of genes; in our example k = 2,396) rows by n (number of samples) columns. Steps ii and iii are then applied to this matrix.
(3) Rank-Invariant Set Normalization Algorithm
We describe an extension of this method for use on qPCR data with any number of experimental conditions or samples, in which we identify a set of stably expressed genes from within the measured expression data and then use these to adjust expression between samples. Briefly: i) qPCR data from all samples are stored in matrix R of dimension g (total number of genes or primer pairs used for all plates) rows by s (total number of samples) columns. ii) We first select gene sets that are rank-invariant across a single sample compared to a common reference. The reference may be chosen in a variety of ways, depending on the design and aims of the experiment. As described in Tseng et al., the reference may be a particular sample from the experiment (e.g., time zero in a time-course experiment), the average or median of all samples, or the sample closest to the average or median of all samples. Genes are considered rank-invariant if they retain their ordering or rank with respect to expression across the experimental sample versus the common reference sample. We collect sets of rank-invariant genes for all of the s pairwise comparisons relative to the common reference and take the intersection of all s sets to obtain the final set of rank-invariant genes used for normalization. iii) Let αj represent the average expression value of the rank-invariant genes in sample j; (α1, …, αs) then represents the vector of rank-invariant average expression values for all conditions 1 to s. iv) We calculate the scale f…
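A compact sketch of the quantile normalization described in steps i–iv, using NumPy and ignoring the missing-value padding for brevity; the same function serves both the within-sample (plate-to-plate) stage and the between-sample stage.

import numpy as np

def quantile_normalize(m):
    # m: (k, p) matrix, genes/primer pairs by plates (or by samples).
    # Sort each column, average across columns rank by rank, then put
    # the averaged quantiles back in each column's original order.
    order = np.argsort(m, axis=0)
    mean_quantiles = np.sort(m, axis=0).mean(axis=1)
    out = np.empty_like(m, dtype=float)
    for j in range(m.shape[1]):
        out[order[:, j], j] = mean_quantiles
    return out

For the between-sample stage, the same function is applied to the k × n matrix N of within-normalized data.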
Foraminiferal samples were collected from Chincoteague Bay, Newport Bay, and Tom's Cove as well as the marshes on the back-barrier side of Assateague Island and the Delmarva (Delaware-Maryland-Virginia) mainland by U.S. Geological Survey (USGS) researchers from the St. Petersburg Coastal and Marine Science Center in March, April (14CTB01), and October (14CTB02) 2014. Samples were also collected by the Woods Hole Coastal and Marine Science Center (WHCMSC) in July 2014 and shipped to the St. Petersburg office for processing. The dataset includes raw foraminiferal and normalized counts for the estuarine grab samples (G), terrestrial surface samples (S), and inner shelf grab samples (G). For further information regarding data collection and sample site coordinates, processing methods, or related datasets, please refer to USGS Data Series 1060 (https://doi.org/10.3133/ds1060), USGS Open-File Report 2015–1219 (https://doi.org/10.3133/ofr20151219), and USGS Open-File Report 2015-1169 (https://doi.org/10.3133/ofr20151169). Downloadable data are available as Excel spreadsheets, comma-separated values text files, and formal Federal Geographic Data Committee metadata.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
EDTA blood samples were collected from participants; cells were centrifuged, and supernatants and pellets were stored at −80 °C until analysis. Plasma was centrifuged for 15 min at 2,200 × g, and 60 µL of supernatant was used for the SOMAscan assay performed by SomaLogic, Boulder, CO, as described previously (Gold et al., 2010; Han et al., 2018; Tin et al., 2019; Yang et al., 2020). Raw signals were then normalized as described (Gold et al., 2010; Han et al., 2018). These steps include hybridization normalization, plate scaling and calibration, and adaptive normalization by maximum likelihood (ANML), which normalizes SomaScan EDTA plasma measurements to a healthy U.S. population reference; the data were then log2 transformed (resulting in data file: sbst3_norm_SIG_Somalogic_UTHSC_2021_291122.txt).
Ever-increasing affordability of next-generation sequencing makes whole-metagenome sequencing an attractive alternative to traditional 16S rDNA, RFLP, or culturing approaches for the analysis of microbiome samples. The advantage of whole-metagenome sequencing is that it allows direct inference of the metabolic capacity and physiological features of the studied metagenome without reliance on the knowledge of genotypes and phenotypes of the members of the bacterial community. It also makes it possible to overcome problems of 16S rDNA sequencing, such as unknown copy number of the 16S gene and lack of sufficient sequence similarity of the "universal" 16S primers to some of the target 16S genes. On the other hand, next-generation sequencing suffers from biases resulting in non-uniform coverage of the sequenced genomes. To overcome this difficulty, we present a model of GC-bias in sequencing metagenomic samples as well as filtration and normalization techniques necessary for accurate quantification of microbial organisms. While there has been substantial research in normalization and filtration of read-count data in such techniques as RNA-seq or Chip-seq, to our knowledge, this has not been the case for the field of whole-metagenome shotgun sequencing. The presented methods assume that complete genome references are available for most microorganisms of interest present in metagenomic samples. This is often a valid assumption in such fields as medical diagnostics of patient microbiota. Testing the model on two validation datasets showed four-fold reduction in root-mean-square error compared to non-normalized data in both cases. The presented methods can be applied to any pipeline for whole metagenome sequencing analysis relying on complete microbial genome references. We demonstrate that such pre-processing reduces the number of false positive hits and increases accuracy of abundance estimates.
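The paper's GC-bias model is not reproduced here, but the general shape of such a correction can be sketched: bin genomic windows by GC fraction, estimate expected coverage per bin, and rescale observed coverage accordingly. Everything below is an illustrative simplification.

import numpy as np

def gc_normalize(coverage, gc, n_bins=20):
    # coverage: (n_windows,) observed read counts per genomic window.
    # gc:       (n_windows,) GC fraction of each window, in [0, 1].
    bins = np.minimum((gc * n_bins).astype(int), n_bins - 1)
    expected = np.array([coverage[bins == b].mean() if np.any(bins == b) else np.nan
                         for b in range(n_bins)])
    # Scale each window by the ratio of the genome-wide mean coverage
    # to the mean coverage of its GC bin.
    return coverage * coverage.mean() / expected[bins]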
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
The CSV data was sourced from the existing Kaggle dataset titled "Adventure Works 2022" by Algorismus. The data were normalized and consisted of seven individual CSV files, with the Sales table serving as a fact table connected to the other dimensions. To consolidate all the data into a single table, the files were loaded into a SQLite database and transformed accordingly. The final denormalized table was then exported as a single CSV file (delimited by |), and the column names were updated to follow snake_case style.
doi.org/10.6084/m9.figshare.27899706
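A minimal sketch of the consolidation step described above, using sqlite3 and pandas; the file names, table names, and single join shown are illustrative stand-ins for the actual seven-table schema. The columns of the exported file are described in the table below.

import sqlite3
import pandas as pd

con = sqlite3.connect("adventure_works.db")  # hypothetical database file

# Load each normalized CSV into its own table (two shown for brevity)
for name in ["sales", "product"]:            # illustrative table names
    pd.read_csv(f"{name}.csv").to_sql(name, con, if_exists="replace", index=False)

# Join the fact table to a dimension and export one denormalized,
# pipe-delimited CSV, as described above
flat = pd.read_sql_query(
    "SELECT s.*, p.product_name FROM sales s "
    "JOIN product p ON s.product_key = p.product_key", con)
flat.to_csv("adventure_works_flat.csv", sep="|", index=False)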
| Column Name | Description |
|---|---|
| sales_order_number | Unique identifier for each sales order. |
| sales_order_date | The date and time when the sales order was placed. (e.g., Friday, August 25, 2017) |
| sales_order_date_day_of_week | The day of the week when the sales order was placed (e.g., Monday, Tuesday). |
| sales_order_date_month | The month when the sales order was placed (e.g., January, February). |
| sales_order_date_day | The day of the month when the sales order was placed (1-31). |
| sales_order_date_year | The year when the sales order was placed (e.g., 2022). |
| quantity | The number of units sold in the sales order. |
| unit_price | The price per unit of the product sold. |
| total_sales | The total sales amount for the sales order (quantity * unit price). |
| cost | The total cost associated with the products sold in the sales order. |
| product_key | Unique identifier for the product sold. |
| product_name | The name of the product sold. |
| reseller_key | Unique identifier for the reseller. |
| reseller_name | The name of the reseller. |
| reseller_business_type | The type of business of the reseller (e.g., Warehouse, Value Reseller, Specialty Bike Shop). |
| reseller_city | The city where the reseller is located. |
| reseller_state | The state where the reseller is located. |
| reseller_country | The country where the reseller is located. |
| employee_key | Unique identifier for the employee associated with the sales order. |
| employee_id | The ID of the employee who processed the sales order. |
| salesperson_fullname | The full name of the salesperson associated with the sales order. |
| salesperson_title | The title of the salesperson (e.g., North American Sales Manager, Sales Representative). |
| email_address | The email address of the salesperson. |
| sales_territory_key | Unique identifier for the sales territory for the actual sale. (e.g. 3) |
| assigned_sales_territory | List of sales_territory_key separated by comma assigned to the salesperson. (e.g., 3,4) |
| sales_territory_region | The region of the sales territory. US territory broken down in regions. International regions listed as country name (e.g., Northeast, France). |
| sales_territory_country | The country associated with the sales territory. |
| sales_territory_group | The group classification of the sales territory. (e.g., Europe, North America, Pacific) |
| target | The ... |
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Analysis of bulk RNA sequencing (RNA-Seq) data is a valuable tool to understand transcription at the genome scale. Targeted sequencing of RNA has emerged as a practical means of assessing the majority of the transcriptomic space with less reliance on large resources for consumables and bioinformatics. TempO-Seq is a templated, multiplexed RNA-Seq platform that interrogates a panel of sentinel genes representative of genome-wide transcription. Nuances of the technology require proper preprocessing of the data. Various methods have been proposed and compared for normalizing bulk RNA-Seq data, but there has been little to no investigation of how these methods perform on TempO-Seq data. We simulated count data for two groups (treated vs. untreated) at seven fold-change (FC) levels (including no change) using control samples from human HepaRG cells run on TempO-Seq, and normalized the data using seven normalization methods. Upper Quartile (UQ) performed best with regard to maintaining FC levels as detected by a limma contrast between the treated and untreated groups. For all FC levels, the specificity of UQ normalization was greater than 0.84, and sensitivity was greater than 0.90 except at the no-change and +1.5 levels. Furthermore, K-means clustering of the simulated genes normalized by UQ agreed the most with the FC assignments [adjusted Rand index (ARI) = 0.67]. Despite assuming that the majority of genes are unchanged, the DESeq2 scaling-factor normalization method performed reasonably well, as did the simple normalization procedures counts per million (CPM) and total counts (TC). These results suggest that for two-class comparisons of TempO-Seq data, UQ, CPM, TC, or DESeq2 normalization should provide reasonably reliable results at absolute FC levels ≥ 2.0. These findings will help guide researchers in normalizing TempO-Seq gene expression data for more reliable results.
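Upper Quartile normalization is simple to state: each sample's counts are scaled by the 75th percentile of its non-zero counts. A minimal NumPy sketch, not the Bioconductor implementation used in the study:

import numpy as np

def upper_quartile_normalize(counts):
    # counts: (n_genes, n_samples) raw count matrix. Each sample is
    # scaled by the 75th percentile of its non-zero counts; factors are
    # centered on their geometric mean so the overall scale is preserved.
    uq = np.array([np.percentile(col[col > 0], 75) for col in counts.T])
    factors = uq / np.exp(np.mean(np.log(uq)))
    return counts / factors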
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
This dataset comprises a professions gazetteer generated with automatically extracted terminology from the Mesinesp2 corpus, a manually annotated corpus in which domain experts have labeled a set of scientific literature, clinical trials, and patent abstracts, as well as clinical case reports.
A silver-standard gazetteer for mention classification and normalization was created by combining the predictions of automatic Named Entity Recognition models with normalization via Entity Linking to three controlled vocabularies: SNOMED CT, NCBI, and ESCO. The sources are 265,025 distinct documents, of which 249,538 come from the MESINESP2 corpora and 15,487 are clinical cases from open clinical journals. From them, 5,682,000 mentions were extracted, and 4,909,966 (86.42%) were normalized to at least one of the ontologies: SNOMED CT (4,909,966) for diseases, symptoms, drugs, locations, occupations, procedures, and species; ESCO (215,140) for occupations; and NCBI (1,469,256) for species.
The repository contains a .tsv file with the following columns:
filenameid: A unique identifier combining the file name and mention span within the text. This ensures each extracted mention is uniquely traceable. Example: biblio-1000005#239#256 refers to a mention spanning characters 239–256 in the file with the name biblio-1000005.
span: The specific text span (mention) extracted from the document, representing a term or phrase identified in the dataset. Example: centro oncológico.
source: The origin of the document, indicating the corpus from which the mention was extracted. Possible values: mesinesp2, clinical_cases.
filename: The name of the file from which the mention was extracted. Example: biblio-1000005.
mention_class: Categories or semantic tags assigned to the mention, describing its type or context in the text. Example: ['ENFERMEDAD', 'SINTOMA'].
codes_esco: The normalized ontology codes from the European Skills, Competences, Qualifications, and Occupations (ESCO) vocabulary for the identified mention (if applicable). This field may be empty if no ESCO mapping exists. Example: 30629002.
terms_esco: The human-readable terms from the ESCO ontology corresponding to the codes_esco. Example: ['responsable de recursos', 'director de recursos', 'directora de recursos'].
codes_ncbi: The normalized ontology codes from the NCBI Taxonomy vocabulary for species (if applicable). This field may be empty if no NCBI mapping exists.
terms_ncbi: The human-readable terms from the NCBI Taxonomy vocabulary corresponding to the codes_ncbi. Example: ['Lacandoniaceae', 'Pandanaceae R.Br., 1810', 'Pandanaceae', 'Familia'].
codes_sct: The normalized ontology codes from SNOMED CT (Systematized Nomenclature of Medicine - Clinical Terms) vocabulary for diseases, symptoms, drugs, locations, occupations, procedures, and species (if applicable). Example: 22232009.
terms_sct: The human-readable terms from the SNOMED CT ontology corresponding to the codes_sct. Example: ['adjudicador de regulaciones del seguro nacional'].
sct_sem_tag: The semantic category tag assigned by SNOMED CT to describe the general classification of the mention. Example: environment.
Suggestion: if you load the dataset using Python, it is recommended to read the columns containing lists as follows:
import ast
import pandas as pd
df = pd.read_csv("gazetteer.tsv", sep="\t")  # illustrative filename for the .tsv described above
df["mention_class"] = df["mention_class"].apply(lambda x: ast.literal_eval(x) if isinstance(x, str) else x)
License
This dataset is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0). This means you are free to:
Share: Copy and redistribute the material in any medium or format.
Adapt: Remix, transform, and build upon the material for any purpose, even commercially.
Attribution Requirement: Please credit the dataset creators appropriately, provide a link to the license, and indicate if changes were made.
Contact
If you have any questions or suggestions, please contact us at:
Martin Krallinger ()
Additional resources and corpora
If you are interested, you might want to check out these corpora and resources:
MESINESP-2 (Corpus of manually indexed records with DeCS /MeSH terms comprising scientific literature abstracts, clinical trials, and patent abstracts, different document collection)
MEDDOPROF corpus
Codes Reference List (for MEDDOPROF-NORM)
Annotation Guidelines
Occupations Gazetteer
Access: restricted (https://www.scilifelab.se/data/restricted-access/)
Dataset Description
This record is a collection of whole-genome sequencing (WGS), RNA sequencing (RNA-seq), NanoString nCounter® Breast Cancer 360 (BC360) Panel, and cell viability assay data, generated as part of the study "Breast cancer patient-derived whole-tumor cell culture model for efficient drug profiling and treatment response prediction" by Chen et al., 2022. The WGS dataset contains raw sequencing data (BAM files) from tumor scraping cells (TSCs) at the time of surgical resection, derived whole-tumor cell (WTC) cultures from each patient's specimen, and normal skin biopsies for germline control, from five (5) breast cancer (BC) patients. Genomic DNA samples were isolated using the QIAamp DNA mini kit (QIAGEN). The library was prepared using Illumina TruSeq PCR-free (350 bp) according to the manufacturer's protocol. The bulk DNA samples were then sequenced on an Illumina HiSeq X and processed via the Science for Life Laboratory CAW workflow version 1.2.362 (Stockholm, Sweden; https://github.com/SciLifeLab/Sarek). The RNA-seq dataset contains raw sequencing data (FASTQ files) from the TSC pellets at the time of surgical resection, and the pellets of derived WTC cultures with or without tamoxifen metabolite treatment (1 nM 4OHT and 25 nM Z-endoxifen), from 16 BC patients. 2000 ng of RNA was extracted using the RNeasy mini kit (QIAGEN) from each sample, and 1 µg of total RNA was used for rRNA depletion using RiboZero (Illumina). Stranded RNA-seq libraries were constructed using the TruSeq Stranded Total RNA Library Prep Kit (Illumina), and paired-end sequencing was performed on a HiSeq 2500 with a 2 × 126 setup using the Science for Life Laboratory platform (Stockholm, Sweden). The NanoString nCounter® BC360 Panel dataset contains normalized data from FFPE tissue samples of 43 BC patients. RNA was extracted from the macrodissected sections using the High Pure FFPET RNA Isolation Kit (Roche) following the manufacturer's protocols. Then, 200 ng of RNA per sample was loaded and further analyzed according to the manufacturer's recommendation on a NanoString nCounter® system using the Breast Cancer 360 code set, which comprises 18 housekeeping genes and 752 target genes covering key pathways in tumor biology, the microenvironment, and immune response. Raw data were assessed using several quality assurance (QA) metrics to measure imaging quality, oversaturation, and overall signal-to-noise ratio. All samples satisfying QA metric checks were background corrected (background thresholding) using the negative probes and normalized with their mean minus two standard deviations. The background-corrected data were then normalized by calculating the geometric mean of five housekeeper genes, namely ACTB, MRPL19, PSMC4, RPLP0, and SF3A1. The cell viability assay dataset for the main study contains drug sensitivity score (DSS) values for each of the tested drugs derived from the WTC spheroids of 45 BC patients. For patient DP-45, multiple regions were sampled to establish WTCs and perform drug profiling. For the neoadjuvant-setting validation study, DSS values correspond to WTCs of 15 BC patients. For the drug profiling assay, each compound covered five concentrations ranging from 10 µM to 1 nM (2 µM to 0.2 nM for trastuzumab and pertuzumab) in 10-fold dilutions and was dispensed using the acoustic liquid handling system Echo 550 (Labcyte Inc) to make spotted 384-well plates.
For the neoadjuvant-setting validation assay, we replaced cyclophosphamide with its active metabolite form, 4-hydroperoxy cyclophosphamide (4-OOH-cyclophosphamide). Each relevant compound covered eight concentrations ranging from 10 µM to 1 nM (2 µM to 0.2 nM for trastuzumab and pertuzumab) and was dispensed using the Tecan D300e Digital Dispenser (Tecan) to make spotted 384-well plates. In both experimental settings, a total volume of 40 nl of each compound condition was dispensed into each well, limiting the final DMSO concentration to 0.1% during the treatment period. Further details on the cell viability assay, as well as the DSS estimation, are available in the Materials & Methods section of Chen et al., 2022.
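The housekeeping normalization described above, scaling each sample by the geometric mean of ACTB, MRPL19, PSMC4, RPLP0, and SF3A1, can be sketched as follows; this is an illustration rather than the NanoString software's implementation.

import numpy as np

def housekeeper_normalize(counts, gene_ids, housekeepers):
    # counts:       (n_genes, n_samples) background-corrected counts (positive).
    # gene_ids:     list of gene symbols, length n_genes.
    # housekeepers: symbols of the housekeeping genes.
    idx = [gene_ids.index(g) for g in housekeepers]
    geo = np.exp(np.mean(np.log(counts[idx, :]), axis=0))  # per-sample geometric mean
    target = np.exp(np.mean(np.log(geo)))                  # common reference level
    return counts * (target / geo)

# hk = ["ACTB", "MRPL19", "PSMC4", "RPLP0", "SF3A1"]
# normalized = housekeeper_normalize(bg_corrected, genes, hk)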
According to our latest research, the global Multi-OEM VRF Data Normalization market size reached USD 1.14 billion in 2024, with a robust year-on-year growth trajectory. The market is expected to expand at a CAGR of 12.6% during the forecast period, reaching a projected value of USD 3.38 billion by 2033. This impressive growth is primarily fueled by the increasing adoption of Variable Refrigerant Flow (VRF) systems across multiple sectors, the proliferation of multi-OEM environments, and the rising demand for seamless data integration and analytics within building management systems. The market’s expansion is further supported by advancements in IoT, AI-driven analytics, and the urgent need for energy-efficient HVAC solutions worldwide.
One of the primary growth drivers for the Multi-OEM VRF Data Normalization market is the rapid digital transformation in the HVAC industry. Organizations are increasingly deploying VRF systems from multiple original equipment manufacturers (OEMs) to optimize performance, reduce costs, and future-proof their infrastructure. However, the lack of standardization in data formats across different OEMs presents significant integration challenges. Data normalization solutions bridge this gap by ensuring interoperability, enabling seamless aggregation, and facilitating advanced analytics for predictive maintenance and energy optimization. As facilities managers and building operators seek to harness actionable insights from disparate VRF systems, the demand for sophisticated data normalization platforms continues to rise, driving sustained market growth.
Another significant factor propelling market expansion is the growing emphasis on energy efficiency and sustainability. Regulatory mandates and green building certifications are pushing commercial, industrial, and residential end-users to adopt smart HVAC solutions that minimize energy consumption and carbon emissions. Multi-OEM VRF Data Normalization platforms play a pivotal role in this transition by enabling real-time monitoring, granular energy management, and automated system optimization across heterogeneous VRF networks. The ability to consolidate and analyze operational data from multiple sources not only enhances system reliability and occupant comfort but also helps organizations achieve compliance with stringent environmental standards, further fueling market adoption.
The proliferation of cloud computing, IoT connectivity, and AI-powered analytics is also transforming the Multi-OEM VRF Data Normalization landscape. Cloud-based deployment models offer unparalleled scalability, remote accessibility, and cost-efficiency, making advanced data normalization solutions accessible to a broader spectrum of users. Meanwhile, the integration of AI and machine learning algorithms enables predictive maintenance, anomaly detection, and automated fault diagnosis, reducing downtime and optimizing lifecycle costs. As more organizations recognize the strategic value of unified, normalized VRF data, investments in next-generation data normalization platforms are expected to accelerate, driving innovation and competitive differentiation in the market.
Regionally, the Asia Pacific market dominates the Multi-OEM VRF Data Normalization sector, accounting for the largest share in 2024, driven by rapid urbanization, robust construction activity, and widespread adoption of VRF technology in commercial and residential buildings. North America and Europe follow closely, fueled by stringent energy efficiency standards, a mature building automation ecosystem, and strong investments in smart infrastructure. Latin America and the Middle East & Africa are also witnessing steady growth, underpinned by rising demand for modern HVAC solutions and increasing awareness about the benefits of data-driven facility management. The regional outlook remains highly positive, with each geography contributing uniquely to the global market’s upward trajectory.