The zip file contains supplementary files (normalized data sets and R code) to reproduce the analyses presented in the paper "Use of pre-transformation to cope with extreme values in important candidate features" by Boulesteix, Guillemot & Sauerbrei (Biometrical Journal, 2011). The raw data (CEL files) are publicly available and described in the following papers:
- Ancona et al., 2006. On the statistical assessment of classifiers using DNA microarray data. BMC Bioinformatics 7, 387.
- Miller et al., 2005. An expression signature for p53 status in human breast cancer predicts mutation status, transcriptional effects, and patient survival. Proceedings of the National Academy of Sciences 102, 13550–13555.
- Minn et al., 2005. Genes that mediate breast cancer metastasis to lung. Nature 436, 518–524.
- Pawitan et al., 2005. Gene expression profiling spares early breast cancer patients from adjuvant therapy: derived and validated in two population-based cohorts. Breast Cancer Research 7, R953–964.
- Scherzer et al., 2007. Molecular markers of early Parkinson's disease based on gene expression in blood. Proceedings of the National Academy of Sciences 104, 955–960.
- Singh et al., 2002. Gene expression correlates of clinical prostate cancer behavior. Cancer Cell 1, 203–209.
- Sotiriou et al., 2006. Gene expression profiling in breast cancer: understanding the molecular basis of histologic grade to improve prognosis. Journal of the National Cancer Institute 98, 262–272.
- Tang et al., 2009. Gene-expression profiling of peripheral blood mononuclear cells in sepsis. Critical Care Medicine 37, 882–888.
- Wang et al., 2005. Gene-expression profiles to predict distant metastasis of lymph-node-negative primary breast cancer. Lancet 365, 671–679.
- Irizarry et al., 2003. Summaries of Affymetrix GeneChip probe level data. Nucleic Acids Research 31(4), e15.
- Irizarry et al., 2006. Comparison of Affymetrix GeneChip expression measures. Bioinformatics 22(7), 789–794.
License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This data accompanies the manuscript "Cross-platform normalization enables machine learning model training on microarray and RNA-seq data simultaneously" by Foltz, Taroni, and Greene (https://doi.org/10.1038/s42003-023-04588-6). Please refer to our GitHub page. The file contains all the raw input data, the output files needed for plotting, and the intermediate files (including models and normalized data) from one repeat (seed 3274).
Abstract: Large compendia of gene expression data have proven valuable for the discovery of novel biological relationships. Historically, the majority of available RNA assays were run on microarray, while RNA-seq is now the platform of choice for many new experiments. The data structure and distributions between the platforms differ, making it challenging to combine them directly. Here we perform supervised and unsupervised machine learning evaluations to assess which existing normalization methods are best suited for combining microarray and RNA-seq data. We find that quantile and Training Distribution Matching normalization allow for supervised and unsupervised model training on microarray and RNA-seq data simultaneously. Nonparanormal normalization and z-scores are also appropriate for some applications, including pathway analysis with Pathway-Level Information Extractor (PLIER). We demonstrate that it is possible to perform effective cross-platform normalization using existing methods to combine microarray and RNA-seq data for machine learning applications.
License: Attribution 4.0 International (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
This file contains a preprocessed subset of the MIMIC-IV dataset (Medical Information Mart for Intensive Care, Version IV), specifically focusing on laboratory event data related to glucose levels. It has been curated and processed for research on data normalization and integration within Clinical Decision Support Systems (CDSS) to improve Human-Computer Interaction (HCI) elements.
The dataset includes a curated set of key features from the MIMIC-IV laboratory events table, focusing on glucose measurements.
This data has been used to analyze the impact of normalization and integration techniques on improving data accuracy and usability in CDSS environments. The file is provided as part of ongoing research on enhancing clinical decision-making and user interaction in healthcare systems.
The data originates from the publicly available MIMIC-IV database, developed and maintained by the Massachusetts Institute of Technology (MIT). Proper ethical guidelines for accessing and preprocessing the dataset have been followed.
MIMIC-IV_LabEvents_Subset_Normalization.xlsx
License: CC0 1.0 (https://spdx.org/licenses/CC0-1.0.html)
Background
The Infinium EPIC array measures the methylation status of more than 850,000 CpG sites. The EPIC BeadChip uses two probe designs, Infinium Type I and Type II, which exhibit different technical characteristics and may confound analyses. Numerous normalization and pre-processing methods have been developed to reduce probe-type bias as well as other issues such as background and dye bias.
Methods
This study evaluates the performance of various normalization methods using 16 replicated samples and three metrics: absolute beta-value difference, overlap of non-replicated CpGs between replicate pairs, and effect on beta-value distributions. Additionally, we carried out Pearson’s correlation and intraclass correlation coefficient (ICC) analyses using both raw and SeSAMe 2 normalized data.
Results
The method we define as SeSAMe 2, which consists of the application of the regular SeSAMe pipeline with an additional round of QC (pOOBAH masking), was found to be the best-performing normalization method, while quantile-based methods were the worst performing. Whole-array Pearson's correlations were high. However, in agreement with previous studies, a substantial proportion of the probes on the EPIC array showed poor reproducibility (ICC < 0.50). The majority of poorly performing probes have beta values close to either 0 or 1, and relatively low standard deviations. These results suggest that poor probe reliability largely reflects limited biological variation rather than technical measurement variation. Importantly, normalizing the data with SeSAMe 2 dramatically improved ICC estimates, with the proportion of probes with ICC values > 0.50 increasing from 45.18% (raw data) to 61.35% (SeSAMe 2).
Methods
Study Participants and Samples
The whole blood samples were obtained from the Health, Well-being and Aging (Saúde, Bem-Estar e Envelhecimento, SABE) study cohort. SABE is a cohort of census-drawn elderly adults from the city of São Paulo, Brazil, followed up every five years since 2000, with DNA first collected in 2010. Samples from 24 elderly adults were collected at two time points, for a total of 48 samples. The first time point is the 2010 collection wave, performed from 2010 to 2012; the second time point was set in 2020 as part of a COVID-19 monitoring project (9 ± 0.71 years apart). The 24 individuals (13 men, 11 women) were 67.41 ± 5.52 years of age (mean ± standard deviation) at time point one and 76.41 ± 6.17 at time point two.
All individuals enrolled in the SABE cohort provided written consent, and the ethics protocols were approved by local and national institutional review boards (COEP/FSP/USP OF.COEP/23/10; CONEP 2044/2014; CEP HIAE 1263-10; University of Toronto RIS 39685).
Blood Collection and Processing
Genomic DNA was extracted from whole peripheral blood samples collected in EDTA tubes. DNA extraction and purification followed the manufacturer's recommended protocols, using the Qiagen AutoPure LS kit with Gentra automated extraction (first time point) or manual extraction (second time point) after the equipment was discontinued, but with the same commercial reagents. DNA was quantified using a NanoDrop spectrophotometer and diluted to 50 ng/µL. To assess the reproducibility of the EPIC array, we also obtained technical replicates for 16 of the 48 samples, for a total of 64 samples submitted for further analyses. Whole-genome sequencing data are also available for the samples described above.
Characterization of DNA Methylation using the EPIC array
Approximately 1,000ng of human genomic DNA was used for bisulphite conversion. Methylation status was evaluated using the MethylationEPIC array at The Centre for Applied Genomics (TCAG, Hospital for Sick Children, Toronto, Ontario, Canada), following protocols recommended by Illumina (San Diego, California, USA).
Processing and Analysis of DNA Methylation Data
The R/Bioconductor packages Meffil (version 1.1.0), RnBeads (version 2.6.0), minfi (version 1.34.0) and wateRmelon (version 1.32.0) were used to import, process and perform quality control (QC) analyses on the methylation data. Starting with the 64 samples, we first used Meffil to infer the sex of each sample and compared the inferred sex to the reported sex. Using the 59 SNP probes included on the EPIC array, we calculated concordance between the methylation intensities of the samples and the corresponding genotype calls extracted from their WGS data. We then performed comprehensive sample-level and probe-level QC using the RnBeads QC pipeline. Specifically, we (1) removed probes whose target sequences overlap with a SNP at any base, (2) removed known cross-reactive probes, (3) used the iterative Greedycut algorithm to filter out samples and probes, using a detection p-value threshold of 0.01, and (4) removed probes for which more than 5% of the samples had a missing value. Since RnBeads does not provide probe filtering based on bead number, we used the wateRmelon package to extract bead numbers from the IDAT files and calculated the proportion of samples with bead number < 3; probes with more than 5% of samples having a low bead number (< 3) were removed. For the comparison of normalization methods, we also computed detection p-values from the empirical distribution of out-of-band probes with the pOOBAH() function in the SeSAMe (version 1.14.2) R package, with a p-value threshold of 0.05 and the combine.neg parameter set to TRUE. Where pOOBAH filtering was carried out, it was done in parallel with the QC steps above, and the probes flagged by the two analyses were combined and removed from the data.
Normalization Methods Evaluated
The normalization methods compared in this study were implemented using different R/Bioconductor packages and are summarized in Figure 1. All data were read into the R workspace as RGChannelSets using minfi's read.metharray.exp() function. One sample that was flagged during QC was removed, and further normalization steps were carried out on the remaining 63 samples. Prior to all normalizations with minfi, probes that did not pass QC were removed. Noob, SWAN, Quantile, Funnorm and Illumina normalizations were implemented using minfi. BMIQ normalization was implemented with ChAMP (version 2.26.0), using as input the raw data produced by minfi's preprocessRaw() function. In the combination of Noob with BMIQ (Noob+BMIQ), BMIQ normalization used minfi's Noob-normalized data as input. Noob normalization was also implemented with SeSAMe, using a nonlinear dye bias correction. For SeSAMe normalization, two scenarios were tested; for both, the inputs were unmasked SigDF sets converted from minfi's RGChannelSets. In the first, which we call "SeSAMe 1", SeSAMe's pOOBAH masking was not executed, and the only probes filtered out of the dataset prior to normalization were those that did not pass QC in the previous analyses. In the second scenario, which we call "SeSAMe 2", pOOBAH masking was carried out on the unfiltered dataset and masked probes were removed; this was followed by removal of probes that did not pass the previous QC and had not already been removed by pOOBAH. SeSAMe 2 therefore has two rounds of probe removal. Noob normalization with nonlinear dye bias correction was then carried out on the filtered dataset. Methods were then compared by subsetting the 16 replicated samples and evaluating the effect of each normalization method on the absolute difference in beta values (|Δβ|) between replicate pairs.
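As an illustration of that replicate-based comparison, a per-probe mean absolute beta-value difference could be computed as in this minimal numpy sketch; the function name and input layout are assumptions for illustration, not the study's code.

import numpy as np

def mean_abs_beta_diff(beta_a, beta_b):
    # beta_a, beta_b: probes x pairs matrices holding the beta values of the
    # two members of each replicate pair after a given normalization method.
    return np.nanmean(np.abs(beta_a - beta_b), axis=1)  # per-probe mean |delta beta|

Normalization methods can then be ranked by the distribution of these per-probe differences across the 16 replicated samples.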
Sichkar V. N. Effect of various dimension convolutional layer filters on traffic sign classification accuracy. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2019, vol. 19, no. 3, pp. 546–552. DOI: 10.17586/2226-1494-2019-19-3-546-552 (full text available at ResearchGate.net/profile/Valentyn_Sichkar)
Test online with a custom digit here: https://valentynsichkar.name/mnist.html
Design, Train & Test deep CNN for Image Classification. Join the course & enjoy new opportunities to get deep learning skills: https://www.udemy.com/course/convolutional-neural-networks-for-image-classification/
This is ready-to-use preprocessed data saved into a pickle file.
Preprocessing stages are as follows (a minimal code sketch appears after the notes below):
- Normalizing the whole dataset by dividing by 255.0.
- Splitting the data into three datasets: train, validation and test.
- Normalizing the data by subtracting the mean image and dividing by the standard deviation.
- Transposing every dataset to make channels come first.
The mean image and standard deviation were calculated from the train dataset and applied to all three datasets. Any user-supplied image must be preprocessed the same way before classification: scaled by 255.0, mean-image subtracted, and divided by the standard deviation.
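A minimal numpy sketch of the normalization and transposition stages, assuming the three splits already exist as channels-last uint8 arrays; the function name and the small epsilon are illustrative additions:

import numpy as np

def preprocess(x_train, x_validation, x_test):
    # Inputs: uint8 images with shape (N, H, W, C) (channels last).
    sets = [x.astype(np.float32) / 255.0 for x in (x_train, x_validation, x_test)]
    mean_image = sets[0].mean(axis=0)   # statistics come from the train split only
    std = sets[0].std(axis=0) + 1e-8    # epsilon guards zero-variance pixels (illustrative)
    sets = [(x - mean_image) / std for x in sets]
    return [x.transpose(0, 3, 1, 2) for x in sets]  # make channels come first

A user-supplied image would pass through the same scaling, mean subtraction, and division before classification.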
Data is written as a dictionary with the following keys:
x_train: (59000, 1, 28, 28)
y_train: (59000,)
x_validation: (1000, 1, 28, 28)
y_validation: (1000,)
x_test: (1000, 1, 28, 28)
y_test: (1000,)
Contains pretrained weights (model_params_ConvNet1.pickle) for the model with the following architecture:
Input --> Conv --> ReLU --> Pool --> Affine --> ReLU --> Affine --> Softmax
Parameters:
Pooling uses a window of height = width = 2 with stride 2.
The architecture can also be visualized here:
https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3400968%2Fc23041248e82134b7d43ed94307b720e%2FModel_1_Architecture_MNIST.png?generation=1563654250901965&alt=media
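As a rough illustration, here is a PyTorch sketch of that layer sequence. Only the 2x2 pooling is stated above; the convolution filter count and size, hidden width, and class count are assumptions, so this is a shape-compatible sketch rather than the exact configuration stored in model_params_ConvNet1.pickle.

import torch
import torch.nn as nn

class ConvNet1(nn.Module):
    # Input -> Conv -> ReLU -> Pool -> Affine -> ReLU -> Affine -> Softmax
    def __init__(self, in_channels=1, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 32, kernel_size=5, padding=2)  # filter count/size assumed
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)                 # 2x2 pool from the text
        self.fc1 = nn.Linear(32 * 14 * 14, 100)                           # hidden width assumed
        self.fc2 = nn.Linear(100, num_classes)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x)))
        x = torch.flatten(x, 1)
        return self.fc2(torch.relu(self.fc1(x)))

# Softmax is applied to the final scores, e.g.:
# probs = torch.softmax(ConvNet1()(torch.randn(1, 1, 28, 28)), dim=1)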
The initial data is MNIST, collected by Yann LeCun, Corinna Cortes, and Christopher J.C. Burges.
Sichkar V. N. Effect of various dimension convolutional layer filters on traffic sign classification accuracy. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2019, vol. 19, no. 3, pp. 546–552. DOI: 10.17586/2226-1494-2019-19-3-546-552 (full text available at ResearchGate.net/profile/Valentyn_Sichkar)
Test online with a custom image here: https://valentynsichkar.name/cifar10.html
Design, Train & Test deep CNN for Image Classification. Join the course & enjoy new opportunities to get deep learning skills: https://www.udemy.com/course/convolutional-neural-networks-for-image-classification/
This is ready-to-use preprocessed data saved into a pickle file.
Preprocessing stages are as follows:
- Normalizing the whole dataset by dividing by 255.0.
- Splitting the data into three datasets: train, validation and test.
- Normalizing the data by subtracting the mean image and dividing by the standard deviation.
- Transposing every dataset to make channels come first.
The mean image and standard deviation were calculated from the train dataset and applied to all three datasets. Any user-supplied image must be preprocessed the same way before classification: scaled by 255.0, mean-image subtracted, and divided by the standard deviation.
Data is written as a dictionary with the following keys:
x_train: (49000, 3, 32, 32)
y_train: (49000,)
x_validation: (1000, 3, 32, 32)
y_validation: (1000,)
x_test: (1000, 3, 32, 32)
y_test: (1000,)
Contains pretrained weights (model_params_ConvNet1.pickle) for the model with the following architecture:
Input --> Conv --> ReLU --> Pool --> Affine --> ReLU --> Affine --> Softmax
Parameters:
Pooling uses a window of height = width = 2 with stride 2.
The architecture can also be visualized here:
https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3400968%2F5d50bf46a9494d60016759b4690e6662%2FModel_1_Architecture.png?generation=1563650302359604&alt=media
The initial data is CIFAR-10, collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.
License: Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0)
License information was derived automatically
This is a dataset of spectrogram images created from the train_spectrograms parquet data from the Harvard Medical School Harmful Brain Activity Classification competition. The parquet files have been transformed with the following code, adapted from the HMS-HBAC: KerasCV Starter notebook:
# Assumed imports for this snippet; `path`, `unique_df`, and `SPEC_DIR`
# are defined elsewhere in the notebook.
import math
import numpy as np
import pandas as pd
from PIL import Image
from fastai.vision.core import PILImage

def process_spec(spec_id, split="train"):
    # read the data
    data = pd.read_parquet(path/f'{split}_spectrograms'/f'{spec_id}.parquet')
    # read the label
    label = unique_df[unique_df.spectrogram_id == spec_id]["target"].item()
    # replace NA with 0
    data = data.fillna(0)
    # convert DataFrame to array, dropping the first (time) column
    data = data.values[:, 1:]
    # transpose
    data = data.T
    data = data.astype("float32")
    # clip data to avoid 0s before taking the log
    data = np.clip(data, math.exp(-4), math.exp(8))
    # take log data to magnify differences
    data = np.log(data)
    # normalize data; the epsilon in the denominator guards against a
    # zero standard deviation
    data = (data - data.mean()) / (data.std() + 1e-6)
    # convert to 3 channels
    data = np.tile(data[..., None], (1, 1, 3))
    # convert array to PILImage and save
    im = PILImage.create(Image.fromarray((data * 255).astype(np.uint8)))
    im.save(f"{SPEC_DIR}/{split}_spectrograms/{label}/{spec_id}.png")
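A hypothetical driver loop (assuming unique_df and the directory layout above) would then generate one PNG per spectrogram:

for spec_id in unique_df.spectrogram_id.unique():
    process_spec(spec_id, split="train")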
License: Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) (https://creativecommons.org/licenses/by-nc-sa/4.0/)
License information was derived automatically
This dataset was generated as part of the publication "Let's Make a Splan: Risk-Aware Trajectory Optimization in a Normalized Gaussian Splat". This work introduces SPLANNING, a risk-aware method for motion planning in scenes represented via Normalized 3D Gaussian Splatting. To test the method, 300 scenes were randomly generated and Gaussian Splats were trained to represent them. A risk-aware trajectory optimizer was then formulated to avoid collisions in scenes represented by a Normalized 3D Gaussian Splat.
Code to run the experiments and the planner is provided in this GitHub repository: https://github.com/roahmlab/splanning.
Normalized Digital Surface Model - 1m resolution. The dataset contains the 1m Normalized Digital Surface Model for the District of Columbia. Some areas have limited data. The lidar dataset redaction was conducted under the guidance of the United States Secret Service. All data returns were removed from the dataset within the United States Secret Service redaction boundary except for classified ground points and classified water points.
(1) qPCR Gene Expression Data

The THP-1 cell line was sub-cloned and one clone (#5) was selected for its ability to differentiate relatively homogeneously in response to phorbol 12-myristate-13-acetate (PMA) (Sigma). THP-1.5 was used for all subsequent experiments. THP-1.5 cells were cultured in RPMI, 10% FBS, Penicillin/Streptomycin, 10 mM HEPES, 1 mM Sodium Pyruvate, 50 µM 2-Mercaptoethanol. THP-1.5 cells were treated with 30 ng/ml PMA over a time course of 96 h. Total cell lysates were harvested in TRIzol reagent at 1, 2, 4, 6, 12, 24, 48, 72, and 96 hours, including an undifferentiated control. Undifferentiated cells were harvested in TRIzol reagent at the beginning of the LPS time course. One biological replicate was prepared for each time point. Total RNA was purified from TRIzol lysates according to the manufacturer's instructions. Gene-specific primer pairs were designed using Primer3 software, with an optimal primer size of 20 bases, amplification size of 140 bp, and annealing temperature of 60°C. Primer sequences were designed for 2,396 candidate genes including four potential controls: GAPDH, beta actin (ACTB), beta-2-microglobulin (B2M), and phosphoglycerate kinase 1 (PGK1). The RNA samples were reverse transcribed to produce cDNA and then subjected to quantitative PCR using SYBR Green (Molecular Probes) on the ABI Prism 7900HT system (Applied Biosystems, Foster City, CA, USA) with a 384-well amplification plate; genes for each sample were assayed in triplicate. Reactions were carried out in 20 µL volumes in 384-well plates; each reaction contained 0.5 U of HotStar Taq DNA polymerase (Qiagen) and the manufacturer's 1× amplification buffer adjusted to a final concentration of 1 mM MgCl2, 160 µM dNTPs, 1/38000 SYBR Green I (Molecular Probes), 7% DMSO, 0.4% ROX Reference Dye (Invitrogen), 300 nM of each primer (forward and reverse), and 2 µL of 40-fold diluted first-strand cDNA synthesis reaction mixture (12.5 ng total RNA equivalent). Polymerase activation at 95°C for 15 min was followed by 40 cycles of 15 s at 94°C, 30 s at 60°C, and 30 s at 72°C. Dissociation curve analysis, which verifies that each PCR product is amplified from a single cDNA, was carried out in accordance with the manufacturer's protocol. Expression levels were reported as Ct values. The large number of genes assayed and the replicate measures required that samples be distributed across multiple amplification plates, with an average of twelve plates per sample. Because it was envisioned that GAPDH would serve as a single-gene normalization control, this gene was included on each plate. All primer pairs were replicated in triplicate.

Raw qPCR expression measures were quantified using Applied Biosystems SDS software and reported as Ct values. The Ct value represents the number of cycles, or rounds of amplification, required for the fluorescence of a gene or primer pair to surpass an arbitrary threshold; its magnitude is inversely proportional to the expression level, so a gene expressed at a high level will have a low Ct value and vice versa. Replicate Ct values were combined by averaging, with additional quality-control constraints imposed by a standard filtering method developed by the RIKEN group for the preprocessing of their qPCR data. Briefly, this method entails:

1. Sort the triplicate Ct values in ascending order: Ct1, Ct2, Ct3. Calculate differences between consecutive Ct values: difference1 = Ct2 - Ct1 and difference2 = Ct3 - Ct2.
2. Four regions are defined (where Region4 overrides the other regions): Region1: difference ≤ 0.2; Region2: 0.2 < difference ≤ 1.0; Region3: 1.0 < difference; Region4: one of the Ct values in the difference calculation is 40. If difference1 and difference2 fall in the same region, the three replicate Ct values are averaged to give a final representative measure. If difference1 and difference2 are in different regions, the two replicate Ct values in the smaller-numbered region are averaged instead.

This particular filtering method is specific to the data set used here and is not part of the normalization procedure itself; alternative filtering methods can be applied if appropriate prior to normalization. Moreover, while the presentation in this manuscript has used Ct values as an example, any measure of transcript abundance, including those corrected for primer efficiency, can be used as input to our data-driven methods.

(2) Quantile Normalization Algorithm

Quantile normalization proceeds in two stages. First, if samples are distributed across multiple plates, normalization is applied to all of the genes assayed for each sample to remove plate-to-plate effects by enforcing the same quantile distribution on each plate. Then, an overall quantile normalization is applied between samples, ensuring that each sample has the same distribution of expression values as all of the other samples to be compared. A similar approach using quantile normalization has been previously described in the context of microarray normalization. Briefly, our method entails the following steps:

i) qPCR data from a single RNA sample are stored in a matrix M of dimension k (maximum number of genes or primer pairs on a plate) rows by p (number of plates) columns. Plates with differing numbers of genes are made equivalent by padding plates with missing values to constrain M to a rectangular structure.
ii) Each column is sorted into ascending order and stored in matrix M'. The sorted columns correspond to the quantile distribution of each plate. The missing values are placed at the end of each ordered column. All calculations in quantile normalization are performed on non-missing values.
iii) The average quantile distribution is calculated by taking the average of each row in M'. Each column in M' is replaced by this average quantile distribution and rearranged to have the same ordering as the original row order in M. This gives the within-sample normalized data from one RNA sample.
iv) Steps analogous to i)-iii) are repeated for each sample. Between-sample normalization is performed by storing the within-normalized data as a new matrix N of dimension k (total number of genes, in our example k = 2,396) rows by n (number of samples) columns. Steps ii) and iii) are then applied to this matrix.

(3) Rank-Invariant Set Normalization Algorithm

We describe an extension of this method for use on qPCR data with any number of experimental conditions or samples, in which we identify a set of stably expressed genes from within the measured expression data and then use these to adjust expression between samples. Briefly:

i) qPCR data from all samples are stored in a matrix R of dimension g (total number of genes or primer pairs used for all plates) rows by s (total number of samples) columns.
ii) We first select gene sets that are rank-invariant across a single sample compared to a common reference. The reference may be chosen in a variety of ways, depending on the design and aims of the experiment. As described in Tseng et al., the reference may be designated as a particular sample from the experiment (e.g. time zero in a time-course experiment), the average or median of all samples, or the sample closest to the average or median of all samples. Genes are considered rank-invariant if they retain their ordering, or rank, with respect to expression in the experimental sample versus the common reference sample. We collect sets of rank-invariant genes for all of the s pairwise comparisons relative to the common reference and take the intersection of all s sets to obtain the final set of rank-invariant genes used for normalization.
iii) Let αj represent the average expression value of the rank-invariant genes in sample j; (α1, …, αs) then represents the vector of rank-invariant average expression values for all conditions 1 to s.
iv) We calculate the scale factor for each sample from these averages and adjust that sample's expression values accordingly.
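As an illustration, here is a minimal numpy sketch of the two-stage quantile normalization in steps i)-iv) above, followed by a simplified rank-invariant scaling. The matrix names follow the text, but the code, including the exact form of the scale-factor adjustment in step iv), is an assumption rather than the authors' implementation.

import numpy as np

def quantile_normalize(M):
    # M: k x p matrix (genes x plates, or genes x samples), padded with
    # np.nan where a plate has fewer genes (step i).
    M = np.asarray(M, dtype=float)
    order = np.argsort(M, axis=0)                   # step ii: NaNs sort last
    M_sorted = np.take_along_axis(M, order, axis=0)
    mean_quantiles = np.nanmean(M_sorted, axis=1)   # step iii: row averages
    out = np.full_like(M, np.nan)
    for j in range(M.shape[1]):
        n_valid = int(np.sum(~np.isnan(M[:, j])))
        # restore the average quantile values to the column's original order
        out[order[:n_valid, j], j] = mean_quantiles[:n_valid]
    return out

def rank_invariant_scale(R, ref):
    # R: g x s matrix; ref: length-g common reference profile (step ii).
    ref_rank = np.argsort(np.argsort(ref))
    invariant = np.ones(R.shape[0], dtype=bool)
    for j in range(R.shape[1]):
        col_rank = np.argsort(np.argsort(R[:, j]))
        invariant &= (col_rank == ref_rank)         # intersect over all samples
    alpha = R[invariant].mean(axis=0)               # step iii: (α1, ..., αs)
    # step iv (assumed form): scale each sample so its rank-invariant
    # average matches the mean of those averages.
    return R * (alpha.mean() / alpha)

For the within-sample stage, quantile_normalize is applied to each sample's k x p plate matrix; the within-normalized samples are then stacked into the k x n matrix N and normalized again between samples.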
UniCourt provides legal data on law firms that has been normalized by our AI and enriched with other public data sets to connect real-world law firms to their attorneys and clients, the judges they've faced, and the types of litigation they've handled across practice areas and state and federal (PACER) courts.
AI Normalized Law Firms
• UniCourt's AI locates and gathers variations of law firm names and spelling errors contained in court data and combines them with bar data, business data, and judge data to connect real-world law firms to their litigation.
• Avoid bad data caused by frequent law firm name changes due to firm mergers, named partners leaving, and firms dissolving, leading to lost business and bad analytics.
• UniCourt's unique normalized IDs for law firms let you quickly search for and download all of the litigation involving the specific firms you're interested in.
• Uncover the associations and relationships between law firms, their lawyers, their clients, judges, and their top practice areas across different jurisdictions.
Using APIs to Dig Deeper
• See a full list of all of the businesses and individuals a law firm has represented as clients in litigation.
• Easily vet the bench strength of law firms by looking at the volume and specific types of cases their lawyers have handled.
• Drill down into a law firm's experience to confirm which judges they've appeared before in court.
• Identify which law firms and lawyers a particular firm has faced as opposing counsel, and the judgments they obtained.
Bulk Access to Law Firm Data
• UniCourt's Law Firm Data API provides you with structured, cleaned, and organized legal data that you can easily connect to your case management systems, CRM, and other internal applications.
• Get bulk access to law firm Secretary of State registration data and the names, emails, phone numbers, and physical addresses for all of a firm's lawyers.
• Use our APIs to create tailored legal marketing campaigns for law firms and their attorneys with the exact practice area expertise and the right geographic coverage you want to target.
• Power your case research, business intelligence, and analytics with bulk access to litigation data for all the court cases a firm has handled and set up automated data feeds to find new cases they're involved in.
UniCourt's PACER API provides you with a real-time interface and bulk access to the entire PACER database of civil and criminal federal court data from U.S. District Courts, Bankruptcy Courts, Courts of Appeal, and more.
Our PACER API fully integrates with PACER data so you can streamline pulling the court data you need to automate your internal workflows while saving money on outrageous fees.
Leave behind PACER’s outdated search tools for a modern case search with the precision you need.
Search Smarter and Curb Costs
• With UniCourt's PACER API you can download the court data you need and lower your PACER costs by pulling data smarter.
• When you search for court cases using our API for PACER, your search results show (1) which cases are already available in UniCourt, (2) when they were added to our database and last updated, and (3) the UniCourt Case IDs for each case so you can easily pull any additional data you need.
• Don't pay for PACER data when you don't have to, and stop wasting time logging into PACER every day when there's a smarter way to search.
Bulk Access to PACER Data and Documents
• Get the complete historical data set you need for criminal and civil PACER data seamlessly integrated with all your internal applications and client-facing solutions.
• Leverage UniCourt's extensive free repository of case metadata, docket entries, and court documents to get bulk API access to PACER data without breaking your budget.
• Get bulk court data from PACER that has been normalized with our artificial intelligence and enriched with other public data sets like attorney bar data, Secretary of State data, and judicial data.
Track PACER Litigation at Scale
• Combine the power of UniCourt's PACER API with our Court Data API to track your litigation at scale.
• Automatically track PACER cases with ease and receive alerts when new docket updates are available, so you never miss a federal court filing.
• Save money on outrageous PACER fees by leveraging the sophisticated algorithms we've developed to intelligently track court cases in bulk without incurring over-the-top fees.
This data set contains energy use data from 2009-2014 for 139 municipally operated buildings. Metrics include: Site & Source EUI; annual electricity, natural gas, and district steam consumption; greenhouse gas emissions; and energy cost. Weather-normalized data enable building performance comparisons over time, despite unusual weather events.
License: Statistics Canada Open Licence (https://www.statcan.gc.ca/en/terms-conditions/open-licence)
This table contains 3526 series, with data for years 1987-2016 (not all combinations necessarily have data for all years). The data are described by the following dimensions (not all combinations are available): Geography (2176 items: Census Agricultural Region 1, Newfoundland and Labrador [1001]; Division No. 1, Subd. C, Newfoundland and Labrador [1001214]; Census Agricultural Region 2, Newfoundland and Labrador [1002]; Division No. 6, Subd. C, Newfoundland and Labrador [1006014]; ...); Land use (3 items: Total agriculture; Cropland; Grassland).
Files: yield_data, N_trial_data, greenseeker_comparison, farm_survey_data
License: Attribution 4.0 International (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Water quality data. These data have been normalised to their means over the time period, with a normalised mean of 100 (sketched below).
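A minimal numpy sketch of that normalisation, under the assumed reading that each series is divided by its period mean and scaled to 100 (the function name is illustrative):

import numpy as np

def normalise_to_mean_100(series):
    # Divide the series by its mean over the period so that the period
    # mean maps to 100 (assumed reading of the description above).
    series = np.asarray(series, dtype=float)
    return 100.0 * series / np.nanmean(series)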
The NASA Making Earth System Data Records for Use in Research Environments (MEaSUREs) Global Web-Enabled Landsat Data Annual (GWELDYR) Version 3.1 data product provides Landsat data at 30 meter (m) resolution for terrestrial non-Antarctica locations over annual reporting periods for the 1985, 1990, and 2000 epochs. GWELD data products are generated from all available Landsat 4 and 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) data in the U.S. Geological Survey (USGS) Landsat archive. The GWELD suite of products provides consistent data to derive land cover as well as geophysical and biophysical information for regional assessment of land surface dynamics.
The GWELD products include Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) for the reflective wavelength bands and top of atmosphere (TOA) brightness temperature for the thermal bands. The products are defined in the Sinusoidal coordinate system to promote continuity of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) land tile grid.
Provided in the GWELDYR product are layers for surface reflectance bands 1 through 5 and 7, TOA brightness temperature for thermal bands, Normalized Difference Vegetation Index (NDVI), day of year, ancillary angle, and data quality information. A low-resolution red, green, blue (RGB) browse image of bands 5, 4, 3 is also available for each granule.
Known Issues
* GWELDYR known issues can be found in Section 4 of the Algorithm Theoretical Basis Document (ATBD).
Dataset Summary
Persian data of this dataset is a collection of 400k blog posts (RohanAiLab/persian_blog). These posts have been gathered from more than 10 websites. This dataset can be used in different NLP tasks like language modeling, creating tokenizers, and text generation.
English data of this dataset is merged from the english-wiki-corpus dataset. Note: The data for both Persian… See the full description on the dataset page: https://huggingface.co/datasets/ali619/corpus-dataset-normalized-for-persian-and-english.
This regional land cover classification is based on the use of multitemporal 1-km Advanced Very High Resolution Radiometer (AVHRR) National Oceanic and Atmospheric Administration (NOAA 11) data that were analyzed in combination with selected Landsat Thematic Mapper (TM) and extensive field observations within a 619-km by 821-km subset of the 1,000-km by 1,000-km BOReal Ecosystem-Atmosphere Study (BOREAS) region (Steyaert et al., 1997). Following the approach developed by Loveland et al. (1991) for 1-km AVHRR land cover mapping in the conterminous United States, monthly Normalized Difference Vegetation Index (NDVI) image composites (April-September 1992) of this subset in the BOREAS region were used in an unsupervised image cluster analysis algorithm to develop an initial set of seasonal land cover classes. Extensive ground data with Global Positioning System (GPS) georeferencing, observations from low-level aerial flights over remote areas, and selected Landsat image composites for the study areas were analyzed to split, aggregate, and label the spectral-temporal clusters throughout the BOREAS region. Landsat TM image composites (bands 5, 4, and 3) were available for the 100-km by 100-km Northern Study Area (NSA) and Southern Study Area (SSA). This AVHRR land cover product was compared with Landsat TM land cover classifications for the BOREAS study areas (Steyaert et al., 1997). Companion files include example thumbnail images that may be viewed and the image data files downloaded using a convenient viewer utility.
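For reference, the NDVI used for these monthly composites is computed per pixel from the red and near-infrared bands; a minimal numpy sketch follows (the band handling is generic, not specific to the AVHRR product files):

import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red), ranging over [-1, 1]; dense green
    # vegetation scores high because chlorophyll absorbs red light while
    # leaf structure reflects near-infrared.
    nir = np.asarray(nir, dtype=np.float32)
    red = np.asarray(red, dtype=np.float32)
    return (nir - red) / (nir + red + 1e-12)  # epsilon avoids division by zero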
License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Normalized Digital Surface Model - 1m resolution. The dataset contains the 1m Normalized Digital Surface Model for the District of Columbia. These lidar data are processed classified LAS 1.4 files at USGS QL1 covering the District of Columbia. Some areas have limited data. The lidar dataset redaction was conducted under the guidance of the United States Secret Service. All data returns were removed from the dataset within the United States Secret Service redaction boundary except for classified ground points and classified water points.