Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A two-tailed Mann-Whitney U test was used for calculating p-values. Statistical power was calculated for a Student's t-test using statistical parameters of the log2-transformed expression data. Sample size (n) is the number per group needed to obtain a power of at least 0.8. Nppb: group 2 vs. group 3; Vcam1: group 1 vs. group 3.
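For illustration, a minimal R sketch of the described workflow; the example expression vectors are hypothetical, not taken from the dataset.
# Two-tailed Mann-Whitney U test plus a t-test power calculation on
# log2-transformed expression values (hypothetical example data).
set.seed(1)
group2 <- rnorm(8, mean = 5, sd = 1)   # log2 expression, e.g. Nppb, group 2
group3 <- rnorm(8, mean = 6, sd = 1)   # log2 expression, e.g. Nppb, group 3
# wilcox.test is two-sided by default (Mann-Whitney U for two samples)
wilcox.test(group2, group3)
# n per group needed for power >= 0.8, using the observed effect size
power.t.test(delta = abs(mean(group2) - mean(group3)),
             sd    = sd(c(group2 - mean(group2), group3 - mean(group3))),
             power = 0.8)$n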
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This source code was published as supporting material for the article: Ralph G. Andrzejak, Anaïs Espinoso, Eduardo García-Portugués, Arthur Pewsey, Jacopo Epifanio, Marc G. Leguia, Kaspar Schindler; High expectations on phase locking: Better quantifying the concentration of circular data. Chaos (2023); 33 (9): 091106. https://doi.org/10.1063/5.0166468
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset includes intermediate data from RiboBase that is used to generate translation efficiency (TE). The code to generate the files can be found at https://github.com/CenikLab/TE_model.
We have uploaded demo HeLa .ribo files; due to the large storage requirements of the full dataset, we recommend contacting Dr. Can Cenik directly to request access to the complete version of RiboBase if you need the original data.
A detailed explanation of each file:
human_flatten_ribo_clr.rda: ribosome profiling clr normalized data with GEO GSM ids in columns and genes in rows in human.
human_flatten_rna_clr.rda: matched RNA-seq clr normalized data with GEO GSM ids in columns and genes in rows in human.
human_flatten_te_clr.rda: TE clr data with GEO GSM ids in columns and genes in rows in human.
human_TE_cellline_all_plain.csv: TE clr data with genes in rows and cell lines in columns in human.
human_RNA_rho_new.rda: matched RNA-seq proportional similarity data as genes by genes matrix in human.
human_TE_rho.rda: TE proportional similarity data as genes by genes matrix in human.
mouse_flatten_ribo_clr.rda: ribosome profiling clr normalized data with GEO GSM ids in columns and genes in rows in mouse.
mouse_flatten_rna_clr.rda: matched RNA-seq clr normalized data with GEO GSM ids in columns and genes in rows in mouse.
mouse_flatten_te_clr.rda: TE clr data with GEO GSM ids in columns and genes in rows in mouse.
mouse_TE_cellline_all_plain.csv: TE clr data with genes in rows and cell lines in columns in mouse.
mouse_RNA_rho_new.rda: matched RNA-seq proportional similarity data as genes by genes matrix in mouse.
mouse_TE_rho.rda: TE proportional similarity data as genes by genes matrix in mouse.
All data passed quality control. There are 1054 human samples and 835 mouse samples, selected according to the following criteria:
* coverage > 0.1 X
* CDS percentage > 70%
* R2 between RNA and RIBO >= 0.188 (to remove outliers)
All ribosome profiling data here are non-deduplicated, winsorized data paired with deduplicated RNA-seq data without winsorizing (even though the files are named "flatten", that is just the naming format).
#### Code
To read the .rda files, use load("rdaname.rda") in R.
To calculate proportional similarity from clr data:
library(propr)
# rho (proportional similarity) computed from the clr-transformed matrix,
# giving the genes-by-genes matrices described above
human_TE_homo_rho <- propr:::lr2rho(as.matrix(clr_data))
# label both dimensions of the symmetric rho matrix with the gene names
rownames(human_TE_homo_rho) <- colnames(human_TE_homo_rho) <- rownames(clr_data)
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Real-time functional magnetic resonance imaging (rtfMRI) is a recently developed technique that demands fast data processing within a single repetition time (TR), such as a TR of 2 seconds. Data preprocessing in rtfMRI has rarely involved spatial normalization, which cannot be accomplished in a short time period. However, spatial normalization may be critical for accurate functional localization in a stereotactic space and is an essential procedure for some emerging applications of rtfMRI. In this study, we introduced an online spatial normalization method that combines a novel affine registration (AFR) procedure, based on principal axes registration (PA) and Gauss-Newton optimization (GN) with a self-adaptive β parameter and termed PA-GN(β) AFR, with nonlinear registration (NLR) based on the discrete cosine transform (DCT). In AFR, PA provides an appropriate initial estimate for GN to induce its rapid convergence. In addition, the β parameter, which relies on the change rate of the cost function, is employed to self-adaptively adjust the iteration step of GN. The accuracy and performance of PA-GN(β) AFR were confirmed using both simulation and real data and compared with traditional AFR. The appropriate cutoff frequency of the DCT basis functions in NLR was determined to balance the accuracy and computational load of the online spatial normalization. Finally, the validity of the online spatial normalization method was further demonstrated by brain activation in the rtfMRI data.
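The record includes no code; the R sketch below is only a conceptual illustration of a Gauss-Newton iteration whose step length is scaled self-adaptively from the change rate of the cost function, demonstrated on a toy curve-fitting problem. The beta update rule here is an assumption for illustration, not the paper's formula.
# Illustrative sketch (not the paper's implementation): Gauss-Newton with a
# step length beta adapted to the change rate of the cost function.
# Toy problem: fit y = a * exp(b * t) by nonlinear least squares.
gn_adaptive <- function(t, y, par, iters = 50) {
  beta <- 1
  cost <- function(p) sum((y - p[1] * exp(p[2] * t))^2)
  prev <- cost(par)
  for (i in seq_len(iters)) {
    r <- y - par[1] * exp(par[2] * t)              # residuals
    J <- cbind(-exp(par[2] * t),                   # d r / d a
               -par[1] * t * exp(par[2] * t))      # d r / d b
    step <- as.vector(solve(crossprod(J), crossprod(J, r)))
    cand <- par - beta * step                      # damped GN update
    if (cost(cand) < prev) {
      rate <- (prev - cost(cand)) / prev           # change rate of the cost
      beta <- min(1, beta * (1 + rate))            # fast descent: grow step
      par <- cand
      prev <- cost(cand)
    } else {
      beta <- beta / 2                             # overshoot: shrink step
    }
  }
  par
}
set.seed(1)
tt <- seq(0, 1, length.out = 20)
yy <- 2 * exp(1.5 * tt) + rnorm(20, sd = 0.05)
gn_adaptive(tt, yy, par = c(1, 1))                 # approaches a = 2, b = 1.5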
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
This dataset includes information about the level of impact of Connect SoCal 2024 on transit travel distances and travel times in each Transportation Analysis Zone (TAZ) of the SCAG region, based on 2050 estimates from SCAG's Travel Demand Model. This dataset was prepared to share more information from the maps in the Connect SoCal 2024 Equity Analysis Technical Report. The development of this layer for the Equity Data Hub involved consolidating information, which led to a minor refinement in the normalization calculation: Baseline population is used to normalize Baseline PHT/PMT, and Plan population is used to normalize Plan PHT/PMT. In the Equity Analysis Technical Report, only Plan population is used to normalize the change in transit PHT. This minor change does not affect the conclusions of the report. For more details on the methodology, please see the methodology section(s) of the Equity Analysis Technical Report: https://scag.ca.gov/sites/main/files/file-attachments/23-2987-tr-equity-analysis-final-040424.pdf?1712261887. For more details about SCAG's models, or to request model data, please see SCAG's website: https://scag.ca.gov/data-services-requests
This metadata record describes 99 streamflow (referred to as flow) metrics calculated from the observed flow records at 1851 streamflow gauges across the conterminous United States from 1950 to 2018. These metrics are often used as dependent variables in statistical models to make predictions of flow metrics at ungaged locations. Specifically, this record describes (1) the U.S. Geological Survey streamgauge identification number, (2) the 1-, 7-, and 30-day consecutive minimum flows normalized by drainage area, DA (Q1/DA, Q7/DA, and Q30/DA [cfs/sq km]), (3) the 1st, 10th, 25th, 50th, 75th, 90th, and 99th nonexceedence flows normalized by DA (P01/DA, P10/DA, P25/DA, P50/DA, P75/DA, P90/DA, P99/DA [cfs/sq km]), (4) the annual mean flows normalized by DA (Mean/DA [cfs/sq km]), (5) the coefficient of variation of the annual minimum and maximum flows (Vmin and Vmax [dimensionless]) and the average annual duration of flow pulses less than P10 and greater than P90 (Dl and Dh [number of days]), (6) the average annual number of flow pulses less than P10 and greater than P90 (Fl and Fh [number of events]), (7) the average annual skew of daily flows (Skew [dimensionless]), (8) the number of days where flow is greater than on the previous day, divided by the total number of days (daily rises [dimensionless]), (9) the low- and high-flow timing metrics for winter, spring, summer, and fall (Winter_Tl, Spring_Tl, Summer_Tl, Fall_Tl, Winter_Th, Spring_Th, Summer_Th, and Fall_Th [dimensionless]), (10) the monthly nonexceedence flows normalized by DA (JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, and DEC P'X'/DA, where 'X' = 10, 20, 50, 80, and 90 [cfs/sq km]), and (11) the monthly mean flows normalized by DA (JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, and DEC mean/DA [cfs/sq km]). For more details on flow metrics (2) through (8) and (11), please see Eng, K., Grantham, T.E., Carlisle, D.M., and Wolock, D.M., 2017, Predictability and selection of hydrologic metrics in riverine ecohydrology: Freshwater Science, v. 36(4), p. 915-926 [Also available at https://doi.org/10.1086/694912]. For more details on (9), please see Eng, K., Carlisle, D.M., Grantham, T.E., Wolock, D.M., and Eng, R.L., 2019, Severity and extent of alterations to natural streamflow regimes based on hydrologic metrics in the conterminous United States, 1980-2014: U.S. Geological Survey Scientific Investigations Report 2019-5001, 25 p. [Also available at https://doi.org/10.3133/sir20195001]. For (10), all daily flow values for the month of interest across all years are ranked in descending order, and the flow values associated with 10, 20, 50, 80, and 90 percent of all flow values are assigned as the monthly percentile values. The data are in a tab-delimited text format.
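For illustration, a minimal R sketch of the metric family in (10), under one plausible reading of the ranking step (the percent points of the pooled daily flows for each month, normalized by DA); the data frame, column names, and synthetic flows are hypothetical.
# Monthly nonexceedance flows normalized by drainage area (sketch).
# 'flow' is a hypothetical data frame with columns date (Date) and q (cfs);
# 'da_sqkm' is the gauge drainage area in square kilometers.
monthly_nonexceedance <- function(flow, da_sqkm,
                                  probs = c(0.10, 0.20, 0.50, 0.80, 0.90)) {
  # pool daily flows by calendar month across all years (English locale assumed)
  m <- factor(months(flow$date, abbreviate = TRUE), levels = month.abb)
  sapply(split(flow$q, m), function(q) quantile(q, probs) / da_sqkm)
}
set.seed(2)
flow <- data.frame(date = seq(as.Date("1950-01-01"), as.Date("2018-12-31"), by = "day"))
flow$q <- rexp(nrow(flow), rate = 0.1)   # synthetic daily flows (cfs)
monthly_nonexceedance(flow, da_sqkm = 250)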
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Technical biases are introduced into omics data sets during data generation and interfere with the ability to study biological mechanisms. Several normalization approaches have been proposed to minimize the effects of such biases, but fluctuations in the electrospray current during liquid chromatography-mass spectrometry gradients cause local and sample-specific bias not considered by most approaches. Here we introduce software named NormalyzerDE that includes a generic retention time (RT)-segmented approach, compatible with a wide range of global normalization approaches, to reduce the effects of time-resolved bias. The software offers straightforward access to multiple normalization methods, allows for data set evaluation and normalization quality assessment, and supports subsequent or independent differential expression analysis using the empirical Bayes Limma approach. When evaluated on two spike-in data sets, the RT-segmented approaches outperformed conventional approaches by detecting more peptides (8-36%) without loss of precision. Furthermore, differential expression analysis using the Limma approach consistently increased recall (2-35%) compared to analysis of variance. The combination of RT-normalization and Limma was in one case able to distinguish 108% (2597 vs. 1249) more spike-in peptides than traditional approaches. NormalyzerDE provides widely usable tools for performing normalization and evaluating the outcome, and makes calculation of subsequent differential expression statistics straightforward. The program is available as a web server at http://quantitativeproteomics.org/normalyzerde.
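The sketch below is only a conceptual illustration of RT-segmented normalization (a global method, here median normalization, applied within retention-time segments); it is not NormalyzerDE's actual API, and all names and data are hypothetical.
# RT-segmented median normalization sketch: peptides are split into
# retention-time segments and sample medians are equalized within each
# segment, correcting time-resolved bias along the LC gradient locally.
rt_segmented_normalize <- function(intensities, rt, n_segments = 10) {
  seg <- cut(rt, breaks = n_segments)                 # RT segment per peptide
  normalized <- intensities
  for (s in levels(seg)) {
    idx <- which(seg == s)
    block <- log2(intensities[idx, , drop = FALSE])
    meds <- apply(block, 2, median, na.rm = TRUE)     # per-sample medians
    normalized[idx, ] <- 2^(sweep(block, 2, meds - mean(meds)))
  }
  normalized
}
set.seed(7)
X  <- matrix(2^rnorm(500 * 6, 20), nrow = 500)        # 500 peptides, 6 samples
rt <- runif(500, 5, 95)                               # retention times (min)
Xn <- rt_segmented_normalize(X, rt)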
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset was originated, conceived, and designed, and is maintained, by Xiaoke WANG, Zhiyun OUYANG and Yunjian LUO. To develop China's normalized tree biomass equation dataset, we carried out an extensive survey and critical review of the literature (from 1978 to 2013) on biomass equations developed in China. The dataset consists of 5924 biomass equations for nearly 200 species (Equation sheet) and their associated background information (General sheet), showing sound geographical, climatic, and forest vegetation coverage across China. The dataset is freely available for non-commercial scientific applications, provided it is appropriately cited. For further information, please read our Earth System Science Data article (https://doi.org/10.5194/essd-2019-1), or feel free to contact the authors.
(1) qPCR Gene Expression Data
The THP-1 cell line was sub-cloned and one clone (#5) was selected for its ability to differentiate relatively homogeneously in response to phorbol 12-myristate-13-acetate (PMA) (Sigma). THP-1.5 was used for all subsequent experiments. THP-1.5 cells were cultured in RPMI, 10% FBS, penicillin/streptomycin, 10 mM HEPES, 1 mM sodium pyruvate, 50 µM 2-mercaptoethanol. THP-1.5 cells were treated with 30 ng/ml PMA over a time course of 96 h. Total cell lysates were harvested in TRIzol reagent at 1, 2, 4, 6, 12, 24, 48, 72, and 96 hours, including an undifferentiated control. Undifferentiated cells were harvested in TRIzol reagent at the beginning of the LPS time course. One biological replicate was prepared for each time point. Total RNA was purified from TRIzol lysates according to the manufacturer's instructions.
Gene-specific primer pairs were designed using Primer3 software, with an optimal primer size of 20 bases, amplification size of 140 bp, and annealing temperature of 60°C. Primer sequences were designed for 2,396 candidate genes including four potential controls: GAPDH, beta actin (ACTB), beta-2-microglobulin (B2M), and phosphoglycerate kinase 1 (PGK1). The RNA samples were reverse transcribed to produce cDNA and then subjected to quantitative PCR using SYBR Green (Molecular Probes) on the ABI Prism 7900HT system (Applied Biosystems, Foster City, CA, USA) with a 384-well amplification plate; genes for each sample were assayed in triplicate. Reactions were carried out in 20 µL volumes in 384-well plates; each reaction contained 0.5 U of HotStar Taq DNA polymerase (Qiagen) and the manufacturer's 1× amplification buffer adjusted to a final concentration of 1 mM MgCl2, 160 µM dNTPs, 1/38000 SYBR Green I (Molecular Probes), 7% DMSO, 0.4% ROX Reference Dye (Invitrogen), 300 nM of each primer (forward and reverse), and 2 µL of 40-fold diluted first-strand cDNA synthesis reaction mixture (12.5 ng total RNA equivalent). Polymerase activation at 95°C for 15 min was followed by 40 cycles of 15 s at 94°C, 30 s at 60°C, and 30 s at 72°C. Dissociation curve analysis, which verifies that each PCR product is amplified from a single cDNA, was carried out in accordance with the manufacturer's protocol. Expression levels were reported as Ct values.
The large number of genes assayed and the replicate measures required that samples be distributed across multiple amplification plates, with an average of twelve plates per sample. Because it was envisioned that GAPDH would serve as a single-gene normalization control, this gene was included on each plate. All primer pairs were assayed in triplicate. Raw qPCR expression measures were quantified using Applied Biosystems SDS software and reported as Ct values. The Ct value represents the number of cycles or rounds of amplification required for the fluorescence of a gene or primer pair to surpass an arbitrary threshold; its magnitude is inversely proportional to the expression level, so a gene expressed at a high level will have a low Ct value and vice versa. Replicate Ct values were combined by averaging, with additional quality control constraints imposed by a standard filtering method developed by the RIKEN group for the preprocessing of their qPCR data. Briefly, this method entails:
1. Sort the triplicate Ct values in ascending order: Ct1, Ct2, Ct3. Calculate differences between consecutive Ct values: difference1 = Ct2 - Ct1 and difference2 = Ct3 - Ct2.
2. Four regions are defined (where Region4 overrides the other regions): Region1: difference ≤ 0.2; Region2: 0.2 < difference ≤ 1.0; Region3: 1.0 < difference; Region4: one of the Ct values in the difference calculation is 40. If difference1 and difference2 fall in the same region, the three replicate Ct values are averaged to give a final representative measure. If difference1 and difference2 are in different regions, the two replicate Ct values whose difference falls in the lower-numbered region are averaged instead.
This particular filtering method is specific to the data set used here and is not part of the normalization procedure itself; alternate filtering methods can be applied, if appropriate, prior to normalization. Moreover, while the presentation in this manuscript has used Ct values as an example, any measure of transcript abundance, including those corrected for primer efficiency, can be used as input to our data-driven methods.
(2) Quantile Normalization Algorithm
Quantile normalization proceeds in two stages. First, if samples are distributed across multiple plates, normalization is applied to all of the genes assayed for each sample to remove plate-to-plate effects by enforcing the same quantile distribution on each plate. Then, an overall quantile normalization is applied between samples, assuring that each sample has the same distribution of expression values as all of the other samples to be compared. A similar approach using quantile normalization has been previously described in the context of microarray normalization. Briefly, our method entails the following steps: i) qPCR data from a single RNA sample are stored in a matrix M of dimension k (maximum number of genes or primer pairs on a plate) rows by p (number of plates) columns. Plates with differing numbers of genes are made equivalent by padding plates with missing values to constrain M to a rectangular structure. ii) Each column is sorted into ascending order and stored in matrix M'. The sorted columns correspond to the quantile distribution of each plate. The missing values are placed at the end of each ordered column; all calculations in quantile normalization are performed on non-missing values. iii) The average quantile distribution is calculated by taking the average of each row in M'. Each column in M' is replaced by this average quantile distribution and rearranged to have the same ordering as the original row order in M. This gives the within-sample normalized data from one RNA sample. iv) Steps analogous to i-iii are repeated for each sample. Between-sample normalization is performed by storing the within-normalized data as a new matrix N of dimension k (total number of genes, in our example k = 2,396) rows by n (number of samples) columns. Steps ii and iii are then applied to this matrix.
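A compact R sketch of the between-sample stage just described (steps ii and iii applied to matrix N, genes in rows and samples in columns); for brevity the plate padding and missing-value handling are omitted, and the example data are synthetic.
# Quantile normalization sketch: every column (sample) is forced to share
# the average quantile distribution, preserving each column's ordering.
quantile_normalize <- function(N) {
  ranks  <- apply(N, 2, rank, ties.method = "first")  # original orderings
  sorted <- apply(N, 2, sort)                         # per-sample quantiles
  target <- rowMeans(sorted)                          # average distribution
  apply(ranks, 2, function(r) target[r])              # re-map to original order
}
set.seed(3)
N  <- matrix(rnorm(2396 * 10, mean = 25, sd = 3), nrow = 2396)  # Ct values
Nq <- quantile_normalize(N)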
(3) Rank-Invariant Set Normalization Algorithm
We describe an extension of this method for use on qPCR data with any number of experimental conditions or samples, in which we identify a set of stably expressed genes from within the measured expression data and then use these to adjust expression between samples. Briefly: i) qPCR data from all samples are stored in a matrix R of dimension g (total number of genes or primer pairs used for all plates) rows by s (total number of samples) columns. ii) We first select gene sets that are rank-invariant across a single sample compared to a common reference. The reference may be chosen in a variety of ways, depending on the experimental design and aims of the experiment. As described in Tseng et al., the reference may be designated as a particular sample from the experiment (e.g., time zero in a time-course experiment), the average or median of all samples, or the sample closest to the average or median of all samples. Genes are considered rank-invariant if they retain their ordering, or rank, with respect to expression in the experimental sample versus the common reference sample. We collect sets of rank-invariant genes for all of the s pairwise comparisons relative to the common reference and take the intersection of all s sets to obtain the final set of rank-invariant genes used for normalization. iii) Let αj represent the average expression value of the rank-invariant genes in sample j; (α1, …, αs) then represents the vector of rank-invariant average expression values for all conditions 1 to s. iv) We calculate the scale f…
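Because the scale-factor formula is truncated in this record, the R sketch below fills in that step with a common choice (dividing each sample by αj / mean(α)) and uses a tolerance-based notion of rank invariance; both are assumptions made for illustration only.
# Rank-invariant set normalization sketch (genes in rows, samples in columns).
rank_invariant_normalize <- function(R, ref = rowMeans(R), tol = 0.05) {
  g <- nrow(R)
  invariant <- rep(TRUE, g)
  for (j in seq_len(ncol(R))) {
    # keep genes whose rank in sample j stays within tol * g of their rank
    # in the common reference (intersection across all samples)
    invariant <- invariant & (abs(rank(R[, j]) - rank(ref)) <= tol * g)
  }
  alpha <- colMeans(R[invariant, , drop = FALSE])  # alpha_j per sample
  sweep(R, 2, alpha / mean(alpha), "/")            # assumed scaling step
}
set.seed(5)
mu <- rnorm(2396, mean = 25, sd = 3)                        # gene baselines
R  <- mu + matrix(rnorm(2396 * 9, sd = 0.3), nrow = 2396)   # 9 samples
Rn <- rank_invariant_normalize(R)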
This dataset is an annual time series of Landsat Analysis Ready Data (ARD)-derived Normalized Difference Water Index (NDWI) computed from Landsat 5 Thematic Mapper (TM) and Landsat 8 Operational Land Imager (OLI). To ensure a consistent dataset, Landsat 7 has not been used because the Scan Line Corrector (SLC) failure creates gaps in the data. NDWI quantifies plant water content by measuring the difference between the Near-Infrared (NIR) and Short Wave Infrared (SWIR) (or Green) channels using this generic formula: (NIR - SWIR) / (NIR + SWIR). For Landsat sensors, this corresponds to the following bands: Landsat 5, NDWI = (Band 4 - Band 2) / (Band 4 + Band 2); Landsat 8, NDWI = (Band 5 - Band 3) / (Band 5 + Band 3). NDWI values range from -1 to +1. NDWI is a good proxy for plant water stress and is therefore useful for drought monitoring and early warning. NDWI is sometimes also referred to as the Normalized Difference Moisture Index (NDMI). The standard deviation is also provided for each time step. Data format: GeoTiff. This dataset has been generated with the Swiss Data Cube (http://www.swissdatacube.ch).
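As an illustration, the generic formula above can be applied directly to two band rasters; a minimal R sketch with the terra package (file names are hypothetical, and the band pairing follows the Landsat 8 mapping given above):
# NDWI sketch: (NIR - Green) / (NIR + Green) for Landsat 8 bands 5 and 3.
library(terra)
nir   <- rast("landsat8_band5.tif")     # hypothetical NIR band raster
green <- rast("landsat8_band3.tif")     # hypothetical Green band raster
ndwi  <- (nir - green) / (nir + green)  # values range from -1 to +1
writeRaster(ndwi, "ndwi.tif", overwrite = TRUE)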
CERN-LHC. Measurements of normalized differential cross-sections for top-quark pair production are presented as a function of the top-quark transverse momentum, and of the mass, transverse momentum, and rapidity of the ttbar system, in proton-proton collisions at a center-of-mass energy of sqrt(s) = 7 TeV. The dataset corresponds to an integrated luminosity of 4.6 fb$^{-1}$, recorded in 2011 with the ATLAS detector at the CERN Large Hadron Collider. Events are selected in the lepton+jets channel, requiring exactly one lepton and at least four jets, with at least one of the jets tagged as originating from a b-quark. The measured spectra are corrected for detector efficiency and resolution effects and are compared to several Monte Carlo simulations and theory calculations. The results are in fair agreement with the predictions over a wide kinematic range. Nevertheless, data distributions are softer than predicted for higher values of the mass of the $t\bar{t}$ system and of the top-quark transverse momentum. The measurements can also discriminate among different sets of parton distribution functions. UPDATE (15 JUN 2015): increased the number of significant digits and added full covariance matrices. The reported values of the normalized differential cross-sections and the corresponding covariance matrices are given with a large number of significant digits, since the constraints set by the normalization condition need to be accurately fulfilled for a correct evaluation of the chi2. In particular, since the normalization condition lowers the rank of the covariance matrix by one, the chi2 should be calculated by removing one measurement and the corresponding row and column of the covariance matrix (see Section XI). The normalization constraints guarantee that the chi2 does not depend on the removed measurement.
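The prescription above can be written out directly; a short R sketch with hypothetical inputs (measured and predicted normalized cross-sections plus their covariance matrix):
# chi2 for a normalized spectrum: drop one bin (here the last) and its
# row/column of the covariance matrix before inverting, since the
# normalization condition makes the full covariance matrix rank-deficient.
chi2_normalized <- function(meas, pred, cov, drop = length(meas)) {
  keep <- setdiff(seq_along(meas), drop)
  d <- meas[keep] - pred[keep]
  as.numeric(t(d) %*% solve(cov[keep, keep]) %*% d)
}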
This layer includes Landsat 8 and 9 imagery rendered on-the-fly as Normalized Difference Moisture Index (NDMI) Colorized for use in visualization and analysis. This layer is time enabled and includes a number of band combinations and indices rendered on demand. The imagery includes eight multispectral bands from the Operational Land Imager (OLI) and two bands from the Thermal Infrared Sensor (TIRS). It is updated daily with new imagery directly sourced from the USGS Landsat collection on AWS.

Geographic Coverage: Global land surface. Polar regions are available in polar-projected imagery layers: Landsat Arctic Views and Landsat Antarctic Views.

Temporal Coverage: This layer is updated daily with new imagery. Working in tandem, Landsat 8 and 9 revisit each point on Earth's land surface every 8 days. Most images collected from January 2015 to present are included. Approximately 5 images for each path/row from 2013 and 2014 are also included.

Product Level: The Landsat 8 and 9 imagery in this layer is comprised of Collection 2 Level-1 data. The imagery has Top of Atmosphere (TOA) correction applied. TOA is applied using the radiometric rescaling coefficients provided by the USGS. The TOA reflectance values (ranging 0 - 1 by default) are scaled using a range of 0 - 10,000.

Image Selection/Filtering: A number of fields are available for filtering, including Acquisition Date, Estimated Cloud Cover, and Product ID. To isolate and work with specific images, either use the 'Image Filter' to create custom layers or add a 'Query Filter' to restrict the default layer display to a specified image or group of images.

Visual Rendering: Default rendering is Normalized Difference Moisture Index Colorized, calculated as (b5 - b6)/(b5 + b6) with a colormap applied. Wetlands and moist areas appear in blues, and dry areas in deep yellow and brown. Raster Functions enable on-the-fly rendering of band combinations and calculated indices from the source imagery. The DRA version of each layer enables visualization of the full dynamic range of the images. Other pre-defined Raster Functions can be selected via the renderer drop-down, or custom functions can be created. This layer is part of a larger collection of Landsat Imagery Layers that you can use to perform a variety of mapping analysis tasks. Pre-defined functions: Natural Color with DRA, Agriculture with DRA, Geology with DRA, Color Infrared with DRA, Bathymetric with DRA, Short-wave Infrared with DRA, Normalized Difference Moisture Index Colorized, NDVI Raw, NDVI Colorized, NBR Raw. 15 meter Landsat Imagery Layers are also available: Panchromatic and Pansharpened.

Multispectral Bands: The table below lists all available multispectral OLI bands.
Normalized Difference Moisture Index consumes bands 5 and 6.

Band | Description | Wavelength (µm) | Spatial Resolution (m)
1 | Coastal aerosol | 0.43 - 0.45 | 30
2 | Blue | 0.45 - 0.51 | 30
3 | Green | 0.53 - 0.59 | 30
4 | Red | 0.64 - 0.67 | 30
5 | Near Infrared (NIR) | 0.85 - 0.88 | 30
6 | SWIR 1 | 1.57 - 1.65 | 30
7 | SWIR 2 | 2.11 - 2.29 | 30
8 | Cirrus (in OLI this is band 9) | 1.36 - 1.38 | 30
9 | QA Band (available with Collection 1)* | NA | 30

*More about the Quality Assessment Band

TIRS Bands:

Band | Description | Wavelength (µm) | Spatial Resolution (m)
10 | TIRS1 | 10.60 - 11.19 | 100 * (30)
11 | TIRS2 | 11.50 - 12.51 | 100 * (30)

*TIRS bands are acquired at 100 meter resolution but are resampled to 30 meter in the delivered data product.

Additional Usage Notes: Image exports are limited to 4,000 columns x 4,000 rows per request. This dynamic imagery layer can be used in Web Maps and ArcGIS Pro as well as web and mobile applications using the ArcGIS REST APIs. WCS and WMS compatibility means this imagery layer can be consumed as WCS or WMS services. The Landsat Explorer App is another way to access and explore the imagery. This layer is part of a larger collection of Landsat Imagery Layers.

Data Source: Landsat imagery is sourced from the U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA). Data is hosted by Amazon Web Services as part of their Public Data Sets program. For information, see Landsat 8 and Landsat 9.
This visualization product displays the density of floating micro-litter per net, normalized in grams per km² per year, from research and monitoring protocols. EMODnet Chemistry included the collection of marine litter in its 3rd phase. Before 2021, there was no coordinated effort at the regional or European scale for micro-litter. Given this situation, EMODnet Chemistry proposed to adopt the data gathering and data management approach generally applied for marine data, i.e., populating metadata and data in the CDI Data Discovery and Access service using dedicated SeaDataNet data transport formats. EMODnet Chemistry is currently the official EU collector of micro-litter data from Marine Strategy Framework Directive (MSFD) national monitoring activities (descriptor 10). A series of specific standard vocabularies or standard terms related to micro-litter have been added to the SeaDataNet NVS (NERC Vocabulary Server) Common Vocabularies to describe micro-litter. European micro-litter data are collected by the National Oceanographic Data Centres (NODCs). Micro-litter map products are generated from NODC data after testing of the aggregated collection, including data and data-format checks and data harmonization.
A filter is applied to represent only micro-litter sampled according to research and monitoring protocols as MSFD monitoring.
Densities were calculated for each net using the following calculation: Density (weight of particles per km²) = Micro-litter weight / (Sampling effort (km) * Net opening (cm) * 0.00001)
When information about the sampling effort (km) was lacking and point coordinates were known (start and end of the sampling), the sampling effort was calculated using the PostGIS ST_DistanceSpheroid function with a WGS84 measurement spheroid. When the weight of the micro-litter or the net opening was not filled in, it was not possible to calculate the density.
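For illustration, a small R helper implementing the density formula above (the values in the example are hypothetical):
# Density formula from above: effort (km) times net opening (cm) times
# 0.00001 converts the opening to km, giving the swept area in km².
# Example: a 60 cm opening towed 2 km sweeps 2 * 60 * 0.00001 = 0.0012 km².
microlitter_density <- function(weight_g, effort_km, opening_cm) {
  weight_g / (effort_km * opening_cm * 0.00001)   # grams per km²
}
microlitter_density(weight_g = 0.3, effort_km = 2, opening_cm = 60)  # 250 g/km²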
Percentiles 50, 75, 95 & 99 have been calculated taking into account data for all years.
Warning: the absence of data on the map does not necessarily mean that the data do not exist; it may simply mean that no information has been entered into the National Oceanographic Data Centre (NODC) for this area.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘Residential Existing Homes (One to Four Units) Energy Efficiency Meter Evaluated Project Data: 2007 – 2012’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://catalog.data.gov/dataset/f0c9c585-5788-4b49-83b9-3733ea7b5e30 on 12 February 2022.
--- Dataset description provided by original source is as follows ---
IMPORTANT! PLEASE READ DISCLAIMER BEFORE USING DATA. This dataset backcasts estimated modeled savings for a subset of 2007-2012 completed projects in the Home Performance with ENERGY STAR® Program against normalized savings calculated by an open source energy efficiency meter available at https://www.openee.io/. Open source code uses utility-grade metered consumption to weather-normalize the pre- and post-consumption data using standard methods with no discretionary independent variables. The open source energy efficiency meter allows private companies, utilities, and regulators to calculate energy savings from energy efficiency retrofits with increased confidence and replicability of results. This dataset is intended to lay a foundation for future innovation and deployment of the open source energy efficiency meter across the residential energy sector, and to help inform stakeholders interested in pay for performance programs, where providers are paid for realizing measurable weather-normalized results. To download the open source code, please visit the website at https://github.com/openeemeter/eemeter/releases
D I S C L A I M E R: Normalized Savings using open source OEE meter. Several data elements, including Evaluated Annual Electric Savings (kWh), Evaluated Annual Gas Savings (MMBtu), Pre-retrofit Baseline Electric (kWh), Pre-retrofit Baseline Gas (MMBtu), Post-retrofit Usage Electric (kWh), and Post-retrofit Usage Gas (MMBtu), are direct outputs from the open source OEE meter.
Home Performance with ENERGY STAR® Estimated Savings. Several data elements, including Estimated Annual kWh Savings, Estimated Annual MMBtu Savings, and Estimated First Year Energy Savings, represent contractor-reported savings derived from energy modeling software calculations and not actual realized energy savings. The accuracy of the Estimated Annual kWh Savings and Estimated Annual MMBtu Savings for projects has been evaluated by an independent third party. The results of the Home Performance with ENERGY STAR impact analysis indicate that, on average, actual savings amount to 35 percent of the Estimated Annual kWh Savings and 65 percent of the Estimated Annual MMBtu Savings. For more information, please refer to the Evaluation Report published on NYSERDA's website at: http://www.nyserda.ny.gov/-/media/Files/Publications/PPSER/Program-Evaluation/2012ContractorReports/2012-HPwES-Impact-Report-with-Appendices.pdf.
This dataset includes the following data points for a subset of projects completed in 2007-2012: Contractor ID, Project County, Project City, Project ZIP, Climate Zone, Weather Station, Weather Station-Normalization, Project Completion Date, Customer Type, Size of Home, Volume of Home, Number of Units, Year Home Built, Total Project Cost, Contractor Incentive, Total Incentives, Amount Financed through Program, Estimated Annual kWh Savings, Estimated Annual MMBtu Savings, Estimated First Year Energy Savings, Evaluated Annual Electric Savings (kWh), Evaluated Annual Gas Savings (MMBtu), Pre-retrofit Baseline Electric (kWh), Pre-retrofit Baseline Gas (MMBtu), Post-retrofit Usage Electric (kWh), Post-retrofit Usage Gas (MMBtu), Central Hudson, Consolidated Edison, LIPA, National Grid, National Fuel Gas, New York State Electric and Gas, Orange and Rockland, Rochester Gas and Electric.
How does your organization use this dataset? What other NYSERDA or energy-related datasets would you like to see on Open NY? Let us know by emailing OpenNY@nyserda.ny.gov.
--- Original source retains full ownership of the source dataset ---
The normalized digital surface model (nDSM) is the result of the difference calculation between the digital surface model (DOM, raster data) and the digital terrain model (DTM, raster data) from 2014 (LiDAR). The data have a resolution of 50 cm.
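As a sketch, the computation reduces to a single raster difference; the R example below uses the terra package, and the file names are hypothetical:
# nDSM = DOM - DTM, at 50 cm resolution (hypothetical file names).
library(terra)
dom  <- rast("dom_2014.tif")   # digital surface model
dtm  <- rast("dtm_2014.tif")   # digital terrain model
ndsm <- dom - dtm              # normalized digital surface model
writeRaster(ndsm, "ndsm_2014.tif", overwrite = TRUE)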
The dataset presented corresponds to a study investigating mixture toxicity between two common aquatic contaminants, microcystin-LR (MCLR) and benzo[a]pyrene (BaP), on the detoxification system of fish. We used the Poeciliopsis lucida hepatocellular carcinoma (PLHC-1) cell line as a model for fish liver cells.
Cells were exposed to MCLR (0.01, 1 µM), BaP (0.01, 0.1, 1 µM), or combinations of both chemicals for periods ranging from 1 to 48 hours. We measured the following endpoints: - Cytochrome P450 1A (CYP1A) function and regulation, assessing ethoxyresorufin-O-deethylase (EROD) activity and CYP1A mRNA expression. EROD activity was normalized to protein content. - P-glycoprotein (Pgp) function and expression, using the rhodamine 123 (Rh123) accumulation assay and Pgp mRNA expression. Rh123 accumulation was normalized to protein content. - Cytotoxicity, using two fluorescent indicators, 5-carboxyfluorescein diacetate acetoxymethyl ester (CFDA-AM) and Alamar Blue (AB).
The dataset includes the following information:
1) CYP1A Activity: This section contains raw data used to calculate CYP1A activity, expressed as specific EROD activity. The EROD assay is based on the ability of the CYP1A enzyme to catalyze the O-deethylation of ethoxyresorufin to resorufin, a fluorescent product. Resorufin production was monitored over time and quantified using a resorufin standard curve. Each sample value was normalized to total protein content, which was measured using fluorescamine with bovine serum albumin (BSA) as the protein standard. The files include: - EROD slopes for each sample - Protein concentration per sample - Specific EROD activity - Standard curves (for BSA and resorufin). Additionally, the dataset includes results from an experiment involving pre-induction of EROD activity using β-naphthoflavone (BNF). Detailed descriptions of measurement conditions and calculation methods are provided in the accompanying README file (EROD_readme).
2) Rhodamine 123 Accumulation: This section includes raw data for calculating rhodamine 123 (Rh123) accumulation in cells, expressed as fluorescent units (FU) per mg of protein. The Rh123 accumulation assay measures the ability of the P-glycoprotein (Pgp) transporter to efflux this fluorescent compound from cells. When chemicals interfere with Pgp transport function, Rh123 accumulates within the cells. The files include: - Rh123 fluorescence values (FU) per sample - Protein concentration per sample - BSA standard curves for protein quantification Detailed descriptions of the measurement conditions and calculations are provided in the accompanying README file (Rh123_readme).
3) Cytotoxicity: Cytotoxicity was assessed by measuring mitochondrial activity and cell membrane integrity using two fluorescent indicators, Alamar Blue (AB) for mitochondrial activity and CFDA-AM for membrane integrity. This section includes raw fluorescence data (in fluorescent units, FU), which were used to express cytotoxicity as a percentage of FU relative to the control treatment. Detailed descriptions of measurement conditions and calculations are provided in the README file (cytotox_readme).
4) qPCR Data: This section contains raw cycle threshold (Ct) values from quantitative real-time polymerase chain reaction (qPCR) used to estimate the mRNA levels of the target genes (CYP1A and Pgp). The data are normalized to a reference housekeeping gene, 18S, and presented as fold increase relative to the control treatment. Detailed descriptions of measurement conditions and normalization calculations are provided in the README file (qPCR_readme).
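Normalization to a reference gene with fold increase relative to control is conventionally done with the delta-delta-Ct method; the R sketch below assumes that approach (the qPCR_readme holds the exact calculations), with hypothetical example values.
# Delta-delta-Ct sketch (assumed method, not taken from the dataset):
# fold increase of a target gene relative to control, normalized to 18S.
fold_increase <- function(ct_target_trt, ct_18s_trt,
                          ct_target_ctl, ct_18s_ctl) {
  d_trt <- ct_target_trt - ct_18s_trt   # delta Ct, treated
  d_ctl <- ct_target_ctl - ct_18s_ctl   # delta Ct, control
  2^-(d_trt - d_ctl)                    # delta-delta Ct -> fold change
}
fold_increase(ct_target_trt = 24.1, ct_18s_trt = 12.0,
              ct_target_ctl = 26.3, ct_18s_ctl = 12.1)   # ~4.3-fold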
Lipidomics is an emerging field with significant potential for improving clinical diagnosis and our understanding of health and disease. While the diverse biological roles of lipids contribute to their clinical utility, the unavailability of lipid internal standards representing each species makes lipid quantitation analytically challenging. The common approach is to employ one or more internal standards for each lipid class examined and use a single-point calibration for normalization (relative quantitation). To aid in standardizing and automating this relative quantitation process, we developed LipidMatch Normalizer (LMN), http://secim.ufl.edu/secim-tools/, which can be used in most open source lipidomics workflows. While the effect of lipid structure on relative quantitation has been investigated, applying LMN we show that data processing can significantly affect lipid semi-quantitative amounts. Polarity and adduct choice had the greatest effect on normalized levels; when calculated using positive versus negative ion mode data, one fourth of lipids had greater than 50% difference in normalized levels. Based on our study, sodium adducts should not be used for statistics when sodium is not added intentionally to the system, as lipid levels calculated using sodium adducts did not correlate with lipid levels calculated using any other adduct. Relative quantification using smoothing versus not smoothing, and peak area versus peak height, showed minimal differences, except when using peak area for overlapping isomers that were difficult to deconvolute. By characterizing sources of variation introduced during data processing and introducing automated tools, this work helps increase throughput and improve data quality for determining relative changes across groups.
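The single-point calibration described above reduces to a simple ratio per lipid class; a minimal R sketch with hypothetical column names and values (not LMN's actual implementation):
# Class-based single-point calibration: analyte intensity divided by the
# intensity of its class's internal standard, scaled by the spiked
# standard concentration (hypothetical data).
relative_quant <- function(peaks, standards) {
  # peaks: data frame with columns lipid, class, intensity
  # standards: data frame with columns class, is_intensity, is_conc
  m <- merge(peaks, standards, by = "class")
  m$amount <- m$intensity / m$is_intensity * m$is_conc
  m[, c("lipid", "class", "amount")]
}
peaks <- data.frame(lipid = c("PC 34:1", "PC 36:2", "TG 52:2"),
                    class = c("PC", "PC", "TG"),
                    intensity = c(1.8e6, 9.5e5, 3.2e6))
standards <- data.frame(class = c("PC", "TG"),
                        is_intensity = c(1.0e6, 1.2e6),
                        is_conc = c(10, 5))   # e.g., µM spiked
relative_quant(peaks, standards)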
Dust deposition in the Southern Ocean constitutes a critical modulator of past global climate variability, but how it has varied temporally and geographically is underdetermined. Here, we present data sets of glacial-interglacial dust-supply cycles from the largest Southern Ocean sector, the polar South Pacific, indicating three times higher dust deposition during glacial periods than during interglacials for the past million years. Although the most likely dust source for the South Pacific is Australia and New Zealand, the glacial-interglacial pattern and timing of lithogenic sediment deposition is similar to dust records from Antarctica and the South Atlantic dominated by Patagonian sources. These similarities imply large-scale common climate forcings such as latitudinal shifts of the southern westerlies and regionally enhanced glaciogenic dust mobilization in New Zealand and Patagonia.
The values in this raster are unit-less scores ranging from 0 to 1 that represent normalized dollars-per-acre damage claims from elk on Wyoming lands. This raster is one of 9 inputs used to calculate the "Normalized Importance Index."