100+ datasets found
  1. LC-MS analysis data (average of normalized values ± standard error)

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Apr 26, 2019
    Cite
    Van Cura, Matthew; Lohmar, Jessica M.; Puel, Olivier; Myers, Ryan; Nepal, Binita; Calvo, Ana M.; Thompson, Brett (2019). LC-MS analysis data (average of normalized values ± standard error). [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000115298
    Authors
    Van Cura, Matthew; Lohmar, Jessica M.; Puel, Olivier; Myers, Ryan; Nepal, Binita; Calvo, Ana M.; Thompson, Brett
    Description

    LC-MS analysis data (average of normalized values ± standard error).

  2. Normalized Somalogic proteome expression values

    • figshare.com
    txt
    Updated Nov 18, 2024
    Cite
    Klaus Schughart (2024). Normalized Somalogic proteome expression values [Dataset]. http://doi.org/10.6084/m9.figshare.27826857.v1
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Klaus Schughart
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    EDTA blood samples were collected from participants, cells were removed by centrifugation, and supernatants and pellets were stored at −80 °C until analysis. Plasma was centrifuged for 15 min at 2200 × g, and 60 µL of supernatant was used for the SOMAscan assay performed by SomaLogic (Boulder, CO) as described previously (Gold et al., 2010; Han et al., 2018; Tin et al., 2019; Yang et al., 2020). Raw signals were then normalized as described (Gold et al., 2010; Han et al., 2018). These steps include hybridization normalization, plate scaling and calibration, and adaptive normalization by maximum likelihood (ANML), which normalizes SomaScan EDTA plasma measurements to a healthy U.S. population reference; the data were then log2 transformed, resulting in the data file sbst3_norm_SIG_Somalogic_UTHSC_2021_291122.txt.
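
    The final step described above is a log2 transform of the normalized signals. A minimal sketch (the signal values below are hypothetical, not taken from this dataset):

    ```python
    import numpy as np

    # Hypothetical ANML-normalized SOMAscan signal values (RFU).
    normalized_rfu = np.array([512.0, 1024.0, 2048.0])

    # Log2 transform, as described in the processing steps above.
    log2_signal = np.log2(normalized_rfu)  # → [9.0, 10.0, 11.0]
    ```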

  3. episodic-value-normalized

    • huggingface.co
    Updated Sep 6, 2025
    Cite
    Andy Tang (2025). episodic-value-normalized [Dataset]. https://huggingface.co/datasets/windfromthenorth/episodic-value-normalized
    Authors
    Andy Tang
    Description

    The windfromthenorth/episodic-value-normalized dataset is hosted on Hugging Face and was contributed by the HF Datasets community.

  4. Price Optimization (Normalized)

    • dataverse.harvard.edu
    • search.dataone.org
    Updated May 6, 2025
    Cite
    Diomar Anez; Dimar Anez (2025). Price Optimization (Normalized) [Dataset]. http://doi.org/10.7910/DVN/URFT2I
    Available download format: Croissant, a format for machine-learning datasets (mlcommons.org/croissant).
    Dataset provided by
    Harvard Dataverse
    Authors
    Diomar Anez; Dimar Anez
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset provides processed and normalized/standardized indices for the management tool group focused on 'Price Optimization', including related concepts like Dynamic Pricing and Price Optimization Models. Derived from five distinct raw data sources, these indices are specifically designed for comparative longitudinal analysis, enabling the examination of trends and relationships across different empirical domains (web search, literature, academic publishing, and executive adoption). The data presented here represent transformed versions of the original source data, aimed at achieving metric comparability. Users requiring the unprocessed source data should consult the corresponding Price Optimization dataset in the Management Tool Source Data (Raw Extracts) Dataverse.

    Data Files and Processing Methodologies:

    • Google Trends File (Prefix: GT_): Normalized Relative Search Interest (RSI). Input Data: Native monthly RSI values from Google Trends (Jan 2004 - Jan 2025) for the query "price optimization" + "dynamic pricing" + "price optimization strategy". Processing: None; utilizes the original base-100 normalized Google Trends index. Output Metric: Monthly Normalized RSI (Base 100). Frequency: Monthly.

    • Google Books Ngram Viewer File (Prefix: GB_): Normalized Relative Frequency. Input Data: Annual relative frequency values from Google Books Ngram Viewer (1950-2022, English corpus, no smoothing) for the query Price Optimization + Pricing Optimization + Dynamic Pricing Models + Optimal Pricing + Dynamic Pricing. Processing: Annual relative frequency series normalized (peak year = 100). Output Metric: Annual Normalized Relative Frequency Index (Base 100). Frequency: Annual.

    • Crossref.org File (Prefix: CR_): Normalized Relative Publication Share Index. Input Data: Absolute monthly publication counts matching Price Optimization-related keywords [("price optimization" OR ...) AND (...) - see raw data for full query] in titles/abstracts (1950-2025), alongside total monthly Crossref publications. Deduplicated via DOIs. Processing: Monthly relative share calculated (Price Opt. Count / Total Count); monthly relative share series normalized (peak month's share = 100). Output Metric: Monthly Normalized Relative Publication Share Index (Base 100). Frequency: Monthly.

    • Bain & Co. Survey - Usability File (Prefix: BU_): Normalized Usability Index. Input Data: Original usability percentages (%) from Bain surveys for specific years: Price Optimization Models (2004, 2008, 2010, 2012, 2014, 2017). Note: Not reported before 2004 or after 2017. Processing: Original usability percentages normalized relative to the historical peak (Max % = 100). Output Metric: Biennial Estimated Normalized Usability Index (Base 100 relative to historical peak). Frequency: Biennial (Approx.).

    • Bain & Co. Survey - Satisfaction File (Prefix: BS_): Standardized Satisfaction Index. Input Data: Original average satisfaction scores (1-5 scale) from Bain surveys for specific years: Price Optimization Models (2004-2017). Note: Not reported before 2004 or after 2017. Processing: Standardization (Z-scores) using Z = (X - 3.0) / 0.891609; index scale transformation: Index = 50 + (Z * 22). Output Metric: Biennial Standardized Satisfaction Index (Center = 50, Range ≈ [1, 100]). Frequency: Biennial (Approx.).

    File Naming Convention: Files generally follow the pattern PREFIX_Tool_Processed.csv or similar, where the PREFIX indicates the data source (GT_, GB_, CR_, BU_, BS_). Consult the parent Dataverse description (Management Tool Comparative Indices) for general context and the methodological disclaimer. For original extraction details (specific keywords, URLs, etc.), refer to the corresponding Price Optimization dataset in the Raw Extracts Dataverse. Comprehensive project documentation provides full details on all processing steps.
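
    The two transformations named in the description (peak = 100 normalization, and the satisfaction Z-score index) can be sketched in a few lines. This is an illustrative re-implementation of the stated formulas, not the project's own code, and the input values are made up:

    ```python
    def normalize_to_peak(series):
        """Rescale a series so its historical peak equals 100 (Base 100)."""
        peak = max(series)
        return [100.0 * x / peak for x in series]

    def satisfaction_index(score, mean=3.0, sd=0.891609):
        """Map a 1-5 satisfaction score to the 50-centered index:
        Z = (X - mean) / sd, then Index = 50 + Z * 22."""
        z = (score - mean) / sd
        return 50.0 + z * 22.0

    print(normalize_to_peak([20.0, 35.0, 50.0]))  # → [40.0, 70.0, 100.0]
    # A score at the assumed scale midpoint (3.0) maps to exactly 50.
    print(satisfaction_index(3.0))                # → 50.0
    ```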

  5. Data from: A systematic evaluation of normalization methods and probe...

    • data.niaid.nih.gov
    • dataone.org
    • +2 more
    zip
    Updated May 30, 2023
    Cite
    H. Welsh; C. M. P. F. Batalha; W. Li; K. L. Mpye; N. C. Souza-Pinto; M. S. Naslavsky; E. J. Parra (2023). A systematic evaluation of normalization methods and probe replicability using infinium EPIC methylation data [Dataset]. http://doi.org/10.5061/dryad.cnp5hqc7v
    Dataset provided by
    Universidade de São Paulo
    Hospital for Sick Children
    University of Toronto
    Authors
    H. Welsh; C. M. P. F. Batalha; W. Li; K. L. Mpye; N. C. Souza-Pinto; M. S. Naslavsky; E. J. Parra
    License

    CC0 1.0: https://spdx.org/licenses/CC0-1.0.html

    Description

    Background The Infinium EPIC array measures the methylation status of > 850,000 CpG sites. The EPIC BeadChip uses a two-array design: Infinium Type I and Type II probes. These probe types exhibit different technical characteristics which may confound analyses. Numerous normalization and pre-processing methods have been developed to reduce probe type bias as well as other issues such as background and dye bias.
    Methods This study evaluates the performance of various normalization methods using 16 replicated samples and three metrics: absolute beta-value difference, overlap of non-replicated CpGs between replicate pairs, and effect on beta-value distributions. Additionally, we carried out Pearson’s correlation and intraclass correlation coefficient (ICC) analyses using both raw and SeSAMe 2 normalized data.
    Results The method we define as SeSAMe 2, which consists of applying the regular SeSAMe pipeline with an additional round of QC (pOOBAH masking), was found to be the best-performing normalization method, while quantile-based methods were the worst performing. Whole-array Pearson's correlations were high. However, in agreement with previous studies, a substantial proportion of the probes on the EPIC array showed poor reproducibility (ICC < 0.50). The majority of poor-performing probes have beta values close to either 0 or 1 and relatively low standard deviations. These results suggest that poor probe reliability is largely the result of limited biological variation rather than technical measurement variation. Importantly, normalizing the data with SeSAMe 2 dramatically improved ICC estimates, with the proportion of probes with ICC values > 0.50 increasing from 45.18% (raw data) to 61.35% (SeSAMe 2).

    Methods

    Study Participants and Samples

    The whole blood samples were obtained from the Health, Well-being and Aging (Saúde, Ben-estar e Envelhecimento, SABE) study cohort. SABE is a cohort of census-withdrawn elderly from the city of São Paulo, Brazil, followed up every five years since the year 2000, with DNA first collected in 2010. Samples from 24 elderly adults were collected at two time points for a total of 48 samples. The first time point is the 2010 collection wave, performed from 2010 to 2012, and the second time point was set in 2020 in a COVID-19 monitoring project (9±0.71 years apart). The 24 individuals were 67.41±5.52 years of age (mean ± standard deviation) at time point one; and 76.41±6.17 at time point two and comprised 13 men and 11 women.

    All individuals enrolled in the SABE cohort provided written consent, and the ethics protocols were approved by local and national institutional review boards (COEP/FSP/USP OF.COEP/23/10, CONEP 2044/2014, CEP HIAE 1263-10, University of Toronto RIS 39685).

    Blood Collection and Processing

    Genomic DNA was extracted from whole peripheral blood samples collected in EDTA tubes. DNA extraction and purification followed the manufacturer's recommended protocols, using the Qiagen AutoPure LS kit with Gentra automated extraction (first time point) or manual extraction (second time point) due to discontinuation of the equipment, with the same commercial reagents. DNA was quantified using a NanoDrop spectrophotometer and diluted to 50 ng/µL. To assess the reproducibility of the EPIC array, we also obtained technical replicates for 16 of the 48 samples, for a total of 64 samples submitted for further analyses. Whole-genome sequencing data are also available for the samples described above.

    Characterization of DNA Methylation using the EPIC array

    Approximately 1,000 ng of human genomic DNA was used for bisulphite conversion. Methylation status was evaluated using the MethylationEPIC array at The Centre for Applied Genomics (TCAG, Hospital for Sick Children, Toronto, Ontario, Canada), following protocols recommended by Illumina (San Diego, California, USA).

    Processing and Analysis of DNA Methylation Data

    The R/Bioconductor packages Meffil (version 1.1.0), RnBeads (version 2.6.0), minfi (version 1.34.0) and wateRmelon (version 1.32.0) were used to import, process and perform quality control (QC) analyses on the methylation data. Starting with the 64 samples, we first used Meffil to infer the sex of the samples and compared the inferred sex to the reported sex. Utilizing the 59 SNP probes available as part of the EPIC array, we calculated concordance between the methylation intensities of the samples and the corresponding genotype calls extracted from their WGS data. We then performed comprehensive sample-level and probe-level QC using the RnBeads QC pipeline. Specifically, we (1) removed probes whose target sequences overlap with a SNP at any base, (2) removed known cross-reactive probes, (3) used the iterative Greedycut algorithm to filter out samples and probes, using a detection p-value threshold of 0.01, and (4) removed probes for which more than 5% of the samples had a missing value. Since RnBeads does not have a function to perform probe filtering based on bead number, we used the wateRmelon package to extract bead numbers from the IDAT files and calculated the proportion of samples with bead number < 3. Probes with more than 5% of samples having a low bead number (< 3) were removed. For the comparison of normalization methods, we also computed detection p-values using the out-of-band probes' empirical distribution with the pOOBAH() function in the SeSAMe (version 1.14.2) R package, with a p-value threshold of 0.05 and the combine.neg parameter set to TRUE. In the scenario where pOOBAH filtering was carried out, it was done in parallel with the previously mentioned QC steps, and the probes flagged in both analyses were combined and removed from the data.

    Normalization Methods Evaluated

    The normalization methods compared in this study were implemented using different R/Bioconductor packages and are summarized in Figure 1. All data was read into R workspace as RG Channel Sets using minfi’s read.metharray.exp() function. One sample that was flagged during QC was removed, and further normalization steps were carried out in the remaining set of 63 samples. Prior to all normalizations with minfi, probes that did not pass QC were removed. Noob, SWAN, Quantile, Funnorm and Illumina normalizations were implemented using minfi. BMIQ normalization was implemented with ChAMP (version 2.26.0), using as input Raw data produced by minfi’s preprocessRaw() function. In the combination of Noob with BMIQ (Noob+BMIQ), BMIQ normalization was carried out using as input minfi’s Noob normalized data. Noob normalization was also implemented with SeSAMe, using a nonlinear dye bias correction. For SeSAMe normalization, two scenarios were tested. For both, the inputs were unmasked SigDF Sets converted from minfi’s RG Channel Sets. In the first, which we call “SeSAMe 1”, SeSAMe’s pOOBAH masking was not executed, and the only probes filtered out of the dataset prior to normalization were the ones that did not pass QC in the previous analyses. In the second scenario, which we call “SeSAMe 2”, pOOBAH masking was carried out in the unfiltered dataset, and masked probes were removed. This removal was followed by further removal of probes that did not pass previous QC, and that had not been removed by pOOBAH. Therefore, SeSAMe 2 has two rounds of probe removal. Noob normalization with nonlinear dye bias correction was then carried out in the filtered dataset. Methods were then compared by subsetting the 16 replicated samples and evaluating the effects that the different normalization methods had in the absolute difference of beta values (|β|) between replicated samples.
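
    The replicate-based comparison described above rests on the absolute difference of beta values between replicated samples. A minimal sketch with toy matrices (the real data hold ~850,000 probes and 16 replicated samples; these values are illustrative only):

    ```python
    import numpy as np

    # Toy beta-value matrices (probes x samples) for a replicate pair.
    rep_a = np.array([[0.10, 0.85],
                      [0.50, 0.49]])
    rep_b = np.array([[0.12, 0.83],
                      [0.48, 0.53]])

    abs_diff = np.abs(rep_a - rep_b)        # |β difference| per probe and sample
    per_probe_mean = abs_diff.mean(axis=1)  # average difference across samples
    print(per_probe_mean)  # → [0.02 0.03]
    ```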

  6. Mission and Vision Statements (Normalized)

    • search.dataone.org
    • datasetcatalog.nlm.nih.gov
    Updated Oct 29, 2025
    Cite
    Anez, Diomar; Anez, Dimar (2025). Mission and Vision Statements (Normalized) [Dataset]. http://doi.org/10.7910/DVN/SFKSW0
    Dataset provided by
    Harvard Dataverse
    Authors
    Anez, Diomar; Anez, Dimar
    Description

    This dataset provides processed and normalized/standardized indices for the management tool group focused on 'Mission and Vision Statements', including related concepts like Purpose Statements. Derived from five distinct raw data sources, these indices are specifically designed for comparative longitudinal analysis, enabling the examination of trends and relationships across different empirical domains (web search, literature, academic publishing, and executive adoption). The data presented here represent transformed versions of the original source data, aimed at achieving metric comparability. Users requiring the unprocessed source data should consult the corresponding Mission/Vision dataset in the Management Tool Source Data (Raw Extracts) Dataverse.

    Data Files and Processing Methodologies:

    • Google Trends File (Prefix: GT_): Normalized Relative Search Interest (RSI). Input Data: Native monthly RSI values from Google Trends (Jan 2004 - Jan 2025) for the query "mission statement" + "vision statement" + "mission and vision corporate". Processing: None; utilizes the original base-100 normalized Google Trends index. Output Metric: Monthly Normalized RSI (Base 100). Frequency: Monthly.

    • Google Books Ngram Viewer File (Prefix: GB_): Normalized Relative Frequency. Input Data: Annual relative frequency values from Google Books Ngram Viewer (1950-2022, English corpus, no smoothing) for the query Mission Statements + Vision Statements + Purpose Statements + Mission and Vision. Processing: Annual relative frequency series normalized (peak year = 100). Output Metric: Annual Normalized Relative Frequency Index (Base 100). Frequency: Annual.

    • Crossref.org File (Prefix: CR_): Normalized Relative Publication Share Index. Input Data: Absolute monthly publication counts matching Mission/Vision-related keywords [("mission statement" OR ...) AND (...) - see raw data for full query] in titles/abstracts (1950-2025), alongside total monthly Crossref publications. Deduplicated via DOIs. Processing: Monthly relative share calculated (Mission/Vision Count / Total Count); monthly relative share series normalized (peak month's share = 100). Output Metric: Monthly Normalized Relative Publication Share Index (Base 100). Frequency: Monthly.

    • Bain & Co. Survey - Usability File (Prefix: BU_): Normalized Usability Index. Input Data: Original usability percentages (%) from Bain surveys for specific years: Mission/Vision (1993); Mission Statements (1996); Mission and Vision Statements (1999-2017); Purpose, Mission, and Vision Statements (2022). Processing: Semantic Grouping: data points across the different naming conventions were treated as a single conceptual series. Normalization: combined series normalized relative to its historical peak (Max % = 100). Output Metric: Biennial Estimated Normalized Usability Index (Base 100 relative to historical peak). Frequency: Biennial (Approx.).

    • Bain & Co. Survey - Satisfaction File (Prefix: BS_): Standardized Satisfaction Index. Input Data: Original average satisfaction scores (1-5 scale) from Bain surveys for specific years (same names/years as Usability). Processing: Semantic Grouping: data points treated as a single conceptual series. Standardization (Z-scores): using Z = (X - 3.0) / 0.891609. Index Scale Transformation: Index = 50 + (Z * 22). Output Metric: Biennial Standardized Satisfaction Index (Center = 50, Range ≈ [1, 100]). Frequency: Biennial (Approx.).

    File Naming Convention: Files generally follow the pattern PREFIX_Tool_Processed.csv or similar, where the PREFIX indicates the data source (GT_, GB_, CR_, BU_, BS_). Consult the parent Dataverse description (Management Tool Comparative Indices) for general context and the methodological disclaimer. For original extraction details (specific keywords, URLs, etc.), refer to the corresponding Mission/Vision dataset in the Raw Extracts Dataverse. Comprehensive project documentation provides full details on all processing steps.

  7. DIP_DISPERSION_INDICATOR - Normalized value of the dip dispersion value. The...

    • petrocurve.com
    Updated Jun 14, 2025
    Cite
    Schlumberger (2025). DIP_DISPERSION_INDICATOR - Normalized value of the dip dispersion value. The lower it is the upper the dispersion is. [Dataset]. https://petrocurve.com/curve/dip_dispersion_indicator-schlumberger
    Dataset provided by
    SLB (http://slb.com/)
    Authors
    Schlumberger
    Description

    Normalized value of the dip dispersion; the lower the value, the higher the dispersion. Curve from Schlumberger. Unitless.

  8. WLCI - Important Agricultural Lands Assessment (Input Raster: Normalized...

    • catalog.data.gov
    • data.usgs.gov
    • +2 more
    Updated Oct 30, 2025
    Cite
    U.S. Geological Survey (2025). WLCI - Important Agricultural Lands Assessment (Input Raster: Normalized Antelope Damage Claims) [Dataset]. https://catalog.data.gov/dataset/wlci-important-agricultural-lands-assessment-input-raster-normalized-antelope-damage-claim
    Dataset provided by
    U.S. Geological Survey
    Description

    The values in this raster are unit-less scores ranging from 0 to 1 that represent normalized dollars per acre damage claims from antelope on Wyoming lands. This raster is one of 9 inputs used to calculate the "Normalized Importance Index."
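
    A 0-to-1 rescaling of this kind is typically a min-max normalization; a hedged sketch, with hypothetical damage-claim values (the exact method used by USGS is not stated in this listing):

    ```python
    def min_max_normalize(values):
        """Linearly rescale values so the minimum maps to 0 and the maximum to 1."""
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) for v in values]

    claims_per_acre = [0.0, 25.0, 100.0]  # hypothetical $/acre damage claims
    print(min_max_normalize(claims_per_acre))  # → [0.0, 0.25, 1.0]
    ```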

  9. Growth Strategies (Normalized)

    • datasetcatalog.nlm.nih.gov
    • search.dataone.org
    Updated May 6, 2025
    Cite
    Anez, Diomar; Anez, Dimar (2025). Growth Strategies (Normalized) [Dataset]. http://doi.org/10.7910/DVN/OW8GOW
    Authors
    Anez, Diomar; Anez, Dimar
    Description

    This dataset provides processed and normalized/standardized indices for the management tool group focused on 'Growth Strategies'. Derived from five distinct raw data sources, these indices are specifically designed for comparative longitudinal analysis, enabling the examination of trends and relationships across different empirical domains (web search, literature, academic publishing, and executive adoption). The data presented here represent transformed versions of the original source data, aimed at achieving metric comparability. Users requiring the unprocessed source data should consult the corresponding Growth Strategies dataset in the Management Tool Source Data (Raw Extracts) Dataverse.

    Data Files and Processing Methodologies:

    • Google Trends File (Prefix: GT_): Normalized Relative Search Interest (RSI). Input Data: Native monthly RSI values from Google Trends (Jan 2004 - Jan 2025) for the query "growth strategies" + "growth strategy" + "growth strategies business". Processing: None; utilizes the original base-100 normalized Google Trends index. Output Metric: Monthly Normalized RSI (Base 100). Frequency: Monthly.

    • Google Books Ngram Viewer File (Prefix: GB_): Normalized Relative Frequency. Input Data: Annual relative frequency values from Google Books Ngram Viewer (1950-2022, English corpus, no smoothing) for the query Growth Strategies + Growth Strategy. Processing: Annual relative frequency series normalized (peak year = 100). Output Metric: Annual Normalized Relative Frequency Index (Base 100). Frequency: Annual.

    • Crossref.org File (Prefix: CR_): Normalized Relative Publication Share Index. Input Data: Absolute monthly publication counts matching Growth Strategies-related keywords [("growth strategies" OR ...) AND (...) - see raw data for full query] in titles/abstracts (1950-2025), alongside total monthly Crossref publications. Deduplicated via DOIs. Processing: Monthly relative share calculated (Growth Strat. Count / Total Count); monthly relative share series normalized (peak month's share = 100). Output Metric: Monthly Normalized Relative Publication Share Index (Base 100). Frequency: Monthly.

    • Bain & Co. Survey - Usability File (Prefix: BU_): Normalized Usability Index. Input Data: Original usability percentages (%) from Bain surveys for specific years: Growth Strategies (1996, 1999, 2000, 2002, 2004); Growth Strategy Tools (2006, 2008). Note: Not reported after 2008. Processing: Semantic Grouping: data points for "Growth Strategies" and "Growth Strategy Tools" were treated as a single conceptual series. Normalization: combined series normalized relative to its historical peak (Max % = 100). Output Metric: Biennial Estimated Normalized Usability Index (Base 100 relative to historical peak). Frequency: Biennial (Approx.).

    • Bain & Co. Survey - Satisfaction File (Prefix: BS_): Standardized Satisfaction Index. Input Data: Original average satisfaction scores (1-5 scale) from Bain surveys for specific years: Growth Strategies (1996-2004); Growth Strategy Tools (2006, 2008). Note: Not reported after 2008. Processing: Semantic Grouping: data points treated as a single conceptual series. Standardization (Z-scores): using Z = (X - 3.0) / 0.891609. Index Scale Transformation: Index = 50 + (Z * 22). Output Metric: Biennial Standardized Satisfaction Index (Center = 50, Range ≈ [1, 100]). Frequency: Biennial (Approx.).

    File Naming Convention: Files generally follow the pattern PREFIX_Tool_Processed.csv or similar, where the PREFIX indicates the data source (GT_, GB_, CR_, BU_, BS_). Consult the parent Dataverse description (Management Tool Comparative Indices) for general context and the methodological disclaimer. For original extraction details (specific keywords, URLs, etc.), refer to the corresponding Growth Strategies dataset in the Raw Extracts Dataverse. Comprehensive project documentation provides full details on all processing steps.

  10. Brain volumetric comparisons with normalized values.

    • figshare.com
    • datasetcatalog.nlm.nih.gov
    xls
    Updated Jun 4, 2023
    Cite
    Gilberto Sousa Alves; Laurence O’Dwyer; Alina Jurcoane; Viola Oertel-Knöchel; Christian Knöchel; David Prvulovic; Felipe Sudo; Carlos Eduardo Alves; Letice Valente; Denise Moreira; Fabian Fuβer; Tarik Karakaya; Johannes Pantel; Eliasz Engelhardt; Jerson Laks (2023). Brain volumetric comparisons with normalized values. [Dataset]. http://doi.org/10.1371/journal.pone.0052859.t002
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Gilberto Sousa Alves; Laurence O’Dwyer; Alina Jurcoane; Viola Oertel-Knöchel; Christian Knöchel; David Prvulovic; Felipe Sudo; Carlos Eduardo Alves; Letice Valente; Denise Moreira; Fabian Fuβer; Tarik Karakaya; Johannes Pantel; Eliasz Engelhardt; Jerson Laks
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    WMH = white matter hyperintensities; CN = Cognitively Normal; MCI = Mild Cognitive Impairment; AD = Alzheimer Dementia; volumes expressed in mm³ (1) and mL (2). Post hoc analysis: * AD vs CN, P = 0.004; † AD vs CN, P = 0.013; † AD vs MCI, P = 0.009; § AD vs CN, P = 0.001; § AD vs MCI, P = 0.012. Values are displayed as mean ± standard deviation.

  11. Meal Correlations

    • kaggle.com
    • data.mendeley.com
    zip (124,395 bytes)
    Updated Aug 18, 2023
    Cite
    Jocelyn Dumlao (2023). Meal Correlations [Dataset]. https://www.kaggle.com/datasets/jocelyndumlao/meal-correlations
    Authors
    Jocelyn Dumlao
    License

    CC0 1.0 Universal: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Lunch and dinner intake was weighed for three days (Monday, Wednesday, and Friday). Data were normalized to reduce the Yule-Simpson effect.

    Attribute Information:

    1. dinner: The amount of dinner consumed.
    2. lunch: The amount of lunch consumed.
    3. account: Account-related information associated with the data.
    4. time: The time value associated with the data.
    5. name: Identifier for individuals.
    6. Intake: Total intake value for the individual.
    7. NormalizedMeals: Normalized value of meals for the individual.
    8. Day: The day of the week when the data was recorded.
    9. Meal: Type of meal (e.g., 'D' for dinner).
    10. z-D: A numerical value associated with the 'Day' feature.
    11. z-L: A numerical value associated with the 'Meal' feature.

    Categories:

    Food Intake

    Acknowledgements & Source:

    David Levitsky

    Institutions: Cornell University


  12. Data File 3.csv

    • figshare.com
    txt
    Updated May 30, 2023
    Cite
    Haisong Xu (2023). Data File 3.csv [Dataset]. http://doi.org/10.6084/m9.figshare.5387725.v1
    Dataset provided by
    Optica Publishing Group
    Authors
    Haisong Xu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This CSV file includes the resulting normalized interval scale values of naturalness, colorfulness, and preference at the four correlated color temperatures.

  13. Knowledge Management (Normalized)

    • search.dataone.org
    • datasetcatalog.nlm.nih.gov
    Updated Oct 29, 2025
    Cite
    Anez, Diomar; Anez, Dimar (2025). Knowledge Management (Normalized) [Dataset]. http://doi.org/10.7910/DVN/BAPIEP
    Dataset provided by
    Harvard Dataverse
    Authors
    Anez, Diomar; Anez, Dimar
    Description

    This dataset provides processed and normalized/standardized indices for the management tool 'Knowledge Management' (KM), including related concepts such as Intellectual Capital Management and Knowledge Transfer. Derived from five distinct raw data sources, these indices are designed for comparative longitudinal analysis, enabling the examination of trends and relationships across different empirical domains (web search, literature, academic publishing, and executive adoption). The data presented here are transformed versions of the original source data, aimed at achieving metric comparability. Users requiring the unprocessed source data should consult the corresponding KM dataset in the Management Tool Source Data (Raw Extracts) Dataverse.

    Data Files and Processing Methodologies:

    Google Trends File (Prefix: GT_): Normalized Relative Search Interest (RSI). Input Data: native monthly RSI values from Google Trends (Jan 2004 - Jan 2025) for the query "knowledge management" + "knowledge management organizational". Processing: none; utilizes the original base-100 normalized Google Trends index. Output Metric: Monthly Normalized RSI (Base 100). Frequency: monthly.

    Google Books Ngram Viewer File (Prefix: GB_): Normalized Relative Frequency. Input Data: annual relative frequency values from Google Books Ngram Viewer (1950-2022, English corpus, no smoothing) for the query Knowledge Management + Intellectual Capital Management + Knowledge Transfer. Processing: annual relative frequency series normalized (peak year = 100). Output Metric: Annual Normalized Relative Frequency Index (Base 100). Frequency: annual.

    Crossref.org File (Prefix: CR_): Normalized Relative Publication Share Index. Input Data: absolute monthly publication counts matching KM-related keywords [("knowledge management" OR ...) AND (...) - see raw data for the full query] in titles/abstracts (1950-2025), alongside total monthly Crossref publications; deduplicated via DOIs. Processing: monthly relative share calculated (KM Count / Total Count), then the share series normalized (peak month's share = 100). Output Metric: Monthly Normalized Relative Publication Share Index (Base 100). Frequency: monthly.

    Bain & Co. Survey - Usability File (Prefix: BU_): Normalized Usability Index. Input Data: original usability percentages (%) from Bain surveys for specific years: Knowledge Management (1999, 2000, 2002, 2004, 2006, 2008, 2010); not reported after 2010. Processing: original usability percentages normalized relative to the historical peak (Max % = 100). Output Metric: Biennial Estimated Normalized Usability Index (Base 100 relative to historical peak). Frequency: biennial (approx.).

    Bain & Co. Survey - Satisfaction File (Prefix: BS_): Standardized Satisfaction Index. Input Data: original average satisfaction scores (1-5 scale) from Bain surveys for specific years: Knowledge Management (1999-2010); not reported after 2010. Processing: standardization to Z-scores using Z = (X - 3.0) / 0.891609, then index scale transformation Index = 50 + (Z * 22). Output Metric: Biennial Standardized Satisfaction Index (Center = 50, Range ≈ [1, 100]). Frequency: biennial (approx.).

    File Naming Convention: files generally follow the pattern PREFIX_Tool_Processed.csv or similar, where the PREFIX indicates the data source (GT_, GB_, CR_, BU_, BS_). Consult the parent Dataverse description (Management Tool Comparative Indices) for general context and the methodological disclaimer. For original extraction details (specific keywords, URLs, etc.), refer to the corresponding KM dataset in the Raw Extracts Dataverse. Comprehensive project documentation provides full details on all processing steps.
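    The two Bain transformations described above (peak normalization for usability, z-score standardization for satisfaction) can be sketched in Python. The function names and sample values below are illustrative; the constants 3.0, 0.891609, 50, and 22 come directly from the description.

```python
def normalize_to_peak(series):
    """Rescale a series so its historical peak equals 100 (base-100 index)."""
    peak = max(series)
    return [100.0 * v / peak for v in series]

def satisfaction_index(score, mean=3.0, sd=0.891609):
    """Standardize a 1-5 satisfaction score (Z = (X - mean) / sd),
    then map it to an index centered at 50: Index = 50 + Z * 22."""
    z = (score - mean) / sd
    return 50.0 + z * 22.0

# Hypothetical survey values: the peak (62.0) maps to exactly 100
usability = [28.0, 35.0, 62.0, 54.0]
print(normalize_to_peak(usability))
print(satisfaction_index(3.92))
```

    With these constants, a score of 3.0 maps to exactly 50, and the 1-5 scale endpoints land near 1 and 100, matching the stated Range ≈ [1, 100].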

  14. liquidations both value normalized

    • dune.com
    Updated Sep 9, 2025
    Cite
    mendifinance (2025). liquidations both value normalized [Dataset]. https://dune.com/discover/content/relevant?q=author:mendifinance&resource-type=queries
    Explore at:
    Dataset updated
    Sep 9, 2025
    Authors
    mendifinance
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Blockchain data query: liquidations both value normalized

  15. Corporate Registry Data Normalization Market Research Report 2033

    • researchintelo.com
    csv, pdf, pptx
    Updated Oct 1, 2025
    + more versions
    Cite
    Research Intelo (2025). Corporate Registry Data Normalization Market Research Report 2033 [Dataset]. https://researchintelo.com/report/corporate-registry-data-normalization-market
    Explore at:
    Available download formats: pdf, csv, pptx
    Dataset updated
    Oct 1, 2025
    Dataset authored and provided by
    Research Intelo
    License

    https://researchintelo.com/privacy-and-policy

    Time period covered
    2024 - 2033
    Area covered
    Global
    Description

    Corporate Registry Data Normalization Market Outlook



    According to our latest research, the Global Corporate Registry Data Normalization market size was valued at $1.72 billion in 2024 and is projected to reach $5.36 billion by 2033, expanding at a CAGR of 13.2% during 2024–2033. One major factor driving the growth of this market globally is the escalating demand for accurate, real-time corporate data to support compliance, risk management, and operational efficiency across diverse sectors. As organizations increasingly digitize their operations, the need to standardize and normalize disparate registry data from multiple sources has become critical to ensure regulatory adherence, enable robust Know Your Customer (KYC) and Anti-Money Laundering (AML) processes, and foster seamless integration with internal and external systems. This trend is further amplified by the proliferation of cross-border business activities and the mounting complexity of global regulatory frameworks, making data normalization solutions indispensable for businesses seeking agility and resilience in a rapidly evolving digital landscape.
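    Figures like these can be sanity-checked with the standard CAGR formula. The sketch below is generic, not taken from the report; the 2024 and 2033 endpoints imply nine compounding periods, and small differences from a report's quoted rate usually come from rounding or endpoint conventions.

```python
def cagr(start_value, end_value, periods):
    """Compound annual growth rate: (end/start)^(1/periods) - 1."""
    return (end_value / start_value) ** (1.0 / periods) - 1.0

# $1.72B (2024) to $5.36B (2033) over 9 compounding periods
rate = cagr(1.72, 5.36, 9)
print(f"{rate:.1%}")  # in the same ballpark as the reported 13.2% CAGR
```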



    Regional Outlook



    North America currently commands the largest share of the global Corporate Registry Data Normalization market, accounting for over 38% of the total market value in 2024. The region’s dominance is underpinned by its mature digital infrastructure, early adoption of advanced data management technologies, and stringent regulatory requirements that mandate comprehensive corporate transparency and compliance. Major economies such as the United States and Canada have witnessed significant investments in data normalization platforms, driven by the robust presence of multinational corporations, sophisticated financial institutions, and a dynamic legal environment. Additionally, the region benefits from a thriving ecosystem of technology vendors and solution providers, fostering continuous innovation and the rapid deployment of cutting-edge software and services. These factors collectively reinforce North America’s leadership position, making it a bellwether for global market trends and technological advancements in corporate registry data normalization.



    In contrast, the Asia Pacific region is emerging as the fastest-growing market, projected to register a remarkable CAGR of 16.7% during the forecast period. This accelerated expansion is fueled by rapid digital transformation initiatives, burgeoning fintech and legaltech sectors, and a rising emphasis on corporate governance across countries such as China, India, Singapore, and Australia. Governments in the region are actively promoting regulatory modernization and digital identity frameworks, which in turn drive the adoption of data normalization solutions to streamline compliance and mitigate operational risks. Furthermore, the influx of foreign direct investment and the proliferation of cross-border business transactions are compelling enterprises to invest in robust data management tools that can harmonize corporate information from disparate jurisdictions. These dynamics are creating fertile ground for solution providers and service vendors to expand their footprint and address the unique needs of Asia Pacific’s diverse and rapidly evolving corporate landscape.



    Meanwhile, emerging economies in Latin America, the Middle East, and Africa present a mixed outlook, characterized by growing awareness but slower adoption of corporate registry data normalization solutions. Challenges such as legacy IT infrastructure, fragmented regulatory environments, and limited access to advanced technology solutions continue to impede market penetration in these regions. However, a gradual shift is underway as governments and enterprises recognize the value of standardized corporate data for combating financial crime, fostering transparency, and attracting international investment. Localized demand is also being shaped by sector-specific needs, particularly in banking, government, and healthcare, where regulatory compliance and risk management are gaining prominence. Policy reforms and international collaborations are expected to play a pivotal role in accelerating adoption, though progress will likely be uneven across different countries and industry verticals.



    Report Scope



    Attri

  16. Talent & Employee Engagement (Normalized)

    • dataverse.harvard.edu
    • search.dataone.org
    Updated May 6, 2025
    Cite
    Diomar Anez; Dimar Anez (2025). Talent & Employee Engagement (Normalized) [Dataset]. http://doi.org/10.7910/DVN/MOCGHM
    Explore at:
    Croissant: a format for machine-learning datasets. Learn more at mlcommons.org/croissant.
    Dataset updated
    May 6, 2025
    Dataset provided by
    Harvard Dataverse
    Authors
    Diomar Anez; Dimar Anez
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset provides processed and normalized/standardized indices for the management tool group focused on 'Talent & Employee Engagement', including concepts such as Employee Engagement Surveys/Systems and Corporate Codes of Ethics. Derived from five distinct raw data sources, these indices are designed for comparative longitudinal analysis, enabling the examination of trends and relationships across different empirical domains (web search, literature, academic publishing, and executive adoption). The data presented here are transformed versions of the original source data, aimed at achieving metric comparability. Users requiring the unprocessed source data should consult the corresponding Talent/Engagement dataset in the Management Tool Source Data (Raw Extracts) Dataverse.

    Data Files and Processing Methodologies:

    Google Trends File (Prefix: GT_): Normalized Relative Search Interest (RSI). Input Data: native monthly RSI values from Google Trends (Jan 2004 - Jan 2025) for the query "corporate code of ethics" + "employee engagement" + "employee engagement management". Processing: none; utilizes the original base-100 normalized Google Trends index. Output Metric: Monthly Normalized RSI (Base 100). Frequency: monthly.

    Google Books Ngram Viewer File (Prefix: GB_): Normalized Relative Frequency. Input Data: annual relative frequency values from Google Books Ngram Viewer (1950-2022, English corpus, no smoothing) for the query Corporate Code of Ethics + Employee Engagement Programs + Employee Engagement Surveys + Employee Engagement. Processing: annual relative frequency series normalized (peak year = 100). Output Metric: Annual Normalized Relative Frequency Index (Base 100). Frequency: annual.

    Crossref.org File (Prefix: CR_): Normalized Relative Publication Share Index. Input Data: absolute monthly publication counts matching Engagement/Ethics-related keywords [("corporate code of ethics" OR ...) AND (...) - see raw data for the full query] in titles/abstracts (1950-2025), alongside total monthly Crossref publications; deduplicated via DOIs. Processing: monthly relative share calculated (Engage/Ethics Count / Total Count), then the share series normalized (peak month's share = 100). Output Metric: Monthly Normalized Relative Publication Share Index (Base 100). Frequency: monthly.

    Bain & Co. Survey - Usability File (Prefix: BU_): Normalized Usability Index. Input Data: original usability percentages (%) from Bain surveys for specific years: Corporate Code of Ethics (2002); Employee Engagement Surveys (2012, 2014); Employee Engagement Systems (2017, 2022). Processing: semantic grouping, with data points across related names treated as a single conceptual series representing the Talent/Engagement focus; the combined series normalized relative to its historical peak (Max % = 100). Output Metric: Biennial Estimated Normalized Usability Index (Base 100 relative to historical peak). Frequency: biennial (approx.).

    Bain & Co. Survey - Satisfaction File (Prefix: BS_): Standardized Satisfaction Index. Input Data: original average satisfaction scores (1-5 scale) from Bain surveys for the same names/years as the Usability file. Processing: semantic grouping into a single conceptual series; standardization to Z-scores using Z = (X - 3.0) / 0.891609, then index scale transformation Index = 50 + (Z * 22). Output Metric: Biennial Standardized Satisfaction Index (Center = 50, Range ≈ [1, 100]). Frequency: biennial (approx.).

    File Naming Convention: files generally follow the pattern PREFIX_Tool_Processed.csv or similar, where the PREFIX indicates the data source (GT_, GB_, CR_, BU_, BS_). Consult the parent Dataverse description (Management Tool Comparative Indices) for general context and the methodological disclaimer. For original extraction details (specific keywords, URLs, etc.), refer to the corresponding Talent/Engagement dataset in the Raw Extracts Dataverse. Comprehensive project documentation provides full details on all processing steps.

  17. Values of normalized Mutual Information.

    • figshare.com
    • plos.figshare.com
    xls
    Updated May 30, 2023
    Cite
    Ildefonso M. De la Fuente; Jesus M. Cortes (2023). Values of normalized Mutual Information. [Dataset]. http://doi.org/10.1371/journal.pone.0030162.t004
    Explore at:
    Available download formats: xls
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Ildefonso M. De la Fuente; Jesus M. Cortes
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Values of normalized Mutual Information.

  18. EO BRDF Normalization Services Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). EO BRDF Normalization Services Market Research Report 2033 [Dataset]. https://dataintelo.com/report/eo-brdf-normalization-services-market
    Explore at:
    Available download formats: pptx, pdf, csv
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    EO BRDF Normalization Services Market Outlook



    According to our latest research, the EO BRDF Normalization Services market size reached USD 654.2 million in 2024, with a projected compound annual growth rate (CAGR) of 13.7% from 2025 to 2033. This robust expansion is primarily attributed to the increasing demand for precise surface reflectance data across multiple industries. By 2033, the global EO BRDF Normalization Services market is projected to attain a value of USD 1,963.7 million, driven by advancements in Earth observation technologies, the proliferation of satellite and UAV platforms, and the growing need for accurate remote sensing applications. The market’s upward trajectory is further supported by the integration of AI and machine learning in data processing, which enhances the efficiency and accuracy of BRDF normalization workflows.




    The growth of the EO BRDF Normalization Services market is largely propelled by the expanding adoption of remote sensing technologies in environmental monitoring, precision agriculture, and defense intelligence. As governments and private organizations increasingly rely on Earth observation data to monitor land use, climate change, and resource management, the need for high-fidelity reflectance normalization becomes critical. This demand is further amplified by the shift towards data-driven decision-making, where accurate surface reflectance plays a pivotal role in generating actionable insights. The rising volume of satellite and UAV-generated imagery necessitates advanced BRDF normalization services to ensure data consistency, reliability, and comparability across different sensors and temporal datasets.




    Another significant growth factor is the rapid advancements in sensor technology and data processing algorithms. Innovations in hyperspectral and multispectral imaging, coupled with improved calibration and correction techniques, have substantially increased the quality and resolution of Earth observation data. The integration of artificial intelligence and machine learning in BRDF normalization processes has enabled service providers to automate complex workflows, reduce processing times, and enhance the accuracy of reflectance correction. As a result, industries such as agriculture, forestry, and environmental monitoring are increasingly leveraging these advanced services to optimize resource management, monitor crop health, and track environmental changes in near real-time.




    Furthermore, the EO BRDF Normalization Services market benefits from the growing emphasis on sustainability and regulatory compliance. Governments worldwide are implementing stringent environmental policies that require accurate monitoring and reporting of land, water, and atmospheric conditions. EO BRDF normalization plays a crucial role in ensuring the integrity of remote sensing data used for compliance reporting, impact assessments, and policy formulation. The commercial sector, particularly in precision agriculture and natural resource management, is also recognizing the value of normalized reflectance data in enhancing operational efficiency and reducing environmental footprints. This convergence of regulatory and commercial interests is expected to sustain the market’s growth momentum over the forecast period.




    Regionally, North America and Europe currently dominate the EO BRDF Normalization Services market, accounting for a combined share of over 60% in 2024. The presence of leading Earth observation agencies, advanced research institutions, and a robust commercial sector contribute to the high adoption rates in these regions. However, the Asia Pacific region is emerging as a key growth engine, driven by increased investments in satellite infrastructure, rising demand for precision agriculture, and expanding government initiatives in environmental monitoring. The Middle East & Africa and Latin America are also witnessing steady growth, supported by the deployment of new satellite platforms and the adoption of EO services for resource management and disaster response.



    Service Type Analysis



    The EO BRDF Normalization Services market is segmented by service type into data processing, calibration, correction, custom analysis, and others. Data processing remains the cornerstone of the market, as it encompasses the core activities required to transform raw satellite or UAV imagery into actionable reflectance data. The increasing complexity of Eart

  19. Corporate Registry Data Normalization Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 4, 2025
    Cite
    Growth Market Reports (2025). Corporate Registry Data Normalization Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/corporate-registry-data-normalization-market
    Explore at:
    Available download formats: pptx, pdf, csv
    Dataset updated
    Oct 4, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Corporate Registry Data Normalization Market Outlook



    According to our latest research, the global Corporate Registry Data Normalization market size reached USD 1.42 billion in 2024, driven by the increasing demand for standardized business information and regulatory compliance across industries. The market is experiencing robust expansion, with a Compound Annual Growth Rate (CAGR) of 13.8% anticipated over the forecast period. By 2033, the market is projected to attain a value of USD 4.24 billion, reflecting the growing importance of accurate, unified corporate registry data for operational efficiency, risk management, and digital transformation initiatives. This growth is primarily fueled by the rising complexity of business operations, stricter regulatory requirements, and the need for seamless data integration across diverse IT ecosystems.




    The primary growth factor in the Corporate Registry Data Normalization market is the accelerating pace of digital transformation across both private and public sectors. Organizations are increasingly reliant on accurate and standardized corporate data to drive business intelligence, enhance customer experiences, and comply with evolving regulatory frameworks. As enterprises expand globally, the complexity of maintaining consistent and high-quality data across various jurisdictions has intensified, necessitating advanced data normalization solutions. Furthermore, the proliferation of mergers and acquisitions, cross-border partnerships, and multi-jurisdictional operations has made data normalization a critical component for ensuring data integrity, reducing operational risks, and supporting agile business decisions. The integration of artificial intelligence and machine learning technologies into data normalization platforms is further amplifying the market’s growth by automating complex data cleansing, enrichment, and integration processes.




    Another significant driver for the Corporate Registry Data Normalization market is the increasing emphasis on regulatory compliance and risk mitigation. Industries such as BFSI, healthcare, and government are under mounting pressure to adhere to stringent data governance standards, anti-money laundering (AML) regulations, and Know Your Customer (KYC) requirements. Standardizing corporate registry data enables organizations to streamline compliance processes, conduct more effective due diligence, and reduce the risk of financial penalties or reputational damage. Additionally, the growing adoption of cloud-based solutions has made it easier for organizations to implement scalable, cost-effective data normalization tools, further propelling market growth. The shift towards cloud-native architectures is also enabling real-time data synchronization and collaboration, which are essential for organizations operating in dynamic, fast-paced environments.




    The increasing volume and variety of corporate data generated from digital channels, third-party sources, and internal systems are also contributing to the expansion of the Corporate Registry Data Normalization market. Enterprises are recognizing the value of leveraging normalized data to unlock advanced analytics, improve data-driven decision-making, and gain a competitive edge. The demand for data normalization is particularly strong among multinational corporations, financial institutions, and legal firms that manage vast repositories of entity data across multiple regions and regulatory environments. As organizations continue to invest in data quality initiatives and master data management (MDM) strategies, the adoption of sophisticated data normalization solutions is expected to accelerate, driving sustained market growth over the forecast period.




    From a regional perspective, North America currently dominates the Corporate Registry Data Normalization market, accounting for the largest share in 2024, followed closely by Europe and the rapidly growing Asia Pacific region. The strong presence of major technology providers, early adoption of advanced data management solutions, and stringent regulatory landscape in North America are key factors contributing to its leadership position. Meanwhile, Asia Pacific is projected to exhibit the highest CAGR during the forecast period, driven by the digitalization of government and commercial registries, expanding financial services sector, and increasing cross-border business activities. Latin America and the Middle East & Africa are also witnessing steady growth, supporte

  20. Table 36

    • hepdata.net
    Updated Apr 8, 2016
    + more versions
    Cite
    (2016). Table 36 [Dataset]. http://doi.org/10.17182/hepdata.72304.v1/t36
    Explore at:
    Dataset updated
    Apr 8, 2016
    Description

    Average $v_{2}\{\mathrm{SP}\}$ variation as a function of the self-normalized values of $q_{2}^{V0C}$, centrality 5-10%.
