Sickle cell anemia (SCA) is a recessively inherited disease characterized by chronic hemolytic anemia, chronic inflammation, and acute episodes of hemolysis. Hydroxyurea (HU) is widely used to increase the levels of fetal hemoglobin (HbF). The objective of this study was to standardize and validate a method for the quantification of HU in human plasma by using ultra high performance liquid chromatography (UPLC) in order to determine the plasma HU levels in adult patients with SCA who had been treated with HU. We used an analytical reverse phase column (Nucleosil C18) with a mobile phase consisting of acetonitrile/water (16.7/83.3). The retention times of HU, urea, and methylurea were 6.7, 7.7, and 11.4 min, respectively. All parameters of the validation process were defined. To determine the precision and accuracy of quality controls, HU in plasma was used at concentrations of 100, 740, and 1600 µM, with methylurea as the internal standard. Linearity was assessed in the range of 50-1600 µM HU in plasma, obtaining a correlation coefficient of 0.99. The method was accurate and precise and can be used for the quantitative determination of HU for therapeutic monitoring of patients with SCA treated with HU.
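Since the record describes a calibration over 50-1600 µM with methylurea as the internal standard and a correlation coefficient of 0.99, here is a minimal Python sketch of how such a calibration curve could be fit and used; the peak-area ratios are hypothetical values for illustration, not data from the study.

```python
# Minimal sketch (not the study's code): internal-standard calibration for HU.
import numpy as np

# Calibration concentrations of HU in plasma (µM), within the validated range.
conc = np.array([50.0, 100.0, 740.0, 1600.0])

# Hypothetical peak-area ratios (HU area / methylurea area) for illustration only.
area_ratio = np.array([0.032, 0.063, 0.462, 1.004])

# Least-squares fit of ratio vs. concentration.
slope, intercept = np.polyfit(conc, area_ratio, 1)

# Correlation coefficient to check linearity (reported as ~0.99 in the validation).
r = np.corrcoef(conc, area_ratio)[0, 1]

def quantify(ratio):
    """Back-calculate an unknown plasma HU concentration (µM) from its peak-area ratio."""
    return (ratio - intercept) / slope

print(f"slope={slope:.6f}, intercept={intercept:.6f}, r={r:.4f}")
print(f"Sample with ratio 0.46 -> {quantify(0.46):.0f} µM")
```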
Raw data, software, standard operating procedure, and computer-aided design files for the NIST-led publication "Results of an Interlaboratory Study on the Working Curve in Vat Photopolymerization II: Towards a Standardized Method".

This record contains numerous supporting documents and data for the publication. In the main .zip file, there are three subfolders and one document. The document is the Standard Operating Procedure (SOP) that was distributed to participants in this study; the SOP contains experimental details should one want to replicate the conditions of this study in their entirety.

The first zip file is "CAD Files.zip", which contains two subfolders: the fixtures printed by NIST for the interlaboratory study, and commercial CAD files for the light source components used in this study. Each subfolder contains a readme describing each file.

The second zip file is "Interlaboratory Study Raw Data.zip". It contains separate files, designated by wavelength and participant number (matching Table 1 in the manuscript text), containing raw radiant exposure and cure depth pairs. The header of each file denotes the wavelength and identity of the light source (one of Eldorado, Flagstaff, or SoBo). Six outlier data sets are included, and their outlier status is denoted in the file name.

The third zip file is "Other Working Curves.zip". It contains separate files, designated by wavelength, that relate to the working curves in the manuscript collected on a commercial light source. The header of each file denotes whether or not the light source was filtered; the file names denote the wavelength. The 385 nm data sets also denote the irradiance used.

The final zip file is "Labview Files.zip", which contains LabVIEW files used to calibrate and operate the light sources built for this study. This folder contains a readme file explaining the names and purposes of each file.

NOTE: Trade names are provided only to specify the source of information and procedures adequately and do not imply endorsement by the National Institute of Standards and Technology. Similar products by other developers may be found to work as well or better.
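The raw data files pair radiant exposure with cure depth; below is a minimal sketch of how a working curve could be fit to such pairs. It assumes the standard semi-logarithmic working-curve relation Cd = Dp * ln(E / Ec) and made-up example values; it is not the NIST analysis code, and the file layout of the shared data is not assumed here.

```python
# Minimal sketch: fitting a vat-photopolymerization working curve,
# Cd = Dp * ln(E / Ec), to radiant exposure / cure depth pairs.
import numpy as np

# Hypothetical (radiant exposure in mJ/cm^2, cure depth in µm) pairs for illustration.
exposure = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
cure_depth = np.array([55.0, 120.0, 185.0, 250.0, 318.0])

# The working curve is linear in ln(E): the slope is Dp, the x-intercept gives Ec.
slope, intercept = np.polyfit(np.log(exposure), cure_depth, 1)
Dp = slope                       # penetration depth (µm)
Ec = np.exp(-intercept / slope)  # critical exposure (mJ/cm^2)

print(f"Dp ≈ {Dp:.1f} µm, Ec ≈ {Ec:.1f} mJ/cm^2")
```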
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Methods for comparing hospitals regarding cardiac arrest (CA) outcomes, vital for improving resuscitation performance, rely on data collected by cardiac arrest registries. However, most CA patients are treated at hospitals that do not participate in such registries. This study aimed to determine whether CA risk standardization modeling based on administrative data could perform as well as that based on registry data.

Methods and results: Two risk standardization logistic regression models were developed using 2453 patients treated from 2000–2015 at three hospitals in an academic health system. Registry and administrative data were accessed for all patients. The outcome was death at hospital discharge. The registry model was considered the “gold standard” against which to compare the administrative model, using metrics including comparison of areas under the curve, calibration curves, and Bland-Altman plots. The administrative risk standardization model had a c-statistic of 0.891 (95% CI: 0.876–0.905) compared to a registry c-statistic of 0.907 (95% CI: 0.895–0.919). When limited to only non-modifiable factors, the administrative model had a c-statistic of 0.818 (95% CI: 0.799–0.838) compared to a registry c-statistic of 0.810 (95% CI: 0.788–0.831). All models were well-calibrated. There was no significant difference between the c-statistics of the models, providing evidence that valid risk standardization can be performed using administrative data.

Conclusions: Risk standardization using administrative data performs comparably to standardization using registry data. This methodology represents a new tool that can enable opportunities to compare hospital performance in specific hospital systems or across the entire US in terms of survival after CA.
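The record describes comparing two logistic-regression risk models by their c-statistics. Below is a minimal sketch of how such a comparison could be run in Python; the feature matrices, outcome vector, and paired-bootstrap approach are assumptions for illustration, not the study's actual code.

```python
# Minimal sketch: compare the discrimination (c-statistic) of a registry-based
# and an administrative-data-based risk model on the same patients.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fitted_probs(X, y):
    """Fit a logistic regression risk model and return predicted probabilities."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return model.predict_proba(X)[:, 1]

def auc_difference_ci(y, p_registry, p_admin, n_boot=2000, seed=0):
    """Paired bootstrap CI for AUC(registry) - AUC(administrative)."""
    rng = np.random.default_rng(seed)
    diffs = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y[idx])) < 2:   # need both outcomes in the resample
            continue
        diffs.append(roc_auc_score(y[idx], p_registry[idx]) -
                     roc_auc_score(y[idx], p_admin[idx]))
    return np.percentile(diffs, [2.5, 97.5])

# Example use (assuming X_registry, X_admin, y are already loaded as numpy arrays):
# p_reg, p_adm = fitted_probs(X_registry, y), fitted_probs(X_admin, y)
# print("c-statistics:", roc_auc_score(y, p_reg), roc_auc_score(y, p_adm))
# print("95% CI for the difference:", auc_difference_ci(y, p_reg, p_adm))
```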
Data show measurements of total diameter, lumen diameter, and relative theoretical hydraulic conductivity, which were taken on vessel elements and wide-band tracheids of two non-fibrous cactus species.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Fisheries management is generally based on age-structured models. Thus, fish ageing data are collected by experts who analyze and interpret calcified structures (scales, vertebrae, fin rays, otoliths, etc.) according to a visual process. The otolith, in the inner ear of the fish, is the most commonly used calcified structure because it is metabolically inert and historically one of the first proxies developed. It contains information throughout the whole life of the fish and provides age structure data for stock assessments of all commercial species. The traditional human reading method to determine age is very time-consuming. Automated image analysis can be a low-cost alternative method; however, the first step is the transformation of routinely taken otolith images into standardized images within a database, so that machine learning techniques can be applied to the ageing data. Otolith shape, resulting from the synthesis of genetic heritage and environmental effects, is a useful tool to identify stock units, therefore a database of standardized images could also be used for this aim. Using the routinely measured otolith data of plaice (Pleuronectes platessa; Linnaeus, 1758) and striped red mullet (Mullus surmuletus; Linnaeus, 1758) in the eastern English Channel and North-east Arctic cod (Gadus morhua; Linnaeus, 1758), a greyscale image matrix was generated from the raw images in different formats. Contour detection was then applied to identify broken otoliths, the orientation of each otolith, and the number of otoliths per image. To finalize this standardization process, all images were resized and binarized. Several mathematical morphology tools were developed from these new images to align and orient the images, placing the otoliths in the same layout for each image. For this study, we used three databases from two different laboratories covering three species (cod, plaice, and striped red mullet). The method proved applicable to these three species and could be applied to other species for age determination and stock identification.
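As an illustration of the image-standardization steps described (greyscale conversion, binarization, contour detection, alignment, and resizing), here is a minimal OpenCV sketch. It is not the authors' pipeline; the file name, contour-area threshold, and output size are assumptions, and it assumes OpenCV 4.

```python
# Minimal sketch: standardize an otolith photograph for downstream image analysis.
import cv2

def standardize_otolith_image(path, size=(512, 512)):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)

    # Otsu thresholding separates the otolith(s) from the background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # External contours: their count tells how many otoliths are on the image.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) > 500]  # drop specks

    # Estimate orientation from the minimum-area rectangle of the largest contour.
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(largest)
    if w < h:                      # make the long axis horizontal
        angle += 90

    # Rotate so all otoliths share the same layout, then resize to a common size.
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    aligned = cv2.warpAffine(binary, rot, binary.shape[::-1])
    return cv2.resize(aligned, size), len(contours)

# Example: img, n_otoliths = standardize_otolith_image("otolith_001.png")
```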
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Behavioral data associated with the IBL paper: A standardized and reproducible method to measure decision-making in mice.This data set contains contains 3 million choices 101 mice across seven laboratories at six different research institutions in three countries obtained during a perceptual decision making task.When citing this data, please also cite the associated paper: https://doi.org/10.1101/2020.01.17.909838This data can also be accessed using DataJoint and web browser tools at data.internationalbrainlab.orgAdditionally, we provide a Binder hosted interactive Jupyter notebook showing how to access the data via the Open Neurophysiology Environment (ONE) interface in Python : https://mybinder.org/v2/gh/int-brain-lab/paper-behavior-binder/master?filepath=one_example.ipynbFor more information about the International Brain Laboratory please see our website: www.internationalbrainlab.comBeta Disclaimer. Please note that this is a beta version of the IBL dataset, which is still undergoing final quality checks. If you find any issues or inconsistencies in the data, please contact us at info+behavior@internationalbrainlab.org .
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Standardized data from Mobilise-D participants (YAR dataset) and pre-existing datasets (ICICLE, MSIPC2, Gait in Lab and real-life settings, MS project, UNISS-UNIGE) are provided in the shared folder, as an example of the procedures proposed in the publication "Mobility recorded by wearable devices and gold standards: the Mobilise-D procedure for data standardization", which is currently under review at Scientific Data. Please refer to that publication for further information, and please cite it if using these data.
The code to standardize an example subject (for the ICICLE dataset) and to open the standardized Matlab files in other languages (Python, R) is available on GitHub (https://github.com/luca-palmerini/Procedure-wearable-data-standardization-Mobilise-D).
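For readers who prefer not to start from the repository code, here is a minimal sketch of opening one of the standardized Matlab files in Python with scipy; the file name and top-level variable name are assumptions, so inspect the keys of the loaded dictionary to find the actual structure names.

```python
# Minimal sketch: load a standardized .mat file and list its variables.
from scipy.io import loadmat

mat = loadmat("example_standardized_subject.mat",
              squeeze_me=True, struct_as_record=False)

# Non-private keys correspond to the Matlab variables stored in the file.
variables = [k for k in mat.keys() if not k.startswith("__")]
print("Variables in file:", variables)

# With struct_as_record=False, nested Matlab structs become attribute-style
# objects, so fields can be explored with, e.g., mat["data"].<field> (name assumed).
```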
https://www.shibatadb.com/license/data/proprietary/v1.0/license.txt
Yearly citation counts for the publication titled "Multiplex cDNA quantification method that facilitates the standardization of gene expression data".
https://researchintelo.com/privacy-and-policy
According to our latest research, the Global Mortgage Data Standardization market size was valued at $1.8 billion in 2024 and is projected to reach $5.1 billion by 2033, expanding at a robust CAGR of 12.3% during the forecast period of 2025–2033. One of the primary factors fueling this growth is the increasing regulatory scrutiny and compliance requirements across financial institutions, which has made standardized mortgage data essential for transparency, risk management, and operational efficiency. As the mortgage industry continues to digitize and expand globally, the demand for seamless, interoperable data frameworks is accelerating, enabling lenders, servicers, and regulators to achieve higher levels of accuracy, security, and speed in mortgage processing.
North America currently holds the largest share in the global Mortgage Data Standardization market, accounting for approximately 38% of the total market value in 2024. The region’s dominance is attributed to its mature financial ecosystem, rapid adoption of advanced technologies, and stringent regulatory mandates such as the Home Mortgage Disclosure Act (HMDA) and the Dodd-Frank Act. Major U.S. and Canadian banks have been early adopters of digital mortgage platforms and data standardization tools, driving significant investments in software, services, and platforms. The presence of leading technology vendors and a highly competitive lending environment further accelerates innovation and implementation of standardized data solutions. Additionally, North America benefits from a robust ecosystem of fintech startups and established players collaborating to streamline mortgage data processes, ensuring compliance and operational efficiency.
The Asia Pacific region is emerging as the fastest-growing market, with a projected CAGR of 15.2% from 2025 to 2033. This rapid growth is driven by increasing urbanization, rising home ownership rates, and significant investments in digital banking infrastructure across countries like China, India, and Australia. Governments and regulatory bodies in the region are actively promoting digital transformation in the financial sector, including the adoption of standardized mortgage data frameworks to enhance transparency and reduce fraud. Furthermore, the influx of global fintech companies and the expansion of local mortgage lenders are creating a fertile environment for innovative data standardization solutions. As regional players seek to improve customer experience and comply with evolving regulations, demand for cloud-based and automated mortgage data platforms is set to surge.
Emerging economies in Latin America, the Middle East, and Africa are witnessing gradual adoption of mortgage data standardization, albeit at a slower pace. These regions face unique challenges, such as fragmented regulatory frameworks, limited digital infrastructure, and varying levels of financial literacy. However, localized demand for affordable housing and government-led initiatives to modernize the mortgage sector are opening new opportunities for market entrants. In particular, pilot projects and partnerships with global technology providers are helping to bridge the gap, enabling financial institutions to experiment with scalable, standardized data solutions tailored to local market needs. Despite these advancements, widespread adoption remains constrained by budgetary limitations and the need for customized regulatory compliance frameworks.
| Attributes | Details |
| --- | --- |
| Report Title | Mortgage Data Standardization Market Research Report 2033 |
| By Component | Software, Services, Platforms |
| By Deployment Mode | On-Premises, Cloud-Based |
| By Application | Loan Origination, Loan Servicing, Risk Management, Compliance Management, Data Analytics, Others |
| B | |
https://www.technavio.com/content/privacy-notice
Master Data Management (MDM) Solutions Market Size 2024-2028
The master data management (MDM) solutions market size is forecast to increase by USD 20.29 billion, at a CAGR of 16.72% between 2023 and 2028.
Major Market Trends & Insights
North America dominated the market and accounted for a 33% growth during the forecast period.
By Deployment: the Cloud segment was valued at USD 7.18 billion in 2022
By End-user: the BFSI segment accounted for the largest market revenue share in 2022
Market Size & Forecast
Market Opportunities: USD 0 billion
Market Future Opportunities: USD 0 billion
CAGR: 16.72%
North America: Largest market in 2022
Market Summary
The market is witnessing significant growth as businesses grapple with the increasing volume and complexity of data. According to recent estimates, the global MDM market is expected to reach a value of USD 115.7 billion by 2026, growing at a steady pace. This expansion is driven by advances in natural language processing (NLP), machine learning (ML), and artificial intelligence (AI) technologies, which enable more effective data management and analysis. Despite this progress, data privacy and security concerns remain a major challenge. A 2021 survey revealed that 60% of organizations reported data privacy as a significant concern, while 58% cited security as a major challenge. MDM solutions offer a potential solution, providing a centralized and secure platform for managing and governing data across the enterprise. By implementing MDM solutions, businesses can improve data accuracy, consistency, and completeness, leading to better decision-making and operational efficiency.
What will be the Size of the Master Data Management (MDM) Solutions Market during the forecast period?
The market continues to evolve, driven by the increasing complexity of managing large and diverse data volumes. Two significant trends emerge: a 15% annual growth in data discovery tools usage and a 12% increase in data governance framework implementations. Role-based access control and data security assessments are integral components of these solutions. Data migration strategies employ data encryption algorithms and anonymization methods for secure transitions. Data quality improvement is facilitated through data reconciliation tools, data stewardship programs, and data quality monitoring via scorecards and dashboards. Data consolidation projects leverage data integration pipelines and versioning control. Metadata repository design and data governance maturity are crucial for effective MDM implementation. Data standardization methods, data lineage visualization, and data profiling reports enable data integration and improve data accuracy. Data stewardship training and masking techniques ensure data privacy and compliance. Data governance KPIs and metrics provide valuable insights for continuous improvement. Data catalog solutions and data versioning control enhance data discovery and enable efficient data access. Data loss prevention and a data quality dashboard are essential for maintaining data security and ensuring data accuracy.
How is this Master Data Management (MDM) Solutions Industry segmented?
The master data management (MDM) solutions industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD billion' for the period 2024-2028, as well as historical data from 2018-2022, for the following segments:
Deployment: Cloud, On-premises
End-user: BFSI, Healthcare, Retail, Others
Geography: North America (US, Canada), Europe (Germany, UK), APAC (China), Rest of World (ROW)
By Deployment Insights
The cloud segment is estimated to witness significant growth during the forecast period.
Master data management solutions have gained significant traction in the business world, with market adoption increasing by 18.7% in the past year. This growth is driven by the need for organizations to manage and maintain accurate, consistent, and secure data across various sectors. Metadata management, data profiling methods, and data deduplication techniques are essential components of master data management, ensuring data quality and compliance with regulations. Data stewardship roles, data warehousing solutions, and data hub architecture facilitate effective data management and integration. Cloud-based master data management solutions, which account for 35.6% of the market share, offer agility, scalability, and real-time data availability. Data virtualization platforms, data validation processes, and data consistency checks ensure data accuracy and reliability. Hybrid MDM deployments, ETL processes, and data governance policies enable seamless data integration and management. Data security protocols, data qualit
The State Contract and Procurement Registration System (SCPRS) was established in 2003 as a centralized database of information on State contracts and purchases over $5000. eSCPRS represents the data captured in the State's eProcurement (eP) system, Bidsync, as of March 16, 2009. The data provided is an extract from that system for fiscal years 2012-2013, 2013-2014, and 2014-2015.

Data Limitations: Some purchase orders have multiple UNSPSC numbers; however, only the first was used to identify the purchase order. Multiple UNSPSC numbers were included to provide additional data for a DGS special event; however, this affects the formatting of the file. The source system Bidsync is being deprecated, and these issues will be resolved in the future as state systems transition to Fi$cal.

Data Collection Methodology: The data collection process starts with a data file from eSCPRS that is scrubbed and standardized prior to being uploaded into a SQL Server database. There are four primary tables. The Supplier, Department, and United Nations Standard Products and Services Code (UNSPSC) tables are reference tables. The Supplier and Department tables are updated and mapped to the appropriate numbering schema and naming conventions. The UNSPSC table is used to categorize line item information and requires no further manipulation. The Purchase Order table contains raw data that requires conversion to the correct data format and mapping to the corresponding data fields. A stacking method is applied to the table to eliminate blanks where needed. Extraneous characters are removed from fields. The four tables are joined together and queries are executed to update the final Purchase Order Dataset table. Once the scrubbing and standardization process is complete, the data is then uploaded into the SQL Server database.

Secondary/Related Resources: State Contract Manual (SCM) vol. 2, http://www.dgs.ca.gov/pd/Resources/publications/SCM2.aspx; State Contract Manual (SCM) vol. 3, http://www.dgs.ca.gov/pd/Resources/publications/SCM3.aspx; Buying Green, http://www.dgs.ca.gov/buyinggreen/Home.aspx; United Nations Standard Products and Services Code, http://www.unspsc.org/
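Below is a minimal pandas sketch of the scrubbing and joining steps described above (strip extraneous characters, keep only the first UNSPSC code, then join the reference tables onto the raw purchase orders). The file names and column names are assumptions; the actual processing is performed in SQL Server.

```python
# Minimal sketch: clean the raw Purchase Order table and join the reference tables.
import pandas as pd

purchase_orders = pd.read_csv("purchase_orders_raw.csv", dtype=str)  # assumed file
suppliers = pd.read_csv("suppliers.csv", dtype=str)
departments = pd.read_csv("departments.csv", dtype=str)
unspsc = pd.read_csv("unspsc.csv", dtype=str)

# Remove extraneous characters from free-text fields and keep only the first
# UNSPSC code when several are present (column names are assumptions).
purchase_orders["item_description"] = (purchase_orders["item_description"]
                                       .str.replace(r"[^\w\s\-.,/$]", "", regex=True)
                                       .str.strip())
purchase_orders["unspsc_code"] = purchase_orders["unspsc_code"].str.split(";").str[0]

# Join the three reference tables onto the purchase orders.
final = (purchase_orders
         .merge(suppliers, on="supplier_id", how="left")
         .merge(departments, on="department_id", how="left")
         .merge(unspsc, on="unspsc_code", how="left"))

final.to_csv("purchase_order_dataset.csv", index=False)
```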
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The State Contract and Procurement Registration System (SCPRS) was established in 2003 as a centralized database of information on State contracts and purchases over $5000. eSCPRS represents the data captured in the State's eProcurement (eP) system, Bidsync, as of March 16, 2009. The data provided is an extract from that system for fiscal years 2012-2013, 2013-2014, and 2014-2015.

Data Limitations: Some purchase orders have multiple UNSPSC numbers; however, only the first was used to identify the purchase order. Multiple UNSPSC numbers were included to provide additional data for a DGS special event; however, this affects the formatting of the file. The source system Bidsync is being deprecated, and these issues will be resolved in the future as state systems transition to Fi$cal.

Data Collection Methodology: The data collection process starts with a data file from eSCPRS that is scrubbed and standardized prior to being uploaded into a SQL Server database. There are four primary tables. The Supplier, Department, and United Nations Standard Products and Services Code (UNSPSC) tables are reference tables. The Supplier and Department tables are updated and mapped to the appropriate numbering schema and naming conventions. The UNSPSC table is used to categorize line item information and requires no further manipulation. The Purchase Order table contains raw data that requires conversion to the correct data format and mapping to the corresponding data fields. A stacking method is applied to the table to eliminate blanks where needed. Extraneous characters are removed from fields. The four tables are joined together and queries are executed to update the final Purchase Order Dataset table. Once the scrubbing and standardization process is complete, the data is then uploaded into the SQL Server database.

Secondary/Related Resources:
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This dataset recreates three releases (2015, 2020, and 2022) of The Neighborhood Atlas team’s Area Deprivation Index (ADI) using standardized components. The ADI is a measure that aims to quantify the socioeconomic conditions of census block groups (sometimes used to approximate neighborhoods), originally based on 1990 census tract data and factor loadings. The Neighborhood Atlas team at the University of Wisconsin adapted the ADI to block groups and more recent data, imputing missing data using tract- and county-level data. However, unlike the original index construction method, The Neighborhood Atlas team did not adjust (standardize) individual components before combining them into an overall score. This approach resulted in individual index components measured in dollars, such as income and home value, being overly influential in the final score. This dataset corrects for that by standardizing these components before aggregating, offering a more multi-dimensional view of socioeconomic conditions. The standardized ADI dataset provides continuous rankings for block groups nationwide and decile rankings for block groups within each state.
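Here is a minimal sketch of the standardize-before-aggregating idea described above; the components, weights, and data values are illustrative assumptions, not the actual ADI inputs or factor loadings.

```python
# Minimal sketch: z-score index components before combining them, so that
# dollar-denominated variables do not dominate the overall score.
import numpy as np
import pandas as pd

# One row per census block group, with raw socioeconomic components (made-up data).
block_groups = pd.DataFrame({
    "state": ["WI", "WI", "IL", "IL"],
    "median_income": [42000, 78000, 55000, 31000],
    "median_home_value": [110000, 310000, 180000, 90000],
    "pct_no_hs_diploma": [18.0, 4.0, 9.0, 25.0],
})

components = ["median_income", "median_home_value", "pct_no_hs_diploma"]
weights = {"median_income": -0.4, "median_home_value": -0.3, "pct_no_hs_diploma": 0.3}  # assumed

# Standardize each component (z-score) across all block groups.
z = (block_groups[components] - block_groups[components].mean()) / block_groups[components].std()

# Aggregate into a single deprivation score, then rank nationally and by state.
block_groups["adi_score"] = sum(weights[c] * z[c] for c in components)
block_groups["national_rank"] = block_groups["adi_score"].rank(pct=True)
block_groups["state_decile"] = np.ceil(
    block_groups.groupby("state")["adi_score"].rank(pct=True) * 10).astype(int)

print(block_groups)
```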
According to our latest research, the global Mortgage Data Standardization market size reached USD 1.47 billion in 2024, reflecting robust adoption across financial institutions and regulatory bodies. The market is expected to expand at a CAGR of 13.2% from 2025 to 2033, reaching a projected value of USD 4.13 billion by 2033. This growth is primarily driven by the increasing demand for seamless data integration, regulatory compliance, and operational efficiency in mortgage processes worldwide.
One of the key growth factors propelling the Mortgage Data Standardization market is the surge in regulatory requirements and the intensification of compliance standards in the global mortgage sector. Financial institutions are under mounting pressure to ensure that their data management practices adhere to evolving government mandates, such as the Home Mortgage Disclosure Act (HMDA) in the United States and similar frameworks in Europe and Asia Pacific. These regulations necessitate the adoption of standardized data formats and reporting protocols, which enable more accurate, transparent, and efficient exchanges of mortgage information. As a result, mortgage lenders, banks, and other stakeholders are increasingly investing in advanced software, platforms, and services that facilitate mortgage data standardization, thereby minimizing compliance risks and reducing operational costs.
Another significant growth driver is the rapid digitization and automation of mortgage workflows. As the mortgage industry transitions from legacy systems to digital platforms, the need for standardized data becomes critical for interoperability and integration across various software applications. Mortgage data standardization enables seamless communication between loan origination, servicing, risk management, and analytics systems, thereby enhancing the overall customer experience and improving turnaround times. Furthermore, the proliferation of cloud-based solutions is accelerating this trend, as these platforms offer scalable, secure, and cost-effective means to manage standardized mortgage data across geographically dispersed operations.
Technological advancements in data analytics and artificial intelligence are also fueling the expansion of the Mortgage Data Standardization market. The integration of standardized data formats with advanced analytics tools empowers financial institutions to extract actionable insights, identify trends, and mitigate risks more effectively. By leveraging standardized mortgage data, organizations can enhance decision-making processes, improve loan quality, and optimize portfolio performance. This not only drives business growth but also fosters innovation in product offerings and service delivery, further strengthening the competitive landscape of the market.
From a regional perspective, North America continues to dominate the Mortgage Data Standardization market, accounting for the largest market share in 2024, followed by Europe and Asia Pacific. The United States, in particular, has witnessed significant investments in mortgage technology and regulatory compliance solutions, driven by stringent reporting requirements and a mature financial ecosystem. Meanwhile, emerging markets in Asia Pacific and Latin America are experiencing rapid growth, fueled by increasing mortgage penetration, government-led digitalization initiatives, and rising demand for efficient and transparent lending processes. As these regions continue to modernize their financial infrastructures, the adoption of mortgage data standardization solutions is expected to accelerate, contributing to the overall expansion of the global market.
The component segment of the Mortgage Data Standardization market is categorized into software, services, and platforms. Software solutions play a pivotal role in enabling financial institutions to standardize, validate, and manage mortgage data efficiently. These solutions encompass data integration tools, workflow automat
The regional networking strategy is widely implemented in China as a normative policy aimed at fostering cohesion and enhancing competitiveness. However, the empirical basis for this strategy remains relatively weak due to limitations in measurement methods and data availability. This paper establishes urban networks from enterprise investment data and then measures the network's external effects for each city using a multiscale geographically weighted regression (MGWR) model. The results show that: (1) regional networking plays a significant role in urban development, although it is not the dominant factor; (2) the benefits of network connections may vary depending on the location and level of cities; (3) the major cities assume a pivotal role in the urban network. Based on these conclusions, the paper presents strategic measures to enhance the network's external impacts, aiming to offer insights for other regions in formulating regional development strategies and establishing regional urban networks.
These are simulated data without any identifying information or informative birth-level covariates. We also standardize the pollution exposures on each week by subtracting off the median exposure amount on a given week and dividing by the interquartile range (IQR) (as in the actual application to the true NC birth records data). The dataset that we provide includes weekly average pregnancy exposures that have already been standardized in this way, while the medians and IQRs are not given. This further protects identifiability of the spatial locations used in the analysis.

This dataset is not publicly accessible because EPA cannot release personally identifiable information regarding living individuals, according to the Privacy Act and the Freedom of Information Act (FOIA). This dataset contains information about human research subjects. Because there is potential to identify individual participants and disclose personal information, either alone or in combination with other datasets, individual level data are not appropriate to post for public access. Restricted access may be granted to authorized persons by contacting the party listed.

It can be accessed through the following means: File format: R workspace file; “Simulated_Dataset.RData”.

Metadata (including data dictionary):
• y: Vector of binary responses (1: adverse outcome, 0: control)
• x: Matrix of covariates; one row for each simulated individual
• z: Matrix of standardized pollution exposures
• n: Number of simulated individuals
• m: Number of exposure time periods (e.g., weeks of pregnancy)
• p: Number of columns in the covariate design matrix
• alpha_true: Vector of “true” critical window locations/magnitudes (i.e., the ground truth that we want to estimate)

Code Abstract: We provide R statistical software code (“CWVS_LMC.txt”) to fit the linear model of coregionalization (LMC) version of the Critical Window Variable Selection (CWVS) method developed in the manuscript. We also provide R code (“Results_Summary.txt”) to summarize/plot the estimated critical windows and posterior marginal inclusion probabilities.

Description: “CWVS_LMC.txt” is delivered to the user in the form of a .txt file that contains R statistical software code. Once the “Simulated_Dataset.RData” workspace has been loaded into R, the text in the file can be used to identify/estimate critical windows of susceptibility and posterior marginal inclusion probabilities. “Results_Summary.txt” is also delivered as a .txt file containing R statistical software code. Once the “CWVS_LMC.txt” code has been applied to the simulated dataset and the program has completed, this code can be used to summarize and plot the identified/estimated critical windows and posterior marginal inclusion probabilities (similar to the plots shown in the manuscript).

Required R packages:
• For running “CWVS_LMC.txt”: msm (sampling from the truncated normal distribution), mnormt (sampling from the multivariate normal distribution), BayesLogit (sampling from the Polya-Gamma distribution)
• For running “Results_Summary.txt”: plotrix (plotting the posterior means and credible intervals)

Instructions for Use (Reproducibility): What can be reproduced: The data and code can be used to identify/estimate critical windows from one of the actual simulated datasets generated under setting E4 from the presented simulation study.
How to use the information:
• Load the “Simulated_Dataset.RData” workspace
• Run the code contained in “CWVS_LMC.txt”
• Once the “CWVS_LMC.txt” code is complete, run “Results_Summary.txt”

Format: Below is the replication procedure for the attached data set for the portion of the analyses using a simulated data set.

Data: The data used in the application section of the manuscript consist of geocoded birth records from the North Carolina State Center for Health Statistics, 2005-2008. In the simulation study section of the manuscript, we simulate synthetic data that closely match some of the key features of the birth certificate data while maintaining confidentiality of any actual pregnant women.

Availability: Due to the highly sensitive and identifying information contained in the birth certificate data (including latitude/longitude and address of residence at delivery), we are unable to make the data from the application section publicly available. However, we will make one of the simulated datasets available for any reader interested in applying the method to realistic simulated birth records data. This will also allow the user to become familiar with the required inputs of the model, how the data should be structured, and what type of output is obtained. While we cannot provide the application data here, access to the North Carolina birth records can be requested through the North Carolina State Center for Health Statistics, and requires an appropriate data use agreement.

Description (Permissions): These are simulated data without any identifying information or informative birth-level covariates. We also standardize the pollution exposures on each week by subtracting off the median exposure amount on a given week and dividing by the interquartile range (IQR) (as in the actual application to the true NC birth records data). The dataset that we provide includes weekly average pregnancy exposures that have already been standardized in this way, while the medians and IQRs are not given. This further protects identifiability of the spatial locations used in the analysis.

This dataset is associated with the following publication: Warren, J., W. Kong, T. Luben, and H. Chang. Critical Window Variable Selection: Estimating the Impact of Air Pollution on Very Preterm Birth. Biostatistics, Oxford University Press, Oxford, UK, 1-30, (2019).
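A minimal sketch of the weekly median/IQR standardization described in this record is shown below, using made-up exposure values; the provided dataset already contains the standardized z matrix, and the analysis code itself is the R code described above.

```python
# Minimal sketch: for each gestational week, subtract the median exposure across
# individuals and divide by the IQR (the array shape mirrors an individuals x weeks
# exposure matrix; the raw values here are synthetic).
import numpy as np

rng = np.random.default_rng(1)
raw_exposure = rng.gamma(shape=2.0, scale=5.0, size=(100, 37))  # 100 individuals, 37 weeks

weekly_median = np.median(raw_exposure, axis=0)
q75, q25 = np.percentile(raw_exposure, [75, 25], axis=0)
weekly_iqr = q75 - q25

# Standardized exposures, analogous to the z matrix provided in the dataset.
z = (raw_exposure - weekly_median) / weekly_iqr
print(z.mean(axis=0)[:5])  # roughly centered near zero for each week
```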
The success and sustainability of U.S. EPA efforts to reduce, refine, and replace in vivo animal testing depends on the ability to translate toxicokinetic and toxicodynamic data from in vitro and in silico new approach methods (NAMs) to human-relevant exposures and health outcomes. Organotypic culture models employing primary human cells enable consideration of human health effects and inter-individual variability, but present significant challenges for test method standardization, transferability, and validation. Increasing confidence in the information provided by these in vitro NAMs requires setting appropriate performance standards and benchmarks, defined by the context of use, to consider human biology and mechanistic relevance without animal data. The human thyroid microtissue assay utilizes primary human thyrocytes to reproduce structural and functional features of the thyroid gland that enable testing for potential thyroid disrupting chemicals. As a variable-donor assay platform, conventional principles for assay performance standardization need to be balanced with the ability to predict a range of human responses. The objectives of this study were to 1) define the technical parameters for optimal donor procurement, primary thyrocyte qualification, and performance in the human thyroid microtissue assay, and 2) set benchmark ranges for reference chemical responses. Thyrocytes derived from a cohort of 32 demographically diverse euthyroid donors were characterized across a battery of endpoints to evaluate morphological and functional variability. Reference chemical responses were profiled to evaluate the range and chemical-specific variability of donor-dependent effects within the cohort. The data informed minimum acceptance criteria for donor qualification and set benchmark parameters for method transfer proficiency testing and validation of assay performance.
According to our latest research, the global mortgage data tapes standardization market size reached USD 1.47 billion in 2024, with a robust year-over-year growth driven by the increasing digitization of financial services and regulatory requirements. The market is forecasted to expand at a CAGR of 11.2% from 2025 to 2033, reaching a projected value of USD 4.13 billion by 2033. This growth trajectory is primarily fueled by the demand for enhanced data integrity, operational efficiency, and compliance in the mortgage industry, as organizations strive to streamline data management and reporting processes.
One of the most significant growth factors for the mortgage data tapes standardization market is the rapid adoption of digital technologies across the financial sector. As mortgage processing becomes increasingly digitized, the need for standardized data tapes that enable seamless integration, transfer, and analysis of mortgage-related information has become paramount. Financial institutions are under mounting pressure to process loans faster and more accurately, making standardized data tapes an essential tool for reducing manual intervention and errors. Furthermore, the shift toward digital mortgage solutions has heightened the importance of data quality and consistency, which directly drives the adoption of standardization platforms and services across the industry.
Another critical factor propelling the market is the evolving regulatory landscape. Regulatory bodies across the globe are mandating stricter compliance and reporting standards for mortgage transactions, requiring more granular and standardized data submission. This is particularly evident in regions such as North America and Europe, where regulatory frameworks like the Consumer Financial Protection Bureau (CFPB) and the European Banking Authority (EBA) have introduced comprehensive guidelines for mortgage data reporting. As a result, banks, lenders, and other financial entities are investing heavily in solutions that automate and standardize data tapes to ensure compliance, minimize risk, and avoid costly penalties. The increased focus on transparency and auditability has further cemented the role of data standardization in the mortgage market.
The growing complexity of mortgage products and the rise of securitization have also played a pivotal role in driving the demand for mortgage data tapes standardization. Securitization processes require the aggregation and analysis of vast amounts of mortgage data from diverse sources, making data uniformity crucial for accurate risk assessment and investor confidence. Standardized data tapes facilitate the efficient packaging, transfer, and analysis of mortgage assets, thereby enabling smoother securitization workflows and secondary market transactions. This trend is particularly pronounced in large financial institutions and government agencies that manage extensive mortgage portfolios and require robust data management solutions to support their operations.
From a regional perspective, North America continues to dominate the mortgage data tapes standardization market, accounting for the largest revenue share in 2024. This leadership is attributed to the region's advanced financial infrastructure, high adoption of digital mortgage solutions, and stringent regulatory requirements. Europe follows closely, driven by the ongoing harmonization of financial regulations and the increasing emphasis on cross-border mortgage transactions. Meanwhile, the Asia Pacific region is emerging as a high-growth market, bolstered by rapid urbanization, expanding mortgage markets, and increasing investments in digital banking infrastructure. Latin America and the Middle East & Africa are also witnessing steady growth, albeit at a slower pace, as financial institutions in these regions gradually embrace data standardization to enhance operational efficiency and regulatory compliance.
The mortgag
https://dataverse.nl/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.34894/QZTMV4
Data set used in "A standardized method for the construction of tracer specific PET and SPECT rat brain templates: validation and implementation of a toolbox"
https://spdx.org/licenses/CC0-1.0.html
The advancement of metabarcoding techniques, declining costs of high-throughput sequencing and development of systematic sampling devices, such as autonomous reef monitoring structures (ARMS), have provided the means to gather a vast amount of diversity data from cryptic marine communities. However, such increased capability could also lead to analytical challenges if the methods used to examine these communities across local and global scales are not standardized. Here we compare and assess the underlying biases of four ARMS field processing methods, preservation media, and current bioinformatic pipelines in evaluating diversity from cytochrome c oxidase I metabarcoding data. Illustrating the ability of ARMS-based metabarcoding to capture a wide spectrum of biodiversity, 3,372 OTUs and twenty-eight phyla, including 17 of 33 marine metazoan phyla, were detected from 3 ARMS (2.607 m² area) collected on coral reefs in Mo'orea, French Polynesia. Significant differences were found between processing and preservation methods, demonstrating the need to standardize methods for biodiversity comparisons. We recommend the use of a standardized protocol (NOAA method) combined with DMSO preservation of tissues for sessile macroorganisms because it gave a more accurate representation of the underlying communities, is cost effective and removes chemical restrictions associated with sample transportation. We found that sequences identified at ≥ 97% similarity increased more than 7-fold (5.1% to 38.6%) using a geographically local barcode inventory, highlighting the importance of local species inventories. Phylogenetic approaches that assign higher taxonomic ranks accrued phylum identification errors (9.7%) due to sparse taxonomic coverage of the understudied cryptic coral reef community in public databases. However, a ≥ 85% sequence identity cut-off provided more accurate results (0.7% errors) and enabled phylum level identifications of 86.3% of the sequence reads. With over 1600 ARMS deployed, standardizing methods and improving databases are imperative to provide unprecedented global baseline assessments of understudied cryptic marine species in a rapidly changing world.
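Below is a minimal sketch of how the identity cut-offs discussed above could be applied to best-hit matches against a barcode reference; the hit table, its column names, and the example taxa are illustrative assumptions, not the authors' bioinformatic pipeline.

```python
# Minimal sketch: assign species only at >= 97% identity, phylum at >= 85%,
# and leave lower-identity sequences unassigned.
import pandas as pd

hits = pd.DataFrame({
    "otu": ["OTU_1", "OTU_2", "OTU_3"],
    "percent_identity": [99.1, 91.4, 78.0],
    "ref_species": ["Tethya aurantium", "Herdmania momus", "Cellana sp."],
    "ref_phylum": ["Porifera", "Chordata", "Mollusca"],
})

hits["assigned_species"] = hits["ref_species"].where(hits["percent_identity"] >= 97)
hits["assigned_phylum"] = hits["ref_phylum"].where(hits["percent_identity"] >= 85)
print(hits[["otu", "percent_identity", "assigned_species", "assigned_phylum"]])
```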