Facebook
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this paper, we show that concept of Statistical Process Control tools was thoroughly examined and the definitions of quality control concepts were presented. This is significant because of it is anticipated that this study will contribute to the literature as an exemplary application that demonstrates the role of statistical process control (SPC) tools in quality improvement in the evaluation and decision-making phase.
This is significant because of this study is to investigate applications of quality control, to clarify statistical control methods and problem-solving procedures, to generate proposals for problem-solving approaches, and to disseminate improvement studies in the ready-to-wear industry. The basic Statistical Process Control tools used in the study, the most repetitive faults were detected and these faults were divided into sub-headings for more detailed analysis. In this way, it was tried to prevent the repetition of faults by going down to the root causes of any detected fault. With this different perspective, it is expected that the study will contribute to other fields.
We give consent for the publication of identifiable details, which can include photograph(s) and case history and details within the text (“Material”) to be published in the Journal of Quality Technology. We confirm that have seen and been given the opportunity to read both the Material and the Article (as attached) to be published by Taylor & Francis.
Facebook
TwitterThe Best Management Practices Statistical Estimator (BMPSE) version 1.2.0 was developed by the U.S. Geological Survey (USGS), in cooperation with the Federal Highway Administration (FHWA) Office of Project Delivery and Environmental Review to provide planning-level information about the performance of structural best management practices for decision makers, planners, and highway engineers to assess and mitigate possible adverse effects of highway and urban runoff on the Nation's receiving waters (Granato 2013, 2014; Granato and others, 2021). The BMPSE was assembled by using a Microsoft Access® database application to facilitate calculation of BMP performance statistics. Granato (2014) developed quantitative methods to estimate values of the trapezoidal-distribution statistics, correlation coefficients, and the minimum irreducible concentration (MIC) from available data. Granato (2014) developed the BMPSE to hold and process data from the International Stormwater Best Management Practices Database (BMPDB, www.bmpdatabase.org). Version 1.0 of the BMPSE contained a subset of the data from the 2012 version of the BMPDB; the current version of the BMPSE (1.2.0) contains a subset of the data from the December 2019 version of the BMPDB. Selected data from the BMPDB were screened for import into the BMPSE in consultation with Jane Clary, the data manager for the BMPDB. Modifications included identifying water quality constituents, making measurement units consistent, identifying paired inflow and outflow values, and converting BMPDB water quality values set as half the detection limit back to the detection limit. Total polycyclic aromatic hydrocarbons (PAH) values were added to the BMPSE from BMPDB data; they were calculated from individual PAH measurements at sites with enough data to calculate totals. The BMPSE tool can sort and rank the data, calculate plotting positions, calculate initial estimates, and calculate potential correlations to facilitate the distribution-fitting process (Granato, 2014). For water-quality ratio analysis the BMPSE generates the input files and the list of filenames for each constituent within the Graphical User Interface (GUI). The BMPSE calculates the Spearman’s rho (ρ) and Kendall’s tau (τ) correlation coefficients with their respective 95-percent confidence limits and the probability that each correlation coefficient value is not significantly different from zero by using standard methods (Granato, 2014). If the 95-percent confidence limit values are of the same sign, then the correlation coefficient is statistically different from zero. For hydrograph extension, the BMPSE calculates ρ and τ between the inflow volume and the hydrograph-extension values (Granato, 2014). For volume reduction, the BMPSE calculates ρ and τ between the inflow volume and the ratio of outflow to inflow volumes (Granato, 2014). For water-quality treatment, the BMPSE calculates ρ and τ between the inflow concentrations and the ratio of outflow to inflow concentrations (Granato, 2014; 2020). The BMPSE also calculates ρ between the inflow and the outflow concentrations when a water-quality treatment analysis is done. The current version (1.2.0) of the BMPSE also has the option to calculate urban-runoff quality statistics from inflows to BMPs by using computer code developed for the Highway Runoff Database (Granato and Cazenas, 2009;Granato, 2019). Granato, G.E., 2013, Stochastic empirical loading and dilution model (SELDM) version 1.0.0: U.S. Geological Survey Techniques and Methods, book 4, chap. C3, 112 p., CD-ROM https://pubs.usgs.gov/tm/04/c03 Granato, G.E., 2014, Statistics for stochastic modeling of volume reduction, hydrograph extension, and water-quality treatment by structural stormwater runoff best management practices (BMPs): U.S. Geological Survey Scientific Investigations Report 2014–5037, 37 p., http://dx.doi.org/10.3133/sir20145037. Granato, G.E., 2019, Highway-Runoff Database (HRDB) Version 1.1.0: U.S. Geological Survey data release, https://doi.org/10.5066/P94VL32J. Granato, G.E., and Cazenas, P.A., 2009, Highway-Runoff Database (HRDB Version 1.0)--A data warehouse and preprocessor for the stochastic empirical loading and dilution model: Washington, D.C., U.S. Department of Transportation, Federal Highway Administration, FHWA-HEP-09-004, 57 p. https://pubs.usgs.gov/sir/2009/5269/disc_content_100a_web/FHWA-HEP-09-004.pdf Granato, G.E., Spaetzel, A.B., and Medalie, L., 2021, Statistical methods for simulating structural stormwater runoff best management practices (BMPs) with the stochastic empirical loading and dilution model (SELDM): U.S. Geological Survey Scientific Investigations Report 2020–5136, 41 p., https://doi.org/10.3133/sir20205136
Facebook
TwitterBatch effects are technical sources of variation introduced by the necessity of conducting gene expression analyses on different dates due to the large number of biological samples in population-based studies. The aim of this study is to evaluate the performances of linear mixed models (LMM) and Combat in batch effect removal. We also assessed the utility of adding quality control samples in the study design as technical replicates. In order to do so, we simulated gene expression data by adding “treatment” and batch effects to a real gene expression dataset. The performances of LMM and Combat, with and without quality control samples, are assessed in terms of sensitivity and specificity while correcting for the batch effect using a wide range of effect sizes, statistical noise, sample sizes and level of balanced/unbalanced designs. The simulations showed small differences among LMM and Combat. LMM identifies stronger relationships between big effect sizes and gene expression than Combat, while Combat identifies in general more true and false positives than LMM. However, these small differences can still be relevant depending on the research goal. When any of these methods are applied, quality control samples did not reduce the batch effect, showing no added value for including them in the study design.
Facebook
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data for benchmarking SPC against other process monitoring methods. The data consist of a one-dimensional timeseries of floats (x.csv). Addititionally information whether the data are within the specifications are provided as another time series (y.csv). The data are generated by solving an optimization problem for each time to generate a mixture distribution of different probability distributions. Then for each timestep one record is sampled. Inputs for the optimization problem are the given probability distributions, the lower and upper limit of the tolerance interval as well as the desired median of the data. Additionally weights of the different probability distributions can be given as boundary condions for the different time steps. Metadata generated from the solving are stored in k_matrix.csv (wheights at each time step) and distribs (probability distribution objects). The data consists of phases with data from a stable mixture distribution and phases with data from a mixture distribution that do not fulfill the stability criteria.
Facebook
Twitterhttps://www.statsndata.org/how-to-orderhttps://www.statsndata.org/how-to-order
The Statistical Process Control System (SPC) market has emerged as a critical component in quality management and process optimization across various industries, significantly enhancing operational efficiency and product quality. SPC utilizes statistical methods and tools to monitor and control manufacturing process
Facebook
TwitterWe present a new method for statistical process control (SPC) of a discrete part manufacturing system based on intrinsic geometrical properties of the parts, estimated from three-dimensional sensor data. An intrinsic method has the computational advantage of avoiding the difficult part registration problem, necessary in previous SPC approaches of three-dimensional geometrical data, but inadequate if noncontact sensors are used. The approach estimates the spectrum of the Laplace–Beltrami (LB) operator of the scanned parts and uses a multivariate nonparametric control chart for online process control. Our proposal brings SPC closer to computer vision and computer graphics methods aimed to detect large differences in shape (but not in size). However, the SPC problem differs in that small changes in either shape or size of the parts need to be detected, keeping a controllable false alarm rate and without completely filtering noise. An online or “Phase II” method and a scheme for starting up in the absence of prior data (“Phase I”) are presented. Comparison with earlier approaches that require registration shows the LB spectrum method to be more sensitive to rapidly detect small changes in shape and size, including the practical case when the sequence of part datasets is in the form of large, unequal size meshes. A post-alarm diagnostic method to investigate the location of defects on the surface of a part is also presented. While we focus in this article on surface (triangulation) data, the methods can also be applied to point cloud and voxel metrology data.
Facebook
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper presents the importance of simple spatial statistics techniques applied in positional quality control of spatial data. To this end, Analysis methods of point data spatial distribution pattern are presented, as well as bias analysis in the positional discrepancies samples. To evaluate the points spatial distribution Nearest Neighbor and Ripley's K function methods were used. As for bias analysis, the average directional vectors of discrepancies and the circular variance were used. A methodology for positional quality control of spatial data is proposed, in which includes sampling planning and its spatial distribution pattern evaluation, analyzing the data normality through the application of bias tests, and positional accuracy classification according to a standard. For the practical experiment, an orthoimage generated from a PRISM scene of the ALOS satellite was evaluated. Results showed that the orthoimage is accurate on a scale of 1:25,000, being classified as Class A according to the Brazilian standard positional accuracy, not showing bias at the coordinates. The main contribution of this work is the incorporation of spatial statistics techniques in cartographic quality control.
Facebook
TwitterSurvey of advanced technology, use of quality management practices, by North American Industry Classification System (NAICS) and enterprise size for Canada and certain provinces, in 2014.
Facebook
Twitterhttps://creativecommons.org/publicdomain/zero/1.0/https://creativecommons.org/publicdomain/zero/1.0/
Initial Paper release: AUTOMATED, REPRODUCIBLE PIPELINE FOR LLM VULNERABILITY DISCOVERY: PROBE DESIGN, JSON FINDINGS, AND STATISTICAL QUALITY CONTROLS: CASE STUDY OF GPT-OSS-20B VULNERABILITIES Abstract
We present a unified, reproducible pipeline that combines systematic probe design, automated vulnerability discovery, indicator-based attribution, and an interpretable visualization suite to detect and triage prompt-induced vulnerabilities in open-weight large language models. Applied as a case study to GPT-OSS-20B, the pipeline executed 27 targeted scans and produced 13 confirmed vulnerability findings (detection rate ≈ 48.1%), with an average automated severity score of 0.81 on a 0–3 scale (max = 3). Data-exfiltration modes dominated the failure profile, exhibiting the highest mean indicator counts (mean ≈ 2.33) and strongest correlation with long, high-confidence responses. Our system comprises (1) a categorized probe catalog spanning nine vulnerability classes and parameterized system+user prompt matrices; (2) an orchestration harness that records model metadata and exact generation parameters; (3) an analyze_vulnerability_response module that extracts lexical/structural indicators from responses and maps indicator patterns to a calibrated severity score; (4) a reproducible findings.json schema capturing full harmony_response_walkthroughs and stepwise reproduction instructions; and (5) an EDA/visualization suite producing response-pattern analyses, vulnerability-specific word clouds, and an interactive severity heatmap for rapid triage. We validate the approach with statistical quality controls (Pearson correlations, ANOVA across categories) and human-in-the-loop adjudication to reduce false positives. Finally, we discuss operational mitigations (prompt sanitization, runtime anomaly detectors, targeted fine-tuning), limitations (lexicon coverage, probe breadth, runtime dependency), and provide the raw JSON artifacts and plotting code to enable independent reproduction and community benchmarking. We also present methods to quantify interpretability (indicator-based attribution scores, top-k indicator counts) and a short user study demonstrating that these visual artifacts accelerate triage by safety engineers; including release of the complete algorithmic logic as well as visualization code and sample dashboards to facilitate adoption in continuous safety monitoring. Keywords: LLM red-teaming; reproducible pipeline; GPT-OSS-20B; vulnerability discovery; data exfiltration; indicator lexicon; findings.json; interpretability; word clouds; severity heatmap; probe design; statistical quality control GitHub Repository: https://github.com/tobimichigan/Probe-Design-Case-Study-Of-Gpt-Oss-20b-Vulnerabilities/tree/main
Facebook
TwitterMIT Licensehttps://opensource.org/licenses/MIT
License information was derived automatically
Datasets to the planned publication "Generalized Statistical Process Control via 1D-ResNet Pretraining" by Tobias Schulze, Louis Huebser, Sebastian Beckschulte and Robert H. Schmitt (Chair for Intelligence in Quality Sensing, Laboratory for Machine Tools and Production Engineering, WZL of RWTH Aachen University)
Data for benchmarking SPC against other process monitoring methods. The data consist of a one-dimensional timeseries of floats (x.csv). Addititionally information whether the data are within the specifications are provided as another time series (y.csv). The data are generated by solving an optimization problem for each time to generate a mixture distribution of different probability distributions. Then for each timestep one record is sampled. Inputs for the optimization problem are the given probability distributions, the lower and upper limit of the tolerance interval as well as the desired median of the data. Additionally weights of the different probability distributions can be given as boundary condions for the different time steps. Metadata generated from the solving are stored in k_matrix.csv (wheights at each time step) and distribs (probability distribution objects according to https://doi.org/10.5281/zenodo.8249487). The data consists of phases with data from a stable mixture distribution and phases with data from a mixture distribution that do not fulfill the stability criteria.
The train data were used to train the G-SPC model. The test data were used for benchmarking purposes
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC-2023 Internet of Production – 390621612.
Facebook
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ABSTRACT Analyzing the physical-chemical and biological characteristics allows the evaluation of the water quality of a water body. Thus, the objective of this study was to determine the water quality index of the Passaúna and Piraquara rivers, as well as to apply statistical quality control methodologies to evaluate the data resulting from the monitoring of water quality. Therefore, a database with physico-chemical and microbiological parameters of the Passaúna and Piraquara rivers, a watershed of the Iguaçu River, belonging to the cities of Araucária and Piraquara, respectively Paraná, Brazil, was used to carry out the research. The water quality index was determined with the time series and, subsequently, these data were submitted to the statistical control of the process, with the control charts of individual Shewhart, EWMA and CUSUM, in addition to the development of the process capacity index. The WQI detected that the rivers remained in average quality until the year 2000, however, from that year it was possible to see a decreasing trend in the water quality of the evaluated rivers. The control charts of Shewhart, EWMA, CUSUM and the process capability index were able to identify the decreasing trend in water quality, demonstrating that they are fast and efficient techniques for the evaluation of water quality control.
Facebook
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Because of the “curse of dimensionality,” high-dimensional processes present challenges to traditional multivariate statistical process monitoring (SPM) techniques. In addition, the unknown underlying distribution of and complicated dependency among variables such as heteroscedasticity increase the uncertainty of estimated parameters and decrease the effectiveness of control charts. In addition, the requirement of sufficient reference samples limits the application of traditional charts in high-dimension, low-sample-size scenarios (small n, large p). More difficulties appear when detecting and diagnosing abnormal behaviors caused by a small set of variables (i.e., sparse changes). In this article, we propose two change-point–based control charts to detect sparse shifts in the mean vector of high-dimensional heteroscedastic processes. Our proposed methods can start monitoring when the number of observations is a lot smaller than the dimensionality. The simulation results show that the proposed methods are robust to nonnormality and heteroscedasticity. Two real data examples are used to illustrate the effectiveness of the proposed control charts in high-dimensional applications. The R codes are provided online.
Facebook
TwitterThis report describes the quality assurance arrangements for the registered provider (RP) Tenant Satisfaction Measures statistics, providing more detail on the regulatory and operational context for data collections which feed these statistics and the safeguards that aim to maximise data quality.
The statistics we publish are based on data collected directly from local authority registered provider (LARPs) and from private registered providers (PRPs) through the Tenant Satisfaction Measures (TSM) return. We use the data collected through these returns extensively as a source of administrative data. The United Kingdom Statistics Authority (UKSA) encourages public bodies to use administrative data for statistical purposes and, as such, we publish these data.
These data are first being published in 2024, following the first collection and publication of the TSM.
In February 2018, the UKSA published the Code of Practice for Statistics. This sets standards for organisations producing and publishing statistics, ensuring quality, trustworthiness and value.
These statistics are drawn from our TSM data collection and are being published for the first time in 2024 as official statistics in development.
Official statistics in development are official statistics that are undergoing development. Over the next year we will review these statistics and consider areas for improvement to guidance, validations, data processing and analysis. We will also seek user feedback with a view to improving these statistics to meet user needs and to explore issues of data quality and consistency.
Until September 2023, ‘official statistics in development’ were called ‘experimental statistics’. Further information can be found on the https://www.ons.gov.uk/methodology/methodologytopicsandstatisticalconcepts/guidetoofficialstatisticsindevelopment">Office for Statistics Regulation website.
We are keen to increase the understanding of the data, including the accuracy and reliability, and the value to users. Please https://forms.office.com/e/cetNnYkHfL">complete the form or email feedback, including suggestions for improvements or queries as to the source data or processing to enquiries@rsh.gov.uk.
We intend to publish these statistics in Autumn each year, with the data pre-announced in the release calendar.
All data and additional information (including a list of individuals (if any) with 24 hour pre-release access) are published on our statistics pages.
The data used in the production of these statistics are classed as administrative data. In 2015 the UKSA published a regulatory standard for the quality assurance of administrative data. As part of our compliance to the Code of Practice, and in the context of other statistics published by the UK Government and its agencies, we have determined that the statistics drawn from the TSMs are likely to be categorised as low-quality risk – medium public interest (with a requirement for basic/enhanced assurance).
The publication of these statistics can be considered as medium publi
Facebook
Twitterhttps://www.statsndata.org/how-to-orderhttps://www.statsndata.org/how-to-order
The Statistical Process Control (SPC) Software market has increasingly become a cornerstone in quality management across various industries, including manufacturing, pharmaceuticals, and food processing. This specialized software leverages statistical methods to monitor and control production processes, ensuring tha
Facebook
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ABSTRACT Laboratory tests for technical evaluation or irrigation material testing involve the measurement of many variables, as well as monitoring and control of test conditions. This study, carried out in 2016, aimed at using statistical quality control techniques to evaluate results of dripper tests. Exponentially weighted moving average control charts were elaborated, besides capability indices for the measurement of the test pressure and water temperature; and study on repeatability and reproducibility (Gage RR) of flow measurement system using 10 replicates, in three work shifts (morning, afternoon and evening), with 25 emitters. Both the test pressure and water temperature remained stable, with “excellent” performance for the pressure adjustment process by integrative-derivative proportional controller. The variability between emitters was the component with highest contribution to the total variance of the flow measurements, with 96.77% of the total variance due to the variability between parts. The measurement system was classified as “acceptable” or “approved” by the Gage RR study; and non-random causes of significant variability were not identified in the routine of tests.
Facebook
TwitterA re-occurring problem of water quality monitoring programs is the determination of whether or not a given parameter is increasing, decreasing, or remaining constant over time. This is particularly critical when regulations stipulate that parameter concentrations or loads may not increase or decrease relative to some previously defined baseline period or standard. This document recommends the most viable statistical methods for addressing this situation.
Facebook
TwitterAdditional file 2. Ethics Statement Supplement. Contains details of MUSIC practices’ IRB status.
Facebook
TwitterCC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Replication dataset for "A quality improvement project using statistical process control methods for Type 2 diabetes control in a resource-limited setting" (doi: 10.1093/intqhc/mzx051).
Facebook
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Time series data for the statistic Dealing with construction permits: Quality control before construction index (0-1) (DB16-20 methodology) and country Switzerland.
Facebook
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this study, we conducted a simulation experiment to identify robust spatial interpolation methods using samples of seabed mud content in the Geoscience Australian Marine Samples database. Due to …Show full descriptionIn this study, we conducted a simulation experiment to identify robust spatial interpolation methods using samples of seabed mud content in the Geoscience Australian Marine Samples database. Due to data noise associated with the samples, criteria are developed and applied for data quality control. Five factors that affect the accuracy of spatial interpolation were considered: regions; statistical methods; sample densities; searching neighbourhoods; and sample stratification. Bathymetry, distance-to-coast and slope were used as secondary variables. Ten-fold cross-validation was used to assess the prediction accuracy measured using mean absolute error, root mean square error, relative mean absolute error (RMAE) and relative root mean square error. The effects of these factors on the prediction accuracy were analysed using generalised linear models. The prediction accuracy depends on the methods, sample density, sample stratification, search window size, data variation and the study region. No single method performed always superior in all scenarios. Three sub-methods were more accurate than the control (inverse distance squared) in the north and northeast regions respectively; and 12 sub-methods in the southwest region. A combined method, random forest and ordinary kriging (RKrf), is the most robust method based on the accuracy and the visual examination of prediction maps. This method is novel, with a relative mean absolute error (RMAE) up to 17% less than that of the control. The RMAE of the best method is 15% lower in two regions and 30% lower in the remaining region than that of the best methods in the previously published studies, further highlighting the robustness of the methods developed. The outcomes of this study can be applied to the modelling of a wide range of physical properties for improved marine biodiversity prediction. The limitations of this study are discussed. A number of suggestions are provided for further studies. You can also purchase hard copies of Geoscience Australia data and other products at http://www.ga.gov.au/products-services/how-to-order-products/sales-centre.html
Facebook
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this paper, we show that concept of Statistical Process Control tools was thoroughly examined and the definitions of quality control concepts were presented. This is significant because of it is anticipated that this study will contribute to the literature as an exemplary application that demonstrates the role of statistical process control (SPC) tools in quality improvement in the evaluation and decision-making phase.
This is significant because of this study is to investigate applications of quality control, to clarify statistical control methods and problem-solving procedures, to generate proposals for problem-solving approaches, and to disseminate improvement studies in the ready-to-wear industry. The basic Statistical Process Control tools used in the study, the most repetitive faults were detected and these faults were divided into sub-headings for more detailed analysis. In this way, it was tried to prevent the repetition of faults by going down to the root causes of any detected fault. With this different perspective, it is expected that the study will contribute to other fields.
We give consent for the publication of identifiable details, which can include photograph(s) and case history and details within the text (“Material”) to be published in the Journal of Quality Technology. We confirm that have seen and been given the opportunity to read both the Material and the Article (as attached) to be published by Taylor & Francis.