Statistical evaluation of process effectiveness with statistics such as capability or performance indices rests on strong assumptions, including normality, homogeneity, and independence, which can be difficult to verify for automated, unsupervised data streams. Approaches based on probability models are applied both to standard data and to data violating these assumptions. It has been shown that redefining or extending quality criteria allows standard quality tools to be used meaningfully even under serious departures from the assumptions of standard methods. In structured data sources spanning different levels of production, the need arises to aggregate quality statistics such as standard deviations and derived indices (Cpk, Ppk, or the probability measure DPMO). Normalization of heterogeneous data sources and aggregation techniques over time and over the process structure are investigated to obtain informative aggregated measures of quality, and real-world data examples are provided. A new scalable measure of process improvement potential is proposed: the Quality Improvement Potential Factor (QIPF). The problems addressed include interpreting high capability values; split-multistream, parallel, and serial aggregation; and univariate and multivariate process capability scaling.
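As an illustrative sketch only (not the paper's method, and not the QIPF definition), the basic ingredients mentioned above can be expressed in a few lines: a capability index under a normal model, a DPMO figure derived from the same model, and pooled-variance aggregation of standard deviations from parallel streams. All numeric values are hypothetical.

```python
import math
from statistics import NormalDist

def cpk(mean, sigma, lsl, usl):
    # Capability index: distance from the mean to the nearest
    # specification limit, expressed in units of 3 sigma.
    return min(usl - mean, mean - lsl) / (3 * sigma)

def dpmo_normal(mean, sigma, lsl, usl):
    # Defects per million opportunities under a normal model:
    # probability mass outside [lsl, usl], scaled to 1e6.
    nd = NormalDist(mean, sigma)
    p_out = nd.cdf(lsl) + (1 - nd.cdf(usl))
    return 1e6 * p_out

def pooled_sigma(groups):
    # One common way to aggregate within-stream standard deviations:
    # pool variances weighted by degrees of freedom (n_i - 1).
    num = sum((n - 1) * s ** 2 for n, s in groups)
    den = sum(n - 1 for n, _ in groups)
    return math.sqrt(num / den)

# Hypothetical example: two parallel streams sharing the same specs.
streams = [(50, 0.8), (30, 1.1)]   # (sample size, std dev) per stream
sigma = pooled_sigma(streams)
print(cpk(10.0, sigma, 7.0, 13.0))
print(dpmo_normal(10.0, sigma, 7.0, 13.0))
```

Pooling by degrees of freedom is only one of several aggregation choices; the abstract's point is precisely that the choice of normalization and aggregation scheme over the process structure matters for the interpretability of the combined measure.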
Keywords: process capability, aggregation, QIPF