Statistical process control: How to quantify product quality

Statistical process control must consider short- and long-term variations in analysis as well as equipment, raw materials and operation

By Greg McMillan and Stan Weiner


Greg: When all is said and done with plant performance metrics, it comes down to how well the product meets customer quality specifications. As with all things in manufacturing, you can only control what you can measure. Measuring product quality poses challenges because observed deviations can occur on different time scales due to batch and other sequential operations, operator actions, control loop performance, equipment performance, sampling techniques, at-line analyzer errors, lab data entry mistakes, and changes in raw materials and ambient conditions.

Stan: Knowing the time scale of variability is a critical step in being able to understand the source of the variability and to know the process capability and performance. Fortunately, we have Richard Miller, a retired Monsanto-Solutia Fellow like Greg and me, dedicated to improving process performance, except Ric's expertise is statistical process control. Ric continues to advance our understanding and ability to measure product quality as a senior quality engineer at Ascend Performance Materials. Ric, what are the two primary statistical metrics that you use to control Ascend's many processes?

Ric: We compute the Cpk metric to give us the "capability of the process" and the Ppk metric to tell us "product performance," both essential to understanding the natural voice of the process. For a process with min/max specifications, both the Cpk and Ppk metrics use, in the numerator of their equations (slide 1 of the online "Understanding PpK"), the minimum of the distances from the process population mean (μ) to the max spec and to the min spec. The key distinction between Cpk and Ppk is their dependence upon the short-term sigma (σST) and the long-term sigma (σLT), respectively, both of which include the additive effect of the measurement sigma (σm). Note that the term sigma is typically used for population variability, whereas the equivalent term, standard deviation, is most often used for sample or measurement variability; about 68% of measured values fall within plus and minus one standard deviation of the mean of a normal distribution. Long-term sigma is the conventional standard deviation of a population of samples collected at some regular frequency, while short-term sigma is calculated from their absolute two-point moving range.
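Ric's description can be sketched as a short calculation. This is a minimal illustration using the standard SPC formulas (nearest spec distance over three sigma, with short-term sigma estimated from the average two-point moving range divided by the d2 constant, 1.128 for subgroups of two); the article's own equations are on the referenced slide, and the function and argument names here are hypothetical.

```python
import numpy as np

def cpk_ppk(samples, lsl, usl):
    """Estimate Cpk (capability) and Ppk (performance) for a
    min/max-spec process.

    Long-term sigma is the conventional standard deviation of the
    sample population; short-term sigma comes from the mean absolute
    two-point moving range divided by d2 (1.128 for n = 2).
    """
    x = np.asarray(samples, dtype=float)
    mu = x.mean()
    sigma_lt = x.std(ddof=1)                  # long-term sigma
    moving_range = np.abs(np.diff(x))         # absolute two-point moving range
    sigma_st = moving_range.mean() / 1.128    # short-term sigma
    nearest = min(usl - mu, mu - lsl)         # distance to the nearer spec limit
    return nearest / (3 * sigma_st), nearest / (3 * sigma_lt)
```

Because the moving range reacts only to sample-to-sample change, Cpk reflects what the process could do if long-term drifts were removed, while Ppk reflects what the customer actually experiences.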

Greg: How do we get the measurement sigma?

Ric: I have found a relatively quick and easy way is to divide a plant sample into thirds and send each of them to the lab blindly. In other words, the two backup samples are held and sent to the lab at different times. This is repeated five times over a two-week period, ideally using different lab technicians and pieces of lab equipment. The square root of the pooled variance of the triad results is the standard deviation I refer to as the measurement sigma. Using routine plant rather than special samples enables us to test the lab under commercial operation rather than special lab conditions (see reference 1 for more information on our "5/3" testing for measurement system validation).
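The pooled-variance step of the "5/3" test above can be sketched as follows. Since every triad has the same size (three results, so two degrees of freedom each), the pooled variance reduces to the mean of the within-triad variances; the function name and input layout are assumptions for illustration.

```python
import numpy as np

def measurement_sigma(triads):
    """Measurement sigma from blind triad results.

    Each triad holds the three lab results for one plant sample split
    into thirds. With equal-size groups, the pooled variance is the
    mean of the within-triad variances (each ddof = 2 results in n-1);
    the square root of that pooled variance is the measurement sigma.
    """
    variances = [np.var(t, ddof=1) for t in triads]
    return float(np.sqrt(np.mean(variances)))
```

Running this on the five triads collected over the two-week period gives a single number that can then be compared against the short- and long-term sigmas.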

Stan: Could you do something similar for at-line analyzers?

Ric: Since an analyzer is probably set up for calibration by the introduction of samples, you could use a plant sample rather than a standard sample and follow a similar procedure to get the measurement sigma for an at-line analyzer.

Greg: What is the importance of the measurement sigma?

Ric: The measurement sigma becomes increasingly important as it approaches the size of the short-term and long-term sigmas. Even if it is initially negligible, as you improve the process capability and process performance through reduction of the short-term and long-term sigmas, respectively, the measurement sigma becomes more of an issue. Thus, part of process control improvement involves improving analyzer technology, sample preparation and handling, and automated data entry. The lab result is taken as correct, so improvement of lab procedures and data entry must be done upfront. Significant measurement variability in the lab has occasionally been traced to a mistake made in manual data entry. Thus, lab analyzers that offer the capability of automatically storing results in the plant data historian offer advantages in terms of more accurate and accessible data. Of course, these analyzers can generate data more frequently and provide results immediately, rather than waiting on lab results.
