Stan: This month, we continue the thread we started last month—sharing the experience gained over the 38-year career of Lewis Gordon, a principal control systems engineer retired from Invensys. Last month, we focused on tuning and loop performance monitoring software. Here, we explore how to approach estimating and proving the benefits from applying advanced process control (APC) techniques such as model predictive control (MPC).
Greg: In previous columns and blogs, I advocated the use of a metric to quantify improvements in process efficiency and capacity, preferably in dollars, immediately before and after basic process control improvements such as better field measurements and valves, control strategies and feed-forward control.
Stan: Considering that an MPC can be switched between automatic and manual to show "APC on" and "APC off" performance on demand, we asked Lew, "How do you quantify the economic benefits of an APC project?"
Lew: The previous column listed the three basic metrics for performance improvement: production rate and value, energy consumption per unit of product and yield per unit of feed. Comparing "before" and "after" values for these metrics is the usual approach. Still, many things can change over the course of a project that will cloud the results. Changes in process equipment and characteristics, product specifications and the costs of energy and/or feed will generate changes in these metrics that are unrelated to the project implementation.
So although it's more difficult, expensive and time-consuming, the only truly fair way is to compare averages for these metrics from APC-on and APC-off periods at the end of a project.
Many random things happen in the plant every day that affect control system performance. So the only accurate way to get a good comparison is to expose the "new" and "old" control systems to these random events across a series of APC-on and APC-off periods. The list of such influences is long. Variations in production rate, raw materials properties, fuel characteristics, operator influences, ambient conditions, product demand and quality specifications, upstream and downstream operations, mechanical factors such as equipment modifications, process factors such as fouling and changes in catalyst activity, and field automation system issues such as plugging, sensor coating and valve wear will make themselves felt.
The net effect of truly random influences will present itself as a normal distribution of these metrics, calculated for a series of APC-on and APC-off periods. Where the observed distribution is not normal, in the statistical sense, there are specific reasons that need to be identified. For example, averages and medians may be forced apart by nonlinearities as the operating point moves closer to an optimum, forcing the statistical distribution to be asymmetric. In such a case, the standard deviation on the side closest to the optimum is more important.
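One quick way to check for the asymmetry Lew describes is to compare the mean and median of the per-period metric averages and the spread on each side of the median. A minimal sketch in Python, using only the standard library (the function name and return fields are illustrative, not from any particular package):

```python
from statistics import mean, median, pstdev

def asymmetry_summary(metric_values):
    """Summarize a series of per-period metric averages.

    If the distribution is roughly normal, mean and median agree and
    the spread on each side of the median is similar. Near an optimum,
    nonlinearity forces mean and median apart and makes one side's
    spread larger, which is exactly the case the text flags.
    """
    m = median(metric_values)
    low_side = [v for v in metric_values if v <= m]
    high_side = [v for v in metric_values if v >= m]
    return {
        "mean": mean(metric_values),
        "median": m,
        "low_side_stdev": pstdev(low_side),
        "high_side_stdev": pstdev(high_side),
    }
```

For a symmetric series the two side spreads match; for a skewed series such as `[1, 1, 1, 2, 10]` the high-side spread dominates, signaling that the standard deviation on the side closest to the optimum deserves separate attention.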
The rules of statistical analysis and the characteristics of plant dynamics and disturbances define the random tests needed. All "normal" variations can be captured by random on-off testing, with on and off times, sample interval and transition times all based on the plant and MPC response times and known disturbance periods.
The total on and off test time must be long enough to include all combinations of significant variations. The shortest individual test time should be three times the longest settling time. The longest individual test time should be the shortest test time plus the longest disturbance interval. Disturbances at known periods and times can be used to set the on and off test boundaries (test start and end times).
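These duration rules can be turned into a randomized on/off schedule. Below is a minimal Python sketch, assuming the lower bound is three times the longest settling time and reading the upper bound as that shortest test time plus the longest disturbance interval; the function and its arguments are illustrative only:

```python
import random

def build_test_schedule(total_hours, settling_time_h, disturbance_interval_h, seed=0):
    """Generate alternating APC-on/APC-off test periods.

    Each period's duration is drawn at random between:
      t_min = 3 x longest settling time
      t_max = t_min + longest disturbance interval
    Periods alternate on/off until total_hours is covered, so both
    configurations are exposed to the same random plant influences.
    """
    t_min = 3.0 * settling_time_h
    t_max = t_min + disturbance_interval_h
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    schedule, elapsed, apc_on = [], 0.0, True
    while elapsed < total_hours:
        duration = rng.uniform(t_min, t_max)
        schedule.append(("APC on" if apc_on else "APC off", round(duration, 1)))
        elapsed += duration
        apc_on = not apc_on
    return schedule
```

With a 2-hour settling time and a 6-hour disturbance interval, each period lasts between 6 and 12 hours, and a 200-hour campaign yields roughly 20 to 30 alternating periods.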
Finally, data collected during the transition between systems should not be used for calculating performance metrics, as this data will always bias the results in favor of the poorer-performing configuration. The minimum transition time should be greater than the larger of the longest disturbance settling time or the optimizer settling time.
Abnormal operation and day-to-night variations, instrument failures, manual control and the like should be removed from the data prior to statistical analysis.
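The two screening rules above, discarding transition data and removing abnormal records, can be sketched as a simple filter. This is an illustrative Python example; the record fields (`time`, `mode`, `abnormal`, `value`) are hypothetical and not tied to any particular historian:

```python
def screen_samples(samples, transition_time):
    """Drop samples that would bias the APC-on/off comparison.

    samples: list of dicts with keys 'time', 'mode' ('on'/'off'),
    'abnormal' (bool) and 'value'. Samples taken within
    transition_time after a mode switch are discarded, as are any
    samples flagged abnormal (instrument failure, manual control, ...).
    """
    kept = []
    last_switch = None
    prev_mode = None
    for s in samples:
        if s["mode"] != prev_mode:  # a switch between systems occurred
            last_switch = s["time"]
            prev_mode = s["mode"]
        in_transition = (s["time"] - last_switch) < transition_time
        if not in_transition and not s["abnormal"]:
            kept.append(s)
    return kept
```

Only the surviving samples would then feed the statistical comparison of APC-on and APC-off averages.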
Greg: Dennis Cima alluded to these requirements for documenting MPC performance in Control Talk, July 2013, "The Route to Model Predictive Control Success." Dennis emphasized the need to screen the data and remove outliers before benefits are reported. What are your guidelines on sampling for the statistical analysis of benefits?
Lew: Data sampling intervals should be less than half the shortest process variable period of oscillation to avoid aliasing. Set the sampling interval to be less than 10% of the dominant time constant, but not so short as to simply be capturing noise.
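Both sampling guidelines reduce to taking the tighter of two bounds. A one-line Python sketch (the function name is illustrative):

```python
def sampling_interval(shortest_period, dominant_time_constant):
    """Pick a data sampling interval per the guidelines above.

    The interval must be less than half the shortest oscillation
    period (to avoid aliasing) and less than 10% of the dominant
    time constant, so take the smaller of the two bounds. Judgment
    still applies: too short an interval merely captures noise.
    """
    return min(0.5 * shortest_period, 0.1 * dominant_time_constant)
```

For example, with a 10-minute shortest oscillation period and a 60-minute dominant time constant, the anti-aliasing bound (5 minutes) governs; with a 30-minute period, the time-constant bound (6 minutes) governs instead.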
Stan: How do you use results reporting to inspire people, rather than make them perspire?
Lew: Operations should be credited publicly when things go well, but never criticized publicly when things go wrong. You never want to make people look bad in front of their peers. Unfairly preferential praise and criticism both create resentment. The goal is to create team spirit and a common desire to excel.