Greg: There is an incredible gap between what could be and what is actually done in terms of optimizing processes by better use of control technologies. There are many reasons, foremost being the fact that the dynamic models are themselves dynamically changing. Process gains and valve gains change with operating point and conditions. The process, valve and sensor dead times and time constants not only change with operating point and conditions, but also, in some cases, with direction (e.g., heating versus cooling and pH up or down). All of these dynamics can change with step size. Fortunately, there are very few true steps in actual process operation. What we see if we look more closely is more like a ramp due to valve response time and PID integral action. An innovative paradigm shift addresses these fundamental challenges: a new control algorithm developed and patented by Allan G. Kern, owner and president of APC Performance, LLC.
The core technology is Rate-Predictive Control (RPC), where the “prediction value” is the current value of the Indirect Control Variable (ICV) plus its ramp rate (ongoing rate-of-change) multiplied by the 63% response time (total dead time plus primary time constant in a first-order approximation). As the prediction value is seen to reach its target, further moves of the Direct Control Variable (DCV) are tapered and stopped, such that the ICV settles exactly on its target. Each DCV moves at a preselected “move rate.” As we will see, the resulting algorithm is inherently adaptive to changes in the process gain and move rate, and is robust (relatively insensitive) to changes in process response time. A person implementing RPC needs to know only the gain direction of the process interaction, i.e., direct or reverse control action, so a detailed model is not required.
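One RPC execution cycle as described above can be sketched in a few lines of code. This is an illustrative sketch only, with hypothetical names (`rpc_step`, `t63`, etc.), not Kern's patented implementation, and it omits the control band taper discussed later:

```python
def rpc_step(icv, icv_rate, target, dcv, move_rate, t63, dt, direct=True):
    """One RPC execution cycle (illustrative sketch).

    icv       -- current indirect control variable value
    icv_rate  -- ongoing ICV rate of change (units/min)
    target    -- ICV target (like a PID setpoint)
    dcv       -- current direct control variable value
    move_rate -- preselected DCV speed limit (units/min)
    t63       -- prediction time: total dead time plus primary time constant
    dt        -- execution interval (min)
    direct    -- True if a DCV increase raises the ICV
    """
    prediction = icv + icv_rate * t63   # where the ICV is headed
    error = target - prediction         # predicted error
    if error == 0:
        return dcv                      # prediction on target: stop moving
    step = move_rate * dt               # largest allowed move this cycle
    sign = 1.0 if (error > 0) == direct else -1.0
    return dcv + sign * step
```

Note that if the ICV stalls (`icv_rate` is zero) with a nonzero error, the DCV keeps moving at the speed limit, which matches the behavior Allan describes later for resolution limits and load changes.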
The ICV is similar to a PID process variable or MPC controlled variable or constraint variable. The DCV is comparable to a PID output or MPC manipulated variable or “handle.” DCVs are directly controlled, in order to indirectly control the ICVs, hence the terminology. The XMC multivariable version uses RPC as its internal control mechanism, and a matrix where each ICV can be affected by multiple DCVs and vice-versa. Allan describes RPC and XMC as having “operational handles” on the process via the DCV move rate with “eyes on” the process via the ICV prediction value. The RPC target is like the PID setpoint.
Stan: How do you choose the RPC move rate?
Allan: In any APC effort, I like to involve the three key players, who are the process engineer, operations representative and control engineer—in that order. The process engineer has a focus every day on process reliability and optimization, and is usually the most reliable source for input about automation objectives. Operations representatives are experienced operators or supervisors who bring additional detailed insight into process behavior and real constraints. The control engineer, as I see it, is responsible for automation implementation according to the criteria laid out by the process engineer and operating team, brings control system know-how, and understands limitations in transmitters, valves and closed-loop control capabilities.
The RPC move rate can be chosen intuitively. It’s conceptually similar to a process “speed limit”—operators usually know from experience appropriate DCV move rates, just as drivers know appropriate speed limits. Speed limits aren’t based on how far you’ve got to go or how soon you’d like to get there, but on keeping things safely under control along the way.
Move rate also can be determined objectively. For example, if the operator normally moves a temperature two degrees at a time, while waiting 10 minutes between moves to gauge the response, then the corresponding RPC move rate would be 0.2 degrees per minute. For some loops, allowable move rates are spelled out explicitly in operating procedures.
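The arithmetic in Allan's example can be written out directly (the variable names are mine):

```python
# Move rate derived from typical operator practice (figures from the text):
step_size = 2.0     # operator moves the temperature two degrees at a time
wait_time = 10.0    # and waits 10 minutes between moves to gauge the response
move_rate = step_size / wait_time   # degrees per minute
```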
Greg: How is RPC inherently adaptive?
Allan: If the process gain doubles, the ICV response doubles, and the prediction value arrives twice as soon at the target, so that DCV moves stop in half the time. In this way, the control response is based on the actual, real-time process response, not on some prior expectation (model) of process response. This is what is meant by “inherently adaptive” or “naturally self-tuning.”
The same way RPC is adaptive to process gain, it’s also adaptive to changes in the move rate. This means the move rate can be freely adjusted for operational performance, without undermining control performance. This has huge implications for those of us who have spent large parts of our careers trying to balance these competing priorities.
Greg: My June 28, 2012 Control Talk Blog, “Future PV values are the future;” September, 2012 Control article, “Get the most out of your batch;” and May, 2006 Control article, “Full throttle batch and startup response” show how I use the rate of change of a process variable (PV) multiplied by the dead time to predict where the PV will be.
Stan: What are the tuning parameters for RPC and XMC?
Allan: I think RPC tuning ultimately is easier than PID tuning.
RPC has three main tuning parameters: move rate, prediction time and “control band.” All three can be set either intuitively or by simple objective methods. Setting move rate was already discussed. Prediction time, ideally, is the 63% response time that is readily seen in the time offset of the DCV and ICV ramps. It can also be set as the typical time between setpoint moves—how long does an operator wait to see the effect of one move before making another? The third RPC tuning parameter, control band, is a kind of proportional band. RPC’s internal move rate is reduced proportionately (tapered) as the error becomes less than the control band, so the internal move rate goes to zero as the error goes to zero. Typical control band values range from 2 to 10 (in ICV units), and intuitively can be thought of as the point where operators begin to reduce step size to manage overshoot and settle on target. Where there is significant dead time, there is an RPC dead time rule that can be applied to verify sufficient band to avoid overshoot.
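The proportional taper of the internal move rate within the control band can be sketched as below. The function and parameter names are assumptions for illustration, not the actual RPC code:

```python
def tapered_move_rate(error, move_rate, control_band):
    """Scale the configured move rate down inside the control band
    (illustrative sketch of the proportional taper described).

    error        -- current (or predicted) error in ICV units
    move_rate    -- configured DCV speed limit
    control_band -- error magnitude below which tapering begins
    """
    e = abs(error)
    if e >= control_band:
        return move_rate                 # full speed outside the band
    return move_rate * e / control_band  # proportional taper to zero at zero error
```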
Each instance of RPC has these three tuning parameters. For an XMC application, each DCV has a move rate, and each ICV has a prediction time and a control band.
Greg: What happens if the ICV stops approaching setpoint, which can occur due to resolution limits in the automation system and increasing loads?
Allan: If the prediction value equals the current value, i.e. the ICV is not moving, and the error is non-zero, the DCV will continue to move according to its move rate to bring the ICV to the target. The DCV continues to move according to the error, gain direction and speed limit, even if ICV movement temporarily pauses or otherwise behaves non-ideally, such as an inverse response.
It’s worth mentioning that, although RPC uses a preset move rate, it’s not limited to a single choice. The move rate is often dynamically adjusted based on ongoing conditions and performance objectives. For example, a different move rate may be used for a constraint limit violation than for an optimization move, and (as already described) RPC continuously adjusts the internal move rate within the control band. This adds a lot of control flexibility and power, but doesn’t create complications, because RPC is adaptive to move rate at the same time.
Stan: How is feedforward implemented?
Allan: A feedforward signal is added to the DCV similar to how a feedforward summer is implemented in conventional PID controllers. Dynamic compensation of the feedforward signal is done externally. As with any feedforward signal, it needs to be robust and reliable, especially with regard to dynamics (timing) more so than gain.
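A feedforward summer with external dynamic compensation might look like the following sketch. The lead/lag class, its backward-difference discretization and all names are my assumptions; the text only specifies that the feedforward signal is added to the DCV and compensated externally:

```python
class LeadLag:
    """First-order lead/lag, (1 + lead*s)/(1 + lag*s), for external dynamic
    compensation of a feedforward signal (generic sketch, not RPC-specific).
    Assumes dt is small relative to lag for numerical stability."""
    def __init__(self, lead, lag, dt):
        self.lead, self.lag, self.dt = lead, lag, dt
        self.x_prev = None
        self.y = 0.0

    def update(self, x):
        if self.x_prev is None:       # initialize bumplessly at first sample
            self.x_prev, self.y = x, x
            return self.y
        dx = (x - self.x_prev) / self.dt
        # lag*dy/dt + y = x + lead*dx/dt  (backward-difference form)
        self.y += self.dt * (x + self.lead * dx - self.y) / self.lag
        self.x_prev = x
        return self.y

def dcv_with_feedforward(dcv, ff_signal):
    """Simple feedforward summer: the compensated signal is added to the DCV."""
    return dcv + ff_signal
```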
I call this a “classic,” “selective” or “ARC” (advanced regulatory control) style approach to feedforward, as opposed to the wholesale—in my view, often reckless—approach used in conventional MPC.
Stan: What about unmeasured load disturbances?
Allan: Load disturbances are seen by RPC as changes in ICV ramp rate and prediction value, and are controlled in the same way as setpoint changes—the load response is the same as the setpoint response. A University of Wyoming research project showed that the RPC transfer function is identical for setpoint changes and load disturbances. RPC already has operational performance, so there is not much incentive to add functionality to treat setpoint changes differently from load disturbances.
RPC is responsive to load disturbances, since it acts not only on the magnitude of the error, but on its manifest future value based on its momentum, i.e. the prediction value. By the same token, as control returns to setpoint, RPC is stable, thanks to the same rate-predictive control action.
Greg: PID setpoint response is smooth, without overshoot of the process variable, even when the PID is tuned for aggressive disturbance rejection, if a setpoint filter is set equal to the reset time, or if a PID structure with proportional and derivative action on the process variable rather than error is used. For integrating or runaway processes, PID output overshoot of its final resting value is needed to get the process to its target. For runaway processes that are open-loop unstable (e.g., highly exothermic reactors), the control action must be very aggressive for stability and safety. How does RPC address these requirements?
Allan: RPC by default is tuned for operational performance, since it was developed in part to address this need. Operational performance means several things, including preset move rates and minimal overshoot. Operating teams generally prefer this type of performance, because rapid movement and overshoot can result in trips or alarms, and oscillation can cause or mask process instability. But RPC also can be tuned for aggressive performance in applications where it is operationally or economically desirable, or where it is required by the nature of the process, as in the examples you mention.
In general, for aggressive response, RPC is tuned with a faster move rate, larger control band, and (especially) a shorter prediction time (less than actual process response time). For example, in a recent test, a prediction time of one-half the actual process response time resulted in a damped 25% DCV overshoot but no overshoot of the ICV. With a prediction time of one-quarter the actual process response time, ICV overshoot began to appear. Just as move rate is the main tuning parameter for operational performance, prediction time is the main tuning parameter for aggressive performance.
Stan: How do you deal with deadtime-dominant processes?
Allan: RPC, like PID, is limited in what it can do for dead time-dominant processes, but RPC has another novel feature that combines rate-predictive control with a periodic “wait and see” dead time control cycle. The wait period can be skipped, based on the RPC dead time tuning rule, so that time is not lost unnecessarily when the error is outside the control band, for example, on a large setpoint change. Within the band, there may be sudden starts and stops of DCV moves with a periodic control method, but that is often the nature of controlling dead time-dominant processes.
Note that small to moderate amounts of dead time, up to about 10-20% of response time, which are typical of many processes simply due to measurement or valve delay, are compensated by the control band parameter, which prevents “over-driving the headlights” in these situations, without resorting to periodic control.
Greg: How do you compute the ICV rate of change and how do you deal with noise?
Allan: The noise question comes up often as people think about the prediction calculation, but RPC is not really any more complicated with regard to noise than PID. So far, there has not been any need for special filtering guidelines beyond well-known industry practice. In addition to filtering, RPC typically maintains an internal array of past values, and the user can select how old a value to use in the rate calculation, so that is another way to mitigate noise. Also, I'll point out that XMC typically runs at much higher frequency than MPC—1 to 5 seconds—so noise tends to average out, and XMC's minimized moves make it even less of an issue, just as in PID and RPC. The right combination of signal noise, controller execution frequency and process speed could make filtering an issue in some cases, but I think the knobs are there to treat it, and it has not posed a problem as yet.
The rate of change calculation itself is simply the current value minus the old value, divided by the execution frequency times the number of past values. I always use at least two past values, due to execution frequency (or I/O scan) jitter that I have seen on some platforms. Currently, we are evaluating a high-speed compressor control application and are looking closely at this aspect, but don't expect it to be a limitation.
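The rate calculation Allan describes (current value minus an older value, divided by the elapsed time, with at least two past values to ride through scan jitter) can be sketched as follows. The class and parameter names are my assumptions:

```python
from collections import deque

class RateEstimator:
    """ICV rate-of-change over the last n_past samples (illustrative sketch):
    (current - oldest) / (n_past * dt)."""
    def __init__(self, n_past=2, dt=1.0):
        self.dt = dt
        self.history = deque(maxlen=n_past + 1)  # current value plus n_past old ones

    def update(self, value):
        self.history.append(value)
        if len(self.history) < 2:
            return 0.0                            # not enough data yet
        span = (len(self.history) - 1) * self.dt  # elapsed time across the window
        return (self.history[-1] - self.history[0]) / span
```

Using more past values widens the difference interval, which smooths out both noise and execution-interval jitter at the cost of a slightly older rate estimate.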
Greg: In the April, 2017 Control feature article, “Common automation myths debunked,” we see how the simple insertion of a dead time block in the external reset feedback of a PID can provide dead time compensation. If the dead time is from an analyzer cycle time and analysis time, a smart external reset feedback update when there is a new result can dramatically improve the performance of the loop. If the dead time from the analyzer is greater than the process 63% response time, the PID gain can be increased to be the inverse of the open loop gain, enabling a single correction for a setpoint change and just 1 to 3 changes for unmeasured load disturbances. There is a counterintuitive increase in stability as the dead time from the analyzer becomes much greater than the process response time. The July 5, 2015 Control Talk Blog, “Batch and continuous control using at-line and offline analyzers,” discusses this extensive opportunity. The use of external reset feedback can stop oscillations from a slow or sticky valve, as seen in the March, 2016 Control article, “How to specify valves and positioners that don’t compromise control,” and from a slow secondary loop in cascade control as seen in the May, 2006 Control feature article, “The power of external-reset feedback.” External reset feedback opens up an incredible spectrum of possibilities including directional move suppression to prevent unnecessary crossings of the split-range point and a gradual optimization but fast getaway for valve position control, as described in the November, 2011 Control feature, “Don’t overlook PID for APC.” True external reset feedback requires the positive feedback implementation of integral action.
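As an illustration of that last point, here is a minimal sketch of a PI controller with integral action implemented as positive feedback of the external-reset (ERF) signal through a first-order filter, with an optional dead time block inserted in the ERF path. All names, and the simple discretization, are my assumptions, not any vendor's implementation:

```python
from collections import deque

class PIExternalReset:
    """PI controller with positive-feedback integral action on the
    external-reset signal (minimal sketch of the scheme described)."""
    def __init__(self, kp, reset_time, dt, erf_deadtime=0.0):
        self.kp, self.ti, self.dt = kp, reset_time, dt
        self.filt = 0.0                       # positive-feedback filter state
        n = int(round(erf_deadtime / dt))
        self.delay = deque([0.0] * n, maxlen=n) if n > 0 else None

    def update(self, sp, pv, erf):
        # erf is normally the controller output read back externally
        # (or the valve position / secondary-loop PV in cascade control)
        if self.delay is not None:            # dead time block in the ERF path
            delayed = self.delay[0]
            self.delay.append(erf)
            erf = delayed
        # first-order filter of the (possibly delayed) ERF signal
        # supplies the integral action via positive feedback
        self.filt += self.dt * (erf - self.filt) / self.ti
        return self.kp * (sp - pv) + self.filt
```

With the dead time block in place, the integral action does not wind up during the loop dead time, which is the source of the compensation effect described above.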
For PID control, we need to address the fact we live in a world where process dynamics change with operating point and operating condition. We need to use signal characterization to compensate for valve nonlinearities and pH titration curves. We need to use first-principle process relationships for scheduling PID tuning settings to compensate for the known changes in process dynamics, such as the increase in process gain and process time constant for a decrease in process feed rate for temperature and composition control in a reasonably well-mixed liquid volume.
My understanding of cause and effect, and what was really important, was gained from the development and use of dynamic, first-principle process models for key unit operations throughout my career. I had to develop and program the ordinary differential equations (ODEs) for material and energy balances. I recently showed how the ODEs reveal simple expressions for the process gain and primary time constant, providing considerable insight as to how these key dynamic parameters change with design and operating conditions, as seen in the online Control Global paper, “First-principle relationships for process dynamics.” The exploration and discovery of more complex process relationships and dynamics can be realized by dynamic simulations, as described in the August, 2017 Control feature article, “Virtual plant virtuosity”.
Stan: How large is the XMC matrix generally?
Allan: When applying XMC—or MPC for that matter—I recommend what I call “small matrix design practice.” This doesn’t mean there is any size limitation in XMC’s “model-less” method. It means constructing the matrix—choosing the variables and interactions to include—based on existing operating practices. Another way to say this is the functional specification of the APC application is to automate the way the operating team already knows to manage constraints and optimize the process. This approach will normally result in a smaller, less dense, more intuitive and effective matrix than industry’s conventional “big matrix” practice.
The idea behind big-matrix practice, involving a plant-wide test and including nearly all variables and interactions in the matrix, was to yield a more complete and therefore better solution. But this practice has actually been the third root cause, along with changing models and a need for operational performance, of degraded MPC performance. There are good reasons the process engineer and operating team do not utilize all those other variables in the first place!
All XMC applications to date are smaller than 10x10. For example, one is 5x5, another is 4x9, and I have a crude oil column simulation that is 7x6, which is probably smaller than any crude column MPC implemented in industry over the past two or three decades. But I believe my simulation encompasses the variables you would actually find in service and doing daily work, if you were to survey all those applications today.
The high resource demands of MPC have left small applications all across industry untouched. The low cost and agility of XMC brings these applications into reach, especially in the chemical and petrochemical industries, which seem more disposed to the small-matrix philosophy in the first place—a practical focus on key variables, rather than all variables. Readers can learn more about the emerging “APC 2.0” paradigm on my website.
Greg: How does XMC solve multivariable optimization without knowing process gains?
Allan: I see the future of multivariable control relying on alternative sources of optimization decisions, rather than embedding a real-time optimization solver in the APC solution, which has been, along with models, one of the main sources of MPC cost and complexity.
One ready alternative is operating team knowledge. Like the matrix and move rates, the optimization solution—determining which objectives to pursue with which handles given remaining degrees of freedom—is largely second nature to process engineers and operators. In XMC, optimization objectives are simply configured, rather than solved.
Another resource is business-side optimization and planning tools. These are typically much more complete solutions and the results do not really change in real time. So business-side optimization results can flow via the computer system or chain of command down to operators and the control system. This is actually already common today. The job of APC is to pursue these objectives in the automation domain where the process values themselves do change in real time.
Greg: What about two-point distillation column control?
Allan: This is an interesting topic among control engineers reaching back to the earliest days of advanced process control. More practical people have jumped into the discussion, as have more theoretical authorities. Notwithstanding many reported success stories, I instinctively avoid dual (top and bottom) composition control of distillation columns. Many people seem to think of the top and bottom as two separate parts of the process, but they are pretty tightly connected, and managing that interaction under closed-loop control—especially in light of the dynamic nature of actual process gains, as you mentioned upfront—is usually impractical, if not impossible. An alert observer can often spy the leftover wreckage of historical attempts to implement dual composition control in abandoned DCS configurations and MPC matrices—the controls are often there, but almost never are both in cascade mode. RPC does not really change this situation, other than to address the reality of unknown (changing) gains on each end independently. My “go-to” control strategy for distillation columns is composition control on one end and ratio control on the other, whether using PID or RPC. Dual control is feasible where there is a side draw, which can serve to break the top/bottom interaction, such as on a crude oil column.
Greg: So far as interaction problems, you could use relative gain analysis (RGA) as we do for PID to determine the best pairing of controlled variables and manipulated variables. We also do decoupling, like we do via the feedforward summer for the PID.
While testing to develop online models takes time and is potentially disruptive, I like models because they tell me how to improve the ultimate limits to loop performance and the tuning settings and feedforward dynamic compensation needed. The integrated and peak errors for unmeasured load disturbances depend upon the total loop dead time, primary process time constant, and process gain, as detailed in the November 1, 2016 Control Talk blog, “PID options and solutions—parts 2 and 3.” These metrics give me the motivation and details for improvements in the dynamics of the process, equipment, piping and valves. If the total loop dead time was zero, I would be out of a job, because the ultimate limit to the peak and integrated errors are zero, assuming no noise, resolution limits, stiction or backlash. For zero dead time, there is no high limit on the PID gain or low limit on the reset time for stability. My job is secure because zero dead time only exists in studies where someone wants to show how great their algorithm performs. See the May 8, 2017 Control Talk blog, “Deadtime, the simple easy key to better control,” for more details.
Allan: While models are required to ascertain limits of stability, that does not necessarily mean they should form the basis of the online control solution, especially in light of changing process gains and the priority of reliable operational performance. Process control should assure process stability and reliable operational performance regardless of actual, often changing, process gain. Even where gain is fairly static, consistent operational performance is often a priority over a fine-tuned controller based on a model.
The University of Wyoming research project explored limits-of-stability analysis. A key finding (as claimed in the patent) is that RPC is "inherently stable" (barring dead time) for any value of process gain and move rate, and in fact is inherently stable even with regard to prediction time: even if prediction time is set to zero, the result is sustained oscillations, not increasing oscillations, which by traditional criteria is still technically stable. Nobody would want that in operation, of course, and in any case RPC does not allow a zero prediction time setting.
Stan: We see here an opportunity for RPC and XMC to do the largely missing optimization for most processes without models. The inherently adaptive and intuitively simple algorithm plus the use of online process metrics showing the dollar value of increases in process capacity and efficiency, can help revitalize our profession by showing the benefits achieved by process control.
Top 10 things you don’t want to hear about temperature loops
(10) We partially withdrew sensors in their thermowells to get smoother response
(9) We installed the transmitters in the I/O room for coordinated maintenance
(8) We installed the sensors near the E&I shop for quicker maintenance
(7) We saved money by going to thermocouple and RTD input cards
(6) We used clamp-on sensors to avoid the cost of pipe nozzles
(5) We standardized on thick, short thermowells to prevent vibration failures
(4) We made all the temperature measurement scales the same
(3) The controller reset time is smaller than the controller gain
(2) We chose the column tray for control that always drew a straight line
(1) The controller input and output are drawing a straight line