In this multipart series, Mark will share his thoughts on the scope of model predictive control (MPC) applications. The focus in Part 2 is on model development and tuning, but we are free to roam. Since models are only as good as the data, what are recommendations for testing?
See Also: Model Predictive Control - Past, Present and Future, Part 1
Mark: You need to test at not only the nominal operating point, but also near expected constraint limits. Pre-tests are an accepted practice to get step sizes and time horizons. We also need to verify that the scan time is fast enough, and the trigger level is small enough for data historians and wireless transmitters. We prefer that compression and filters be removed so we can get raw data. A separate data collection to get around these limitations is commonly used.
Greg: For us, the pre-tests, which we called bump tests, were separate steps to one manipulated variable at a time, held long enough to see 98% of the response for self-regulating processes or a constant ramp rate for integrating processes. We would often find improvements that needed to be made in instruments or control valves before we did the tests to build the model. For example, we would find that valves did not respond to steps less than 5%, or that a process response less than 1% was distorted by noise. As a rule of thumb, we wanted a step size larger than five times the valve dead band that gave a process response five times larger than the measurement noise band. We would normally sit down with operations and the process engineer to get an idea of permissible step sizes. We wanted the largest possible step, but since you are not on closed-loop control, you need to be careful you don't drive the process to an undesirable state. However, the first opinion in the control room is often rather conservative, so you need to look at the typical changes in the manipulated variable on a trend chart and have a respectful, realistic conversation. What type of automated testing is used?
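Greg's rule of thumb can be turned into a quick screening check before a bump test. The sketch below is purely illustrative (the function name and example numbers are hypothetical, not from any vendor tool); it flags a proposed step that is too small relative to the valve dead band or the measurement noise band:

```python
# Hypothetical pre-test check following the rules of thumb above:
# step at least 5x the valve dead band, and expected process response
# at least 5x the measurement noise band. All names/numbers are illustrative.

def check_bump_step(step_pct, dead_band_pct, process_gain, noise_band):
    """Return (ok, messages) for a proposed open-loop step test."""
    messages = []
    if step_pct < 5 * dead_band_pct:
        messages.append("step below 5x valve dead band; valve may not respond cleanly")
    expected_response = abs(process_gain) * step_pct
    if expected_response < 5 * noise_band:
        messages.append("expected response below 5x noise band; model will be distorted")
    return (not messages, messages)

# Example: a 3% step clears the dead-band rule (0.5% dead band), but with a
# gain of 0.5 %PV/% the expected 1.5% response is under 5x the 0.4% noise band.
ok, why = check_bump_step(3.0, 0.5, 0.5, 0.4)
```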
Mark: Manual methods are still used, but automatic testing approaches are increasingly being applied. These use random sequences such as a pseudo random binary sequence (PRBS) or generalized binary noise (GBN). Closed-loop testing approaches based on a preliminary model are also being applied as a means of keeping the process in an acceptable range and reducing the effort involved in plant testing, both for the initial application and to update the MPC model for changes in equipment or operating conditions. Closed-loop testing continues to be an active development area for the MPC suppliers.
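As a rough illustration of what an automated test signal looks like, here is a minimal PRBS sketch built from a linear-feedback shift register. The register length, taps and scaling are illustrative assumptions, not supplier defaults; real test packages also clock the sequence at a period matched to the process dynamics:

```python
# Minimal pseudo-random binary sequence (PRBS) via a Fibonacci LFSR.
# A 4-bit register with taps (2, 3) gives a maximal-length period of 2^4 - 1 = 15.
def prbs(n_bits, length, taps=(2, 3)):
    """Return `length` samples of +1/-1 from an n_bits LFSR."""
    state = [1] * n_bits          # any nonzero seed works
    out = []
    for _ in range(length):
        out.append(1 if state[-1] else -1)
        fb = 0
        for t in taps:
            fb ^= state[t]        # XOR the tap bits for the feedback
        state = [fb] + state[:-1] # shift register
    return out

# Scale by the chosen step size before applying to the manipulated variable;
# in practice each bit is held for several scans (the clock period).
signal = [0.5 * v for v in prbs(4, 15)]  # +/-0.5% moves over one full cycle
```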
Stan: If the testing and some of the model analysis is automated, what does the process control engineer need to do?
Mark: The intent of these enhancements is to simplify the tasks of the control engineer, not remove his involvement. The control engineer is required to make decisions regarding test specifics, such as move sizes and frequency, and to determine when testing can be terminated. In model identification, decisions must be made as to which data and variables to use, and then which models go into the controller. For example, should you include a weak model? Obviously, this requires process knowledge and MPC expertise. A sharp person mentored by a gray hair can lead projects after several years of applications experience.
Greg: Getting back to the data, what is most important?
Mark: You want good, rich data, meaning significant movement in the manipulated variables at varying step durations, to get accurate models. But it does not end there. You need to look for consistency in the resulting models. Use engineering knowledge and available models or simulators to confirm or modify gains. Don't shortchange this step. Gain ratios are very important, especially for larger controllers. Empirical identification does not enforce relationships like material balances, so there can be fictional degrees of freedom (DOFs) that the MPC steady-state optimizer—either a linear program (LP) or quadratic program (QP)—may exploit. As discussed previously, techniques are available now to assist with this analysis and adjust gains to improve the model conditioning, which frees up the engineer to take a higher level supervisory role.
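One common way to check gain consistency is to look at the conditioning of the steady-state gain matrix via a singular value decomposition. The 2x2 gains below are made-up numbers for illustration; the point is that nearly collinear columns produce a large condition number:

```python
# Sketch of gain-matrix conditioning analysis with SVD (gains are illustrative).
import numpy as np

G = np.array([[1.0, 0.9],
              [1.0, 1.0]])        # nearly collinear columns

U, s, Vt = np.linalg.svd(G)
condition_number = s[0] / s[-1]   # ratio of largest to smallest singular value
# A large condition number means the LP/QP "sees" a weak direction that is
# mostly model error -- a fictional degree of freedom it may try to exploit.
```

Gain-adjustment tools effectively shrink this ratio by collapsing or repairing the weak direction so the optimizer cannot chase it.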
Stan: How do you get the MPC ready to go on-line?
See Also: Under the Hood with MPC
Mark: Off-line tuning relies on the built-in simulator. Most important is getting the steady-state behavior of the controller right. Simulation can also serve to identify errors in the model gains, for example, observing an MV moving in the wrong direction at steady state. You want initial tuning for the dynamic parameters to be in the ballpark. Regarding steady state, you determine how you want the MPC to push manipulated and constraint variables based on cost factors and priorities, making sure you are enforcing the right constraints. Unlike override control, which sequentially picks one constraint, the optimization is simultaneous, multivariable and predictive, taking into account future violations. Some MPCs use move suppression (penalty on move); some use a reference trajectory to affect manipulated variable (MV) aggressiveness. Penalty on error is used for both constraint/quality variables (QV) and controlled variables (CV). We have evolved to not distinguish between QV and CV except as presented to the operator.
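The trade-off between penalty on error and penalty on move can be shown with a toy cost function. This is a deliberately simplified single-variable sketch with a pure-gain model and hypothetical weights, not any supplier's formulation:

```python
# Illustrative MPC-style cost: penalty on predicted error plus move
# suppression (penalty on delta-u). Model, names and weights are made up.
def mpc_cost(moves, y0, sp, gain, q_error=1.0, r_move=0.1):
    """Cost over a horizon for a pure-gain model y_{k+1} = y_k + gain*du_k."""
    y, cost = y0, 0.0
    for du in moves:
        y += gain * du
        cost += q_error * (sp - y) ** 2 + r_move * du ** 2
    return cost

# With light suppression, one big move to setpoint is cheapest; with heavy
# suppression, spreading the action over smaller moves wins, which is the
# "gentler MV behavior vs. tighter control" trade-off described above.
light = mpc_cost([1.0, 0.0], y0=0.0, sp=2.0, gain=2.0, r_move=1.0)
heavy = mpc_cost([0.5, 0.5], y0=0.0, sp=2.0, gain=2.0, r_move=3.0)
```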
Greg: What kind of expertise do companies have and need?
Mark: Most plants rely on some onsite expertise, supplemented by virtual access or revisits from external experts. Some control folk have left the suppliers and are working for operating companies. Large companies with a good history of MPC have gotten good at it. In general, basic and advanced process control groups got hurt in the 1990s. It used to be that managers were practitioners who advanced through the ranks understanding and appreciating the technology and the expertise. Now it is a mixed bag, and you may need to convince management of the resource requirement.
Stan: How can you reduce the time horizon to reduce test time and provide better short-term resolution of fast dynamics for a given number of data points over the horizon?
Mark: Regulatory design impacts the settling time of the MPC controller. An example is having the setpoint of a temperature cascade control loop for a distillation column as a manipulated variable. Controlling levels associated with large holdups in the MPC can also reduce the settling time, although this is normally done to provide better constraint control, for example, by directly manipulating a vessel outflow to a downstream unit. If you don't need to handle the level control for constraint control/coordination, keeping the level in the regulatory control system is fine. If a process variable has a very large time constant, modeling the variable as integrating instead of self-regulating can dramatically shorten the time horizon. Depending on the particular MPC, the integrator approach may take up a DOF in the LP or QP optimizer.
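The near-integrator idea can be seen by comparing a first-order step response with a simple ramp whose rate is the initial slope, Ki = Kp/tau. The values below are illustrative; the point is that the two agree closely over the first few dead times, so the test can stop long before steady state:

```python
# Compare a first-order (self-regulating) step response with its
# near-integrator approximation. Kp and tau are made-up values.
import math

def first_order(t, Kp, tau):
    """Step response of a self-regulating first-order process."""
    return Kp * (1.0 - math.exp(-t / tau))

def near_integrator(t, Kp, tau):
    """Ramp at the initial slope: integrating gain Ki = Kp / tau."""
    return (Kp / tau) * t

Kp, tau = 2.0, 100.0   # large time constant relative to the test window
err_early = abs(first_order(10, Kp, tau) - near_integrator(10, Kp, tau))
# err_early is small; the curves only diverge as the response settles,
# which the shortened test never waits to see.
```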
See Also: When do I Use MPC instead of PID for Advanced Regulatory Control - Tips?
Greg: PID controllers can be tuned for extremely tight level, pressure and temperature control, which may be advantageous for residence time, energy balance and material balance control. My general rule of thumb is that if the PID controller justifiably has a gain greater than 10 or a rate time greater than 1 minute, the loop might be best left as a PID controller unless the manipulated variable is needed for constraint control. I have found that treating loops with a large time constant as near-integrators can shorten the tuning test time by 96%, making tests less vulnerable to disturbances and less disruptive. The result is an opportunity to do more tests and greater buy-in by operations. This approach also enables the use of lambda tuning for integrating processes, which provides significantly better disturbance rejection because the reset time becomes a function of dead time rather than time constant. The low limit on the product of the gain and reset time settings is also enforced, preventing the slow rolling oscillations common from too low a gain or reset time. In general, a PID controller can do a fine job of tight or loose level control. For loose level control, lambda can be set to absorb process variability in the level, as noted in the January Control Talk column, "Tuning to Meet Process Objectives." PID loop tuning depends upon getting a dynamic model of the process. Since the response in the first four dead times is more important than knowledge of the steady-state gain, the near-integrator approximation is a natural fit, matching the test to the tuning requirements. For PID loops we have auto tuners and adaptive control. How do you tune an MPC once it goes online?
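The lambda tuning rules Greg refers to for an integrating (or near-integrating) process are commonly written as Ti = 2*lambda + theta and Kc = Ti / (Ki*(lambda + theta)^2), where Ki is the integrating process gain, theta the dead time and lambda the closed-loop arrest time. A minimal sketch with illustrative numbers:

```python
# Lambda tuning for a (near-)integrating process, per the common PI rules:
#   Ti = 2*lam + theta
#   Kc = Ti / (Ki * (lam + theta)**2)
# Ki: integrating gain (%/sec per %), theta: dead time, lam: arrest time.
def lambda_tune_integrating(Ki, theta, lam):
    """Return (controller gain, reset time) for a PI controller."""
    Ti = 2.0 * lam + theta
    Kc = Ti / (Ki * (lam + theta) ** 2)
    return Kc, Ti

# Illustrative values: Ki = 0.005 %/sec per %, 10 s dead time, 30 s lambda.
Kc, Ti = lambda_tune_integrating(Ki=0.005, theta=10.0, lam=30.0)
# Note the reset time depends on lambda and dead time, not a process time
# constant -- the source of the improved disturbance rejection noted above.
```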
Mark: You may need to revisit constraint priorities, but hopefully you've got most of these priorities right in the simulator. Tuning then becomes a matter of setting weights to get the right trade-off between tightness of control and manipulated variable movement. Note that you can't tune your way around a poor model like you might do in PID for inadequate knowledge of process dynamics. You can't just increase move suppression. The steady-state part can still give you grief. It is not unusual during online tuning to realize you have model problems, causing you to revisit your model choices.
Stan: We can write to the tuning parameters and the dead time for a PID in the modern day without bumping the output enabling the scheduling of tuning, adaptive control, and dead time compensation. If an instrument, valve or analyzer is behaving badly, we can put a loop in manual, and if the valve still sort of works, the operator can do a degree of manual control. What can you do in the MPC to deal with changes in dynamics and problems with measurements and final control elements till fixed?
Mark: An under-appreciated point is that most MPC packages allow customization, for example, the capability to switch a model or write to model and tuning parameters, and this is often used. Process gains or a multiplier can typically be accessed. Static transformation of controlled and manipulated variables is also a standard feature; a popular option is piece-wise linearization functionality. You can turn off a section of the MPC where a controlled variable or manipulated variable is unavailable.
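Piece-wise linearization of an MV is just breakpoint interpolation, so the MPC sees a roughly constant gain across the operating range. The helper and the breakpoint table below are hypothetical illustrations, not a particular package's feature:

```python
# Hypothetical piece-wise linearization over (x, y) breakpoints, as might be
# applied to an installed valve characteristic. Breakpoints are made up.
def piecewise_linearize(u, x_points, y_points):
    """Linear interpolation through the breakpoint table, clamped at the ends."""
    if u <= x_points[0]:
        return y_points[0]
    for (x0, y0), (x1, y1) in zip(zip(x_points, y_points),
                                  zip(x_points[1:], y_points[1:])):
        if u <= x1:
            return y0 + (y1 - y0) * (u - x0) / (x1 - x0)
    return y_points[-1]

# Map percent valve opening to percent flow for an equal-percentage-like curve.
pct_open = [0, 25, 50, 75, 100]
pct_flow = [0, 10, 30, 60, 100]
flow = piecewise_linearize(62.5, pct_open, pct_flow)  # halfway along 50-75 segment
```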
Greg: Some MPCs are adding the ability to write to the model dead time, opening the door for MPC to be used for cross-directional (CD) thickness control. The dead time changes with sheet or film speed. The most popular software for CD control presently uses decoupled Smith predictors. If you have 128 die bolts for CD control, you have 128 Smith predictors to tune, kind of a nightmare. Unless special expertise is brought in, the CD control often runs at factory settings.
Stan: Some MPCs can execute as fast as one second, creating opportunities to use MPC for decoupling and optimization of relatively fast loops where a PID execution time does not need to be less than one second. Pressure control of liquids, polymers, compressors and furnaces requires execution times much faster than one second, so these loops are not candidates for even this faster MPC.
Greg: Here are some more advanced control myths.
Myth 3 – You need consultants to maintain MPC. No longer true. The features and ease of use of new software enable the user to get much more involved, which is critical to ensuring that the plant gets the most value out of the models. Previously, the benefits started to decline as soon as the consultants left the job site. Now the user can tune, troubleshoot and update the models. The myth can be perpetuated if the user is not involved in the project and does not follow the controller and process to provide the first line of support.
Myth 4 – A good MPC can remove the operator from the picture. The operators are the biggest constraint in most plants. Even if the models are perfect, operators will take the MPCs off-line if they don't understand them. The new guy in town is always suspect, so the first time an operational problem occurs and no one is around to answer questions, the MPC will be blamed, even if the MPC is doing the right thing. Training sessions and displays should focus on showing the individual contribution of the trajectories from each controlled and disturbance variable in relation to the observed changes in the manipulated variables.