Model Predictive Control - Past, Present and Future - Part 2

McMillan, Weiner and Darby Discuss Model Development and Tuning

By Greg McMillan, Stan Weiner


Greg: What kind of expertise do companies have and need?

Mark: Some onsite expertise with virtual access to, or revisits by, external experts is generally the approach for most plants. Some control folks have left the suppliers and are working for operating companies. Large companies with a good history of MPC have gotten good at it. In general, basic and advanced process control groups got hurt in the 1990s. It used to be that managers were practitioners who advanced through the ranks understanding and appreciating the technology and the expertise. Now it is a mixed bag, and you may need to convince management of the resource requirement.

Stan: How can you reduce the time horizon to reduce test time and provide better short-term resolution of fast dynamics for a given number of data points over the horizon?

Mark: Regulatory design impacts the settling time of the MPC controller. An example is having the setpoint of a temperature cascade control loop for a distillation column as a manipulated variable. Controlling levels associated with large holdups in the MPC can also reduce the settling time, although this is normally done to provide better constraint control, for example, by directly manipulating a vessel outflow to a downstream unit. If you don't need to handle the level control for constraint control/coordination, keeping the level in the regulatory control system is fine. If a process variable has a very large time constant, modeling the variable as integrating instead of as self-regulating can dramatically shorten the time horizon. Depending on the particular MPC, the integrator approach may take up a degree of freedom (DOF) in the LP or QP optimizer.
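
The payoff of the integrating approximation can be seen in a minimal simulation. This is a hedged sketch with assumed values (gain, time constant, and step size are illustrative, not from the discussion): early in the response, a self-regulating process with a very large time constant ramps almost exactly like a pure integrator with gain K/tau, so only a short test is needed to identify it.

```python
# Sketch (assumed values) comparing a self-regulating first-order response
# with its near-integrator approximation over a short test window.
K = 2.0       # steady-state process gain (assumed)
tau = 400.0   # large process time constant, seconds (assumed)
ki = K / tau  # equivalent integrating gain for the near-integrator model
du = 1.0      # step change in the manipulated variable
dt = 1.0      # simulation step, seconds

y_self, y_int = 0.0, 0.0
for _ in range(60):  # only the first 60 s of the response is observed
    y_self += dt * (K * du - y_self) / tau  # first-order self-regulating model
    y_int += dt * ki * du                   # integrating approximation

# Early in the response the two models are nearly identical, so a short
# test suffices to identify the integrating gain ki.
print(round(y_self, 3), round(y_int, 3))
```

The full self-regulating test would have to run on the order of several time constants to see the steady state; the integrating-gain test only needs the initial ramp.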

See Also: When do I Use MPC instead of PID for Advanced Regulatory Control - Tips?

Greg: PID controllers can be tuned for extremely tight level, pressure and temperature control, which may be advantageous for residence time, energy balance and material balance control. My general rule of thumb is that if the PID controller justifiably has a gain greater than 10 or a rate time greater than 1 minute, the loop might be best left as a PID controller unless the manipulated variable is needed for constraint control. I have found that treating loops with a large time constant as near-integrators can shorten the tuning test time by 96%, making tests less vulnerable to disturbances and less disruptive. The result is an opportunity to do more tests and greater buy-in by operations. This approach also enables the use of lambda tuning for integrating processes, which provides significantly better disturbance rejection because the reset time becomes a function of dead time rather than the time constant. The low limit on the product of the gain and reset time settings is also enforced, preventing the slow rolling oscillations common from too low a gain or reset time. In general, a PID controller can do a fine job of tight or loose level control. For loose level control, lambda can be set to absorb process variability in the level, as noted in the January Control Talk column "Tuning to Meet Process Objectives." PID loop tuning depends upon getting a dynamic model of the process. Since the response in the first four dead times is more important than knowledge of the steady-state gain, the near-integrator approximation is a natural fit for matching the test to the tuning requirements. For PID loops we have auto tuners and adaptive control. How do you tune an MPC once it goes online?
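
The lambda tuning rules mentioned above can be sketched for an integrating process. The numbers here are purely illustrative assumptions (integrating gain, dead time, and a lambda of three dead times are not from the article); the point is that the reset time depends only on lambda and the dead time, and that a low limit on the gain-times-reset product is enforced to head off slow rolling oscillations.

```python
# Hedged sketch of common lambda tuning rules for an integrating process.
ki = 0.002         # integrating process gain, (%/%) per second (assumed)
dead_time = 10.0   # loop dead time, seconds (assumed)
lam = 3 * dead_time  # lambda chosen as a multiple of dead time (assumed)

Ti = 2 * lam + dead_time                 # reset time: a function of dead time,
                                         # not of the process time constant
Kc = Ti / (ki * (lam + dead_time) ** 2)  # controller gain

# Enforce the low limit on the product of gain and reset time settings,
# Kc*Ti >= 4/ki, to prevent slow rolling oscillations.
Kc = max(Kc, 4 / (ki * Ti))

print(round(Kc, 2), Ti)
```

With these assumed numbers the low limit is the binding constraint, which is exactly the situation the enforcement is there for.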

Mark: You may need to revisit constraint priorities, but hopefully you've got most of these priorities right in the simulator. Tuning then becomes a matter of setting weights to get the right trade-off between tightness of control and manipulated-variable movement. Note that you can't tune your way around a poor model like you might in PID for inadequate knowledge of process dynamics. You can't just increase move suppression; the steady-state part can still give you grief. It is not unusual during online tuning to realize you have model problems, causing you to revisit your model choices.
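
The weight trade-off can be illustrated with a deliberately stripped-down, single-step move calculation (not a full MPC; the function name and values are illustrative assumptions): the optimizer balances predicted error against the size of the move, so a heavier move-suppression weight yields a smaller, gentler move on the same error.

```python
# Minimal single-step sketch of the error-weight vs. move-suppression
# trade-off. A real MPC optimizes over a horizon; this is one move only.
def optimal_move(y0, r, K, q, s):
    """Minimize q*(y0 + K*du - r)**2 + s*du**2 over du (closed form)."""
    return q * K * (r - y0) / (q * K**2 + s)

K, y0, r = 1.5, 0.0, 1.0  # process gain, current value, target (assumed)
suppressed = optimal_move(y0, r, K, q=1.0, s=10.0)  # heavy move suppression
aggressive = optimal_move(y0, r, K, q=1.0, s=0.1)   # light move suppression

# Heavier move suppression gives a smaller move toward the target.
print(round(suppressed, 3), round(aggressive, 3))
```

Note what the sketch also makes plain: the move always points where the model gain K says it should, so if K has the wrong sign or a badly wrong magnitude, no choice of s fixes it, which is Mark's point about not tuning around a poor model.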

Stan: With modern systems, we can write to the tuning parameters and the dead time for a PID without bumping the output, enabling the scheduling of tuning, adaptive control, and dead time compensation. If an instrument, valve or analyzer is behaving badly, we can put a loop in manual, and if the valve still sort of works, the operator can do a degree of manual control. What can you do in the MPC to deal with changes in dynamics and problems with measurements and final control elements until they are fixed?

Mark: An under-appreciated point is that most MPC packages allow customization, for example, the capability to switch a model or write to model and tuning parameters, and this is often used. Process gains or a multiplier can typically be accessed. Static transformation of controlled and manipulated variables is also a standard feature; a popular option is piecewise linearization. You can turn off a section of the MPC where a controlled variable or manipulated variable is unavailable.
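
A static piecewise-linear transform of the kind Mark describes is simple to sketch. The breakpoint table below is an assumed installed valve characteristic, not vendor data; the idea is that the MPC works with the transformed, roughly linear variable instead of the raw signal.

```python
import bisect

# Assumed piecewise-linear transform of valve position to equivalent
# flow, so the MPC sees a roughly linear gain across the operating range.
X = [0.0, 25.0, 50.0, 75.0, 100.0]  # valve position, %
Y = [0.0, 10.0, 30.0, 65.0, 100.0]  # flow, % (assumed installed characteristic)

def transform(x):
    """Linearly interpolate between breakpoints; clamp outside the table."""
    if x <= X[0]:
        return Y[0]
    if x >= X[-1]:
        return Y[-1]
    i = bisect.bisect_right(X, x) - 1
    frac = (x - X[i]) / (X[i + 1] - X[i])
    return Y[i] + frac * (Y[i + 1] - Y[i])

print(transform(60.0))  # -> 44.0
```

The same table run in reverse converts the MPC's linear output back to a valve position before it is written to the regulatory system.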

Greg: Some MPCs are adding the ability to write to the model dead time, opening the door for MPC to be used for cross-directional (CD) thickness control. The dead time changes with sheet or film speed. The most popular software for CD control presently uses decoupled Smith predictors. If you have 128 die bolts for CD control, you have 128 Smith predictors to tune, kind of a nightmare. Unless special expertise is brought in, the CD control often runs at factory settings.

Stan: Some MPCs can execute as fast as once per second, creating opportunities to use MPC for decoupling and optimization of relatively fast loops where the PID execution time does not need to be less than one second. Pressure control of liquids, polymers, compressors and furnaces requires execution times much faster than one second, so these loops are not candidates for even this faster MPC.

Greg: Here are some more advanced control myths.

Myth 3 – You need consultants to maintain MPC. No longer true. The features and ease of use of new software enable the user to get much more involved, which is critical to ensuring that the plant gets the most value out of the models. Previously, the benefits started to decline as soon as the consultants left the job site. Now the user can tune, troubleshoot and update the models. The myth can be perpetuated if the user is not involved in the project and does not follow the controller and process to provide the first line of support.

Myth 4 – A good MPC can remove the operator from the picture. The operators are the biggest constraint in most plants. Even if the models are perfect, operators will take the MPCs off-line if they don't understand them. The new guy in town is always suspect, so the first time an operational problem occurs and no one is around to answer questions, the MPC will be blamed, even if the MPC is doing the right thing. Training sessions and displays should focus on showing the individual contribution of the trajectories from each controlled and disturbance variable in relation to the observed changes in the manipulated variables.
