Model Predictive Control - Past, Present and Future - Part 2

McMillan, Weiner and Darby Discuss Model Development and Tuning

By Greg McMillan, Stan Weiner


Stan: In this multipart series, Mark will share his thoughts on the scope of model predictive control (MPC) applications. The focus in Part 2 is on model development and tuning, but we are free to roam. Since models are only as good as the data, what are your recommendations for testing?

See Also: Model Predictive Control - Past, Present and Future, Part 1

Mark: You need to test not only at the nominal operating point, but also near expected constraint limits. Pre-tests are an accepted practice for setting step sizes and time horizons. We also need to verify that the scan time is fast enough and the trigger level is small enough for data historians and wireless transmitters. We prefer that compression and filters be removed so we can get raw data. A separate data collection to get around these limitations is commonly used.

Greg: For us the pre-tests, which we called bump tests, were separate steps to one manipulated variable at a time, held long enough to see 98% of the response for self-regulating processes or to see a constant ramp rate for integrating processes. We would often find improvements that needed to be made in instruments or control valves before we did the tests to build the model. For example, we would find that valves did not respond to steps less than 5%, or that a process response less than 1% was distorted by noise. As a rule of thumb, we wanted a step size larger than five times the valve dead band that gave a process response five times larger than the measurement noise band. We would normally sit down with operations and the process engineer to get an idea of permissible step sizes. We wanted the largest possible step, but since you are not on closed-loop control, you need to be careful you don't drive the process to an undesirable state. However, often the first opinion in the control room is rather conservative, so you need to look at the typical changes in the manipulated variable on a trend chart and have a respectful, realistic conversation. What type of automated testing is used?
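Greg's rule of thumb can be sketched as a quick sizing check. This is an illustration only; the function names and the example numbers are hypothetical, and the 98% settling criterion is expressed as dead time plus four time constants for a first-order-plus-dead-time response (1 - e^-4 is about 0.982):

```python
def min_step_size(valve_deadband_pct, noise_band, process_gain):
    """Smallest MV step satisfying the rule of thumb quoted above:
    step > 5 x valve dead band, and the expected PV change
    (|gain| * step) > 5 x measurement noise band."""
    step_for_valve = 5.0 * valve_deadband_pct
    step_for_noise = 5.0 * noise_band / abs(process_gain)
    return max(step_for_valve, step_for_noise)

def hold_time_self_regulating(dead_time, time_constant):
    """Time to see ~98% of a first-order-plus-dead-time response:
    dead time plus four time constants (1 - e**-4 ~ 0.982)."""
    return dead_time + 4.0 * time_constant

# Illustrative numbers (not from the article): 0.5% valve dead band,
# 0.2-unit noise band, process gain of 0.8 units/%.
step = min_step_size(0.5, 0.2, 0.8)          # 2.5% (dead band governs)
hold = hold_time_self_regulating(2.0, 3.0)   # 14.0 minutes
```

Whichever requirement is more demanding (valve dead band or noise band) sets the step size, which is why both checks are taken and the maximum kept.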

Mark: Manual methods are still used, but automatic testing approaches are increasingly being applied. These use random sequences such as a pseudo random binary sequence (PRBS) or generalized binary noise (GBN). Closed-loop testing approaches based on a preliminary model are also being applied as a means of keeping the process in an acceptable range and reducing the effort involved in plant testing, both for the initial application and to update the MPC model for changes in equipment or operating conditions. Closed-loop testing continues to be an active development area for the MPC suppliers.
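A generalized binary noise (GBN) signal of the kind Mark mentions can be generated in a few lines. This is a minimal sketch of the standard switching-probability formulation, not code from any particular MPC package; the parameter choice is illustrative:

```python
import random

def gbn(n_samples, p_switch, amplitude=1.0, seed=0):
    """Generalized binary noise: a two-level test signal that switches
    sign with probability p_switch at each sample. The expected time
    between switches is ~1/p_switch samples, so p_switch is usually
    chosen from the dominant process time constant."""
    rng = random.Random(seed)
    level = amplitude
    signal = []
    for _ in range(n_samples):
        if rng.random() < p_switch:
            level = -level
        signal.append(level)
    return signal

u = gbn(500, p_switch=0.05)  # switches every ~20 samples on average
```

The random hold times are what make the signal "rich": short holds excite the fast dynamics while the occasional long hold lets the low-frequency (gain) behavior show through.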

Stan: If the testing and some of the model analysis is automated, what does the process control engineer need to do?

Mark: The intent of these enhancements is to simplify the tasks of the control engineer, not to remove the engineer's involvement. The control engineer is required to make decisions regarding test specifics such as move sizes and frequency, and to determine when testing can be terminated. In model identification, decisions must be made as to which data and variables to use. Then you must decide which models go into the controller. For example, should you include a weak model? Obviously, this requires process knowledge and MPC expertise. A sharp person mentored by a gray hair can lead projects after several years of applications experience.

Greg: Getting back to the data, what is most important?

Mark: You want good, rich data, meaning significant movement in the manipulated variables at varying step durations, to get accurate models. But it does not end there. You need to look for consistency in the resulting models. Use engineering knowledge and available models or simulators to confirm or modify gains. Don't shortchange this step. Gain ratios are very important, especially for larger controllers. Empirical identification does not enforce relationships like material balances, so there can be fictional degrees of freedom (DOFs) that the MPC steady-state optimizer—either a linear program (LP) or quadratic program (QP)—may exploit. As discussed previously, techniques are available now to assist with this analysis and adjust gains to improve the model conditioning, which frees up the engineer to take a higher level supervisory role.
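One common check behind the conditioning analysis Mark describes is the condition number of the steady-state gain matrix: nearly collinear gain rows or columns produce a near-singular matrix whose "extra" degree of freedom the LP/QP may exploit. Below is a pure-Python sketch for the 2x2 case (the function name and example matrices are hypothetical), computing singular values from the eigenvalues of G^T G:

```python
import math

def condition_number_2x2(g):
    """Condition number (s_max / s_min) of a 2x2 steady-state gain
    matrix g = [[a, b], [c, d]], via the eigenvalues of G^T G."""
    a, b = g[0]
    c, d = g[1]
    # Entries of the symmetric matrix G^T G
    p = a * a + c * c
    q = a * b + c * d
    r = b * b + d * d
    tr, det = p + r, p * r - q * q
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    s_max = math.sqrt((tr + disc) / 2.0)
    s_min = math.sqrt(max((tr - disc) / 2.0, 0.0))
    return float('inf') if s_min == 0.0 else s_max / s_min

# Nearly collinear gains -> ill-conditioned: the optimizer sees a
# near-fictional degree of freedom it may try to exploit.
bad = [[1.0, 0.95], [1.0, 1.0]]    # condition number ~78
good = [[1.0, -0.5], [0.5, 1.0]]   # condition number 1.0
```

A large condition number is the numerical symptom of the fictional-DOF problem: tiny errors in the identified gain ratios translate into large, spurious optimizer moves, which is why gains are adjusted (with engineering knowledge) to improve conditioning.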

Stan: How do you get the MPC ready to go on-line?

See Also: Under the Hood with MPC

Mark: Off-line tuning relies on the built-in simulator. Most important is getting the steady-state behavior of the controller right. Simulation can also serve to identify errors in the model gains, for example, observing an MV moving in the wrong direction at steady state. You want the initial tuning of the dynamic parameters to be in the ballpark. Regarding steady state, you determine how you want the MPC to push manipulated and constraint variables based on cost factors and priorities, making sure you are enforcing the right constraints. Unlike override control, which sequentially picks one constraint, the optimization is simultaneous, multivariable and predictive, taking future violations into account. Some MPCs use move suppression (a penalty on moves); others use a reference trajectory to set manipulated variable (MV) aggressiveness. Penalty on error is used for both constraint or quality variables (QVs) and controlled variables (CVs). We have evolved to not distinguish between QVs and CVs except as presented to the operator.
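The trade-off between penalty on error and move suppression can be seen in a toy one-step example. This is a deliberately simplified sketch (single MV, single CV, one-step horizon, discrete candidate moves), not how any commercial MPC solves its QP, and all names are hypothetical:

```python
def best_first_move(sp, y0, gain, w_error, w_move, candidates):
    """Pick the single MV move du minimizing a quadratic cost:
    penalty on predicted error, w_error * (sp - (y0 + gain*du))**2,
    plus move suppression, w_move * du**2."""
    return min(candidates,
               key=lambda du: w_error * (sp - (y0 + gain * du)) ** 2
                              + w_move * du ** 2)

moves = [i * 0.5 for i in range(21)]        # candidate moves 0..10
# No move suppression: close the 10-unit error in one move.
aggressive = best_first_move(10.0, 0.0, 1.0, 1.0, 0.0, moves)  # 10.0
# Equal move suppression: take only half the move this step.
damped = best_first_move(10.0, 0.0, 1.0, 1.0, 1.0, moves)      # 5.0
```

Raising the move-suppression weight spreads the correction over more moves, which is exactly the "MV aggressiveness" knob Mark describes; a reference trajectory achieves a similar damping by slowing the setpoint the error penalty chases.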
