Investigators use simulators to test control algorithms. Simulations are also useful for testing methods for steady-state detection, data reconciliation, dynamic model generation, end-point forecasting, statistical analysis and many ancillary techniques related to automation. Challenge problems that are simple to understand and implement are also useful as education and training exercises. And, importantly, challenge problems that originate from practice can become a guide toward grounding academic research in reality.
The 1970s brought significant interest in industrial process control as industry switched from analog, single-loop controllers to digital control, and as vendors introduced advanced process control. At that time, several challenge problems were offered by industry to help guide academic research and industrial product development, and several benchmarks are now named after the company that provided them. However, some of those challenge problems are fairly complicated, and most are linearized, deterministic and stationary. When they were created, the issues they revealed were appropriate for the R&D community and provided a welcome direction for investigators. Since then, techniques have matured, and what was once the challenge is no longer so challenging. New problems that reflect today's challenges in the automation of chemical processes (control and the ancillary techniques for online analysis and action) would be more appropriate guides for research and development.
Simple test cases are desirable, and are appropriate for many R&D purposes. However, there is often a need for more complicated simulations to support credible technique testing.
My aim in this column is to solicit challenge problems and metrics for evaluating automation techniques from my friends in the process industry. I plan to compile them and present them to the R&D community to guide today's research. I welcome you to visit www.r3eda.com to see several challenge problems posted under the "Control" menu. The code is open, and you are free to use it. I also welcome you to send me an email at [email protected] with a description of an automation issue, a challenge problem, key performance indicators and testing conditions. Be sure that the information can be placed in the open literature, is relatively simple to understand and implement, will support credible testing, and that the originator can be fully acknowledged.
Challenging attributes of chemical processes
Process simulators for chemical process control exploration should include attributes characteristic of the applications. Some of those are listed below; a minimal sketch combining a few of them follows the list:
- Nonlinear
- Properties change in time
- Disturbances and noise
- Degrees of freedom—extra manipulated variables, auxiliary variables, constraints
- Multivariable and interactive
- Process-model mismatch
- Faults—spurious signals, sensor error, process and control system faults
- Difficult behaviors—inverse action, integrating, unstable
- Inexpensive computers and simplicity requirements
- Batch and semi-batch, as well as continuous
- Transitions—startup, shutdown, between grades, filling
- Override, tuning, switches with bumpless operation
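As a minimal, purely illustrative sketch (assumed model form, arbitrary parameter values and a hypothetical function name, not one of the posted challenge problems), the Python fragment below combines four of those attributes: a nonlinear gain, a time constant that drifts in time, a continually active random disturbance, and measurement noise.

```python
import numpy as np

def simulate_process(mv_sequence, dt=1.0, seed=0):
    """Toy process with a nonlinear gain, a time-varying time constant,
    a persistent random disturbance, and sensor noise. All values are illustrative."""
    rng = np.random.default_rng(seed)
    cv = 0.0           # true controlled-variable value
    disturbance = 0.0  # autocorrelated (persistent) disturbance
    measurements = []
    for k, mv in enumerate(mv_sequence):
        tau = 10.0 + 3.0 * np.sin(2.0 * np.pi * k * dt / 500.0)  # property drifts in time
        gain = 2.0 / (1.0 + 0.5 * abs(cv))                       # nonlinear steady-state gain
        disturbance = 0.95 * disturbance + rng.normal(0.0, 0.1)  # continually active disturbance
        cv += (-cv + gain * mv + disturbance) * dt / tau         # first-order dynamic response
        measurements.append(cv + rng.normal(0.0, 0.05))          # measurement noise on the sensor
    return np.array(measurements)

# Example: hold the manipulated variable at 1.0 for 1,000 sampling intervals
y = simulate_process(np.ones(1000))
```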
Feel free to email me other aspects that you find are also issues for automation. I don't think any single simple-to-implement problem will include all of those aspects, so I suspect that comprehensive testing and development of process control and ancillary techniques will require several benchmark challenge problems.
Criteria for assessing techniques
How should techniques be evaluated? This has two parts: one is how the tests should be performed, and the other is the choice of key performance indicators (KPIs) that relate to desirable and undesirable aspects of performance.
First, how should the tests be performed? Classic tuning criteria for PID controllers are based on step testing of the setpoint, and controllers are often evaluated on how well they make the process follow the setpoint. However, most chemical processes remain at a single setpoint, and the main controller job is to reject continually active, random disturbances. Is a more appropriate test based on regulatory performance? If so, should it be the response to a single step-and-hold of a disturbance, or a longer simulation of continually changing disturbances?
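As a sketch of the difference between the two test philosophies, the fragment below runs the same controller through a single setpoint step and then through a long run with a continually changing disturbance at a constant setpoint. The first-order process, the PI tuning values and the disturbance model are all assumptions for illustration, not part of the posted challenge problems.

```python
import numpy as np

def run_test(setpoints, disturbances, kc=1.5, ti=8.0, dt=1.0, tau=10.0):
    """Run an assumed PI controller on a toy first-order process; return the CV record."""
    cv, integral, record = 0.0, 0.0, []
    for sp, d in zip(setpoints, disturbances):
        error = sp - cv
        integral += error * dt
        mv = kc * (error + integral / ti)   # PI control law with assumed tuning
        cv += (-cv + mv + d) * dt / tau     # toy first-order process response
        record.append(cv)
    return np.array(record)

n = 2000
rng = np.random.default_rng(1)

# Servo test: a single setpoint step from 0 to 1, no disturbance
servo_cv = run_test(np.where(np.arange(n) < 50, 0.0, 1.0), np.zeros(n))

# Regulatory test: constant setpoint, continually changing (autocorrelated) disturbance
drift = np.zeros(n)
for k in range(1, n):
    drift[k] = 0.95 * drift[k - 1] + rng.normal(0.0, 0.05)
regulatory_cv = run_test(np.ones(n), drift)
```

The servo record answers the classic step-test questions, while the regulatory record is the kind of data from which the fluctuation-based KPIs discussed next would be computed.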
Second, what metrics should be used to assess goodness of control? Classically, overshoot, settling time, integral of squared error and similar controlled-variable (CV) metrics have been used to assess controllers. But many other criteria could be more appropriate, and perhaps should be commonly used by the R&D community. One alternate KPI is the fluctuation amplitude of the controlled variable (measured by its range, variance or standard deviation) during a continually perturbed regulatory period; that metric relates to quality give-away, the margin needed between the setpoint and the specification to prevent off-spec events. Another KPI might be the manipulated-variable (MV) work required for control. How should that be assessed? Yet another important metric would be a measure of any violation of a constraint or specification.
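A hedged sketch of how a few such KPIs might be computed from a logged test record follows. The function name, the use of total MV travel as the measure of manipulated-variable work, and the single upper specification limit are illustrative assumptions, not a settled recommendation.

```python
import numpy as np

def regulatory_kpis(cv, mv, setpoint, upper_spec, dt=1.0):
    """Illustrative KPIs computed from a constant-setpoint (regulatory) run."""
    cv, mv = np.asarray(cv, float), np.asarray(mv, float)
    error = setpoint - cv
    return {
        "ise": float(np.sum(error**2) * dt),              # classic integral of squared error
        "cv_std_dev": float(np.std(cv)),                   # fluctuation amplitude (standard deviation)
        "cv_range": float(np.ptp(cv)),                     # fluctuation amplitude (peak-to-peak range)
        "mv_travel": float(np.sum(np.abs(np.diff(mv)))),   # one possible measure of MV "work"
        "spec_violations": int(np.sum(cv > upper_spec)),   # samples that exceed the specification
    }
```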
Such CV and MV KPIs are common, but many other criteria may be as important in assessing the desirability of a controller or other automation technique. One is the operator and engineering effort required to implement, initialize and maintain the technique, including the technical and mathematical skill needed and the complexity of the procedure. How important is simplicity, and how can it be quantified?
The cost of the system (both the process and the automation elements) is also important. The process and the controller interact, so an analysis should not consider just one subsystem. For instance, even a linear PI controller can do an adequate job of nonlinear pH control if the neutralization tank is large enough to damp out disturbances. But a large tank is expensive. A more expensive controller may permit a smaller neutralization tank, and could yield a net reduction in cost. Similarly, cascade and ratio control are more costly than simple, primitive control, but they permit the lower capital and operating expense associated with process intensification. Costs include initial installed equipment cost, annual maintenance and operational costs, costs associated with off-grade product and EHSLP events, and costs associated with extensive process testing and front-end engineering. What are other economic factors? How should they be assessed?
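To make the tank-versus-controller tradeoff concrete, here is a toy lifecycle-cost comparison. Every number and name in it is hypothetical, chosen only to show the arithmetic of weighing a larger tank with a simple controller against a smaller tank with a more capable controller.

```python
def lifecycle_cost(installed_cost, annual_cost, years=10):
    """Simple undiscounted lifecycle cost over an assumed planning horizon."""
    return installed_cost + years * annual_cost

# Hypothetical numbers only, to illustrate the comparison
large_tank_simple_pi = lifecycle_cost(installed_cost=250_000, annual_cost=12_000)
small_tank_advanced = lifecycle_cost(installed_cost=150_000, annual_cost=18_000)
print(large_tank_simple_pi, small_tank_advanced)  # 370000 vs. 330000 in this made-up case
```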
I look forward to receiving your input.