Common automation myths debunked

April 14, 2017
Getting to the truth about deadtime, rangeability, valve performance, positioners, lambda tuning, controller gain, fuzzy logic and neural networks

There are a lot of misconceptions in the application of automation systems in the process industries. Sometimes these errors in thinking don’t cause much harm because good design practices keep the user out of trouble, but they keep us from the truth and a better understanding. Years of allegiance may be attached to these false ideas without analysis, putting them in the category of myths. Here, I expose the common myths with the hope that we can move on to address the real issues in process control.

Myth: Deadtime-dominant processes should use model-predictive control (MPC)

Deadtime-dominant processes can achieve tighter control with Shinskey’s PID plus deadtime compensator (PID+TD), created by simply inserting a deadtime block in the external reset path and tuning the PID much more aggressively, particularly by reducing the reset time.

If the PID tuning is left at its pre-compensation values, loop performance may actually be lower after deadtime compensation. While a Smith Predictor can achieve a similar improvement, PID+TD only needs identification of the total loop deadtime, whereas the Smith Predictor also requires identification of the open-loop gain and open-loop time constant. Note that the deadtime can be updated in a PID+TD but not necessarily in an MPC, which is particularly important when the deadtime is extremely variable (e.g., transportation delay).
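
To make this concrete, here's a minimal discrete-time sketch in Python of a PI controller with external reset feedback, where a deadtime block inserted in the reset path provides the compensation described above. The positive-feedback implementation of integral action, the function name and the sample-time handling are illustrative assumptions, not any vendor's algorithm.

    from collections import deque

    def pi_plus_deadtime(sp, pv_stream, kc, ti, deadtime, dt):
        # PI via positive feedback (external reset): output = Kc*error + reset signal.
        # Routing the output through a deadtime block before the reset filter gives PI+TD.
        delay_line = deque([0.0] * max(1, int(round(deadtime / dt))))
        reset = 0.0  # state of the first-order external reset filter
        for pv in pv_stream:
            out = kc * (sp - pv) + reset        # proportional action plus reset contribution
            delay_line.append(out)              # controller output enters the deadtime block
            delayed_out = delay_line.popleft()  # output delayed by the identified deadtime
            reset += (dt / ti) * (delayed_out - reset)  # filter time constant = reset time
            yield out

Remove the delay line and this is just an ordinary PI with external reset feedback; the only new identification requirement for the compensator is the total loop deadtime.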

The inability of users and some tuning software to identify the dynamic compensation for feedforward signals has pushed applications into using MPC because the dynamics of disturbances are automatically identified and included in the dynamic matrix. For deadtime-dominant systems, the time constants (lags) in the manipulated variable and disturbance variable paths are similar and small. This reduces the need for the lead/lag block normally used in the dynamic compensation of the feedforward signal to make sure the feedforward correction arrives at a common point in the process at the same time as the disturbance. All that's needed to ensure the feedforward correction does not arrive too soon is simply inserting a deadtime block in the feedforward signal, as sketched below.
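
As a sketch of that last point, the feedforward correction only needs to pass through its own deadtime block before being summed with the PID output; the gain and delay length below are hypothetical placeholders.

    from collections import deque

    ff_delay = deque([0.0] * 30)   # e.g., 30 samples of feedforward deadtime (illustrative)

    def output_with_feedforward(pid_out, disturbance, ff_gain=1.0):
        # Delay the feedforward correction so it doesn't reach the process too soon
        ff_delay.append(ff_gain * disturbance)
        return pid_out + ff_delay.popleft()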

It's not commonly appreciated that lag-dominant processes can potentially see the greatest improvement from a PID+TD or MPC. Since PID control is often excellent for these loops when the process is treated as near-integrating and integrating process tuning rules are used, there's less perceived need for anything more than a simple PID. The real need for MPC shows up for complex, compound process dynamics, exact feedforward dynamic compensation, full decoupling of interactions, constraint control and optimization.

Deadtime-dominant processes are more sensitive to a misidentification of the total loop deadtime, with the sensitivity being greater for a deadtime estimate that's too large rather than too small. For plain old PID, an estimate of a deadtime that's too large just leads to sluggish control. In PID+TD and MPC, an estimated deadtime that's too large can cause oscillations.

The best solution for deadtime-dominant processes is to try to reduce the total deadtime. If most of the deadtime comes from an analyzer or wireless device, a faster cycle time and update time will greatly improve the integrated error by enabling the PID to see disturbances sooner.
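
The payoff can be estimated from the well-known result that, for PI control, the integrated error for a sustained load upset equals the required shift in controller output times the reset time divided by the controller gain. A rough sketch, with hypothetical numbers and assuming the tuning is redone for the shorter deadtime:

    def integrated_error(output_shift, kc, ti):
        # integrated error (percent of scale times seconds) for a load upset
        # that requires a sustained shift in the PI controller output
        return output_shift * ti / kc

    # Halving the analyzer deadtime lets Kc roughly double and Ti roughly halve,
    # cutting the integrated error by about a factor of four (numbers illustrative).
    print(integrated_error(output_shift=10.0, kc=0.5, ti=40.0))   # slower analyzer: 800
    print(integrated_error(output_shift=10.0, kc=1.0, ti=20.0))   # faster analyzer: 200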

Also, an enhanced PID that uses external reset feedback and doesn't update the controller output until the process variable or setpoint changes can greatly simplify tuning, making it more aggressive and independent of changes in the process time constants. The PID gain can be as large as the inverse of the dimensionless open-loop gain for self-regulating processes, and the reset time can be reduced to the deadtime from the other sources, often automation system and transportation delays. For example, if the open-loop gain is 2 and the transportation source of deadtime is 20 seconds, the controller gain setting can be as large as 0.5 and the integral time (reset time) setting can be as small as 20 seconds. For more on this enhanced PID, see the July 16, 2015 “Control Talk” blog, “Batch and Continuous Control with At-Line and Offline Analyzer Tips.” For much more about deadtime-dominant processes, see the Dec. 1, 2016, “Control Talk” blog, “Deadtime Dominance—Sources, Consequences and Solutions.”
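
A small helper applying the limits just stated (a sketch of the rule of thumb, not the enhanced PID algorithm itself):

    def enhanced_pid_limits(open_loop_gain, other_deadtime):
        # Self-regulating process with the enhanced external-reset PID described above:
        # gain up to the inverse of the dimensionless open-loop gain, and reset time
        # down to the deadtime from the other (non-analyzer) sources.
        max_gain = 1.0 / open_loop_gain
        min_reset_time = other_deadtime
        return max_gain, min_reset_time

    print(enhanced_pid_limits(open_loop_gain=2.0, other_deadtime=20.0))   # (0.5, 20.0)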

A side issue here is that you can prove almost any point, including the benefit of a special algorithm, by not testing the controller for unmeasured disturbances that can arrive at any time in the controller execution period, by not tuning the PID as aggressively as possible, and by not using options, such as adaptive tuning and external reset feedback, that enable the PID+TD and enhanced PID.

Myth: Valve rangeability is determined by accuracy of inherent flow characteristic

The traditional definition of valve rangeability as being determined by how accurately the inherent flow characteristic matches a theoretical flow characteristic is largely bogus and a distraction at best. The process controller will correct for any mismatch, and there are much greater issues. The actual flow rangeability depends on the installed flow characteristic, and on backlash and stiction near the closed position. As less of the system pressure drop is allocated to the control valve to save on energy, the installed flow characteristic of a linear trim valve distorts toward quick-opening. The backlash and stiction, which are greatest near the seat and are expressed as a percent of total valve stroke, translate to a greater flow error due to the steep installed flow characteristic near the closed position.

The amplitude of limit cycles from stiction is increased by the greater valve gain near the seat (and hence, the greater open-loop gain, which is the product of the valve gain, process gain and measurement gain). The amplitude of limit cycles from backlash is greater because the controller gain must be reduced due to the larger open-loop gain. Limit cycles develop from stiction if there are one or more integrators in the loop that includes the process, valve positioner and PID controller. Limit cycles develop from backlash if there are two or more integrators in the loop. Processes that have an integrator (integrating process response) include level, gas pressure and batch composition.

The installed flow characteristic of an equal-percentage valve gets flatter, but the minimum flow coefficient gets larger, as the portion of the system pressure drop allocated to the valve gets smaller. Thus, the best rangeability is achieved by a control valve with a larger portion of the system pressure drop allocated to the valve, and by a sliding stem valve with a sensitive diaphragm actuator and a sensitive, well-tuned smart positioner (e.g., a real throttling control valve), which sets us up for the next two myths.
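
Both effects are easy to see in a few lines of Python, assuming the textbook case of a constant total system pressure drop; dpr is the fraction allocated to the valve at full flow, and the 50:1 inherent rangeability is illustrative.

    import numpy as np

    def installed_flow(x, dpr, R=50.0):
        # x: fractional stroke (0 to 1); dpr: valve drop / total system drop at full flow
        f = R ** (x - 1.0)                               # inherent equal-percentage characteristic
        return f / np.sqrt(dpr + (1.0 - dpr) * f ** 2)   # normalized installed flow

    x = np.linspace(0.0, 1.0, 101)
    for dpr in (0.5, 0.25, 0.1):
        q = installed_flow(x, dpr)
        print(dpr, round(q[0], 3), round(np.gradient(q, x)[-1], 2))
        # smaller dpr: larger minimum flow, flatter (lower gain) characteristic near full open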

Myth: High-performance valves give high performance

The term “high performance” has been used for control valves with minimum leakage. These valves are often on-off or isolation valves posing as control valves. They have very low leakage but crummy response due to the higher seat and seal friction near the closed position, higher backlash from the linkages of rotary valves, and the types of piston actuators used. Often, the positioner is being lied to because it sees actuator shaft position and not the position of the internal flow element (e.g., plug, disk or ball).

The positioners used often have poor sensitivity that shows up as an order-of-magnitude or greater increase in 86% response time (T86) for small steps in valve signal, particularly when they're reversed (e.g., a T86 of 80 seconds for a 0.2% step versus 4 seconds for a 20% step). Unfortunately, valve specifications say nothing about valve response (e.g., deadband, resolution and T86) or its effect on rangeability. What's on the valve specification is leakage. To make matters worse, these so-called high-performance valves are less expensive than a real throttling control valve.

Myth: Valve positioners should use integral action

Positioners have traditionally been high-gain, proportional-only controllers. If a high-gain, sensitive, pneumatic relay is used, position control can be quite tight, since the offset from setpoint for a proportional-only controller is inversely proportional to the controller gain. The offset is also of little consequence, since the effect is minor and short-term, and the process controller corrects it. What the process controller needs is an immediate, fast, total response. There are much larger nonlinearities and offsets that the process controller has to deal with.
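
A quick illustration of why the offset shrinks with gain (numbers hypothetical): a proportional-only positioner holds a position error equal to the required change in relay output divided by the positioner gain.

    def prop_only_offset(required_output_change_pct, positioner_gain):
        # steady-state offset (percent of stroke) for a proportional-only positioner
        return required_output_change_pct / positioner_gain

    print(prop_only_offset(10.0, 50.0))   # 0.2% offset for a gain of 50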

The original idea of cascade control is to make the inner loop (in this case, the positioner) as fast as possible by maximizing the inner controller gain, which means going to proportional or proportional-plus-derivative control. Integral action in the inner loop is hurtful unless we're talking about a secondary flow loop for ratio control or feedforward control. The advent of smarter positioners has led to much more complex control algorithms that include integral action. The use of integral action may make the valve step response tests look better, with the final position more closely matching the signal. Not realized is that the positioner gain had to be reduced, and that integral action in the positioner increases the instances of limit cycles.

In fact, with the process controller in manual (positioner signal constant), a limit cycle will develop from stiction in the valve unless an integral deadband is set. Also, the increase in the number of integrators in the control system means that the process controller with integral action will develop a limit cycle from backlash since there are now two integrators. So here we have the common situation where an attempt to make appearances look better has created a problem. Many positioners now come with the integral action turned on as a default.

For much more on these myths about control valves, see the May 1, 2016 “Control Talk” blog, “Sizing Up Valve Sizing Opportunities,” and the article, “How to specify valves and positioners that don’t compromise control.”

Myth: Lambda tuning does not work for lag-dominant processes

This myth results from the misconception that a lambda factor and self-regulating process tuning rules must be used. Practitioners experienced with lambda tuning use lambda rather than a lambda factor. Lambda is set relative to the total loop deadtime, and integrating process tuning rules are used for lag-dominant processes, with lambda being the arrest time for a load disturbance. Setting lambda equal to, or even as small as about 60% of, the total loop deadtime will minimize the integrated absolute error (IAE) for a load disturbance if the dynamics are perfectly known and constant. While this is rarely the case in industrial applications, it can be used for studies that attempt to show that other tuning methods are not really much better than lambda tuning in terms of IAE.
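
For reference, a commonly published form of the integrating-process lambda rules is sketched below (Ki is the integrating open-loop gain in 1/sec, deadtime is the total loop deadtime, and lambda is the chosen arrest time); treat the numbers as illustrative rather than any particular vendor's implementation.

    def lambda_integrating_tuning(ki, deadtime, lam):
        # lambda (arrest time) is set relative to the total loop deadtime, not via a lambda factor
        ti = 2.0 * lam + deadtime                  # reset time
        kc = ti / (ki * (lam + deadtime) ** 2)     # controller gain
        return kc, ti

    print(lambda_integrating_tuning(ki=0.001, deadtime=10.0, lam=10.0))   # lambda = deadtime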

The integrating process tuning rules also prevent the violation of the low-gain limit that sets us up for the next myth. For much more on the gamesmanship and egos involved in controller tuning, see the “So Many Tuning Rules, So Little Time” whitepaper.

Myth: High controller gain is the cause of most problematic oscillations

This myth originates from intuition and our control theory classes that taught us how setting the controller gain too high can create instability. In industrial applications, this is rarely the case. Even in pH control, oscillations are limited by the flatter portions of the titration curve. The real problem in most of the more important processes for composition, pH and temperature control of vessels and columns is a reset time that's orders of magnitude too small. Often, for these processes, the best thing to do from the start is to increase the reset time by a factor of 1,000 or more, and see if the oscillations decay or stop. If the process smooths out, you can then try increasing the controller gain.

Most of these loops are violating the low controller gain limit that exists when integral action is used on lag-dominant, integrating and runaway processes. In fact, for runaway processes where there's positive feedback in the process (e.g., highly exothermic reactors), the low gain limit exists even if integral action isn't used. The oscillations from violating the low gain limit are much more problematic because they're larger and slower, being less effectively filtered by downstream volumes. For much more on this and other problems, see the Nov. 1, 2016 “Control Talk” blog, “PID Options and Solutions—Parts 2 and 3.”
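
A rough check for the low-gain-limit violation described above, using the commonly cited rule of thumb that, with integral action on a near-integrating or integrating process, the product of controller gain and reset time should exceed roughly twice the inverse of the integrating process gain (take the factor as an assumption, not a derivation):

    def violates_low_gain_limit(kc, ti, ki):
        # ki: integrating open-loop gain (1/sec); a violation produces the large, slow oscillations
        return kc * ti < 2.0 / ki

    print(violates_low_gain_limit(kc=1.0, ti=5.0, ki=0.01))   # True: reset time far too small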

Myth: Fuzzy logic control (FLC) is better than PID control for a temperature setpoint response

FLC standalone temperature controllers have been developed and publicized for providing a much better setpoint response. Not realized are the many PID options, including a two-degrees-of-freedom (2DOF) PID structure, setpoint lead-lag and adaptive tuning, that can enable a PID to do as well as or better than FLC. Plus, the conventional PID can be readily tuned for best load disturbance rejection based on an identified process model. The only case where FLC should perhaps be used is if a dynamic model of the process can't be identified, for some reason even a relay oscillation tuner can't find an ultimate gain or ultimate period, and a heuristic method can't be used for tuning the PID. Such cases are very rare. It's more likely that the user gets into trouble because the FLC is usually less robust, and it's more difficult to understand what's going on and how to tune it. Plus, using a special standalone controller is not a good idea from the viewpoint of support and coordination with process and control system modes.

Myth: Artificial neural networks (ANN) provide robust estimators of process variables

ANNs suffer, like FLCs, from a lack of understanding of what's truly happening inside the algorithm, and from the lack of ability to directly incorporate knowledge of the open-loop time constant for lag-dominant continuous processes. The dynamic compensation advocated is to simply insert a deadtime on each ANN input, but this is incomplete, and it's quite a task for inputs at the beginning of a process to predict outputs at the end of the process. The number of ANN inputs is often quite large due to the attractive belief that you can just dump historian data into the training and use the ANN. Also, the ANN doesn't use deviation variables, but absolute values, which can lead to misidentification of nonlinearities. The ANN also is subject to correlations between input variables, and may create bizarre results for process operation outside the data used to train the ANN.

While dynamic compensation isn't needed for batch processes, using data analytics (multivariate statistical process control) is more productive because correlations between inputs are eliminated by principal component analysis (PCA); the user can drill down to see the relative contribution of a process input to a principal component; extrapolation for process operation outside of training is linear; and the nonlinear profile of a batch is addressed by a piecewise linear fit.
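
A minimal sketch of that data analytics alternative, assuming scikit-learn is available and using random numbers as a stand-in for centered and scaled batch data:

    import numpy as np
    from sklearn.decomposition import PCA

    X = np.random.rand(200, 12)          # stand-in for historian data (rows = time slices or batches)
    pca = PCA(n_components=3).fit(X)     # principal components remove correlations between inputs
    scores = pca.transform(X)            # scores used by the multivariate monitoring model
    loadings = pca.components_           # drill down to see each input's contribution to a component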

For a better understanding of why I'm not an advocate of using an ANN, see the Dec. 26, 2016 “Control Talk” blog, “Keys to Successful Process Control Technologies.” There may be an opportunity for an ANN on nonlinear, continuous plug-flow unit operations where the dynamic compensation is completely addressed by simply inserting a transportation delay, assuming the delay is constant or the deadtime on the ANN inputs can be written to.

Myth: Process gain is the process gain, process time constant is the process time constant, and process deadtime is the process deadtime

What is commonly referred to as the process gain is really an open-loop gain that's the product of the final control element gain (e.g., valve gain), process gain (possibly including a flow ratio gain), and a measurement gain (set by controller scale). The open-loop gain is dimensionless for self-regulating processes, and has inverse time units (e.g., 1/sec) for integrating processes.
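
A tiny worked example of that product for a hypothetical temperature loop:

    valve_gain = 2.0                  # (kg/s of flow) per % of controller output, from the installed characteristic
    process_gain = 0.5                # degrees C per (kg/s of flow)
    measurement_gain = 100.0 / 50.0   # % of scale per degree C for a 50-degree measurement span
    open_loop_gain = valve_gain * process_gain * measurement_gain   # dimensionless, equals 2.0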

Also, the gain is not directly the slope of an installed flow characteristic or process curve, but is the change in flow or process variable output divided by the change in input (seen as the slope of the line segment connecting the start point and end point). Thus, gain depends more than expected on the step size, and the curve slope is only the gain for extremely small changes in the input.
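
The step-size dependence is easy to demonstrate; here an equal-percentage curve stands in for any installed characteristic or process curve (purely illustrative):

    def secant_gain(f, x0, dx):
        # gain actually seen for a step of size dx (slope of the chord, not the tangent)
        return (f(x0 + dx) - f(x0)) / dx

    f = lambda x: 50.0 ** (x - 1.0)      # illustrative equal-percentage characteristic
    print(secant_gain(f, 0.5, 0.01))     # small step: close to the local (tangent) slope, ~0.56
    print(secant_gain(f, 0.5, 0.30))     # large step: a noticeably different gain, ~1.05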

What's commonly termed the process time constant is really the open-loop primary time constant, which is the largest time constant in the loop, and can originate in the final control element, process, sensor, transmitter damping or signal filter. Also, this time constant is rarely constant when it's associated with the valve, process or sensor. Thermowells with loose-fitting sensors and coated or aged pH electrodes are notorious for creating large and variable time constants.

What's often commonly said to be the process deadtime is really the total loop deadtime, which is the sum of the deadtimes in the final control element (e.g., valve pre-stroke deadtime), process (e.g., equipment and piping), measurement (e.g., transportation delay to sensor, sensor delay, transmitter update rate and analyzer cycle time), and controller (e.g., I/O execution rate and PID execution rate). Even more insidious and difficult to estimate is the deadtime from the time it takes a signal to get through a deadband or resolution limit in a final control element or measurement.
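
In other words, the total loop deadtime is simply the sum of these contributions; the breakdown below uses purely illustrative numbers.

    deadtime_sources_sec = {
        "valve pre-stroke": 2.0,
        "process equipment and piping": 8.0,
        "transportation delay to sensor": 3.0,
        "sensor, transmitter and analyzer updates": 1.0,
        "I/O and PID execution": 0.5,
    }
    total_loop_deadtime = sum(deadtime_sources_sec.values())   # 14.5 seconds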

For more on the proper understanding and use of these terms, see the Aug. 24, 2015 “Control Talk” blog, “Understanding Terminology to Advance Yourself and the Automation Profession.”

While there may be a grain of truth in the common myths, the wishful, cursory thinking representative of these myths indicates a lack of understanding, communication and partnerships between industry and the universities where students often first learn about process control. Eliminating myths can help us focus on what's really important to more efficiently improve process operation and gain recognition at a time when both are sorely needed due to diminishing resources, less time allocated to process control improvement opportunities, and lower appreciation of them. We no longer have the luxury of distractions. In my career, experimentation with a virtual plant enabled me to expose these myths and develop and demonstrate the value of innovations. This can be your key as well to finding and making the most of the truth.

About the Author

Greg McMillan | Columnist

Greg K. McMillan captures the wisdom of talented leaders in process control and adds his perspective based on more than 50 years of experience, cartoons by Ted Williams and Top 10 lists.