My question is this: When should you "slow down" a PID loop by increasing the loop update time (and, in the case of our controller, the task scan rate) so that the loop executes less often? Can you compare and contrast this with so-called detuning, where, for example, the integral term is made less assertive? This is a really gray area for us in terms of our understanding; in fact, we're not even sure we are using the right terminology to ask the question. When does the first approach (less frequent scanning/updating) benefit you in terms of reducing oscillation, or does it ever help with that?
Here are some other common terms/topics that I keep seeing used, but don't fully understand. I've been trying for years to refine my grasp of this stuff, and certain things remain gray:
Filtering: This gets mentioned a lot, and I have always been fuzzy on when to do this, or how. I actually saved a statement from Mr. Shinskey, warning about using filtering ("Filtering is counterproductive to control, and therefore the minimum amount of filtering is usually best in a control loop."). I wish I had a clear, concise and simple guideline and "how-to" on this.
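To make the filtering question concrete: the filter usually meant in this context is a first-order (exponential) lag on the measurement. Below is a minimal Python sketch (the function name and the noisy-step data are illustrative, not from the article) of why Shinskey's minimum-filtering advice holds: a heavier filter smooths noise but adds lag that delays the controller's view of the process.

```python
def first_order_filter(samples, dt, tau):
    """First-order (exponential) filter: y += (dt/(dt+tau)) * (x - y).

    tau is the filter time constant; a larger tau means heavier smoothing
    but also more lag (delay) added to the control loop.
    """
    alpha = dt / (dt + tau)
    y = samples[0]
    filtered = [y]
    for x in samples[1:]:
        y += alpha * (x - y)
        filtered.append(y)
    return filtered

# A unit step in the measurement: the light filter tracks it quickly,
# the heavy filter is still far from the true value 20 samples later.
step = [0.0] * 5 + [1.0] * 20
light = first_order_filter(step, dt=1.0, tau=2.0)
heavy = first_order_filter(step, dt=1.0, tau=20.0)
```

The takeaway matches the quoted guideline: pick the smallest tau that tames the noise, because everything beyond that is pure added lag.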
Detuning: I keep running into this term in articles. I have a vague idea that it means lowering the gains, but this term gets used without explanation, as if I know exactly what it is, and I don't.
Analog Sample Rate: For years I have been trying to grasp why the loop update has to be five to ten times slower than the analog sampling. I know it has to do with the timing/spacing of the samples, but I cannot grasp why it matters. I liked your digital clock example. Perhaps you can use something like that to help me with this topic/concept.
A: You might not realize it, but you have asked two questions: Is my cascade control loop correctly configured, and can I eliminate cycling by increasing scan time (lowering the frequency of updating the temperature measurement)?
My answer is no to both questions.
The first answer is no because (if I understand your cascade loop correctly) your slave loop controls the air cooler outlet temperature, while the master loop controls the temperature in the conditioned space. If that is the case, this is not a cascade application!
A cascade control system divides the process into two parts: the outer (master) loop should always be slower (and/or have more dead time), and the inner (slave) loop should be faster. In your application, the opposite is true. The inner slave loop, which controls the cold water flow to the air cooler, is SLOWER, because it is a heat transfer process, while the outer master loop, which sets the cold air temperature setpoint and controls the space temperature, is faster (it is a variable-dead-time air transportation process). In such applications you should not use cascade control, because the cascade configuration itself can cause the cycling. So, what should you do? Have the space temperature controller throttle the chilled water valve directly, and set the integral time of the loop to about three times the maximum dead time.
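The recommendation above (drop the cascade, drive the valve directly, integral time of roughly three times the maximum dead time) can be sketched as follows. This is a hypothetical illustration: the PI class, the gain of 2.0, and the 60-second dead time are assumed numbers, not values from the article.

```python
class PI:
    """Textbook positional PI controller (illustrative units)."""
    def __init__(self, kp, ti, dt):
        self.kp, self.ti, self.dt = kp, ti, dt
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        # Integral contribution accumulates at kp/ti per unit error per second.
        self.integral += (self.kp * self.dt / self.ti) * error
        return self.kp * error + self.integral

# Assumed: maximum transport dead time of 60 s in the air path, so the
# integral time is set to about 3 * 60 = 180 s per the rule of thumb.
max_dead_time = 60.0
controller = PI(kp=2.0, ti=3.0 * max_dead_time, dt=1.0)
output = controller.update(setpoint=22.0, measurement=20.0)
```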
The second answer, concerning slowing the scan frequency to stop cycling, is also no. In all control systems we want the measurement to be as sensitive as possible, because we want to detect changes in temperature as soon as they occur. Reducing the sampling frequency does not stabilize the loop; it just adds to its dead time and deteriorates its performance.
A: Increasing the scan time in a control loop actually adds dead time to the loop. Consider a digital clock: the time you observe is on average half a minute slow. Similarly, digital filtering reports the average value of the input at the end of the scan period, adding another half scan period of delay. The result is a total of one full scan time added to the dead time in the loop.
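The digital-clock argument can be checked numerically. The sketch below (all names illustrative) computes the average reporting delay for events occurring at uniformly spread moments within one scan period: it comes out to half the scan period, and scan-period averaging (the digital filter) contributes another half, for one full scan period of added dead time.

```python
# Average age of a sampled reading: an event at a uniformly random moment
# within a scan period T is reported at the end of that period, so it is
# on average T/2 late (like a digital clock, on average half a minute slow).
T = 2.0      # scan period, seconds (assumed for illustration)
n = 100000   # event moments spread uniformly across one period
times = [(i + 0.5) / n * T for i in range(n)]
mean_delay = sum(T - t for t in times) / n
# mean_delay comes out to T/2; averaging the input over the scan period
# adds another T/2, so roughly one full scan period T of dead time total.
```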
In a lag-dominant loop, the peak deviation following a load change increases directly with dead time, and so does the loop period, with the result that the integrated error increases with dead time squared. The effective derivative gain is the ratio of derivative time to scan time, and it should be kept around 10 for maximum effect. So the best performance requires minimum scan time.
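A small sketch of those scaling claims, in relative units (the specific numbers are illustrative assumptions, not measurements): integrated error grows as the square of dead time, and keeping the derivative-time-to-scan-time ratio near 10 caps the usable scan time.

```python
def relative_integrated_error(dead_time):
    """In a lag-dominant loop, integrated error after a load upset scales
    as (peak deviation) * (loop period), and each of those is proportional
    to the dead time, hence ~ dead_time ** 2 in relative units."""
    peak = dead_time       # peak deviation ~ dead time
    period = dead_time     # oscillation period ~ dead time
    return peak * period

# Adding one scan period of dead time hurts quadratically: doubling the
# dead time quadruples the integrated error.
ratio = relative_integrated_error(2.0) / relative_integrated_error(1.0)

# Derivative limit: the effective derivative gain is derivative time divided
# by scan time, and it should stay near 10, so the scan time should be no
# more than about one tenth of the derivative setting.
derivative_time = 30.0            # seconds (hypothetical setting)
max_scan = derivative_time / 10.0
```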
With a sampling analyzer such as a chromatograph, the loop is open between samples. The controller's scan period should then be matched to that of the analyzer, but the two must be synchronized so that control action is taken as soon as a new analysis is reported.
There is one case where matching the scan period to the process dead time is beneficial: a pure dead-time process under an integral-only controller. But if the scan period is too short, the loop will cycle uniformly. Also, the control action needs to be synchronized with any upset for maximum performance; if it is not, the result is no better than continuous PI control.
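That special case can be simulated in a few lines. This is a minimal sketch under assumed units (the 5-step dead time and the integral gain of 1 per sample are illustrative): a pure dead-time process with an integral-only controller whose scan period equals the dead time removes a load step in exactly one dead time, a deadbeat response.

```python
from collections import deque

def simulate(scan_steps, dead_steps, n_steps=60):
    """Pure dead-time process under an integral-only controller that acts
    once every scan_steps samples. With unit gain and the scan matched to
    the dead time, a load step is removed after one dead time (deadbeat)."""
    pipeline = deque([0.0] * dead_steps, maxlen=dead_steps)  # transport delay
    u, load = 0.0, 1.0        # controller output; sustained load upset
    history = []
    for k in range(n_steps):
        y = pipeline[0] + load        # measurement (setpoint is 0)
        if k % scan_steps == 0:
            u -= y                    # integral-only correction, gain 1
        pipeline.append(u)            # u reaches the measurement dead_steps later
        history.append(y)
    return history

# Scan period matched to the dead time: the error is gone after 5 steps.
matched = simulate(scan_steps=5, dead_steps=5)
```

Note that the first correction cannot show up at the measurement until one dead time has elapsed; after that, the error stays at zero, which is why synchronizing the control action with the upset matters so much in this scheme.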
In cascade control, the inner loop needs to be at least three times faster than the outer loop from process dynamics alone. Detuning the master controller or slowing its sampling will not improve control. I remember once trying to control compressor discharge pressure by manipulating flow in cascade, but pressure and flow are equally fast, so the controllers fought each other. The single pressure loop worked better.