I have a cascade control system with the master (outer) loop controlling the space temperature by adjusting the setpoint of a slave (inner loop), which controls the cold air supply temperature by throttling a chilled water flow to the air cooler.
My question is this: When should you "slow down" a PID loop by increasing the loop update time (in the case of our controller, the task scan rate) so that the loop executes less often? Can you compare and contrast this with so-called detuning, where, for example, the integral term is made less assertive? This is a really gray area for us in terms of our understanding. In fact, we're not even sure we are using the right terminology to ask the question. When does the first approach (less frequent scanning/updating) benefit you in terms of reducing oscillation, or does it ever help with that?
Here are some other common terms/topics that I keep seeing used, but don't fully understand. I've been trying for years to refine my grasp of this stuff, and certain things remain gray:
Filtering: This gets mentioned a lot, and I have always been fuzzy on when to do this, or how. I actually saved a statement from Mr. Shinskey, warning about using filtering ("Filtering is counterproductive to control, and therefore the minimum amount of filtering is usually best in a control loop."). I wish I had a clear, concise and simple guideline and "how-to" on this.
Detuning: I keep running into this term in articles. I have a vague idea that it means lowering the gains, but this term gets used without explanation, as if I know exactly what it is, and I don't.
Analog Sample Rate: For years I have been trying to grasp why the loop update has to be five to ten times slower than the analog sampling. I know it has to do with the timing/spacing of the samples, but I cannot grasp why it matters. I liked your digital clock example. Perhaps you can use something like that to help me with this topic/concept.
A: You might not realize it, but you have asked two questions: Is my cascade control loop correctly configured, and can I eliminate cycling by increasing scan time (lowering the frequency of updating the temperature measurement)?
My answer is no to both questions.
The first is no, because (if I understand your cascade loop correctly), your slave loop controls the air cooler outlet temperature, while the master loop controls the temperature in the conditioned space. If that is the case, this is not a cascade application!
A cascade control system divides the process into two parts: the outer (master) loop should always be slower (and/or have more dead time), and the inner (slave) loop should be faster. In your case, the opposite is true. Your inner slave loop, which throttles the chilled water flow to the air cooler, is SLOWER, because it is a heat transfer process, while the outer master loop, which sets the cold air temperature setpoint and controls the space temperature, is fast (a variable-dead-time air transportation process). In such applications you should not use cascade control; the cascade configuration itself can cause the cycling. So, what should you do? Have the space temperature controller throttle the chilled water valve directly, and set the integral time of the loop to about three times the maximum dead time.
The second answer concerning the slowing of the scan frequency to stop cycling is also no. In all control systems, we want the measurement to be as sensitive as possible, because we want to detect the changes in temperature as soon as they occur. Reducing the sampling frequency does not stabilize the loop, it just adds to its dead time and deteriorates its performance.
A: Increasing scan time in a control loop actually adds dead time to the loop. Look at a digital clock—the time you observe is on average 1/2 minute slow. And digital filtering reports the average value of the input at the end of the scan period, adding another 1/2 scan period to the loop. The result is a total of 1 scan time added to the dead time in the loop.
In a lag-dominant loop, peak deviation following a load change increases directly with dead time, and so does loop period, with the result that integrated error increases with dead time squared. Derivative gain varies with derivative time/scan time, and it should be kept around 10 for maximum effect. So the best performance requires minimum scan time.
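To make the arithmetic above concrete, here is a small sketch (my own, with made-up numbers, not from the column) of how the scan period inflates apparent dead time, and how integrated error then grows with dead time squared in a lag-dominant loop:

```python
# Back-of-the-envelope illustration of the 1/2-scan sampling delay and the
# 1/2-scan averaging-filter delay described above. Numbers are invented.

def effective_dead_time(process_dead_time, scan_period, averaging_filter=True):
    """Apparent loop dead time once sampling (and optional averaging) delays are added."""
    extra = 0.5 * scan_period               # sampled value is, on average, 1/2 scan old
    if averaging_filter:
        extra += 0.5 * scan_period          # averaging over the scan adds another 1/2 scan
    return process_dead_time + extra

def integrated_error_ratio(td_old, td_new):
    """In a lag-dominant loop, integrated error grows roughly with dead time squared."""
    return (td_new / td_old) ** 2

td0 = effective_dead_time(2.0, 0.5)   # 2 s process dead time, 0.5 s scan -> 2.5 s
td1 = effective_dead_time(2.0, 4.0)   # slowing the scan to 4 s -> 6.0 s
print(td0, td1, integrated_error_ratio(td0, td1))  # dead time 2.4x, integrated error ~5.8x
```

Slowing the scan from 0.5 s to 4 s in this made-up example more than doubles the apparent dead time and nearly sextuples the integrated error.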
With a sampling analyzer like a chromatograph, the loop is open between samples. Then the scan period should be matched to that of the analyzer, but they must be synchronized so that control action is taken as soon as a new analysis is reported.
There is one case where matching scan period to process dead time is beneficial: for a pure dead-time process using an integral-only controller. But if the scan period is too short, the loop will cycle uniformly. Also, the control action needs to be synchronized with any upset for maximum performance—if not, the result is no better than continuous PI control.
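A quick numerical illustration of that special case (my own sketch, not from the column): a pure dead-time process under integral-only control, with the scan period matched to the dead time. Each scan, the controller adds e/Kp to its output; one dead time later, the error is gone (a deadbeat response):

```python
# Pure dead-time process modeled as a one-sample delay (one scan = one dead
# time). Process gain kp and setpoint sp are invented for the demo.

def simulate(sp=1.0, kp=2.0, n_scans=6):
    u, y, history = 0.0, 0.0, []
    for _ in range(n_scans):
        y = kp * u            # output = gain times the controller move one dead time ago
        e = sp - y
        u = u + e / kp        # integral-only move, sized by the process gain
        history.append(y)
    return history

print(simulate())  # [0.0, 1.0, 1.0, 1.0, 1.0, 1.0] -- on setpoint after one dead time
```

If the scan were shorter than the dead time, the controller would keep integrating before seeing the effect of its first move, which is the uniform cycling warned about above.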
In cascade control, the inner loop needs to be at least three times faster than the outer loop from process dynamics alone. Detuning the master controller or slowing its sampling will not improve control. I remember once trying to control compressor discharge pressure by manipulating flow in cascade, but pressure and flow are equally fast, so the controllers fought each other. The single pressure loop worked better.
Exothermic reactor outlet temperature needs to be controlled in cascade, manipulating coolant exit temperature for a stirred-tank reactor, or coolant inlet temperature for a once-through reactor. These were tested extensively in the field.
The reason for filtering in a control loop is to reduce valve wear, and my advice from a performance standpoint is that, the less filtering, the better performance in response to load changes.
There is no benefit in executing a controller faster than its process variable (PV) is updated, because the loop is open during the PV scan interval; the controller execution should instead be synchronized with the PV scan. Some parameters change together with flow, in which case robustness should be measured against fractional flow changes.
There is no definition of "detuning" except as the opposite of tuning. It refers to reducing loop gain below the optimum to add stability and increase robustness. Robustness is defined as the fractional change in any process parameter such as gain or dead time that will cause a loop to oscillate uniformly. Detuning is used to reduce loop interaction and to increase robustness. Integrated error increases directly with the product of proportional band and integral time, and detuning raises both. Interaction is best minimized by proper loop configuration, using relative gain analysis (RGA) as a guide, and partial decoupling where necessary. Robustness is achieved without sacrificing performance with proper valve characterization, and gain scheduling where necessary.
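The integrated-error relationship stated above can be sketched numerically (my own illustration, with invented numbers): for a PI controller, integrated error after a load change is proportional to the product of proportional band and integral time, so detuning either setting degrades performance in direct proportion.

```python
# IE = dCO * (PB/100) * Ti, per the proportionality stated above. The load
# step, band, and reset values below are made up for the demo.

def integrated_error(load_step_pct, pb_pct, ti_sec):
    """Integrated error for a load change requiring load_step_pct of controller output."""
    return load_step_pct * (pb_pct / 100.0) * ti_sec

tuned   = integrated_error(10.0, pb_pct=100.0, ti_sec=30.0)   # 300 %-seconds
detuned = integrated_error(10.0, pb_pct=200.0, ti_sec=60.0)   # both settings doubled
print(detuned / tuned)  # 4.0 -- detuning both settings quadruples integrated error
```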
I'm not sure what "analog sample rate" means, except in digital filtering. A digital filter may average data over one scan period, or up to five for more smoothing. Just remember the delay associated with the average value it is reporting.
A: Increasing the scan time will add dead time (1/2 of scan time) and increase the integrated error for unmeasured disturbances. The increase in scan time will not normally change the integral contribution over a period of many scans. For fast loops where the existing loop dead time is rather small, increasing the dead time via an increase in scan time could make the oscillations significantly worse if they are due to disturbances.
Be careful about the use of scan rate versus scan time. Scan rate can be mistaken to be frequency of updates rather than time between updates.
There is a slim chance the oscillations are due to noise or analyzer cycle time. Increasing the scan time will improve the signal-to-noise ratio, helping the loop deal with disturbances. If there is a large analyzer cycle time, synchronizing the scan time with the cycle time will improve control and perhaps eliminate oscillations from the analyzer cycle time.
The simple solution is to increase the reset time. For integrating process, the product of the controller gain and reset time (sec) must be larger than twice the inverse of the integrating process gain (%PV/sec/%Out) to prevent slow rolling oscillations. Adding a setpoint rate limit in the analog output block and using external-reset feedback with the positive feedback implementation of integral can do the same thing (slow down the integral mode) without retuning. However, your PID controller may not have this option, and you need to be careful about the connection of the back-calculate signals associated with external-reset feedback.
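The slow-rolling-oscillation rule quoted above is easy to check in code (a hedged sketch; the function and variable names are mine): for an integrating process, require Kc × Ti > 2 / Ki, where Ki is the integrating process gain in %PV/sec per %output.

```python
# Check the reset-time rule against slow rolling oscillations on an
# integrating process. The example Ki value below is invented.

def reset_time_ok(kc, ti_sec, ki):
    """True if Kc*Ti clears the 2/Ki threshold stated above."""
    return kc * ti_sec > 2.0 / ki

ki = 0.001                                           # %PV/sec per %output
print(reset_time_ok(kc=1.0, ti_sec=3000.0, ki=ki))   # True  (3000 > 2000)
print(reset_time_ok(kc=1.0, ti_sec=1000.0, ki=ki))   # False (1000 < 2000)
```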
If the oscillations are due to a limit cycle from valve backlash or stick-slip, increasing the reset time will increase the period (slow the oscillations down). If you have an integral dead band option in your PID, setting the integral dead band equal to the amplitude of the oscillations should stop integral action, stopping the limit cycle. If integral action is used in the positioner, integral dead band must be set there as well. Alternately, if you turn on external-reset feedback with the positive feedback implementation of integral action and for the back calculate signal use a fast read back of actual valve position for direct manipulation of a control valve or use the secondary loop PV for cascade control, you can stop the oscillations from backlash, stick-slip or a slow valve or slow secondary loop.
A: Dead time, transportation delay, sampling time and/or transmission delay are different words for the same phenomenon. When dead time is part of a feedback control loop, it makes the performance poorer. Dead time, by whatever name, is significant in relation to the principal time constant or period of the process or instrumentalities being controlled. For good performance it should be less than 2% of the time-constant or period. Greater delays will force reduced response performance.
Some signal delays can be anticipated by placing a signal sensor closer to the source of disturbance and computing the arrival of the disturbance at the control loop using a known or measured disturbance velocity. The signal so derived enters the control scheme via a feed-forward path and permits performance less hampered by dead time in at least one part of the control scheme.
There is no way to compensate for sampling time delay within the control loop, other than to reduce the sampling time by increasing the sampling frequency.
In process control loops, the sampling time of the controller should never be an issue. It should be less than 1% of the characteristic time of the control loop. When it is more, the wrong controller is in use and should be replaced by a more suitable device. The last thing you want is a controller that makes things worse. Instruments earn their keep by making things better.
Otto Muller-Girard, PE
A: It is a good idea to slow down the scan time for slow moving controllers. When the reset is more than five minutes, in many cases the controller does not need to run every second or even every 10 seconds. A common rule of thumb is that the scan time for a controller should be at least 10 times faster than the reset in minutes/repeat or rate in minutes. In my experience, the scan time almost never has to be more than 30 times faster than the reset.
You have a choice of adjusting the scan time or the PV filter time. When the scan time is kept within that 10-to-30-times-faster range relative to the reset, the PV filter time can be kept in a more typical range.
For example, if the reset is 10 minutes/repeat, then 30 times faster translates to a 20-second scan time, which is the fastest you would ever need, and 10 times faster equates to a 1-minute scan time, which is the slowest you should allow.
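That rule of thumb can be written as a tiny helper (my own sketch; the unit conventions are mine): scan the controller somewhere between 10 and 30 times faster than the reset time.

```python
# Scan-time window for a slow controller, per the 10x-to-30x rule above.

def scan_time_window(reset_minutes):
    """Return (fastest_needed_sec, slowest_allowed_sec) for a given reset time."""
    reset_sec = reset_minutes * 60.0
    return reset_sec / 30.0, reset_sec / 10.0

fastest, slowest = scan_time_window(10.0)
print(fastest, slowest)  # 20.0 60.0 -- scan somewhere between 20 s and 1 minute
```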
A: Scan should be fast enough to capture pertinent information…
If too long, dead time will be increased and loop performance will degrade; you will have to detune the loop because the dead time now appears longer. If too short, it is useless and demands too much time from the CPU.
As a rule of thumb, for control loops tuned to reject disturbances, scan time should be around one-tenth of dead time. That being said, if the loop is tuned sluggishly, there is no need to use fast scan time.
Michel Ruel, P.Eng.
A: Increasing scan rate may improve stability; it won't hurt. It all depends on the time constants. Detuning will always reduce quality of control. And if dead time dominates the control loop, you must detune to improve stability.
Be careful to tune the process, not the valve.
The old rule of thumb said that when you used sampled data, you had to sample five or ten times faster than the loop time constant to "keep in touch" with the situation. Lengthening the sample interval can be a bad thing, but there is no magic number. What we fear is called "aliasing": when the sample period falls close to the period of a cycle in the process (or an integer multiple of it), the samples catch only the peaks or valleys of the cycles, fail to show the real response for a while, then drift between the peaks and valleys, presenting a very confused image of the process dynamics. The cure is to sample frequently enough.
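A numerical sketch of that aliasing effect (my own example, not from the column): a process cycling with a 10-second period, sampled every 9 seconds, traces out a false, slow 90-second cycle instead of the real one.

```python
import math

# Sample a 10 s cycle every 9 s and compare with the false 90 s cycle the
# samples actually trace. All numbers are invented for the demo.

def sample_cycle(period=10.0, sample_interval=9.0, n=11):
    """Sample a unit sinusoid of the given period at the given interval."""
    return [math.sin(2 * math.pi * k * sample_interval / period) for k in range(n)]

samples = sample_cycle()
# The samples coincide, point for point, with a sinusoid of 90 s period:
alias = [-math.sin(2 * math.pi * k * 9.0 / 90.0) for k in range(11)]
print(max(abs(s - a) for s, a in zip(samples, alias)))  # essentially zero
```

An operator trending these samples would see a slow 90-second "upset" that does not exist in the process at all.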
Back to master/slave loops. Forgive me if this is elementary for you, but I want to go back to the basics for fear we could be thinking of different things.
I once worked in a plant where the internal temperature of jacketed kettles had to be closely controlled. Any change in the source of heating or cooling (steam and cold water) would affect the kettle temperature, and the system would never settle down. The slave loop has the duty of quickly stabilizing the source of heating or cooling and insulating the slow master controller from the faster upsets. It is usually far faster than the master loop (tens of seconds versus tens of minutes) and should be tuned for fast response. The master (vessel temperature) controller's output signal is used as the setpoint for the secondary (jacket inlet temperature) loop. The sampling rate for the master controller has to be frequent enough to maintain good control.
"Detuning" in my world means reducing gain and lengthening reset time (integral) to decrease response. The penalty is obviously sluggish response and poor control. But the charts or display might look nice.
Some time ago, while developing the ISA Standard 75.25 on control valve response, I worked up a computer simulation of a simple level control loop using a less-than-perfect control valve, while subjecting the loop to a forced upset. For each valve I made a number of runs, starting with the controller set at a low gain and increasing the gain run by run to a high value. Integrated error was computed for each run. This was repeated for another, less perfect valve, and so on.
The 3D plot of process error plotted against valve response and controller gain was very interesting. The difference in total error for the perfect valve versus the less than perfect valve was huge. For each valve, the minimum in the error curve showed where the controller gain was optimum for the loop for that valve. These error minimums varied a great deal. The increased error resulted from the requirement to reduce controller gain to stop cycling. A few percent difference in valve precision had a much greater impact on the quality of control than you might expect. This required a patient computer.
A: Sampling in control systems takes a whole book and at least one semester at the university.
However, that will not give you a useful answer to your current needs, so let me start by saying that "scan rate" is not, and should not be, a tuning parameter, except in cases where you are using an analyzer that provides a result only once per sample cycle. And from your question, that doesn't seem to be your case.
Second, dead time is the worst enemy of a control loop, and sampling increases the dead time by half of the sample time.
There are fast loops and slow loops. The fast ones, such as liquid flow or liquid pressure control, can have a combined dead time and lag time of about 1 to 2 seconds, so a sample time of 1 second or more can make the loop dead-time dominant, which imposes a severe limitation on the controller gain and therefore on its performance. On top of that, the controller's timing-based modes (integral, derivative, filters) are undesirably affected by the sampling time.
And third, given the power of current computational technology, running all your control loops at a fast rate, say 500 milliseconds or quicker, will make pretty much all of the problems associated with sampling in process control disappear.
As for your questions: Executing the loops less often is in general a bad practice from the control point of view, as you are only increasing the apparent dead time, which will call for a detuning of your controller anyway.
Slowing the execution time of a loop does not reduce the oscillation of any loop; it will worsen it.
A: First, you will NEVER change the dead time of a process loop simply by changing the tuning parameters of the controller. Dead time is a function both of how soon the process responds to changes in the manipulated variable and of how soon the measurement sensor is able to recognize such a change.
Unfortunately, the traditional PID control loop is inadequate to respond properly to dead time: it is a reactive response, reacting only to changes in error (SP - PV), rather than a pro-active one. So you can fuss over scan rate or integral action all you want; both are reactive solutions, and both just slow down the effective response.
So, de-tuning a process control loop just slows down the controller response to match the natural response of the process.
A better strategy is to consider some pro-active solutions:
What is causing the dead time? Take time to understand the process. Dead time is the time from when the input to the process changes to when the change is detected. Often, simply relocating the measurement sensor can significantly reduce the dead time.
The classical pro-active solution for dead time is the Smith Predictor, an addition to the PID algorithm where you predict the behavior of dead time response and configure this into your solution. Unfortunately, not many control algorithms offer a Smith Predictor solution.
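When a control system does not offer a Smith Predictor block, the structure can be sketched in a few lines. Below is a minimal illustration (my own, not a vendor algorithm): a first-order-plus-dead-time process, a PI controller, and an internal model whose undelayed prediction is fed back so the PI effectively "sees" a dead-time-free process. All parameter values are invented for the demo.

```python
from collections import deque

KP, TAU, TD, DT = 1.0, 10.0, 5.0, 1.0   # process gain, lag (s), dead time (s), step (s)
KC, TI = 2.0, 10.0                      # PI tuning chosen for the dead-time-free lag

def run(n_steps=300, sp=1.0):
    y = ym = integ = u = 0.0
    u_pipe  = deque([0.0] * int(TD / DT))   # delays u on its way into the real process
    ym_pipe = deque([0.0] * int(TD / DT))   # delays the model output by one dead time
    for _ in range(n_steps):
        y += DT / TAU * (KP * u_pipe.popleft() - y)   # real process, delayed input
        ym += DT / TAU * (KP * u - ym)                # internal model, no dead time
        ymd = ym_pipe.popleft()                       # internal model, with dead time
        fb = y + (ym - ymd)     # Smith feedback: measurement plus model correction
        e = sp - fb
        integ += e * DT / TI
        u = KC * (e + integ)                          # PI move on the "fast" feedback
        u_pipe.append(u)
        ym_pipe.append(ym)
    return y

print(run())  # settles close to the 1.0 setpoint despite the 5 s dead time
```

With a perfect model, (y - ymd) cancels and the PI is tuned against the dead-time-free lag; the usual caveat applies that performance degrades as the model drifts from the real process.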
Configure a feed-forward solution. Here, assume you predicted a response to change to a controlled variable or change in load. You can often collect this data from historical logs from your control system. Now, include this in your response. Also here, most PLCs or DCSs offer no pre-configured solutions. But, if you know and understand the cause and effect relationship, feed-forward is as simple as including an X-Y characterizer function.
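The X-Y characterizer mentioned above is simple enough to sketch directly (my own minimal version; the table values are invented): a piecewise-linear map from a measured load to a feedforward bias that is simply added to the PID output.

```python
import bisect

X = [0.0, 25.0, 50.0, 75.0, 100.0]   # measured load, %
Y = [0.0, 10.0, 30.0, 60.0, 100.0]   # feedforward bias, % of output

def characterize(x):
    """Piecewise-linear interpolation over the X-Y table, clamped at the ends."""
    if x <= X[0]:
        return Y[0]
    if x >= X[-1]:
        return Y[-1]
    i = bisect.bisect_right(X, x) - 1
    frac = (x - X[i]) / (X[i + 1] - X[i])
    return Y[i] + frac * (Y[i + 1] - Y[i])

pid_output = 42.0                        # whatever the PID computed this scan
total = pid_output + characterize(60.0)  # feedforward bias at 60% load is 42%
print(total)
```

The table values come straight from the historical cause-and-effect data the answer recommends collecting.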
For more complex processes, consider a model-predictive control (MPC) solution. Often, this requires purchase of specific MPC solution software. Depending on cost, this might be the optimal solution.
Should you like to learn more, please call or email with questions or comments.
A: "PID loop" is a misnomer we all use. PID is one block in a system diagram. The controller generates a signal based on the deviation from setpoint, but the stability of a control loop depends on the overall transfer function between setpoint and process variable. You can have transportation lag or process dead time (use dead-time compensation). The process may not have any dead time, yet the control element (valve, heater, etc.) may be the source. Knowledge of each element in the loop is needed; PID may not even be the best approach. If you have a process with dead time, you want to compensate for it in the control action.
Here again, a poorly designed process will be hard to control. I bet that if the questioner drew out the block diagram, the question would be easier to solve, or even to test by simulating the overall process with simple differential equations.
Control valves are the most confusing. They can have dead time and different reaction curves. What is needed is the step response of the measured process variable to the control action with the loop open (no PID at all). I would start with the process flow sheet and instrument diagram, plus any trended data, to help draw the loop, and then build the equations from that.
Another way of answering this question is to write out the PID algorithm. You will then see that the "error term" enters three times: the controller output is proportional to the error, to the integral of the error, and to the derivative of the error. Thus the time dynamics of the error term govern the stability and dictate the tuning.
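Writing the algorithm out makes that point concrete (a textbook positional PID, my own sketch; the tuning numbers are arbitrary): the error e = SP - PV appears in all three terms.

```python
# Positional PID: one call computes one controller output from the error,
# its running integral, and its rate of change.

def pid_step(sp, pv, state, kc=1.0, ti=30.0, td=5.0, dt=1.0):
    e = sp - pv
    state["integral"] += e * dt / ti             # integral of the error
    deriv = (e - state["last_e"]) / dt * td      # derivative of the error
    state["last_e"] = e
    return kc * (e + state["integral"] + deriv)  # all three terms scaled by gain

state = {"integral": 0.0, "last_e": 0.0}
print(pid_step(sp=50.0, pv=45.0, state=state))   # first output move for a 5% error
```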