Deadtime Dominance - Sources, Consequences and Solutions

Dec. 1, 2016

If the total loop deadtime becomes larger than the open loop time constant, you have a deadtime dominant loop. There is a lot of misunderstanding about this difficult challenge, including some prevalent myths. Here we look at how we get into this situation, what to expect and what to do.

Deadtime is the ultimate limit to loop performance. A controller cannot correct for a disturbance until it starts to see a change in the process variable and can start to change the manipulated variable (e.g., flow). If the deadtime were zero, the controller gain could be set as large as you want and the peak and integrated errors made as small as you want, provided there is no noise. The PID gain could theoretically approach infinity and lambda approach zero. Without deadtime, I would be out of a job.

Sources

Fortunately, deadtime dominant loops, where the deadtime is much larger than the open loop time constant, are usually relegated to a few loops in processes involving liquids and gases. Deadtime dominance typically occurs due to transportation delays (deadtime = volume/flow), cycle times of at-line analyzers (deadtime = 1.5 x cycle time), or improper setting of PID execution rates or wireless update rates (deadtime = 0.5 x execution or update time interval). Deadtime dominance due to transportation delay is more common in the mining industry and other processes primarily dealing with solids.
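As a quick back-of-the-envelope check, you can estimate each contribution before worrying about tuning. Here is a minimal Python sketch of the rules of thumb above (all numbers are hypothetical, for illustration only):

```python
def transport_delay(volume, flow):
    """Transportation delay = volume / flow (plug flow assumed)."""
    return volume / flow

def analyzer_deadtime(cycle_time):
    """At-line analyzer deadtime ~ 1.5 x cycle time."""
    return 1.5 * cycle_time

def update_deadtime(interval):
    """PID execution or wireless update deadtime ~ 0.5 x time interval."""
    return 0.5 * interval

print(transport_delay(2.0, 0.5))  # 4.0 min for a 2 m3 pipe at 0.5 m3/min
print(analyzer_deadtime(10.0))    # 15.0 min for a 10 min analyzer cycle
print(update_deadtime(1.0))       # 0.5 min for a 1 min wireless update
```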

Large transportation delays result from conveyors, feeders or extruders where the feed of solids is manipulated at the inlet and the quality or quantity is measured at the outlet (discharge), assuming the sensor lag is much less than the transportation delay (i.e., the total loop deadtime). The use of a gravimetric feeder can eliminate the delay for feed flow control, making a much more precise and responsive loop. A kiln has a temperature sensor lag that creates an open loop time constant usually larger than the transportation delay. However, the use of an optical sensor (e.g., pyrometer) for temperature control eliminates this lag, possibly making the loop deadtime dominant.

In vessels and pipes where there is no axial mixing (also known as back mixing, created by agitation, recirculation or sparging), there is no mixing up and down the vessel or forward and backward along the pipe. We call this plug flow. There may be radial mixing that makes the concentration or temperature in a cross section of the vessel or pipe more uniform or keeps solids dispersed. Radial mixing does not reduce transportation delays, whereas axial mixing eliminates plug flow, decreasing transportation delays. Any remaining delay with axial mixing depends upon the degree of mixing. For vessels with good geometry and axial agitators, the delay is approximately the turnover time, which is the volume divided by the sum of the agitator pumping rate, recirculation rate and feed rate.
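A minimal sketch of the turnover time estimate (hypothetical numbers):

```python
# Remaining mixing delay (turnover time) for a well-designed agitated vessel:
# delay ~ volume / (agitator pumping rate + recirculation rate + feed rate)
volume = 10.0           # m3
agitator_pumping = 4.0  # m3/min
recirculation = 1.0     # m3/min
feed = 0.5              # m3/min
print(volume / (agitator_pumping + recirculation + feed))  # ~1.8 min
```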

Deadtime dominance usually doesn't occur in temperature loops except for inline systems, such as plug flow liquid or polymer reactors, where there is negligible heat transfer lag in the equipment or sensor.

Consequences

For a deadtime dominant system, the peak error for a step load disturbance approaches the open loop error (the error if the loop were in manual). Better tuning and better feedback control algorithms cannot do much to improve the peak error. There can be a significant improvement in the integrated error from better tuning and feedback algorithms for a load disturbance, but the ultimate limit to the integrated error is the peak error multiplied by the deadtime.
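In symbols, with $E_o$ the open loop error and $\theta_d$ the total loop deadtime:

$$E_{peak} \rightarrow E_o \qquad \text{and} \qquad IE_{min} = E_{peak}\,\theta_d$$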

Solutions

The best solution is to eliminate the deadtime that is the source of deadtime dominance by better process, equipment or automation system design. The next best solution is feedforward control, given a reliable and timely measurement of the disturbance. The feedforward measurement does not need to be extremely accurate; a 20% error will still result in an 80% improvement in integrated error if the feedforward timing is right.

The correct timing cannot be achieved when the feedforward path deadtime (from the correction to the process variable) is greater than the disturbance path deadtime (from the disturbance to the process variable, i.e., the controlled variable). There is no compensation for too much deadtime in the feedforward path. A feedforward correction arriving too late must have its feedforward gain reduced. If the feedforward correction arrives after most of the feedback correction has been made, a second disturbance is created, making the response worse than with no feedforward at all. Even more disruptive is a feedforward correction that arrives too soon, due to the feedforward path deadtime being less than the disturbance path deadtime, because this creates inverse response that confuses the feedback controller. However, this situation is readily corrected by inserting a deadtime in the feedforward (FF) signal equal to the disturbance path deadtime minus the feedforward path deadtime. A lead-lag is used as well in the FF dynamic compensation, where the FF lead time is set equal to the lag in the feedforward path and the FF lag is set equal to the lag in the disturbance path, as shown in the sketch below.
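Here is a minimal discrete-time sketch of this FF dynamic compensation (a deadtime block followed by a lead-lag, backward Euler discretization; the class and variable names are my own and the parameters are placeholders):

```python
from collections import deque

class FeedforwardCompensation:
    """Deadtime block followed by a lead-lag.
    Block deadtime = disturbance path deadtime - feedforward path deadtime,
    lead = feedforward path lag, lag = disturbance path lag."""

    def __init__(self, dt, deadtime, lead, lag, gain):
        self.dt, self.lead, self.lag, self.gain = dt, lead, lag, gain
        self.pipe = deque([0.0] * int(round(deadtime / dt)))  # deadtime block
        self.u_prev = 0.0  # last delayed input
        self.y_prev = 0.0  # last lead-lag output

    def update(self, disturbance):
        self.pipe.append(disturbance)
        u = self.pipe.popleft()  # disturbance measurement, delayed
        # Lead-lag: T_lag*dy/dt + y = T_lead*du/dt + u (backward Euler)
        y = (self.lag * self.y_prev + self.lead * (u - self.u_prev)
             + self.dt * u) / (self.lag + self.dt)
        self.u_prev, self.y_prev = u, y
        return self.gain * y
```

The output would typically be added to the PID output, with the FF gain sized (and signed) to cancel the disturbance's effect on the process variable.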

The solution cited for deadtime dominant loops is often a Smith Predictor deadtime compensator (DTC) or Model Predictive Control (MPC). There are many counterintuitive aspects in these solutions. Not realized is that the improvement by the DTC or MPC is less for deadtime dominant systems than for lag dominant systems. Much more problematic is that both DTC and MPC are extremely sensitive to a mismatch between the compensator or model deadtime and the actual total loop deadtime, for an overestimate as well as an underestimate. The consequences for the DTC and MPC are much greater for an overestimate. For a conventional PID, an overestimate of the deadtime just results in more robustness and slower control. For a DTC and MPC, an overestimate of deadtime by as little as 25% can cause a big increase in integrated error and an erratic response.

A better general solution, first advocated by Shinskey and now particularly by me, is to simply insert a deadtime block in the PID external reset feedback path (PID+TD), with the block deadtime updated to always be slightly less than the actual total loop deadtime. Turning external reset feedback (e.g., dynamic reset limit) on and off enables and disables the deadtime compensation. Note that for transportation delays, this means updating the deadtime as the total feed rate or volume changes. This PID+TD implementation does not require identification of the open loop gain and open loop time constant, as is required for a DTC or MPC. Please note that the external reset feedback should be the result of a positive feedback implementation of integral action, sketched below.
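A minimal sketch of that positive feedback implementation with the deadtime block in the external reset path (names are my own; output limits and other practical details are omitted):

```python
from collections import deque

class PIPlusTD:
    """PI with positive feedback integral action and a deadtime block in the
    external reset feedback path. Set the block deadtime slightly less than
    the total loop deadtime."""

    def __init__(self, dt, kc, ti, deadtime):
        self.dt, self.kc, self.ti = dt, kc, ti
        self.pipe = deque([0.0] * int(round(deadtime / dt)))
        self.filt = 0.0  # lag of the reset feedback = integral contribution

    def update(self, sp, pv):
        u = self.kc * (sp - pv) + self.filt
        self.pipe.append(u)             # deadtime block on the
        reset_fb = self.pipe.popleft()  # external reset feedback signal
        # Positive feedback lag with time constant = integral time; with
        # zero block deadtime this reduces to a conventional PI.
        self.filt += (self.dt / self.ti) * (reset_fb - self.filt)
        return u
```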

There will be no improvement from a deadtime compensator if the PID tuning settings are left the same as they were before adding the DTC or the deadtime block in external reset feedback (PID+TD). In fact, the performance can be slightly worse even for an accurate deadtime. You need to greatly decrease the PID integral time toward a limit of the execution time plus any error in the deadtime, and also increase the PID gain. The conventional equation for predicting integrated error as a function of PID gain and reset time settings is no longer applicable because it would predict an error less than the ultimate limit, which is not possible. The integrated error cannot be less than the peak error multiplied by the deadtime. The ultimate limit is still present because we are not making the deadtime disappear.
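For reference, a common form of that prediction for a PI controller is shown below, with $\Delta u$ the shift in PID output needed to reject the load; whatever it predicts, the integrated error is floored at the ultimate limit:

$$IE = \frac{\Delta u \, T_i}{K_c} \qquad \text{subject to} \qquad IE \ge E_{peak}\,\theta_d$$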

If the deadtime is due to an analyzer cycle time or wireless update rate, we can use an enhanced PID (e.g., PIDPlus) to effectively prevent the PID from responding between updates. If the open loop response is deadtime dominant mostly due to the analyzer or wireless device, a new error seen at an update results in a correction proportional to the PID gain multiplied by the open loop error. If the PID gain is set equal to the inverse of the open loop gain for a self-regulating process, the correction is perfect and takes care of the step disturbance in a single execution after an update of the PID process variable. The integral time should be set smaller than expected (about equal to the total loop deadtime, which ends up being the time interval between updates), and the positive feedback implementation of integral action must be used with external reset feedback enabled. The enhanced PID greatly simplifies tuning besides putting the integrated error close to its ultimate limit. Note that you do not see the true error, which can have started at any time between updates, but only the error measured after the update.
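A minimal sketch of the enhanced PID idea for a PI controller (this is my simplified rendering, not the vendor's implementation; output limits and derivative action are omitted):

```python
import math

class EnhancedPI:
    """PI that only reacts when a fresh measurement arrives; the positive
    feedback integral filter is evaluated over the elapsed time between
    updates so reset action is suspended while the measurement is stale."""

    def __init__(self, kc, ti):
        self.kc, self.ti = kc, ti
        self.filt = 0.0  # integral contribution (lag of reset feedback)
        self.u = 0.0     # last output, held between updates

    def update(self, sp, pv, elapsed, fresh):
        if fresh:  # new analyzer result or wireless update
            # Advance the reset lag by the elapsed time since the last update.
            self.filt += (self.u - self.filt) * (1.0 - math.exp(-elapsed / self.ti))
            self.u = self.kc * (sp - pv) + self.filt
        return self.u
```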

For more on the sensitivity to both under and over estimates of the total loop deadtime and open loop time constant, see the ISA books “Models Unleashed” pages 56-70 for MPC and “Good Tuning: A Pocket Guide, 4th Edition” pages 118-122 for DTC. For more on the enhanced PID, see the July-Aug 2010 InTech article “Wireless: Overcoming challenges in PID control & analyzer applications” and the July 6, 2015 Control Talk Blog “Batch and Continuous Control with At-Line and Offline Analyzers Tips”.

If you can make the deadtime zero, let me know so I can retire.

About the Author

Greg McMillan | Columnist

Greg K. McMillan captures the wisdom of talented leaders in process control and adds his perspective based on more than 50 years of experience, cartoons by Ted Williams and Top 10 lists.