This Control Talk column appeared in the October 2020 print edition of Control.
Greg: This Control Talk concludes my conversation, begun last month, with Héctor Henry Torres, principal control systems engineer for Eastman Chemical Company, on assessing productivity with statistical tools to identify improvement opportunities, and on control techniques for improving process performance. This material and much more originated from the chapter on “Improving Process Performance” in the Process/Industrial Instruments and Controls Handbook Sixth Edition (McGraw-Hill, 2019).
Héctor, once process performance has been assessed, what is the next step in improving the process?
Héctor: Once process performance has been assessed and the process performance indices have been calculated, the next step is to examine the process output tendency. That is, where is the mean located versus the intended target? How much variability exists around the actual process output? Is the variability sufficiently small to tolerate? Is the process consistent over time?
If both the lower and upper specification limits have equally important performance implications, center the process and reduce the special causes of variability to avoid violations of specifications.
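To make the assessment arithmetic concrete, here is a minimal Python sketch of the performance indices and the offset of the mean from target. The specification limits, target and data are hypothetical, not from Héctor's examples:

```python
import numpy as np

def performance_indices(data, lsl, usl, target):
    # Overall mean and long-term (sample) standard deviation
    mean = np.mean(data)
    sigma = np.std(data, ddof=1)
    pp = (usl - lsl) / (6 * sigma)                   # potential performance
    ppk = min(usl - mean, mean - lsl) / (3 * sigma)  # penalizes an off-center mean
    return {"mean": mean, "offset": mean - target, "Pp": pp, "Ppk": ppk}

# Hypothetical example: temperature spec 148-152 degC, target 150 degC
rng = np.random.default_rng(1)
temps = rng.normal(150.7, 0.5, 500)  # mean sits above target
print(performance_indices(temps, lsl=148.0, usl=152.0, target=150.0))
```

A Ppk noticeably lower than Pp points to a centering problem rather than excessive variability, which is exactly the distinction Héctor draws here.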
Greg: Could you please describe some methods to center the process?
Héctor: Start by understanding why the specific quality characteristic is not on target. The reason could be as simple as a decision made by the operators; they could be in a comfort zone as long as the process is within specs. In other cases, the operators are unhappy with the control loop performance and prefer to run it in manual, leaving the controller’s output constant. The process might run within specs for some time while not at setpoint and, consequently, remain off-target. If the process is later subjected to disturbances while the PID controller is in manual, the process conditions could change enough to drive the system out of spec.
There are also process variables that are measured but not controlled. These variables can induce disturbances that drive the process and control loops off target and even out of specification. Different statistical tools can determine the relationships between process inputs and process outputs. Through statistical analysis, potential process inputs affecting the output are validated using graphical analyses, analysis of variance (ANOVA) studies, hypothesis tests and regressions. The final list of potential key inputs is then tested in an orderly manner using design of experiments (DOE). The objective is to find the mathematical model of the inputs that explains the process response, and to determine the optimum configuration to ensure the controlled process output is robust to upsets over the long term:
Y = f(x1, x2, ..., xk)

where Y is the controlled process output (e.u.) and x1 through xk are the manipulated process inputs (e.u.).
In some cases, the mathematical model is far from perfect, but the statistical and graphical analysis can still reveal at what level a given process variable improves stability, and even at what level it keeps quality parameters in spec. This sets the direction and strategy of operation by identifying exactly which process inputs to manipulate to keep a process output at target, and which process inputs could be used as feedforward signals.
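As an illustration of the screening step, here is a minimal Python sketch that fits a linear model of the output against candidate inputs and ranks them by statistical significance. The tag names, coefficients and noise levels are hypothetical, and a simple ordinary-least-squares fit stands in for the fuller graphical/ANOVA workflow Héctor describes:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical historian export: candidate inputs and a quality output
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "feed_rate":   rng.normal(100.0, 5.0, 200),
    "jacket_temp": rng.normal(80.0, 2.0, 200),
    "feed_conc":   rng.normal(0.50, 0.05, 200),
})
df["quality"] = 2.0*df["jacket_temp"] - 0.8*df["feed_rate"] + rng.normal(0, 3, 200)

X = sm.add_constant(df[["feed_rate", "jacket_temp", "feed_conc"]])
fit = sm.OLS(df["quality"], X).fit()

# Low p-values flag inputs that plausibly drive the output; the survivors
# then go into a designed experiment (DOE) to confirm the model
print(fit.pvalues.sort_values())
print("R-squared:", fit.rsquared)
```

As Héctor notes, the regression only suggests candidates; historian data are rarely well conditioned, so the final model should come from DOE.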
By identifying the key disturbance variables, one can set up a feedforward arrangement to help the PID preemptively adjust the process should a disturbance be observed entering the loop. I recommend that practitioners read your post, “Feedforward Control Enables Flexible, Sustainable Manufacturing,” (blog.isa.org/feedforward-control-enables-flexible-sustainable-manufacturing), which provides great insights into how to set up feedforward opportunities and techniques.
Greg: Héctor, how should one deal with process dead time and process lags when conducting a statistical analysis?
Héctor: That is a very good question, Greg. A change in an input to a process might not have an immediate effect on the measured process output, and statistical analysis does not inherently account for the delayed and exponential time response of variables. Process knowledge is very important here; you can approach the process engineer to get an idea, or you can identify the process parameters using auto-tuning software to understand the system dynamics, including measurement and control valve response, and process dynamics including transport and mixing delays and heat transfer lags. When constructing your database, ensure it pairs each input with output data taken after the effect of that input is fully reflected in the process. There are statistical packages that allow you to eliminate or minimize the effect of dead time while analyzing your data.
Depending on the process variable I am interested in, I normally gather data using averages whose period can range from minutes to hours, depending on the span of the analysis I am to perform: a shift, a week, or a stretch of months. I then remove periods where the process is down or starting up. Working with averages helps minimize the effect of dead time and lags when one is trying to understand relationships. Fine correlations and reliable mathematical models require DOE.
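Here is a brief sketch of both ideas, shifting an input by the identified dead time and averaging over longer periods, using pandas. The tag names, the one-minute sample period and the five-minute dead time are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical one-minute historian data for one day
idx = pd.date_range("2020-10-01", periods=1440, freq="1min")
rng = np.random.default_rng(3)
df = pd.DataFrame({"feed_rate": rng.normal(100.0, 5.0, len(idx)),
                   "quality":   rng.normal(95.0, 1.0, len(idx))}, index=idx)

DEAD_TIME_MIN = 5  # e.g., from step tests or auto-tuner identification
# Pair the output at time t with the input at time t - dead time
df["feed_rate_aligned"] = df["feed_rate"].shift(DEAD_TIME_MIN)

# Hourly averages wash out much of the remaining dead time and lag;
# downtime and startup periods should be dropped before this step
hourly = df.resample("1H").mean()
print(hourly[["feed_rate_aligned", "quality"]].corr())
```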
Greg: Could you please describe some methods to reduce variability?
Héctor: Special causes of variation can result from overshoot of manipulated variables, unexpected changes in rates, and poor control loop tuning and loop interaction.
When there is a change in the setpoint, the manipulated variable and controlled variable can exhibit overshoot that increases process variability. Overshoot and oscillation can be removed by retuning the PID controller. If there is no opportunity to perform an open-loop step-change test to determine the process dynamics, or to run an auto-tuner, a quick temporary improvement is a setpoint filter with a time constant equal to the PID integral time, or a limit on the setpoint rate of change. A more flexible and precise solution for meeting different setpoint-response objectives is the “2 degrees of freedom” (2DOF) PID structure, which achieves the desired response to setpoint changes by moderating the contribution of the proportional and derivative modes to the PID output.
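A minimal discrete-time sketch of both remedies might look like the following. The gains, times and weights are hypothetical; b and c are the 2DOF setpoint weights on the proportional and derivative modes:

```python
# Minimal 2DOF PID sketch: setpoint weights b (proportional) and c
# (derivative) moderate the reaction to setpoint changes, while the
# integral mode acts on the full error so there is no offset.
def pid_2dof(sp, pv, state, Kc=1.0, Ti=30.0, Td=0.0, b=0.4, c=0.0, dt=1.0):
    p = Kc * (b * sp - pv)                  # weighted proportional mode
    state["i"] += Kc * (sp - pv) * dt / Ti  # integral mode on full error
    d_in = c * sp - pv
    d = Kc * Td * (d_in - state["d_prev"]) / dt
    state["d_prev"] = d_in
    return p + state["i"] + d

# Quick temporary fix: first-order setpoint filter, time constant = Ti
def filter_sp(sp_raw, sp_filt, Ti=30.0, dt=1.0):
    return sp_filt + (dt / (Ti + dt)) * (sp_raw - sp_filt)

state = {"i": 0.0, "d_prev": 0.0}
out = pid_2dof(sp=55.0, pv=50.0, state=state)  # one controller execution
```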
Changes in production rates or throughput are also a source of process upsets. The deviation of the process variable from setpoint caused by a disturbance is the excursion of the controlled variable, before any possible correction, during the course of one total loop dead time. The rate of change of the process variable multiplied by the dead time determines how far the controlled process output will change. In this case, the PID controller, and process stability itself, benefit from a feedforward signal that monitors the actual production rate and informs the controller of a change taking place; this allows the controller to adjust its output before the disturbance reaches the controlled variable and induces variability. The key is to make sure the PID output correction arrives at the same time as the disturbance hits the controlled variable.
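To illustrate the timing requirement, here is a hedged sketch of a feedforward element with a gain, lead/lag dynamic compensation, and a delay chosen so the correction arrives when the disturbance does. All parameters are hypothetical, and the output is meant to be added to the PID output, not to replace feedback:

```python
from collections import deque

class Feedforward:
    # Gain plus lead/lag plus delay on a measured disturbance such as
    # production rate; the delay approximates the extra dead time in the
    # disturbance path so the correction and the upset arrive together.
    def __init__(self, kff, lead, lag, delay_steps, dt=1.0):
        self.kff, self.lead, self.lag, self.dt = kff, lead, lag, dt
        self.buf = deque([0.0] * (delay_steps + 1), maxlen=delay_steps + 1)
        self.y = 0.0  # lag filter state

    def update(self, dv):
        # dv = deviation of the measured disturbance from its normal value
        self.buf.append(dv)
        u = self.buf[0]                                  # delayed signal
        self.y += self.dt / (self.lag + self.dt) * (u - self.y)
        ratio = self.lead / self.lag                     # lead/lag element
        return self.kff * (ratio * u + (1.0 - ratio) * self.y)

ff = Feedforward(kff=0.8, lead=10.0, lag=20.0, delay_steps=5)
pid_output_bias = ff.update(dv=2.0)  # added to the PID output each scan
```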
Variation remaining in the controlled process outputs and manipulated process inputs must then be considered “common cause” variation. After the special causes of variation have been identified and minimized, it is necessary to assess the PID controller, which could be instigating rolling oscillations through the process if not properly tuned. A simple test where the PID controller output is kept constant can help determine this. If the variations decrease, the control valve has excessive backlash or stiction that creates limit cycles when the PID is in automatic. Another possibility is that the controller is not properly tuned, due to a poor tuning methodology or to changes in the loop dynamics at different conditions. Some control systems offer a gain scheduling option that deals with changes in process dynamics as a function of a state variable (product density, error, production rate, etc.).

Lambda tuning has proven to provide a fast and stable controller response if the process is properly identified as self-regulating, near-integrating, integrating or runaway. If a self-regulating process has a time-constant-to-dead-time ratio greater than four, the process is considered near-integrating and must be tuned according to integrating-process rules. Processes with an accelerating response are classified as runaway and must use integrating-process tuning rules as well. Lambda tuning rules are also well suited for highly coupled control loops and processes identified as integrating: to prevent interactions, loops need to respond within a certain time, and lambda tuning allows the controller response speeds to be coordinated.
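For illustration, here is a sketch of the standard lambda PI rules Héctor refers to, for the self-regulating and integrating (or near-integrating/runaway) cases. The identified model values and the choice of lambda equal to three dead times are hypothetical:

```python
def lambda_pi_self_regulating(Kp, tau, theta, lam):
    # Self-regulating process: gain Kp, time constant tau, dead time theta;
    # lam (lambda) is the desired closed-loop time constant
    Kc = tau / (Kp * (lam + theta))
    Ti = tau
    return Kc, Ti

def lambda_pi_integrating(Ki, theta, lam):
    # Integrating process with integrating gain Ki; lam is the desired
    # closed-loop arrest time
    Kc = (2 * lam + theta) / (Ki * (lam + theta) ** 2)
    Ti = 2 * lam + theta
    return Kc, Ti

# Hypothetical identified model: tau/theta = 20 > 4, so treat the loop as
# near-integrating with Ki ~ Kp/tau and use the integrating rules
Kp, tau, theta = 2.0, 200.0, 10.0
if tau / theta > 4:
    print(lambda_pi_integrating(Ki=Kp / tau, theta=theta, lam=3 * theta))
else:
    print(lambda_pi_self_regulating(Kp, tau, theta, lam=3 * theta))
```

Choosing a larger lambda for less critical loops is one way lambda tuning coordinates response speeds to reduce interaction between coupled loops.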
To summarize, reducing the sources of variability by removing the special causes and improving the short-term variations leads to better process performance and a lower rate of off-specification material.
Greg: Could you provide some best practices for improving process performance?
Héctor: First, preferentially measure % gross yield (%GY) and % utilized capacity (%UC), rather than right-first-time, as productivity indicators.
Second, partition the %UC losses into rework time and downtime. Improvements in %GY come foremost; track downtime to understand opportunities to increase installed capacity. (A hedged sketch of this bookkeeping follows this list.)
Third, there is always a reason for quality losses. Use tools such as “The Five Whys” troubleshooting technique and “Questioning to the Void” (Kepner-Tregoe) technique for clarification of unknowns to establish the true causes of the problems.
Fourth, rather than starting many improvement projects at once, identify the top three losses to work on, correct them or minimize their effects, and move on to the next ones. Consider short-term projects, say two- to three-month efforts. Focus first on getting early value by identifying and implementing quick-win ideas.
Fifth, employ feedforward signals to enable the PID controller to preemptively adjust its output, minimizing the effect of upsets instigated by changes in production rates or other measured disturbances.
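As promised above, here is a hedged sketch of the productivity bookkeeping behind the first two practices. The definitions assume the usual conventions (good product over total product for %GY; productive time over available time for %UC), since the formal definitions appeared in last month's installment, and all numbers are hypothetical:

```python
# Hypothetical monthly figures
good_product_tons = 940.0    # in-spec product
total_product_tons = 1000.0  # everything produced, including off-spec
gross_yield_pct = 100.0 * good_product_tons / total_product_tons  # %GY

available_hours = 720.0      # installed capacity for the month
productive_hours = 610.0     # time making in-spec product at rate
rework_hours = 40.0          # capacity lost to reprocessing
downtime_hours = available_hours - productive_hours - rework_hours
utilized_capacity_pct = 100.0 * productive_hours / available_hours  # %UC

print(f"%GY = {gross_yield_pct:.1f}, %UC = {utilized_capacity_pct:.1f}, "
      f"rework = {rework_hours} h, downtime = {downtime_hours} h")
```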
Top 10 ways to get credit in your performance review

- Avoid writing your major accomplishments using control slang
- Remember that a sentence such as “Reduced variability of main tank temperature” means nothing to managers
- Avoid terms like feedforward, MPC, APC and fuzzy logic in your elevator speech
- Minimize the limit cycles and increase business gain in your review
- The lower you take the unit cost, the more kudos you get
- Maximize praise of your manager’s support and inspiration
- Explain how success will minimize dead time in your manager’s promotion
- Emphasize how your opportunity is your manager’s opportunity
- Send the Control Talk link to your manager
- Above all, avoid any stiction and backlash in your review