# Missed Opportunities in Process Control - Part 6

Here is the sixth part of a decisive, comprehensive list of what we really need to know to reduce the disparity between theory and practice. Please read, think, and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A posts.

You can also get a comprehensive resource focused on what you really need to know for a successful automation project, including nearly a thousand best practices, in the 98% new 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition, capturing the expertise of 50 leaders in industry.

1. Add small amounts of dissolved carbon dioxide (DCO2) and conjugate salts to make computed titration curves match laboratory titration curves. The great disparity between theoretical and actual titration curves is due to conjugate salts and incredibly small amounts of DCO2 from simple exposure to air, with a corresponding amount of carbonic acid created. Instead of titration curve slopes, and thus process gains, increasing by six orders of magnitude as you go from 0 to 7 pH for strong acids and strong bases, in reality the slope increases by two orders of magnitude, still a lot but four orders of magnitude off. Thus, control system analysis and supposed linearization by translation of the controlled variable from pH to hydrogen ion concentration using theoretical equations for a strong acid and strong base is off by four orders of magnitude. I made this mistake early in my career (about 40 years ago) but learned at the start of the 1980s that DCO2 was the deal breaker. I have seen the theoretical linearization published by others about 20 years ago and most recently just last year. For all pH systems, the slope between 4 and 7 pH is greatly moderated by the carbonic acid pKa = 6.35 at 25 degrees Celsius. The titration curve is also flattened within two pH of the logarithmic acid dissociation constant (pKa) of an acid or base that has a conjugate salt. To match computer generated titration curves to laboratory titration curves, add small amounts of DCO2 and conjugate salts as detailed in the Chemical Processing feature article “Improve pH control”.
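The moderating effect of DCO2 can be sketched numerically. Below is a minimal charge-balance titration model (a strong acid such as HCl titrated with a strong base such as NaOH); the concentrations and the bisection solver are illustrative assumptions, and only the two carbonic acid equilibria (pKa1 = 6.35, pKa2 = 10.33 at 25 degrees Celsius) are included.

```python
import math

# Illustrative sketch: pH from a charge balance for a strong acid (cl),
# strong base (na), and a trace of dissolved CO2 (total carbonate ct).
KW, KA1, KA2 = 1e-14, 10**-6.35, 10**-10.33

def charge_balance(h, na, cl, ct):
    """Net charge for hydrogen ion concentration h (mol/L)."""
    oh = KW / h
    d = h * h + KA1 * h + KA1 * KA2      # carbonate speciation denominator
    hco3 = ct * KA1 * h / d
    co3 = ct * KA1 * KA2 / d
    return h + na - cl - oh - hco3 - 2 * co3

def ph(na, cl, ct):
    """Solve the charge balance for pH by bisection on a log grid of [H+]."""
    lo, hi = 1e-14, 1.0
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if charge_balance(mid, na, cl, ct) > 0:
            hi = mid                      # too acidic, reduce [H+]
        else:
            lo = mid
    return -math.log10(math.sqrt(lo * hi))
```

Comparing `ph(1e-3, 1e-3, 0.0)` (exactly 7 at the equivalence point) with `ph(1e-3, 1e-3, 1e-4)` (roughly 5.2) shows how even 0.0001 mol/L of DCO2 pulls the curve down and flattens the slope between 4 and 7 pH.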
2. Realize there is a multiplicative effect for biological process kinetics that creates restrictions on experimental methods to analyze or predict cell growth or product formation. While the incentive is greater for high value biologic products, there are challenges with models of biological processes due to multiplicative effects (neural networks and data analytic models assume additive effects). Almost every first principle model (FPM) has specific growth rate and product formation as the result of a multiplication of factors, each between 0 and 1, to detail the effect of temperature, pH, dissolved oxygen, glucose, amino acid (e.g., glutamine), and inhibitors (e.g., lactic acid). Thus, each factor changes the effect of every other factor. You can understand this by realizing that if the temperature is too high, cells are not going to grow and may in fact die. It does not matter if there is enough oxygen or glucose. Similarly, if there is not enough oxygen, it does not matter if all the other conditions are fine. One way to address this problem is to make all factors as close to 1 and as constant as possible except for the one of interest. It has been shown that data analytics can be used to identify the limitation and/or inhibition FPM parameter for one condition, such as the effect of glucose concentration via the Michaelis-Menten equation, if all other factors are constant and nearly 1.
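The multiplicative structure can be sketched as follows. The factor forms (a Monod-type Michaelis-Menten limitation and a simple inhibition term) and every parameter value are illustrative assumptions, not a specific published model.

```python
# Illustrative sketch of multiplicative kinetics: specific growth rate is the
# product of dimensionless factors, each between 0 and 1. Parameter values
# (ks, ki, mu_max) are hypothetical placeholders.
def limitation(s, ks):
    """Michaelis-Menten (Monod) limitation factor for substrate concentration s."""
    return s / (ks + s)

def inhibition(i, ki):
    """Simple inhibition factor for inhibitor concentration i."""
    return ki / (ki + i)

def specific_growth_rate(mu_max, do, glucose, lactate):
    # Multiplicative effect: any factor near zero forces growth to zero,
    # no matter how favorable the other conditions are.
    return (mu_max
            * limitation(do, ks=0.2)         # dissolved oxygen, mg/L
            * limitation(glucose, ks=0.5)    # glucose, g/L
            * inhibition(lactate, ki=20.0))  # lactic acid, g/L
```

With zero dissolved oxygen the rate is zero regardless of glucose. Holding all factors near 1 and constant except one is what lets data analytics isolate that one factor's Michaelis-Menten parameter.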
3. Take advantage of the great general applicability and ease of parameter adjustment in Michaelis-Menten equations for the effect of concentrations and Convenient Cardinal equations for the effect of temperature and pH on biological processes. The Mimic bioreactor model in a digital twin takes advantage of these breakthroughs in first principle modeling. For temperature and pH, Convenient Cardinal equations are used where the optimum temperature or pH for growth and production phases is simply the temperature or pH setpoint, including any shifts for batch phases. The minimum and maximum temperatures complete the parameter settings. This is a tremendous advancement over traditional uses of Arrhenius equations for temperature and Villadsen-Nielsen equations for pH, which required parameters that are not readily available and must be set with a precision to the sixth or seventh decimal place. Generalized Michaelis-Menten equations shown to be useful for modeling intracellular dynamics can model the extracellular limitation and inhibition effects of concentrations. The equations provide a link between macroscopic and microscopic kinetic pathways. If the limiting or inhibition effect is negligible or needs to be temporarily removed, the limitation and inhibition parameter is simply set to 0 g/L and 100 g/L, respectively. The biological significance and ease of setting parameters is particularly important since most kinetics are not completely known, and what is defined can be quite subject to restrictions on operating conditions. These revolutionary equations enable the same generalized kinetic model to be used for all types of cells. Previously, yeast cells (e.g., ethanol production), fungal cells (e.g., antibiotic production), bacterial cells (e.g., simple proteins), and mammalian cells (e.g., complex proteins) had specialized equations developed that did not generally carry over to different cells and products.
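As a sketch of the cardinal idea, one widely published cardinal temperature form (the Rosso CTMI equation) needs only the minimum, optimum, and maximum temperatures, giving a factor of 1 at the optimum and 0 at the limits. The temperature values below are illustrative, and this is offered as an example of the cardinal family rather than the exact equation used in any particular digital twin.

```python
# Rosso-type cardinal temperature factor: requires only t_min, t_opt, t_max
# (e.g., the temperature setpoint as t_opt). Example values are illustrative.
def cardinal_temperature(t, t_min, t_opt, t_max):
    """Dimensionless growth factor: 1 at t_opt, 0 at or beyond t_min and t_max."""
    if t <= t_min or t >= t_max:
        return 0.0
    num = (t - t_max) * (t - t_min) ** 2
    den = (t_opt - t_min) * ((t_opt - t_min) * (t - t_opt)
                             - (t_opt - t_max) * (t_opt + t_min - 2.0 * t))
    return num / den
```

For example, with t_min = 30, t_opt = 37, and t_max = 42 degrees Celsius, the factor is exactly 1 at the 37 degree setpoint and falls smoothly to 0 at both limits, with no hard-to-obtain parameters to estimate.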
4. Always use smart, sensitive valve positioners with good feedback of actual position, tuned with a high gain and no integral action, on true throttling valves (please read the following despite its length since misunderstandings are pervasive and increasing). A very big and potentially dangerous mistake persists today from a decades-old rule that positioners should not be used on fast loops. The omission of a sensitive, well-tuned valve positioner can increase limit cycle period and amplitude by an order of magnitude and severely jeopardize rangeability and controllability. Without a positioner, some valves may require a 25% change in signal to open, meaning that controlling below a 25% signal is unrealistic. As a young lead I&E engineer for the world’s largest acrylonitrile plant back in the mid-1970s, I used the rule about fast loops. The results were disastrous. I had to hurriedly install positioners on all the loops during startup. A properly designed, installed and tuned positioner should have a response time of less than 0.5 seconds. A positioner gain greater than 10 is sought, with rate action added when offered. Positioners developed in the 1960s were proportional only with a gain of 50 or more and a high sensitivity (0.1%). Since then, spool positioners with extremely poor sensitivity (2%) have been offered, and integral action has been included and even recommended in some misguided documents by major suppliers. Do not use integral action in the positioner despite default settings to the contrary. A volume booster can be added to the positioner output to make the response time faster. Using a volume booster instead of a positioner is dangerous, as explained in the next point. If you cannot achieve the 0.5 second response time, something is wrong with the type of valve, packing, positioner, installation and/or tuning of the positioner; this is not a reason for saying that you should not use positioners on fast loops.
An increasing threat ever since the 1970s has been on-off valves posing as throttling valves. They are much less expensive, in the piping spec, and have much tighter shutoff. The actuator shaft feedback may change even though the actual ball or disk has not moved for a signal change of 8% or more. In this case, even the best positioner is of little help since it is being lied to as to actual position. Valve specifications have an entry for leakage but typically have nothing on valve backlash, stiction, response time, and actuator sensitivity. I can’t seem to get even a discussion started as to how to get this changed and how rangeability and controllability are so adversely affected. If you need a faster response or are stuck with an on-off valve, then you need to consider a variable frequency drive with a pulse width modulated inverter (see point 5 in part 4 of this series). Also be aware that theoretical studies based solely on process dynamics are seriously flawed: for most fast loops, the sensors, transmitters, signal filters, and scan and execution times are a larger source of time constants and dead time than the actual process, making the loop much slower than what is shown in studies based on process response alone, as noted in point 9 in part 1. For much more on how to deal with this increasing threat, read the Control articles “Is your control valve an imposter?” and “How to specify valves and positioners that don’t compromise control”.
If you are going to do a simulation study showing the performance with and without a valve positioner on a fast loop, even ignoring the potential factor of 10 or greater increase in limit cycles, you need to include a process time constant that is at least 0.1 seconds for liquid flow and more than 1 second for gas flow, a transmitter damping time constant of 0.2 seconds, a lag time of more than 1 second for gas impulse lines, any sensor lag time, a dead time that is ½ the scan rate for digital devices, a valve time constant of at least 0.25 seconds, and a valve dead time that is the dead band divided by the maximum rate of change of the valve signal, where the dead band is more than 10 times larger for a valve without a positioner (e.g., 5%). Also, the positioner must be tuned with proportional-only control with a gain of at least 10. You should include a stiction of at least 5% above 10% open and at least 10% below 10% open. Good luck proving the old rule is right, even if you have a small valve so you don’t need a volume booster.
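The limit cycle effect of dead band can be seen in even a crude simulation. The sketch below uses a simple backlash element standing in for the full valve response and an integrating (level-type) process under PI control; all parameter values are illustrative, not recommendations.

```python
# Hypothetical sketch: limit cycle caused by valve dead band (backlash) in an
# integrating loop under PI control. All tuning and process parameters are
# illustrative placeholders.
def backlash(signal, prev_valve, dead_band):
    """Crude backlash element: the valve moves only after the signal travels
    more than dead_band past the current position (half-width convention)."""
    if signal > prev_valve + dead_band:
        return signal - dead_band
    if signal < prev_valve - dead_band:
        return signal + dead_band
    return prev_valve

def simulate(dead_band, kc=0.5, ti=20.0, kp=0.05, dt=0.1, t_end=4000.0):
    """Return the sustained oscillation amplitude of the controlled variable."""
    y, valve, integ, sp = 0.0, 0.0, 0.0, 10.0
    history = []
    t = 0.0
    while t < t_end:
        e = sp - y
        integ += e * dt
        co = kc * (e + integ / ti)       # PI controller output, %
        valve = backlash(co, valve, dead_band)
        y += kp * valve * dt             # integrating process response
        history.append(y)
        t += dt
    tail = history[len(history) // 2:]   # discard the startup transient
    return max(tail) - min(tail)         # limit cycle amplitude
```

Running `simulate(5.0)` versus `simulate(0.5)` shows a much larger sustained limit cycle for the 10-times-larger dead band of a valve without a positioner, consistent with the point above.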
5. Put a volume booster on the output of the positioner, with a bypass valve opened just enough to make the valve stroking time much faster, recognizing that replacing a positioner with a booster poses a major safety risk. Another decades-old rule said to replace a positioner with a booster on fast loops. For piston actuators, a positioner is required for the valve to work at all. For diaphragm actuators, a volume booster instead of a positioner creates positive feedback (flexure of the diaphragm changes the volume, and consequently the pressure, seen by the booster's highly sensitive outlet port), causing a fail-open butterfly valve to slam shut from fluid forces on the disk. This happened to me on a huge compressor and was subsequently avoided on the next project, when I showed I could position 24 inch fail-open butterfly valves for furnace pressure control by simply grabbing the shaft, due to the positive feedback from the booster and diaphragm actuator combination. Since properly sized diaphragm actuators generally have an order of magnitude better sensitivity than piston actuators, and the operating pressure of newer diaphragm actuators has been increased, diaphragm actuators are increasingly the preferred solution.
6. Understand and address the reality that processes have a dead time dominant self-regulating, balanced self-regulating, near-integrating, true integrating, or runaway response. Most of the literature studies balanced self-regulating processes, where the process time constant is about the same size as the process dead time. Some studies address dead time dominant self-regulating processes, where the dead time is much greater than the process time constant. Dead time dominant processes are less frequent and mostly occur when there is a large dead time from process transportation delay (e.g., plug flow volumes or conveyors) or analyzer sample and cycle time (see points 1 and 9 in part 3 on how to address these applications). The more important loops tend to be near-integrating, where the process time constant is more than 4 times larger than the process dead time; true integrating, where the process will continually ramp when the controller is in manual; and runaway, where the process deviation will accelerate when the controller is in manual. Continuous temperature and composition loops on volumes with some degree of mixing due to reflux, recycle or agitation have a near-integrating response. Batch composition and temperature have a true integrating response. The runaway response occurs in highly exothermic (typically polymerization) reactors but is never actually observed because it is too dangerous to leave the controller in manual during a reaction long enough to see much acceleration. Most gas pressure loops and, of course, nearly all level loops have an integrating response. It is critical to tune PID controllers on near-integrating, true integrating and runaway processes with maximum gain, minimum reset action and maximum rate action so that the PID can provide the negative feedback action missing in these processes.
As a practical matter, near-integrating, true integrating, and runaway processes are tuned with integrating process tuning rules where the initial ramp rate is used to estimate an integrating process gain.
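The approach above can be sketched with the widely published lambda tuning rules for integrating processes; the ramp-rate numbers and the choice of lambda (arrest time) below are illustrative assumptions.

```python
# Sketch of integrating-process tuning: estimate the integrating process gain
# from the change in initial ramp rate for a step in controller output, then
# apply the standard lambda tuning rules for integrating processes.
def integrating_gain(ramp_rate_after, ramp_rate_before, delta_co):
    """Integrating process gain Ki (%/sec per % CO) from two observed ramp
    rates (%/sec) before and after a controller output step of delta_co (%)."""
    return (ramp_rate_after - ramp_rate_before) / delta_co

def lambda_tuning(ki, dead_time, lam):
    """PI settings for an integrating process: lam is the arrest time (sec)."""
    kc = (2 * lam + dead_time) / (ki * (lam + dead_time) ** 2)
    ti = 2 * lam + dead_time
    return kc, ti
```

For example, if a 10% output step changes the ramp rate from 0.1 to 0.3 %/sec, the integrating gain is 0.02 %/sec/%; with a 5 second dead time and a 15 second arrest time, the rules give a reset time of 35 seconds and a controller gain of about 4.4.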
7. Maximize synergy between chemical engineering, biochemical engineering, electrical engineering, mechanical engineering and computer science. All of these degrees bring something to the party for a successful automation system implementation. The following simplification provides some perspective: chemical and biochemical engineers offer process knowledge, electrical engineers offer control, instrumentation and electrical system knowledge, mechanical engineers offer equipment and piping knowledge, and computer scientists offer data historian and industrial internet knowledge. All of these people plus operators should be involved in process control improvements and whatever is expected from the next big thing (e.g., Industrial Internet of Things, Digitalization, Big Data and Industry 4.0). The major technical societies, especially AIChE, IEEE, ISA, and ASME, should see the synergy of exchanging knowledge rather than the current view of other societies as competition.
8. Identify and document justifications to develop new skills, explore new opportunities, and innovate. Increasing emphasis on reducing project costs is overloading practitioners to the point that they don’t have time to attend short courses, symposiums or even online presentations. Contributing factors are the loss of expertise from retirements, fear of making any changes, and present-day executives who have no industry experience and are focused on financials, such as reducing project expenditures and shortening project schedules. At this point, practitioners must be proactive and investigate opportunities and process metrics on their own time. Developing skills with the digital twin can be a way of defining and showing associates and management the type and value of improvements, as noted in all points in part 5. The digital twin with demonstrated key performance indicators (KPIs) showing the value of increases in process capacity or efficiency, plus data analytics and Industry 4.0, can lead to people teaching people, eliminating silos, spurring creativity and deeper involvement, nurturing a sense of community and common objectives, and connecting the layers of automation and expertise so everybody knows everybody. To advance our profession, practitioners should seek to publish what is learned, which can be done generically without disclosing proprietary data.
9. Use inferential measurements periodically corrected by at-line analyzers to provide fast analytical measurements of key process compositions. First principle models, or experimental models identified by model predictive control or data analytics software, can be used to provide immediate composition measurements with none of the delay associated with the process sample system and analyzer cycle time. The inferential measurement result is synchronized with the at-line analyzer result by the insertion of a dead time equal to the sample transportation delay plus 1.5 times the analyzer cycle time. A fraction (usually less than 0.5) of the difference between the synchronized inferential measurement and the at-line analyzer result, after elimination of outliers, is added to correct the inferential measurement whenever there is an updated at-line analyzer result.
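The synchronization and fractional correction scheme can be sketched as follows. The class name, the expression of the synchronization delay in execution steps, and the 0.3 default fraction are illustrative assumptions.

```python
from collections import deque

# Minimal sketch of an inferential measurement corrected by an at-line
# analyzer. The inferential value is delayed (synchronized) by the sample
# transport delay plus 1.5 times the analyzer cycle time, here expressed as
# a number of controller execution steps.
class InferentialCorrector:
    def __init__(self, sync_steps, fraction=0.3):
        self.buffer = deque(maxlen=sync_steps)  # synchronization dead time
        self.fraction = fraction                # correction fraction (< 0.5)
        self.bias = 0.0                         # running additive correction

    def update(self, inferential):
        """Call every execution; returns the bias-corrected inferential value."""
        self.buffer.append(inferential)
        return inferential + self.bias

    def analyzer_update(self, analyzer_result):
        """Call when a new (outlier-screened) at-line result arrives: move the
        bias by a fraction of the synchronized difference."""
        if len(self.buffer) == self.buffer.maxlen:
            synced = self.buffer[0]   # inferential value one dead time ago
            self.bias += self.fraction * (analyzer_result - (synced + self.bias))
        return self.bias
```

Using a fraction well below 1 filters analyzer noise while still removing drift; the delayed buffer ensures each analyzer result is compared with the inferential value from the same material, not the current one.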
10. Use inline analyzers and at-line analyzers, whose sensors are in the process or near the process, respectively. There are many inline sensors available today (e.g., conductivity, chlorine, density, dissolved carbon dioxide, dielectric spectroscopy, dissolved oxygen, focused beam reflectance, laser-based measurements, pH, turbidity, and viscosity). The next best alternative is an at-line analyzer located as close as possible to the process connection to minimize sample transportation delay. The practice of locating all analyzers in one building creates horrendous dead time. An example of an innovative, fast at-line analyzer capable of extensive, sensitive measurements of components plus cell concentration and size for biological processes is the Nova Bioprofile Flex. Chromatographs, near infrared, mass spectrometers, nuclear magnetic resonance, and MLT gas analyzers using a combination of non-dispersive infrared, ultraviolet and visible spectroscopy with electrochemical and paramagnetic sensors have increased functionality and maintainability.