Here is the fifth part of a point blank, decisive, comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think, and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A posts.
You can also get a comprehensive resource focused on what you really need to know for a successful automation project, including nearly a thousand best practices, in the 98% new 2019 Process/Industrial Instruments and Controls Handbook, Sixth Edition, capturing the expertise of 50 leaders in industry.
- Simple computation of a process variable rate of change with minimal noise and fast updates enables PID and MPC to optimize batch profiles. Many batch profiles have a key process variable that responds in only one direction. A one-directional response occurs for temperature when there is only heating and no cooling, or vice versa. Similarly, a one-directional response occurs for pH when there is only a base and no acid flow, or vice versa. Most batch composition responses are one-directional in that the product concentration only increases with time. Integral action assumes that the direction of a process variable (PV) can be changed. Integral action can be turned off by choosing a PID structure of proportional-only (P-only) or proportional-derivative (PD). Integral action is an inherent aspect of MPC, so unless special modifications are employed, MPC is not used. A solution that enables both MPC and PID control, opening up optimization opportunities, is to translate the controlled variable (CV) to a PV rate of change (ΔPV/Δt). The new CV can change in both directions, and the integrating response of a batch PV becomes a self-regulating response of a batch CV with a possible steady state (constant slope). Furthermore, the CV is now representative of the batch profile slope, and the profile can be optimized. Typically, a steep slope at the start and a gradual slope at the finish of a batch is best. The rate of change calculation simply passes the PV through a dead time block; the output of the block (the old PV) is subtracted from the input of the block (the new PV). This change in PV is divided by the block dead time, which is chosen to maximize the signal-to-noise ratio. For much more on feedback control opportunities for batch reactors see the Control feature article “Unlocking the Secret Profiles of Batch Reactors”.
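As a rough illustration of the dead time block approach described above (a minimal sketch in Python; the class and parameter names are my own, not from the article), the old PV is simply the current PV delayed by the chosen dead time:

```python
from collections import deque

class RateOfChange:
    """Approximate dPV/dt by subtracting a dead-time-delayed copy of the PV
    (the dead time block output) from the current PV and dividing by the
    dead time, which is chosen large enough to maximize signal-to-noise."""

    def __init__(self, delay_sec, update_sec):
        self.delay_sec = delay_sec
        n = max(1, round(delay_sec / update_sec))
        self.buffer = deque(maxlen=n)  # acts as the dead time block

    def update(self, pv):
        if len(self.buffer) < self.buffer.maxlen:
            self.buffer.append(pv)     # still filling the dead time block
            return 0.0
        old_pv = self.buffer[0]        # dead time block output (old PV)
        self.buffer.append(pv)
        return (pv - old_pv) / self.delay_sec

# e.g. a PV ramping at 1.0 unit/sec, sampled every 0.5 sec, 2 sec dead time
roc = RateOfChange(delay_sec=2.0, update_sec=0.5)
rates = [roc.update(0.5 * k) for k in range(10)]
print(rates[-1])  # 1.0
```

Once the buffer fills, the computed slope settles at the true ramp rate of 1.0, showing how a longer dead time trades responsiveness for noise attenuation.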
- Process variable rate of change computation identifies a compressor surge curve and actual occurrences of surge. A computation similar to that detailed for batch control can be used to identify a surge point by realizing that it occurs where the slope of the characteristic curve is zero, that is, where the change in discharge pressure (ΔP) divided by the change in suction flow (ΔF) becomes zero (ΔP/ΔF = 0). Thus, the operating point on a characteristic curve can be monitored whenever there is a significant rate of change of flow by dividing the change in discharge pressure by the change in suction flow, using dead time blocks to create an old PV that is subtracted from the new PV, with the dead time parameter again chosen to maximize the signal-to-noise ratio. Approaches to the surge point as a result of a decrease in suction flow can be identified by a slope that becomes very small, recognizing that waiting to see a slope that is exactly zero is too late. At a zero slope, the process is unstable and the suction flow will jump to a negative value in less than 0.06 seconds, indicating surge. The PV rate of change (ΔPV/Δt) calculation described for batch control can be used to detect and count surge cycles, but the dead time parameter setting must be small (e.g., 0.2 seconds). The detection of surge can be used to trigger an open loop backup that will prevent additional surge cycles. See the Control feature article “Compressor surge control: Deeper understanding, simulation can eliminate instabilities” for much more enlightenment on the very detrimental and challenging dynamics of compressor surge response and control.
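A minimal sketch of monitoring the characteristic curve slope with paired dead time blocks (the function name, threshold, and data are illustrative, not from the article):

```python
from collections import deque

def curve_slope(pressures, flows, delay_steps, min_dflow=0.5):
    """Monitor the operating point slope dP/dF by dividing the change in
    discharge pressure by the change in suction flow over a common dead
    time. Samples without a significant flow change (below min_dflow)
    are skipped; a slope approaching zero warns of surge."""
    p_old = deque(maxlen=delay_steps)  # dead time block for pressure
    f_old = deque(maxlen=delay_steps)  # dead time block for flow
    slopes = []
    for p, f in zip(pressures, flows):
        if len(p_old) == delay_steps:
            d_p = p - p_old[0]
            d_f = f - f_old[0]
            if abs(d_f) >= min_dflow:  # only when flow change is significant
                slopes.append(d_p / d_f)
        p_old.append(p)
        f_old.append(f)
    return slopes

# on a made-up linear stretch of the curve (P = 2*F) the slope is constant
flows = list(range(20))
pressures = [2.0 * f for f in flows]
print(curve_slope(pressures, flows, delay_steps=3)[:3])  # [2.0, 2.0, 2.0]
```

In practice the slope falls toward zero as suction flow decreases toward the surge point, so an alarm threshold on a small positive slope acts earlier than waiting for exactly zero.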
- Simple future value computation provides better operator understanding, batch end point prediction and full throttle setpoint response. The same calculation of PV rate of change, multiplied by an intelligent time interval and added to the current PV, can provide immediate updates of the future value of the PV with a good signal-to-noise ratio. The time interval should be greater than the total loop dead time, since any action taken at that moment by an operator or control system does not have an effect that is seen until after the total loop dead time. Humans don’t understand this and expect to see the effect of changes within a few seconds. This leads to successive actions that are counterproductive, and to PID tuning that tends to emphasize integral action, since integral action always drives the output in the direction to correct any difference between the setpoint (SP) and PV, even if overshoot is imminent. For more on the opportunities see the Control Talk Blog “Future Values are the Future” and the Control feature article “Full Throttle Batch and Startup Response”.
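The future value calculation above can be sketched as follows (function and argument names are hypothetical; the example numbers are made up):

```python
def future_pv(current_pv, pv_rate, horizon_min, total_deadtime_min):
    """Project the PV ahead by an intelligent time interval: current PV plus
    the PV rate of change times the horizon. The horizon is clamped to be
    at least the total loop dead time, since any corrective action taken
    now has no visible effect until that dead time elapses."""
    horizon = max(horizon_min, total_deadtime_min)
    return current_pv + pv_rate * horizon

# e.g. batch temperature at 95.0 degC rising 0.5 degC/min, 4 min dead time:
# even a 2 min requested horizon is stretched to the 4 min dead time
print(future_pv(95.0, 0.5, horizon_min=2.0, total_deadtime_min=4.0))  # 97.0
```

Displaying this projected value to operators counters the natural expectation of seeing an effect within seconds, discouraging counterproductive successive actions.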
- Process Analytical Technology (PAT) opportunities are greater than ever. The primary focus of the PAT initiative by the FDA is to reduce variability by gaining a better understanding of the process and to encourage pharmaceutical manufacturers to continuously improve processes. PAT is defined in Section IV of the “Guidance for Industry” as follows: “The Agency considers PAT to be a system for designing, analyzing, and controlling manufacturing through timely measurements (i.e., during processing) of critical quality and performance attributes of raw and in-process materials and processes, with the goal of ensuring final product quality. It is important to note that the term analytical in PAT is viewed broadly to include chemical, physical, microbiological, mathematical, and risk analysis conducted in an integrated manner. The goal of PAT is to enhance understanding and control the manufacturing process, which is consistent with our current drug quality system: quality cannot be tested into products; it should be built-in or should be by design.” New and improved on-line analyzers include dissolved carbon dioxide to help ensure good cell conditions, while turbidity and dielectric spectroscopy offer measurement of cell concentration and viability. At-line analyzers such as mass spectrometers provide off-gas concentrations for computing the oxygen uptake rate (OUR), and the Nova Bioprofile Flex can provide fast and precise analysis of the concentrations of medium components such as glucose, lactate, glutamine, ammonium, sodium, and potassium, besides cell concentration, viability, size and osmolality. The Aspectrics encoded photometric NIR can possibly be calibrated to measure the same components plus possible indicators of weakening cells. Batch end point and profile optimization can be done.
The digital twin described in the Control feature article “Virtual plant virtuosity” can be a great asset in increasing process understanding and in developing, testing and tuning process control improvements so that implementation is seamless, making the documentation for management of change proactive and efficient. The digital twin kinetics for cell growth and product formation have been greatly improved in the last five years, enabling high fidelity bioreactor models that are easy to fit, largely eliminating the need for proprietary research data. Thus, today bioprocess digital twins make these opportunities a reality, as described in the Bioprocess International, Process Design Supplement feature article “PAT Tools for Accelerated process Development and Improvement”.
- Faster and more productive data analytics by use of a digital twin. Far more relevant inputs can be found nonintrusively, and an intelligent and extensive Design of Experiments (DOE) can be conducted by use of the digital twin without affecting the process. Process control depends on identification of the dynamics and of the changes in process variables for changes in other process variables, most notably flows, over a wide operating range due to nonlinearity and interactions. Most plant data used for developing principal component analysis (PCA) and predictions by partial least squares (PLS) does not show sufficient changes in the process inputs or process outputs and does not cover the complete possible operating range, especially startup and abnormal conditions when data analytics is most needed. First principle models are exceptional at identifying process gains and can be improved to enable the identification of dynamics by including valve backlash, stiction and response times, mixing and measurement lags, transportation delays, analyzer cycle times, and transmitter and controller update rates. The identification of dynamics is essential for the dynamic compensation of inputs for continuous processes so that a process input change is synchronized with the corresponding process output change. Dynamic compensation is not needed for the prediction of batch end points but could be useful for batch profile control, where the translation of batch component concentration or temperature to a rate of change (batch slope) gives a steady state similar to a continuous process.
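The dead time portion of dynamic compensation can be sketched as a simple shift of the input history so that each input change lines up with the output change it caused (an illustrative helper, not from the article; a measurement lag filter would be applied similarly):

```python
def synchronize(inputs, deadtime_steps):
    """Shift a process input series forward by its identified dead time
    (in samples) so input changes align with the corresponding output
    changes before building PCA/PLS models. The start of the series is
    padded with the first value."""
    pad = [inputs[0]] * deadtime_steps
    return (pad + list(inputs))[:len(inputs)]

# a step at sample 1 with a 2-sample dead time appears at sample 3
print(synchronize([1.0, 2.0, 3.0, 4.0], deadtime_steps=2))  # [1.0, 1.0, 1.0, 2.0]
```

With each input shifted by its own identified dead time (and lag), the regression sees cause and effect at the same sample, which is exactly where first principle models help by supplying those dynamics.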
- Better pairing of controlled and manipulated variables by use of a digital twin. The accurate steady state process gains of first principle models enable a comprehensive relative gain array analysis, which is the key tool for finding the proper pairing of variables. Integrating process gains can be converted to steady state gains by computing and using the PV rate of change via the simple computation described in item 41.
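For the common two-loop case, the relative gain array computed from the steady state gains can be sketched in pure Python (an illustrative helper; the example gain values are made up):

```python
def relative_gain_array_2x2(K):
    """Relative gain array for a 2x2 matrix of steady state process gains
    K[i][j] = dCV_i/dMV_j. For 2x2, lambda_11 = 1/(1 - K12*K21/(K11*K22))
    and rows/columns each sum to 1. Pair on elements closest to 1."""
    (k11, k12), (k21, k22) = K
    lam = 1.0 / (1.0 - (k12 * k21) / (k11 * k22))
    return [[lam, 1.0 - lam],
            [1.0 - lam, lam]]

# hypothetical gains with mild interaction: diagonal pairing is confirmed
K = [[2.0, 0.5],
     [0.4, 1.0]]
rga = relative_gain_array_2x2(K)
```

Here lambda_11 is about 1.11, close to 1, so CV1 pairs with MV1 and CV2 with MV2; values far from 1 (or negative) would flag problematic pairings.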
- Online process performance metrics for practitioners and executives, enabling justification and optimization of process control improvements by use of a digital twin. Online process metrics that compute a moving average of process capacity or efficiency for a shift, batch, month and quarter can provide the analysis and incentive for operators, process and automation engineers, maintenance technicians, managers and executives. The monthly and quarterly analysis periods are of greatest interest to people making business decisions. The shift and batch analysis periods are of greatest use to operators and process engineers. All of this is best developed and tested with a digital twin. For more on the identification and value of online metrics, see the Control Talk column “Getting innovation back into process control”.
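A moving average metric over a chosen analysis period can be sketched as follows (the class name and window sizing in samples are assumptions, not from the article):

```python
from collections import deque

class OnlineMetric:
    """Moving average of a process capacity or efficiency metric over a
    chosen analysis period (shift, batch, month or quarter expressed as a
    number of samples), updated online as each new sample arrives."""

    def __init__(self, window_samples):
        self.samples = deque(maxlen=window_samples)

    def update(self, value):
        self.samples.append(value)  # oldest sample drops off when full
        return sum(self.samples) / len(self.samples)

# e.g. a 3-sample "shift" window over an efficiency metric in percent
shift_avg = OnlineMetric(window_samples=3)
for eff in (90.0, 92.0, 94.0, 96.0):
    latest = shift_avg.update(eff)
print(latest)  # 94.0
```

The same class instantiated with different window sizes gives the shift, batch, monthly and quarterly views from one data stream.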
- Nonintrusive adaptation of first principle models by MPC in a digital twin can be readily done. It is not commonly recognized that the fidelity of a first principle model is seen in how well the manipulated flows in the digital twin match those in the plant. The digital twin has the same controllers, setpoints and tuning as the actual plant. Differences in the manipulated flow trajectories for disturbances and operating point changes are indicative of a mismatch of dynamics. Differences in steady state flows are indicative of a mismatch of process parameters. In either case, an MPC whose setpoints are the plant steady state or rate of change flows, and whose controlled variables are the digital twin steady state or rate of change flows, can manipulate process or dynamic parameters to adapt the model in the digital twin. The models for the MPC can be identified by conventional methods using just the models in the digital twin. The MPC’s ability to adapt the model can be tested and then actually used via digital twin inputs of actual plant flows. For an example of adapting a bioreactor model, see Advanced Process Control Chapter 18 in A Guide to the Automation Body of Knowledge, Third Edition.
- Realization that PID algorithms in industry seldom use the Parallel Form. It is not clear why, but most textbooks and professors show and teach the Parallel Form. In the Parallel Form, the proportional mode gain affects only the proportional mode, resulting in a separate integral gain and derivative gain for the other modes. Most industrial PIDs use a form where the proportional mode tuning setting (e.g., gain or proportional band) affects all modes. The integral tuning setting is either repeats per unit time (e.g., repeats per minute) or a time (e.g., seconds), and the derivative setting is a rate time (e.g., seconds). The behavior and tuning settings are quite different, severely reducing the possible synergy between industry and academia.
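The relationship between the two forms can be sketched as a conversion from the industrial form settings, where the controller gain multiplies all modes, to Parallel form gains (a minimal illustration of the algebra, not a vendor algorithm):

```python
def parallel_from_standard(kc, ti_sec, td_sec):
    """Convert tuning from a form where the controller gain Kc multiplies
    all three modes (integral time Ti, rate time Td) to Parallel form
    gains with independent modes: Kp = Kc, Ki = Kc/Ti, Kd = Kc*Td."""
    return {"Kp": kc, "Ki": kc / ti_sec, "Kd": kc * td_sec}

# e.g. Kc = 2.0, Ti = 10 s, Td = 1 s
print(parallel_from_standard(2.0, 10.0, 1.0))
# {'Kp': 2.0, 'Ki': 0.2, 'Kd': 2.0}
```

The conversion shows why entering the same three numbers into the two forms gives very different controllers: halving Kc in the industrial form halves all three Parallel gains at once.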
- Realization that PID algorithms in industry seldom use engineering units. Even stranger to me is the common misconception by professors that the PID algorithm works in engineering units. I have only seen this in one particular PLC and the MeasureX sheet thickness control system. In nearly all other PLC and DCS control systems, the algorithm works in percent of the controlled variable and manipulated variable signals. The installed flow characteristic of the valve and the measurement span have straightforward effects on the valve and measurement gains and consequently on the open loop gain and PID tuning. If a PID algorithm works in engineering units, these straightforward effects are lost, and simply changing the type of engineering units (e.g., going from lbs per minute to cubic feet per hour) can have a huge effect on the tuning of a PID working in engineering units (and no effect on a PID working in percent). This disconnect also severely reduces the possible synergy between industry and academia.
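The percent-of-span basis can be sketched as follows (the second span is a hypothetical unit conversion, chosen only to show that the percent signal, and hence the tuning, is unchanged by a change of units):

```python
def percent_of_span(value_eu, lower_eu, upper_eu):
    """Convert an engineering-unit signal to percent of the calibrated
    span, the basis most DCS/PLC PID algorithms actually use. Changing
    engineering units rescales both the value and the span, so the
    percent signal is unaffected."""
    return 100.0 * (value_eu - lower_eu) / (upper_eu - lower_eu)

# the same physical flow at mid-span in two hypothetical unit systems
print(percent_of_span(50.0, 0.0, 100.0))     # 50.0 (e.g. lb/min span)
print(percent_of_span(1334.5, 0.0, 2669.0))  # 50.0 (same flow, other units)
```

A PID working on these percent signals keeps the same open loop gain and tuning regardless of the units on the faceplate, while a PID working directly in engineering units would see its gain change by the unit conversion factor.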