Greg: Dynamic modeling has been the most important technology in my 50-year career. It’s the source of all my deeper process control knowledge. My work with compressor surge modeling in 1976 opened the door for me to move from being an instrument design and construction engineer to a process modeling and control specialist in Monsanto’s Engineering Technology (ET) department, where I worked with brilliant experts proficient in steady-state and dynamic modeling. The steady-state modeling software we developed was donated to the federally funded ASPEN research project, which eventually led to AspenTech.
With ET, I used models to develop, prototype and test innovations that improved process performance by employing better control strategies, valves, PID algorithms and measurements. This was particularly important for the more challenging applications such as bioreactor control, centrifuge control, compressor control, dryer control, exothermic reactor control, fermenter control, furnace pressure control and pH control. Also, modeling the fundamentally different dynamics of self-regulating, integrating and runaway processes enabled me to extrapolate principles to address the possibilities for nearly all applications.
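To make the three dynamic classes concrete, here is a minimal Python sketch (the parameters are invented for illustration, not taken from any specific application) that integrates a simple model of each: a self-regulating lag that settles, an integrator that ramps, and a runaway response with positive feedback.

```python
import numpy as np

# Minimal open-loop responses to a unit step in the manipulated input u,
# illustrating the three classes of process dynamics (illustrative parameters).
dt, n, u = 1.0, 300, 1.0
tau, Kp, Ki = 60.0, 2.0, 0.01

self_reg = np.zeros(n)   # self-regulating: settles at Kp*u
integ = np.zeros(n)      # integrating: ramps without settling
runaway = np.zeros(n)    # runaway: accelerates due to positive feedback

for k in range(1, n):
    self_reg[k] = self_reg[k-1] + dt / tau * (-self_reg[k-1] + Kp * u)
    integ[k] = integ[k-1] + dt * Ki * u
    runaway[k] = runaway[k-1] + dt / tau * (runaway[k-1] + Kp * u)
```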
Greg Shinskey, whose publications are my greatest resource, and many of the people I’ve written about in these Control Talk columns use models to explore the capabilities of PID control. Some of my closest associates, who are experts in distillation control, use steady-state models to define relative gain matrices and find the best tray for column temperature control. Many of my publications and the sections I wrote for the ISA-TR5.9-2023 PID Algorithms and Performance technical report reflect what I learned via modeling.
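As a small illustration of the relative gain idea, the following sketch computes a relative gain array from a hypothetical 2x2 steady-state gain matrix using Bristol’s formula; the gain values are invented for illustration.

```python
import numpy as np

# Hypothetical 2x2 steady-state gain matrix from a distillation model:
# rows = controlled variables (top and bottom compositions),
# columns = manipulated variables (reflux and reboil).
K = np.array([[0.8, -0.5],
              [0.6, -0.9]])

# Relative gain array: element-wise product of K and the
# transpose of its inverse (Bristol's formula).
RGA = K * np.linalg.inv(K).T
print(RGA)  # pairings with relative gains near 1 are preferred
```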
It's disappointing that process control engineers aren't given time by management to use modeling to improve process performance. I personally regret not making the extra effort five years ago to help engineers interested in modeling actively seek out and start potential applications. Engineers are increasingly stressed to the limit by tightened project schedules and budgets. How can we motivate and train engineers to turn this problem around, and let them promote the value of models?
To broaden our horizons and better understand how to make the most of these opportunities, I turned to José María Ferrer, who has more than 25 years of experience in the dynamic simulation and control of hydrocarbon processes. He began his career as a process control engineer at Dow Chemical in 1995, and joined Hyprotech as EMEA operator training system (OTS) business development leader in 2001. In 2004, as an AspenTech senior consultant, he executed several dynamic simulation projects in new areas such as emergency shutdown (ESD) verification and advanced process control (APC). He developed the APC business in Europe, and executed several dynamic simulation projects to support APC implementations and new AspenTech OTS offerings. In 2010, he joined Inprocess to offer dynamic simulation services and launch its OTS business. Since 2014, he’s been developing and teaching a new simulation training course specially tailored to process control engineers. Since 2018, he’s also been analyzing the use of online simulations to support operations in anomaly detection, and in 2019, he began leading projects that exploit simulation for offline and online applications.
José, what can we do to educate and motivate process control engineers to take advantage of modeling?
José: We can make them aware of the value they’ll get when variable economic compensation is tied to controller performance. I remember my first day at Hyprotech: I was given a laptop and a two-hour tour of the simulation tool while building a dynamic model. I’m still amazed at the capabilities of dynamic and steady-state process simulators. Also, there’s still a lot of thinking in silos, where process simulation is only for process engineers (designs/revamps, mainly in steady state), and automation/control departments shouldn’t touch that simulation territory. This leaves dynamic simulation in no-man’s land. Building high-quality dynamic models is a time-consuming task and requires some experience. You must choose the right simulation tools and understand their limitations.
We need to explain that it’s one thing to build a dynamic model and another to use it. I don’t see all process control engineers at a company as model builders; maybe it’s only a small group. The rest can be true users of the built models. If you don’t have that small group, you can contract experts to build the models for you. We’re constantly doing simulation projects for advanced and basic process control departments when things become complex.
Greg: What are the differences between steady-state and dynamic models, and what are the uses and cases?
José: Steady-state simulation (where time doesn’t exist, and there are no inventories or accumulation) started in 1950, using machine code developed for limited-scope, single-use applications. Nowadays, it’s large-scope and multi-use, but mainly for process design.
Dynamic simulation (where you have time, inventories and controllers, as in a real plant) followed the same path, but 10 years later due to greater computational requirements. I’m truly amazed by the capabilities of today’s personal computers (PC): how large, how fast and how detailed a dynamic model you can build. They’re the same PCs that run flight simulators. Compare Microsoft’s Flight Simulator software from the 1990s to today’s versions. The last 30 years have been amazing for both flight and process simulators.
Beyond design, steady-state models are valuable in control areas, especially for developing inferentials, column composition profiles, optimal sensor locations, APC, deep reinforcement learning (DRL) benefit estimation, column feed locations and optimal targets. There’s also value in quantifying the impact of manipulated variables (MV) and disturbance variables (DV) on controlled variables (CV) via CV/MV and CV/DV gains for the whole operational envelope.
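To make the CV/MV gain idea concrete, here is a minimal sketch of estimating such gains from a steady-state model by perturbing one MV at a time; the function steady_state_cv() is a hypothetical stand-in for a call into a real simulator.

```python
import numpy as np

def steady_state_cv(mv):
    # Hypothetical stand-in for a steady-state simulator run:
    # takes a vector of MVs, returns the vector of CVs at steady state.
    return np.array([2.0 * mv[0] - 0.3 * mv[1],
                     0.5 * mv[0] + 1.2 * mv[1]])

def gain_matrix(model, mv0, delta=1e-3):
    """Estimate CV/MV gains at operating point mv0 by perturbing
    one MV at a time and differencing the resulting CVs."""
    cv0 = model(mv0)
    K = np.zeros((len(cv0), len(mv0)))
    for j in range(len(mv0)):
        mv = mv0.copy()
        mv[j] += delta
        K[:, j] = (model(mv) - cv0) / delta
    return K

# Repeat at several operating points to map the whole envelope.
print(gain_matrix(steady_state_cv, np.array([10.0, 5.0])))
```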
Beyond OTSs and integrated control and safety system (ICSS) checkout, dynamic models are valuable in control areas for benchmarking alternative control schemes, tuning PIDs in complex processes (for example, slug flows), obtaining plant response curves for all plant states for APC/DRL multivariable controllers, tuning and testing APC/DRL, and getting online models for inferentials.
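As an illustration of obtaining a plant response curve from a dynamic model, the sketch below steps the MV on a simple first-order-plus-dead-time process, the kind of open-loop test an APC identifier would fit a model to; all parameters are illustrative.

```python
import numpy as np

# Illustrative first-order-plus-dead-time process parameters:
Kp, tau, theta, dt = 1.5, 120.0, 20.0, 1.0  # gain, time constant, dead time, step (s)
n = 600
t = np.arange(n) * dt
mv = np.where(t >= 10.0, 1.0, 0.0)          # unit step in the MV at t = 10 s

cv = np.zeros(n)
for k in range(1, n):
    # Dead time: the process sees the MV from theta seconds ago.
    delayed = mv[max(0, k - int(theta / dt))]
    # Euler integration of tau*dCV/dt = -CV + Kp*MV(t - theta)
    cv[k] = cv[k-1] + dt / tau * (-cv[k-1] + Kp * delayed)

# The (t, cv) arrays form the response curve an APC identifier would use.
```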
When I was at AspenTech, I tried to create a three-day course to teach steady-state and dynamic modeling to process control engineers who hadn’t previously seen simulations. My managers asked for a business case for such a course. In 2010 at Inprocess, it took me only one minute to convince my boss to create it.
Greg: I’ve used the term “virtual plant,” but was recently made aware of the preferred “digital twin” term aligned with the fervor over “digitalization.” For me, the key feature is the ability to use the actual control system configuration and operator interface by interfacing with or downloading the actual software, eliminating the difficult and perilous task of recreating them. This inherently makes new PID features, data analytics tools and model-predictive control readily available. The cost of a digital twin is an order of magnitude less than the older OTS modeling technology that required programming the algorithms and interfaces in addition to the simulation. Those systems could rarely be used for process control improvement because the programmed control capability was very limited or wrong. Today’s digital twins offer incredible opportunities to find and quantify the opportunities noted in my earlier Control Talk column “Simulation breeds innovation” and my Control articles “Virtual plant virtuosity” and “Bioreactor control breakthroughs: Biopharma industry turns to advanced measurements, simulation and modeling.” For my take on building and using key performance indicators (KPI) and inferential measurements, see “Control Talk: Top of the bottom line.”
José, what do you see as the value of digital twins?
José: I don’t like the phrase “digital twin.” Nowadays, people use it to name almost everything from a three-dimensional model to a statistical model to a mechanical model, even an OTS. It’s applied not only to a processing plant, but also to an airplane, airport, building, wind turbine or human heart.
To avoid confusion, and talk about the process industries we work in (mainly oil and gas, refining and chemicals), I prefer to call this topic real-time simulation (RTS). This is a detailed, dynamic-simulation model running online and in synchrony with the real plant. Usually, this is the second life of the so-called multi-purpose dynamic simulator (MPDS), which covers dynamic studies, early-OTS, control narrative verification, operating procedures development, ICSS checkout and direct-connect OTS.
I remember the first time I ran a dynamic model of a C3 splitter against one week of one-minute-sampled historical process data using a simple Excel macro, and obtained a very good match with the bottoms online analyzer. I saw the potential of using it in real-time and shared my thoughts within the company, but surprisingly, my managers were not impressed.
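A minimal sketch of that kind of historical replay, assuming a hypothetical one-minute CSV export and a stand-in function for the dynamic model, might look like this:

```python
import pandas as pd

# Hypothetical one-minute historian export: feed conditions plus the
# bottoms analyzer reading we want the model to reproduce.
data = pd.read_csv("c3_splitter_week.csv")  # column names assumed below

def model_bottoms(row):
    # Stand-in for advancing the dynamic column model one minute
    # with the historical reflux and reboil as inputs.
    return 0.02 + 0.001 * (row["reboil"] - row["reflux"])

data["model_c3"] = data.apply(model_bottoms, axis=1)
residual = data["model_c3"] - data["analyzer_c3"]
print("mean abs error:", residual.abs().mean())  # goodness-of-fit vs. analyzer
```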
If we talk about the value of digital twins, the first thoughts that come to mind include ensuring your plant is running as it should every second, and if it’s not, detecting it immediately. This enables early detection and diagnosis of small anomalies, which normally grow with time. Then, you have extra value from virtual instrumentation (pressure, temperature, flow), equipment KPIs, emissions KPIs, what-if analyses at the present time for any specified (blue) parameter, and historical model repositories. The list goes on and on.
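One simple way to express that idea in code is a residual check between each plant measurement and its real-time simulation prediction; the tag values and tolerance below are illustrative.

```python
def check_anomaly(plant_value, rts_value, tolerance):
    """Flag a measurement whose deviation from the real-time
    simulation exceeds a tolerance (in engineering units)."""
    deviation = plant_value - rts_value
    return abs(deviation) > tolerance, deviation

# Illustrative: a column pressure tag with a 0.05-bar tolerance.
alarm, dev = check_anomaly(plant_value=12.31, rts_value=12.05, tolerance=0.05)
if alarm:
    print(f"Anomaly: plant deviates from model by {dev:+.2f} bar")
```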
Greg: Some closing thoughts from this discussion include asking how process control engineers can achieve dynamic simulation speedup for slow-continuous and batch processes. For large distillation columns, the time to steady state can be 12 hours or more. For bioreactors producing modern biologics, the batch cycle time can be 12 days or more. The controller should have the same speedup; otherwise, special care must be taken to reduce dead time and modify scales and tuning.
For speedups of five to 10 times real-time, models that exist in the virtual plant controller can have the same speedup factor as the control system and operator interface. This should suffice for slow-continuous operations. For the much faster speedups needed by bioreactors, media speedup factors can be 20 times real-time, and kinetic speedup factors can be 10 or more. The total speedup for media is the product of the two factors, and is 200 or more.
The capacity of final control elements (control valves and variable-speed drives) and flow-measurement spans must be increased by the kinetic speedup factor so the flow controller tuning doesn’t change much. However, the primary time constant will decrease and the integrating process gain will increase by the total speedup factor. The total loop dead time must be decreased in proportion to the total speedup factor for composition, dissolved oxygen, pH and temperature loops to avoid a large disruption to the tuning needed.
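These scaling rules reduce to a few lines of arithmetic; the sketch below uses the example factors from this discussion.

```python
media_speedup = 20      # media speedup factor (times real-time)
kinetic_speedup = 10    # kinetic speedup factor
total_speedup = media_speedup * kinetic_speedup  # 200 or more for media

def scale_loop(flow_span, time_constant, integ_gain, dead_time):
    """Scale a loop's parameters for the sped-up virtual plant."""
    return {
        "flow_span": flow_span * kinetic_speedup,        # valve/drive capacity and span
        "time_constant": time_constant / total_speedup,  # primary lag shrinks
        "integ_gain": integ_gain * total_speedup,        # integrating gain grows
        "dead_time": dead_time / total_speedup,          # keeps tuning consistent
    }

print(scale_loop(flow_span=100.0, time_constant=3600.0,
                 integ_gain=1e-4, dead_time=600.0))
```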
For more on bioreactor virtual plant speedup, see the ISA book, “New Directions in Bioprocess Modeling and Control, Second Edition.”