Correcting imperfection in process models

The goal of a process model is usefulness for its purpose, not perfection

Key Highlights

  • When the model doesn’t match real process data, there are three main areas to check: coefficient tuning, missing or incorrect physics, and implementation errors.
  • Effective debugging requires both technical understanding and structured problem-solving.

Process models are useful for process design, operator training, process analytics, troubleshooting, training new engineers and many other applications. For models to be functional, they must have adequate fidelity to the process. There are several remedies, each suited to a different source of model-to-process mismatch.

“Adequate fidelity” does not mean perfection; perfection is unattainable. The model should simply be functional enough for its purpose.

Process models are created from two general approaches: phenomenological and empirical.

Phenomenological models

Consider process models that have a phenomenological development. Some call them mechanistic, theoretical or digital-twin models. The modeling starts with mechanistic concepts of the process behaviors, then translates them into an algebraic, differential or statistical mathematical embodiment. The math procedure is then translated into digital code. The term model applies to each of the three stages: the concept, the math equations and the digital code embodiment.

Phenomenological models may be of a first-principles type (simple, elementary) or a rigorous type (attempting toward an elusive perfection). In either case, phenomenological models have coefficients that are supposed to describe the process, such as reactivity, efficiency, fluid flow pressure loss factors, or ambient heat loss coefficients. If the model does not match the operating data, it may be that the presumed values of the coefficients are not correct.

Seek to adjust the model coefficients to make the model match the data. There are several approaches to this calibration of models, including heuristic tuning or least-squares optimization to match the model to a batch of data. Alternatively, heuristic or model-based incremental adjustment matches the model to new data as the process evolves in time. Often these approaches are successful in correcting process-to-model mismatch.
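As a minimal sketch of batch least-squares calibration (my own illustration, not a method from the column), consider a heat-loss model q = U·A·(T − T_amb) that is linear in the unknown coefficient U, so least squares has a closed form. All numbers here are assumed for illustration.

```python
import numpy as np

# Hypothetical calibration example: fit an ambient heat-loss coefficient U
# in the model q_loss = U * A * (T - T_amb) to a batch of operating data.
A = 2.5        # m^2, heat-transfer area (assumed known)
T_amb = 298.0  # K, ambient temperature

# Synthetic "plant" data: true U = 15 W/(m^2 K) plus measurement noise
rng = np.random.default_rng(0)
T = np.linspace(320.0, 380.0, 12)  # K, operating points
q_meas = 15.0 * A * (T - T_amb) + rng.normal(0.0, 20.0, T.size)

# The model is linear in U, so minimizing sum((q_meas - U*x)^2)
# gives the closed-form least-squares estimate U = sum(q*x) / sum(x*x).
x = A * (T - T_amb)
U_fit = np.sum(q_meas * x) / np.sum(x * x)
print(f"calibrated U = {U_fit:.2f} W/(m^2 K)")
```

For models that are nonlinear in their coefficients, the same idea applies with an iterative optimizer in place of the closed form.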

However, the model may not include all the important phenomena. For instance, blackbody radiant losses may be important, but not included, or temperature-dependent fluid properties may be needed for adequate model fidelity to data from the process. Simple choices in the concept-to-math-model steps (such as the ideal gas law, unity activity coefficients, or other simple constitutive relationships) may need to be revisited. In these situations, the model functionality does not match the process, and adjusting model coefficient values does not fix the problem. As a second stage of model correction, search for such phenomena that are not, but should be, included in the model, starting with a reevaluation of the concepts and their translation to the model.
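To illustrate why coefficient tuning cannot fix missing physics, here is a hypothetical sketch (my own, with assumed coefficient values): a convection-only loss model compared to one that also includes blackbody radiant losses. No single value of the convective coefficient can match both the low- and high-temperature data, because the radiant term grows with T⁴.

```python
# Hypothetical sketch: convection-only losses vs. convection plus
# Stefan-Boltzmann radiant losses. All coefficient values are assumed.
SIGMA = 5.670e-8  # W/(m^2 K^4), Stefan-Boltzmann constant
EPS = 0.8         # emissivity (assumed)
H = 10.0          # W/(m^2 K), convective coefficient (assumed)
A = 1.0           # m^2, surface area
T_AMB = 300.0     # K, ambient temperature

def q_convection(T):
    return H * A * (T - T_AMB)

def q_with_radiation(T):
    return q_convection(T) + EPS * SIGMA * A * (T**4 - T_AMB**4)

# The ratio of total to convective loss grows with temperature, so no
# retuned H alone can match data spanning a wide temperature range.
for T in (350.0, 450.0, 550.0):
    print(f"T = {T:.0f} K  ratio = {q_with_radiation(T) / q_convection(T):.2f}")
```

Because the ratio changes with operating temperature, retuning the convective coefficient shifts the mismatch around rather than removing it; the radiant term itself must be added to the model.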

The third source of mismatch should be detected and corrected in the math-to-code translation stage. The digital implementation of the model may have a coding error, or a procedure that causes the mismatch to reality. Search the model-to-code translation for techniques that might be the source of the mismatch.


Missing or erroneous representation of phenomena, and erroneous concept-to-digital implementation, are frequently part of initial model development and validation. They require both an understanding of how to identify process-model mismatch within process noise and an effective troubleshooting approach to find and fix the problem. For validating or calibrating a model, my book, “Nonlinear Regression Modeling for Engineering Applications,” applies. For guiding the identification of model fixes, the Kepner & Tregoe book, “The Rational Manager,” is a troubleshooting guide.

Empirical models

Many models, or parts of models, are empirical. They are developed by best-fitting a generic mathematical function (such as a power series or a neural network) to historical data. But the process may have changed since it generated the data. If so, the empirical model needs to be regenerated from new and relevant data.

Alternatively, the data might still be a valid description of the process behavior, but the empirical model is being extrapolated beyond the range of data it represents. If so, reformulate the functional basis of the model so that the generic function matches what is theoretically expected, or collect data in the appropriate range for model refitting.
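A small sketch of the extrapolation hazard (my own illustration; the “true” process, data range and polynomial order are all assumed): a generic polynomial fitted to data on one range matches well inside that range but diverges from the underlying behavior outside it.

```python
import numpy as np

# Hypothetical illustration: a generic empirical model (degree-6 polynomial)
# fitted to data on x in [0.5, 2], where the assumed "true" process is sqrt(x).
rng = np.random.default_rng(1)
x = np.linspace(0.5, 2.0, 20)
y = np.sqrt(x) + rng.normal(0.0, 0.01, x.size)  # historical data with noise

coeffs = np.polyfit(x, y, 6)  # best-fit generic function
poly = np.poly1d(coeffs)

err_inside = abs(poly(1.0) - np.sqrt(1.0))   # within the fitted range
err_outside = abs(poly(4.0) - np.sqrt(4.0))  # extrapolated beyond the data
print(f"error inside range:  {err_inside:.4f}")
print(f"error extrapolated:  {err_outside:.4f}")
```

Reformulating the basis to match the expected theory (here, a square-root form), or collecting data near the extrapolated operating point, removes the hazard.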

Most of my experience is in chemical processing, developing models for control algorithms. I've been very pleased with these approaches, which are also posted in the Modeling-Simulation section of my website, www.r3eda.com.

About the Author

R. Russell Rhinehart

Columnist

Russ Rhinehart started his career in the process industry. After 13 years and rising to engineering supervision, he transitioned to a 31-year academic career. Now “retired,” he returns to coaching professionals through books, articles, short courses, and postings to his website at www.r3eda.com.
