
At the IIoT crossroads

Feb. 12, 2019
A look from the trenches at digitalization, big data and Industry 4.0
We are at a crucial junction. Yogi Berra famously said, “When you come to a fork in the road, take it.” When it comes to the Industrial Internet of Things (IIoT), we will, and there may be no looking back if we continue on what seems to be the more appealing path. Judging by the number of articles and even university programs with IIoT in the title, it’s being regarded not only as the next best thing, but perhaps as the all-time best thing and the only thing. Interestingly, I found that the content of one online university program and one article with IIoT in the title actually had little to do with IIoT. Here, we try to address some of the practical issues of IIoT, digitalization, big data and Industry 4.0 to give realistic and useful guidance.

Executives may not understand process control, but can relate to information technology (IT) because their world is governed by IT. They may be thinking, why do we need all those engineers in hardhats? Fortunately, we see them asking experienced automation professionals in the ISA Mentor program to take IIoT on the right road. To see how important this is, let’s learn from the past.

IIoT is the future, one way or another. The following stories are just a warning to make sure we take the right road. The mistakes I saw led to many of the later recommendations for a successful future with IIoT, where engineers and technicians are empowered and enlightened. We are at the crossroads.

Lessons from the past

I survived the era of expert systems, neural networks and fuzzy logic. I dabbled in them and had a few small productive applications. It turns out that the level alarm and dryer moisture prediction could have been done by material and energy balances, and the pH control by model predictive control. Fortunately, most of my time was spent on first-principle dynamic modeling and improving valve response and PID control strategies. The dozen or so people who were working on these leading-edge technologies were all gone after about 15 years and 25 million dollars of engineering time and software, with few lasting successes. They were all given packages, so maybe it turned out OK for them. I never got a package. The company wanted me to stay, but I eventually retired to avoid putting my retirement at risk due to bankruptcy.

In the 1990s, I witnessed a specialist from the leading supplier of multivariate statistical process control (MVSPC) come into a plant I supported. He had no plant experience, but was a highly educated data scientist. After a couple of weeks, he was extremely excited about all the great predictions of continuous sheet line quality he had developed by simply dumping all the plant data into his software, including the far upstream batch operations. All of the predictions were bogus and bizarre to the point of being comical. We could laugh because his time was free in the hope we would buy the software.

Remember the promise of a “lights out” control room over 30 years ago with the introduction of the DCS? I saw one 20 years ago. The plant was shut down after the “total solution” for replacing long-retired expertise turned out to be an emulation that had little to do with the actual plant or its control system, with no documentation or training by people with plant knowledge.

IIoT threats and promises

IIoT can provide a synergy of accountants, data scientists, process engineers, analyzer specialists, automation professionals, operators and maintenance technicians working together, eliminating silos. The sense of community can spur creativity and deeper involvement. The many layers of automation and expertise can be exploited.

I think back on the opportunity assessments, when we had all of these people in the same room looking at historical data and opportunity sizing sheets. The insights and solutions we quickly gained led to process control improvements with yearly benefits that averaged 5% of the cost of goods. IIoT can potentially put us all functionally in the same virtual room with much more intelligent access to knowledge with an eye on better alarm management, HMI, procedure automation, batch control, instrumentation, basic and advanced control, and operator performance. If we take the wrong road, the room may only have a data scientist, potentially resulting in a “lights out” plant.

Large amounts of data that don’t include changes in the process inputs and consequential changes in the process outputs, covering the possible range of operation, can result in deceptively tight models with false alerts and conclusions. For batch operations, 50 batches might be about right for data analytics, making sure they cover the full range of the quality assurance (QA) value (e.g., end-of-batch lab quality result).

Know your process and what measurements are important for that stage of processing. Take into account when controllers are not in service. Eliminate outliers. Strip off measurements that are obviously not applicable. For tight control, look at changes in the manipulated variable. If the flow measurement does not have sufficient rangeability, the knowledge is in the controller output. Batch operations require incredible ranges of utility flows as they progress from starting conditions (low level and no conversion) to final operating conditions (high level and high conversion). Avoid flow measurements that are only relevant for a short period, or are shared by other equipment units.
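To make this pruning concrete, here is a minimal Python sketch of the kind of filtering involved; the tag names, mode strings and limits are hypothetical placeholders, not from any particular historian or DCS:

```python
import pandas as pd

# Minimal pruning sketch for historical data before analytics.
# Tag names, mode strings and limits are hypothetical placeholders.
df = pd.DataFrame({
    "FC101_PV":   [50.2, 50.5, 49.8, 120.0, 50.1, 1.2],
    "FC101_OUT":  [40.0, 41.0, 39.5,  90.0, 40.2, 8.0],
    "FC101_MODE": ["AUTO", "AUTO", "MAN", "AUTO", "CAS", "AUTO"],
})

# Take into account when controllers are not in service.
df = df[df["FC101_MODE"].isin(["AUTO", "CAS"])]

# Eliminate outliers (here, anything beyond 3 standard deviations).
pv = df["FC101_PV"]
df = df[(pv - pv.mean()).abs() <= 3 * pv.std()]

# If the flow measurement lacks rangeability at low flow, the knowledge
# is in the controller output, so use FC101_OUT below a cutoff flow.
df["flow_signal"] = df["FC101_PV"].where(df["FC101_PV"] >= 5.0,
                                         df["FC101_OUT"])
```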

When an operator sees something unusual, the first question often asked is, what maintenance is being done? Maintenance records need to be timely and integrated into and accounted for in system analysis. Simple calibration checks (a common occurrence for pH electrodes) can lead to false alerts.

Delve into the details of data analytics

Data analytics are valuable for showing a batch is different. I personally see the worm plot of a QA value versus two principal components as useful, where the head of the worm is the most recent batch and the tail is the oldest batch. Outliers must be eliminated, of course. Ideally, you want the worm to be tightly coiling around the best QA value.
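As an illustration of the idea (not any particular vendor’s software), here is a short Python sketch using scikit-learn’s PCA on synthetic batch summary data, with the marker size growing from tail (oldest batch) to head (most recent batch):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# X: one row of summary features per batch, oldest first; qa: end-of-batch
# QA values. Both are synthetic placeholders for real historian data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8)).cumsum(axis=0) * 0.05 + rng.normal(size=(50, 8))
qa = rng.normal(loc=95.0, scale=1.0, size=50)

scores = PCA(n_components=2).fit_transform(X)

# Worm plot: connect batches in time order; the tail is the oldest batch
# and the head (largest marker) is the most recent batch.
plt.plot(scores[:, 0], scores[:, 1], "-", color="gray", linewidth=0.8)
plt.scatter(scores[:, 0], scores[:, 1], c=qa, s=np.linspace(10, 120, 50))
plt.colorbar(label="QA value")
plt.xlabel("Principal component 1")
plt.ylabel("Principal component 2")
plt.title("Batch worm plot")
plt.show()
```

A tightly coiled worm near the best QA value indicates consistent, on-target batches; a wandering head flags a recent batch that is different.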

While the incentive is greater for high-value biologic products, there are challenges with models of biological processes due to multiplicative effects (neural networks and data analytic models assume additive effects). Almost every first-principle model (FPM) has specific growth and product formation rates, the result of a multiplication of factors each between 0 and 1 to detail the effect of temperature, pH, dissolved oxygen, glucose, amino acid (e.g., glutamine), and inhibitors (e.g., lactic acid). Thus, each factor changes the effect of every other factor. You can understand this by realizing that if the temperature is too high, cells are not going to grow and may die. It doesn’t matter if there is enough oxygen or glucose. Similarly, if there isn’t enough oxygen, it doesn’t matter if all the other conditions are fine. One way to address this problem is to make all factors as close to 1 and as constant as possible, except for the one of interest. It has been shown that data analytics can be used to identify the limitation and/or inhibition FPM parameter for one condition, such as the effect of glucose concentration via the Michaelis-Menten equation, if all other factors are constant and nearly 1.
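A minimal sketch of this multiplicative structure follows, with purely illustrative parameter values and factor forms (a Monod/Michaelis-Menten factor for substrates, a bell-shaped factor for temperature and pH, and a simple inhibition factor); none of the numbers come from a real organism:

```python
import math

# Specific growth rate as a product of limitation/inhibition factors,
# each between 0 and 1, so each factor scales the effect of the others.
# All parameter values are illustrative, not from any real organism.

def monod(c, k):
    """Michaelis-Menten (Monod) limitation factor for a substrate."""
    return c / (k + c)

def inhibition(c, k):
    """Inhibition factor that falls from 1 toward 0 as c rises."""
    return k / (k + c)

def bell(x, x_opt, width):
    """Bell-shaped factor: near 1 at the optimum, near 0 far from it."""
    return math.exp(-((x - x_opt) / width) ** 2)

mu_max = 0.05  # 1/h, hypothetical maximum specific growth rate

mu = (mu_max
      * bell(36.5, x_opt=37.0, width=2.0)   # temperature factor
      * bell(7.1, x_opt=7.0, width=0.5)     # pH factor
      * monod(0.04, k=0.01)                 # dissolved oxygen factor
      * monod(2.0, k=0.5)                   # glucose factor
      * monod(1.5, k=0.3)                   # glutamine factor
      * inhibition(0.8, k=4.0))             # lactic acid inhibition

# If any single factor goes to 0 (e.g., temperature too high), mu goes
# to 0 regardless of the other conditions.
print(f"specific growth rate = {mu:.4f} 1/h")
```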

Process control is about changes in process inputs and changes in process outputs. If there is no change, you can’t identify the process gain or dynamics. We know this is necessary in the identification of models for MPC, and for PID tuning and feedforward control. We often forget this in the datasets used to develop data models. A smart Design of Experiments (DOE) is the best way to get datasets that show changes in process outputs for changes in process inputs, and to cover the range of interest.
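As a trivial sketch of the idea, a two-level full factorial DOE enumerates step-test targets for each process input over its range of interest; the input names and ranges below are hypothetical:

```python
from itertools import product

# Two-level full factorial design: step each process input to its low
# and high value so the dataset shows output changes for input changes
# over the range of interest. Input names and ranges are hypothetical.
inputs = {
    "feed_rate":    (80.0, 120.0),   # kg/h
    "reactor_temp": (150.0, 170.0),  # degC
    "reflux_ratio": (2.0, 3.0),
}

for run, levels in enumerate(product(*inputs.values()), start=1):
    settings = dict(zip(inputs, levels))
    print(f"run {run}: {settings}")
```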

If setpoints are changed for different production rates and products, existing historical data may be rich enough if carefully pruned. Remember, neural network models, like statistical models, are correlations and not cause-and-effect. Review by people knowledgeable in the process and control system is essential.

Time synchronization of process inputs with process outputs is needed for continuous models, but not necessarily for batch models, which explains the notable successes in predicting batch endpoints. Often, delays are inserted on continuous process inputs. This is sufficient for plug flow volumes such as dryers, where the dynamics are principally a transport delay. For back-mixed volumes, such as vessels and columns, a time lag and delay that are dependent upon production rate should be used.
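Here is a minimal sketch of such synchronization, assuming a constant sample interval; the delay and lag values are illustrative and are scaled inversely with production rate, since residence time grows as throughput drops:

```python
import numpy as np

# Synchronize a continuous process input with the output by applying a
# transport delay plus a first-order lag, both scaled by production rate.
# All values are illustrative; dt is the sample interval in minutes.
dt = 1.0
t = np.arange(0, 240, dt)
feed = np.where(t > 30, 105.0, 100.0)          # step in a process input
rate_design = 100.0                            # design production rate
rate_now = 80.0                                # current production rate

# Delay and lag grow as production rate (throughput) drops.
delay = 10.0 * rate_design / rate_now          # transport delay, min
tau = 20.0 * rate_design / rate_now            # back-mixed lag, min

shifted = np.interp(t - delay, t, feed)        # dead-time block
lagged = np.empty_like(shifted)                # first-order lag filter
lagged[0] = shifted[0]
alpha = dt / (tau + dt)
for i in range(1, len(shifted)):
    lagged[i] = lagged[i-1] + alpha * (shifted[i] - lagged[i-1])
```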

Neural network models are more difficult to troubleshoot than data analytic models, and are vulnerable to correlated inputs (data analytics benefits from principal component analysis and drill-down to contributors). Neural network models can introduce localized reversal of slope and bizarre extrapolation beyond training data not seen in data analytics. Data analytics’ piecewise linear fit can successfully model nonlinear batch profiles. To me, this is similar in principle to the use of signal characterizers to provide a piecewise fit of titration curves.
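The signal characterizer analogy can be shown in a few lines; the knot points below are an illustrative titration curve, not from any real process:

```python
import numpy as np

# Signal-characterizer style piecewise linear fit of a titration curve:
# knot points map measured pH to an abscissa that is roughly linear with
# reagent demand. Knot values are illustrative only.
ph_knots     = np.array([2.0, 4.0, 6.0, 7.0, 8.0, 10.0, 12.0])
demand_knots = np.array([0.0, 0.15, 0.45, 0.50, 0.55, 0.85, 1.0])

def characterize(ph):
    """Piecewise linear interpolation between knot points."""
    return np.interp(ph, ph_knots, demand_knots)

print(characterize(6.5))  # linearized value used by the controller
```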

Disruptive technology can solve old problems

IIoT is at its best when it solves the ancient problem of breaking down silos and bringing to bear the combined knowledge and perspectives of local, corporate and third-party experts.

Cover the basics

Recycle can cause a snowballing effect unless there is a flow loop with a fixed setpoint in the recycle path. Contaminants and inerts are often not measured and can accumulate. Unmeasured disturbances are a general pervasive problem. Limit cycles, interactions and resonance cause confusing situations, often best deciphered using power spectrum analysis and selectively putting loops in manual. What happened first may also be a timely clue, as proven by the age-old value of the first-out sequence.
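A power spectrum is easy to compute offline; here is a sketch using SciPy’s Welch estimator on a synthetic PV trend with a buried oscillation of roughly an 8-minute period:

```python
import numpy as np
from scipy.signal import welch

# Use a power spectrum to find the dominant period of a limit cycle or
# resonance in a loop's PV trend. The signal here is synthetic: a
# 0.002 Hz oscillation (about an 8-minute period) buried in noise.
fs = 1.0  # one sample per second
t = np.arange(0, 4 * 3600, 1 / fs)
pv = 50 + 0.5 * np.sin(2 * np.pi * 0.002 * t) + 0.2 * np.random.randn(len(t))

freqs, power = welch(pv, fs=fs, nperseg=4096)
peak = freqs[np.argmax(power[1:]) + 1]         # skip the DC bin
print(f"dominant oscillation period ~ {1 / peak / 60:.1f} minutes")
```

Comparing the dominant period against known loop periods (and selectively putting loops in manual) helps pin down which loop is the source.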

More extensive and better measurements and automation (e.g., PID control and procedure automation) increase process repeatability and knowledge. Portable wireless transmitters can track down problems and monitor plant performance. Temperature transmitters with clamp-on sensors on coil, jacket or heat exchanger inlet and outlet piping can be used to detect fouling. Passing the inlet temperature through a dead time block, with the dead time set equal to the transportation delay, can synchronize the inlet with the outlet temperature. For reactors, conversion can be computed, enabling tracking of batch progression or the accumulation of contaminants or inerts. There are also significant opportunities in the use of wireless measurements for pipe corrosion, spill detection, pump vibration and steam trap operation. For example, the Control Talk column “Controlling steam: What you don't know can hurt you” alerts us to the many problems that cause poor plant performance but are not recognized as originating in steam traps.
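A dead time block is just a sample buffer; the sketch below (with hypothetical delay and temperatures) delays the inlet temperature so the synchronized temperature difference can be trended as a fouling indicator:

```python
from collections import deque

# Dead-time block: delay the exchanger inlet temperature by the transport
# time so it can be compared with the outlet temperature sample-for-sample.
# A shrinking synchronized temperature difference trended over weeks
# suggests fouling. All values are illustrative.

class DeadTime:
    """Delay a signal by n samples (a simple dead-time block)."""
    def __init__(self, n, initial=0.0):
        self.buf = deque([initial] * (n + 1), maxlen=n + 1)
    def step(self, x):
        self.buf.append(x)
        return self.buf[0]   # value from n samples ago

block = DeadTime(n=12, initial=80.0)  # 12 samples of transport delay
for t_in, t_out in [(80.0, 60.0), (81.0, 60.5), (80.5, 60.2)]:
    t_in_sync = block.step(t_in)
    delta_t = t_in_sync - t_out       # synchronized temperature difference
    print(f"delta T = {delta_t:.1f}")
```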

Calculations can be developed by process and control engineers working together. Most notably, a future value block can provide the slope of a key batch profile to enable a decision on whether a batch cycle time should be extended to improve yield or reduced to increase capacity, assuming no downstream process limitation, as noted in the Control article, “Get the Most Out of Your Batch.” The same sort of calculation can be used to predict and detect compressor surge, as described in the Control feature, “Compressor surge control: Deeper understanding, simulation can eliminate instabilities.”
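A future value block can be approximated with a windowed slope and a projection; here is a sketch with made-up batch profile data:

```python
import numpy as np

# Future-value sketch: estimate the slope of a key batch profile (e.g.,
# product concentration) over a recent window and project it forward to
# decide whether extending the batch is worth the lost capacity.
t = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)        # hours
conc = np.array([0.0, 3.0, 5.2, 6.6, 7.4, 7.8, 8.0])    # illustrative g/L

window = 3                                   # use the last 3 samples
slope, intercept = np.polyfit(t[-window:], conc[-window:], 1)
horizon = 2.0                                # look 2 hours ahead
future_value = conc[-1] + slope * horizon

print(f"slope = {slope:.2f} g/L/h, projected = {future_value:.2f} g/L")
# If the slope falls below a threshold, end the batch to increase
# capacity instead of prolonging it for marginal yield.
```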

News from the front

Danaca Jordan, an original protégée of the ISA Mentor Program with more than eight years of plant experience in process control, offers the following perspective as she moves into her new IIoT role. “We are realizing greater data integration between business and production systems than ever before. By pulling information from sources other than just process instruments, like schedules, orders, raw materials, work history, etc., and combining it with process data, we are able to develop and put near real-time business-related metrics in front of chemical plant operators and front-line supervision. This is empowering them to control, optimize and make decisions on information that they previously would have only seen the results of in a monthly static report. This is requiring data scientists to learn new ways to handle time series data, and engineers to design and determine what metrics are actionable with data they couldn’t previously access.”

The “Process/Industrial Instruments and Controls Handbook, Sixth Edition” (2019), by Hunter Vegas and me, offers focused guidance on how to improve and measure process performance. A key insight is that the myriad of improvements can be categorized as increases in process efficiency, capacity, flexibility and safety. Increases in process efficiency show up primarily as decreases in the ratio of the running average of raw material mass or energy used to the running average of product mass produced.

Increases in process capacity show up as an increased running average of product mass produced. Process capacity increases can be the result of higher production rates, faster startups, better ability to deal with abnormal operation, and greater onstream time. In all cases, the product mass must meet customer specifications.

Flexibility shows up as the ability to meet different production rates or different product requirements. Safety shows up as minimizing activations of the safety instrumented system (SIS) besides the obvious metric of minimizing the number of incidents including near misses.
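The efficiency and capacity definitions above map directly to simple running-average calculations; the sketch below uses made-up hourly data and a hypothetical averaging window:

```python
import pandas as pd

# Process efficiency as the ratio of the running average of raw material
# (or energy) used to the running average of product mass produced, and
# capacity as the running average of product made. Data are illustrative.
df = pd.DataFrame({
    "raw_kg_per_h":     [120, 118, 125, 119, 117, 121],
    "product_kg_per_h": [100, 101,  98, 102, 103, 100],
})

window = 4  # samples in the running average; pick per shift/batch/month
raw_avg = df["raw_kg_per_h"].rolling(window).mean()
prod_avg = df["product_kg_per_h"].rolling(window).mean()

df["efficiency"] = raw_avg / prod_avg   # lower is better
df["capacity"] = prod_avg               # higher is better
print(df.tail())
```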

Measure progress

The period for metrics must be long enough to eliminate noise and inverse response, and to support decisions appropriate to the objective and process type. For evaluating operator and control system actions, the period is normally the batch cycle time for batch processes and the operator shift for continuous processes. The period is a month for correlation with accounting metrics. For alerting operators as fast as possible to the consequence of actions taken (e.g., changing a controller setpoint or mode), the period can be reduced to as short as six times the total loop dead time. The metrics at the end of a month, batch or shift are historized.

There is often a tradeoff between process metrics. Increasing production rate often comes at the cost of decreasing efficiency. Similarly, changing production rates reduces both process efficiency and capacity, since movement to the new process operating point takes time and the product produced in the transition may not meet specifications.

Increases in yield (decreases in raw material use) can be taken as an increase in process efficiency if the raw material feed rate is accordingly decreased. There may be an accompanying decrease in the cost of recycle and waste treatment operations. Alternatively, increases in yield can be taken as an increase in process capacity by keeping the raw material feed rate constant. Prolonging a batch can improve yield and thus efficiency, but lengthening of batch cycle time translates to less batch capacity, particularly as reaction rates or purification rates decline near endpoints.

Time to reach a new setpoint can be shortened by overdriving the manipulated variable past its final resting value. For processes on large volumes, such as distillation columns and reactors, the time reduction is critical. For batch processes, reaching a new composition, pH, pressure or temperature setpoint is often not possible without the overdrive. The process efficiency is reduced during the overdrive, but the process capacity is increased, either as a reduction in batch cycle time or an increase in continuous production rate upon reaching setpoint.

Especially important is the translation of online metrics to the bottom line effect on production unit profitability in the plant accounting system. This means that benefits must be reported on a monthly basis, and presented per accounting format and procedures. Obvious but often not addressed is the buy-in by the plant accounting department and plant management. This is best done by real-time accounting.

A digital twin as part of a virtual plant offers considerable knowledge, including experimentation. Online metrics can be developed that can be moved to the actual plant to provide indications of plant efficiency and capacity in dollars with intelligent time frames (e.g., shift, batch and accounting month). The digital twin offers flexible and fast exploring => discovering => prototyping => testing => justifying => deploying => testing => training => commissioning => maintaining => troubleshooting => auditing => continuous improvement, showing the “before” and “after” benefits of solutions from online metrics (as detailed in the Control feature, “Virtual plant virtuosity”). The potential synergy with IIoT is largely overlooked.

The proper use of IIoT should increase the performance of engineers and technicians, freeing them up to focus on a higher level of accomplishment. IIoT can result in an increase in job opportunities, as discussed in the Control Talk column, “The multiplier effect.” See the online version for a fuller view of the history, and of the future possibilities of online calculations and metrics.

To get much more on the importance of making the right decisions, see the Control Talk column series, “Drowning in Data; Starving for Information - 1, 2, 3, and 4.” We are at the IIoT crossroads. Let’s all work together to take the right road to a more intelligent future.

About the Author

Greg McMillan | Columnist

Greg K. McMillan captures the wisdom of talented leaders in process control and adds his perspective based on more than 50 years of experience, cartoons by Ted Williams and Top 10 lists.
