Key Highlights
- Contextualizing data through OPC UA and historian software enables better decision-making and early detection of process anomalies.
- Incorporating AI and machine learning enhances equipment insights, providing real-time recommendations and improving operational efficiency.
- Conducting proof of concept (PoC) tests ensures new technologies meet business needs before full deployment.
Just as data scientists try to organize what amounts to closets stuffed with mismanaged information, process-industry users must also identify what information belongs where. This is why both groups seek to clearly label, assign and segregate their data, so it’s easy to retrieve and use as needed.
The latest snag is that process information was long organized, managed and communicated according to the five- to seven-layer Purdue reference architecture and the ISA-95 standard for automation, networking and enterprise systems. However, increasing digitalization has been flattening that model, prompting revisions in how data is organized, and allowing other types of information to be included.
“Usually, process data is created at the sensor level, transferred to PLCs and safety systems, and relayed to HMIs for operators to use. It’s also sent to manufacturing execution systems (MES), where things are getting more convoluted as more new and different data inputs get added,” says Karthik Gopalakrishnan, solutions consultant for digital transformation at Yokogawa. “This includes time-series data, 90% of which comes from sensors and tags. However, users are also adding structured data sheets and software-based folders in electronic document management systems (EDMS) at the MES layer. These documents indicate what users should do to mitigate alarms coming from individual components like compressors, and better empower them to act to get their equipment into a safe and stable state.”
While input like time-series data (TSD) is traditionally organized vertically from sensors and tags to enterprise levels, Gopalakrishnan reports that EDMSs, enterprise resource planning (ERP) systems and warehouse management applications are integrated horizontally because they generally provide support across existing operations and networks. To combine vertical and horizontal data sources for better analytics, he recommends that users determine what’s driving their need for it.
“Is it mainly a business-level case or are they looking for an operations solution?” asks Gopalakrishnan. “Many suppliers get down to the I/O level, but not the equipment level. Yokogawa supports processes and devices that typically can’t have redundancy because it’s too costly and space-consuming, such as valves, pumps, compressors, chillers, and other heavy equipment that can still face catastrophic issues. Consequently, Yokogawa does predictive maintenance for these devices by using their TSD to model good behavior.”
For instance, for a typical compressor running during a particular time period, Yokogawa can compare its data snapshot to similar fingerprints from numerous previous compressors. These precedents include leading indicators of impending failures, and also enable comparisons of current operations to ideal working conditions. In either case, differences in behavior can be compared to prior situations to show whether the present compressor is running properly, and to help identify root causes, too.
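The fingerprint comparison described above can be sketched in a few lines of Python. This is a minimal illustration, not Yokogawa's actual method: the feature names, baseline values and the simple deviation score are all hypothetical, standing in for whatever richer models a real predictive-maintenance system would use.

```python
import statistics

# Hypothetical "fingerprints": summary statistics taken from windows of
# healthy-compressor time-series data (names and values are illustrative).
healthy_fingerprints = [
    {"mean_flow": 100.2, "std_flow": 1.1},
    {"mean_flow": 99.8,  "std_flow": 0.9},
    {"mean_flow": 100.5, "std_flow": 1.0},
]

def fingerprint(window):
    """Reduce a window of flow readings to the same summary features."""
    return {"mean_flow": statistics.mean(window),
            "std_flow": statistics.stdev(window)}

def deviation_score(current, baselines):
    """Distance of the current fingerprint from the average healthy one,
    in units of typical healthy spread. A large score suggests behavior
    unlike the healthy precedents and is worth a closer look."""
    base_mean = statistics.mean(b["mean_flow"] for b in baselines)
    base_spread = statistics.mean(b["std_flow"] for b in baselines)
    return abs(current["mean_flow"] - base_mean) / base_spread

# A current window that looks like the healthy precedents scores low.
current = fingerprint([100.1, 100.4, 99.9, 100.2, 100.0])
print(deviation_score(current, healthy_fingerprints))
```

In practice the fingerprint would include many more features per asset, but the idea is the same: reduce a time window to comparable features, then measure how far today's behavior sits from the healthy precedents.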
“Evaluating flow data around a compressor and modeling it can help inform users when it may experience an issue or failure in the next several months,” explains Gopalakrishnan. “This frees users from preventive maintenance that’s scheduled every six months, and reactive maintenance that just responds to issues, which are both inefficient and costly. Instead, they can now analyze accessible data, predict when problems are most likely to occur, and focus their resources on solving them, which is more efficient than simply following a schedule.”
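One simple way to turn "predict when problems are most likely to occur" into a number is to fit a trend to a slowly drifting indicator and extrapolate when it will cross a limit. The sketch below is a generic least-squares illustration under assumed data; the vibration readings and the 6.0 mm/s limit are hypothetical, and real tools would use far more sophisticated models.

```python
def fit_trend(times, values):
    """Ordinary least-squares slope and intercept for a degradation signal."""
    n = len(times)
    t_mean = sum(times) / n
    v_mean = sum(values) / n
    slope = (sum((t - t_mean) * (v - v_mean) for t, v in zip(times, values))
             / sum((t - t_mean) ** 2 for t in times))
    return slope, v_mean - slope * t_mean

def days_until_limit(times, values, limit):
    """Extrapolate the fitted trend to estimate days until the limit is hit."""
    slope, intercept = fit_trend(times, values)
    if slope <= 0:
        return None  # no upward drift, so no predicted crossing
    return (limit - intercept) / slope - times[-1]

# Weekly vibration readings (mm/s) drifting upward; limit is hypothetical.
days = [0, 7, 14, 21, 28]
vibration = [2.0, 2.2, 2.4, 2.6, 2.8]
print(days_until_limit(days, vibration, limit=6.0))
```

An estimate like this is what lets maintenance shift from a fixed six-month calendar to scheduling work in the window when a failure actually becomes likely.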
However, even if data is readily accessible, Gopalakrishnan reports it must be put into context before it can inform wiser decision-making. This can be done by sending it via OPC UA networking to historian software, such as Yokogawa’s Exaquantum, which generates time-series data and can integrate and coordinate non-historian sources, such as reliability and maintenance information, as well as manual or structured documentation about previous issues. Once several data sources have been contextualized and managed, they can be mapped to the user’s ideal behavior for how they want their process to perform, which can be used to create a model of how its devices need to operate.
This procedure can also be used to alert users when their process is exhibiting anomalies or otherwise not running as it should in conformance with the precedents in the model. These alerts can also include documentation about how to fix the initial, underlying problem as part of a failure mode and effects analysis (FMEA), which makes this data more structured because it lets users know what they need to do. Yokogawa also suggests that users take these alerts and instructions, and add their recommendations to them, so they can fine tune their process optimizations, and pass on their expertise and best practices to less-experienced personnel.
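The idea of pairing an anomaly alert with its documented fix can be sketched as a simple lookup from failure mode to FMEA mitigation. Everything here is illustrative: the tag name, failure modes and action text are hypothetical placeholders for a site's actual FMEA documentation.

```python
# Hypothetical FMEA-style lookup: failure mode -> documented mitigation.
fmea_actions = {
    "high_discharge_temp": "Check cooler fouling; verify coolant flow per procedure.",
    "surge": "Open recycle valve; reduce load until flow is above the surge line.",
}

def raise_alert(tag, failure_mode):
    """Pair an anomaly alert with its FMEA mitigation, so operators learn
    not just that something is wrong, but what to do about it."""
    action = fmea_actions.get(
        failure_mode, "No documented action; escalate to reliability team.")
    return f"ALERT {tag}: {failure_mode} -- {action}"

print(raise_alert("K-101", "surge"))
```

Letting users append their own recommendations to entries like these is how site expertise and best practices get captured for less-experienced personnel, as the article suggests.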
“Of course, artificial intelligence (AI) can help with all these efforts, and we’re looking into including AI and machine learning (ML) at multiple levels to help users determine how their processes should behave, and how to fix them more quickly,” adds Gopalakrishnan. “In fact, we’re starting to add extensions to Yokogawa’s OpreX Asset Health Insights software that can provide ML-based recommendations, and provide insights into equipment data more quickly. Users will be able to click and learn what to do for individual issues, just as if they were asking a person. It will also give them context and expertise beyond what’s presently available.
“Even so, my advice is don’t just adopt AI because it’s cool. Ask how it can meet business requirements and help solve problems. And, just as Yokogawa does with any new instrument or technology, conduct a proof of concept (PoC), so it can demonstrate and prove its capabilities.”
This is part eight of Control's September 2025 data analytics cover story.