Big data analytics to manage asset performance

Determine which assets warrant attention, then use analytics to decide when and how to intervene.

By Don Rozette

Maintaining the physical health of your assets can be a challenge, and it becomes even more challenging when your organization is in "cost-cutting mode" while still attempting to optimize performance and maintain safety. Problems such as mechanical failure, operator error or lack of oversight can cause unplanned downtime and impose significant costs on the organization. For instrumentation and process control professionals, it's often unclear how their day-to-day actions affect the bottom line.

To support larger business goals, process control professionals can adopt an asset performance management (APM) strategy that helps capture and analyze machine data to transform information into strategic actions from the facility floor to the corporate office.

Today, most organizations and their maintenance divisions are siloed. As a result, process control professionals may not be aware of maintenance efforts or issues being managed by other reliability engineers in the organization. If maintenance reports and best practices aren’t being shared openly, failures and fixes are repeated. APM initiatives allow institutions to break down organizational silos, enabling all teams to set up realistic operating targets, streamline maintenance and reliability efforts, and strengthen overall asset reliability.


A thorough APM program begins with collecting data. The challenge comes in understanding how to organize and prioritize that data. Every facility, plant, unit or organization has hundreds of thousands of instruments generating megabytes of data every minute, so process historians are more important than ever and must store massive amounts of information. Trying to perform advanced analytics on all of this data is nearly impossible and, more importantly, is not an efficient use of resources. The majority of the data being collected has little value and only communicates status information that doesn't affect the capabilities of the instrumentation. Process engineers should therefore consider which assets (the equipment and machines in the plant) to prioritize, so they can apply the appropriate level of data analytics and monitoring to each.
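The pre-filtering step above can be sketched in a few lines. This is a minimal illustration, not a historian API: the tag records, names and roles below are hypothetical, and a real program would pull this metadata from the historian or asset registry.

```python
# A minimal sketch of pre-filtering historian tags before running
# advanced analytics. The tag records and roles are hypothetical;
# real historians expose their own query interfaces.

tags = [
    {"name": "PT-101", "role": "control", "updates_per_min": 60},
    {"name": "ZS-204", "role": "status",  "updates_per_min": 1},
    {"name": "LT-305", "role": "safety",  "updates_per_min": 60},
    {"name": "XI-412", "role": "status",  "updates_per_min": 2},
]

# Status-only tags carry little analytic value; keep control and
# safety tags as candidates for closer monitoring under APM.
analytics_candidates = [t for t in tags if t["role"] != "status"]
```

The point is simply that the expensive analytics run on the short list, not on every tag the historian stores.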


For engineers who manage tens to hundreds of thousands of instruments across a variety of control platforms, the first step is identifying the criticality of the instruments and associated assets: the risks and costs of a particular asset or equipment failure, and how those compare to the other assets. It's common to manage 75% of instruments with a "run to fail" strategy; it's the remaining 25% that qualify for an APM initiative and need to be watched and analyzed more closely. Criticality can be measured on a smaller or larger scale depending on the size and complexity of the organization. For this example, assume there are three levels of criticality: low, medium and high. To help define and explain instrument criticality, consider Figure 1, which shows an "onion layer" view of a given process and the controls around it.
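One simple way to arrive at that 75/25 split is to score each asset by failure likelihood times consequence cost and keep the highest-risk quartile. The sketch below assumes illustrative probabilities and dollar figures; the asset names and numbers are invented for the example, not data from any real plant.

```python
# Hypothetical risk ranking: score = annual failure probability x
# consequence cost. The top quartile joins the APM watchlist; the
# rest are managed "run to fail". All figures are illustrative.

assets = {
    "P-101 feed pump":      (0.30, 500_000),
    "FV-220 control valve": (0.10, 150_000),
    "ZS-310 limit switch":  (0.05, 2_000),
    "PSV-415 relief valve": (0.02, 2_000_000),
}

# Rank assets by risk, highest first.
scored = sorted(assets, key=lambda a: assets[a][0] * assets[a][1], reverse=True)

cutoff = max(1, len(scored) // 4)   # roughly the top 25%
apm_watchlist = scored[:cutoff]     # watched and analyzed closely
run_to_fail = scored[cutoff:]       # the ~75% majority
```

Real programs weigh more than two factors (safety, environmental and production impact are usually scored separately), but the ranking principle is the same.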

A low-criticality instrument could be described as providing information only (no control or decision-making) and isn’t tied to the layers in the onion (e.g. no safety impact). For example, a low-criticality instrument could be a status indication of whether a pump was on or off, or whether a valve was open or closed. As noted above, these assets would be part of the 75% majority that operate on a “run to fail” strategy and don’t need to be monitored closely as part of an APM initiative.

A medium-criticality instrument could be described as part of a control or decision-making group of instruments. For example, consider a vessel that should operate at 250 psig. The control loop that maintains this pressure has three parts: a pressure sensor, a control algorithm and a final control element such as a valve. Each of the three elements in this loop could be classified as medium criticality. In Figure 1, a medium-criticality instrument would sit in the basic process control layer or the alarms/operator-intervention layer.

Finally, a high-criticality instrument could be described as one whose failure to function would either cause a hazardous condition or fail to prevent one. In Figure 1, a high-criticality instrument would be part of a safety instrumented system (SIS) or a physical relief layer.
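The three rules above reduce to a short decision function. The instrument attributes below (`layer`, `in_control_loop`, `alarmed`) are hypothetical field names chosen for the sketch; a real implementation would map them from the asset registry.

```python
# A sketch of the three-level classification described above.
# Attribute names are illustrative assumptions, not a standard schema.

def classify(instrument: dict) -> str:
    """Assign low/medium/high criticality per the onion-layer rules."""
    # Safety layers (SIS, physical relief) are always high criticality.
    if instrument.get("layer") in ("SIS", "relief"):
        return "high"
    # Control-loop members and alarmed points that drive operator
    # intervention are medium criticality.
    if instrument.get("in_control_loop") or instrument.get("alarmed"):
        return "medium"
    # Information-only points (e.g. pump on/off status) are low.
    return "low"

print(classify({"tag": "ZS-204"}))                           # low
print(classify({"tag": "PT-101", "in_control_loop": True}))  # medium
print(classify({"tag": "PT-900", "layer": "SIS"}))           # high
```

Encoding the rules this way makes the classification repeatable across sites instead of living in one engineer's judgment.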

Once an organization has appropriately classified its assets and prioritized its data, the question remains: what comes next? One thing many organizations are not doing today is proactively analyzing historical data to help avoid past failures. Collecting and monitoring data in the short term allows process engineers to identify failures when they occur, but does not help predict or prevent repeat failures. APM lets users model the critical elements of the process, along with the interactions between variables and the risks or previous failures that are missed by the typical communication-network approach used to convey health in the process control system.
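Even a basic pass over historical maintenance records can surface the repeat failures this paragraph describes. The work-order records and failure codes below are invented for illustration; the technique is just counting recurring (asset, failure-mode) pairs.

```python
# A minimal sketch of mining historical work orders for repeat
# failures -- the proactive analysis the article argues most
# organizations skip. Records and failure modes are hypothetical.

from collections import Counter

work_orders = [
    ("P-101",  "bearing wear"),
    ("P-101",  "bearing wear"),
    ("FV-220", "stiction"),
    ("P-101",  "bearing wear"),
    ("FV-220", "positioner fault"),
]

# Any (asset, failure-mode) pair seen more than once is a repeat
# failure worth a predictive model or a revised maintenance strategy.
repeats = {k: n for k, n in Counter(work_orders).items() if n > 1}
```

In practice the same counting logic runs over years of CMMS history, and the repeat offenders become the first candidates for predictive analytics.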

APM is key in breaking down organizational silos by simply and accurately depicting the health of the assets being managed. From a control system perspective, the process control professional can measure things like number of alarms, alarm rates, number or percentage of controls in "normal" mode, etc. What the control system can't easily do is translate these measurements into actionable information. For example, if five alarms currently exist and 9% of the controls aren't operating in their normal modes, what should be worked on first? APM makes this type of decision easier because it allows the user to see the information from the perspective of the risks being mitigated and the assets being protected. The asset health view also offers a more comprehensive, holistic picture of the asset than the control system does. Breaking down the silos can be done at the intersection of the asset, the strategy used to manage the asset, and the data describing how the asset is operated today as well as in the past.
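Turning those raw measurements into a "work on this first" answer amounts to combining them into a single health score and ranking. The weights and unit data below are illustrative assumptions for the sketch, not a standard APM formula.

```python
# Hedged sketch of rolling control-system measurements into one
# actionable ranking. Weights and unit data are illustrative only.

def health_score(active_alarms: int, pct_controls_normal: float) -> float:
    """Higher score = worse health = higher work priority."""
    # Penalize standing alarms and controls out of normal mode;
    # the weights here are arbitrary example values.
    return 10 * active_alarms + (100 - pct_controls_normal)

units = {
    "Unit A": health_score(active_alarms=5, pct_controls_normal=91.0),
    "Unit B": health_score(active_alarms=2, pct_controls_normal=70.0),
}

# Rank worst-first to answer "what should be worked on first?"
work_queue = sorted(units, key=units.get, reverse=True)
```

An APM tool does the same aggregation with far richer inputs (vibration, process deviation, maintenance history), but the value is identical: one prioritized queue instead of disconnected metrics.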

Ultimately, companies that standardize processes and proactively leverage historical data will be able to more accurately predict asset failure, minimizing downtime and maximizing productivity. As the Industrial Internet of Things (IIoT) continues to accelerate and emerging technologies become more readily available, asset-intensive organizations need to disrupt the status quo to transform both their operations and their culture around data analytics and reliability.
