Big data analytics helps improve asset performance management

Sept. 30, 2016
Companies that standardize processes and leverage historical data can predict asset failure more accurately, minimizing downtime and maximizing productivity.

Maintaining the physical health of your assets can be a challenge, and it becomes even more challenging when your organization is in “cost-cutting mode” while still attempting to optimize performance and maintain safety. Mechanical failure, operator error or lack of oversight can cause unplanned downtime and significant costs to the organization. For instrumentation and process control professionals, it’s often unclear how their day-to-day actions affect the bottom line.

To support larger business goals, process control professionals can adopt an asset performance management (APM) strategy that helps capture and analyze machine data to transform information into strategic actions from the facility floor to the corporate office. 

Today, most organizations and their maintenance divisions are siloed. As a result, process control professionals may not be aware of maintenance efforts or issues being managed by other reliability engineers in the organization. If maintenance reports and best practices aren’t shared openly, failures and fixes are repeated. APM initiatives allow institutions to break down organizational silos, enabling all teams to set realistic operating targets, streamline maintenance and reliability efforts and strengthen overall asset reliability.

A thorough APM program begins with the collection of data; the challenge is understanding how to organize and prioritize it. Every facility, plant, unit or organization has hundreds of thousands of instruments generating megabytes of data every minute, so process historians are more important than ever and must store massive amounts of data. Trying to perform advanced analytics on all of this data is nearly impossible and, more importantly, is not an efficient use of resources. The majority of the data being collected has little value, serving only to communicate status information that does not negatively impact the capabilities of the instrumentation. Process engineers should therefore consider which assets – the equipment and machines in the plant – to prioritize, so they can apply the appropriate level of data analytics and monitoring to each asset.
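To make this concrete, the short sketch below shows one way to narrow a historian’s tag list down to the assets that warrant advanced analytics once a criticality level has been assigned to each tag (criticality is discussed in the next section). It is purely illustrative and not tied to any particular historian product; the tag names and the criticality map are assumptions for the example.

```python
# Illustrative sketch: keep only the historian tags whose asset criticality
# meets a threshold, so advanced analytics is applied where it pays off.
# Tag names and the criticality map below are hypothetical.

from typing import Dict, List

def select_tags_for_analytics(criticality_by_tag: Dict[str, str],
                              minimum: str = "Medium") -> List[str]:
    """Return only the tags whose asset criticality meets the threshold."""
    rank = {"Low": 0, "Medium": 1, "High": 2}
    threshold = rank[minimum]
    return [tag for tag, level in criticality_by_tag.items()
            if rank.get(level, 0) >= threshold]

if __name__ == "__main__":
    tags = {
        "PI-101.PV": "High",     # vessel pressure transmitter
        "XV-204.STATUS": "Low",  # valve open/closed indication
        "TIC-330.PV": "Medium",  # temperature control loop measurement
    }
    print(select_tags_for_analytics(tags))  # ['PI-101.PV', 'TIC-330.PV']
```

Running advanced analytics only on the returned tags keeps the effort focused on the minority of instruments that actually carry risk.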

For engineers who manage tens or hundreds of thousands of instruments across a variety of control platforms, the first step is identifying the criticality of the instruments and associated assets: what are the risks and costs of a particular asset or equipment failure, and how do they compare to those of other assets? It is common to manage 75 percent of instruments with a “run-to-fail” strategy; it is the remaining 25 percent, the ones that qualify for an APM initiative, that need to be watched and analyzed more closely. Criticality can be measured on a smaller or larger scale depending on the size and complexity of the organization. For this example, assume there are three levels of criticality: low, medium and high. To help define and explain instrument criticality, consider the diagram below, which provides an “onion layer” view of a given process and the controls around that process.

[Diagram: “onion layer” view of a process and its layers of protection – basic process control, alarms/operator intervention, safety instrumented system, physical relief]

A low criticality instrument could be described as providing information only (no control or decision making) and is not tied to the layers in the onion (e.g., it has no safety impact). For example, a low criticality instrument could be a status indication of whether a pump is on or off, or whether a valve is open or closed. As noted above, these assets would be part of the 75 percent majority that operate on a “run-to-fail” strategy and do not need to be monitored closely as part of an APM initiative.

A medium criticality instrument could be described as part of a control or decision-making group of instruments. For example, take a vessel that should be operated at 250 psig: the control that ensures this has three parts – a pressure sensor, a control algorithm and a control element such as a valve. Each of the three elements in this “control loop” could be classified as medium criticality. In the diagram above, a medium criticality instrument would fall in the “basic process control” or “alarms/operator intervention” layers.
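As an illustration of the three-part loop just described, the sketch below implements a toy proportional controller for the 250 psig vessel example. The gain, the sensor model and the valve interface are simplified assumptions for the sake of the example, not a real control platform.

```python
# Toy version of the sensor / control algorithm / control element loop
# described above. The 250 psig setpoint comes from the article text;
# everything else is a simplified assumption.

SETPOINT_PSIG = 250.0
GAIN = 0.8  # proportional gain, arbitrary for illustration

def read_pressure_sensor(true_pressure: float) -> float:
    """Sensor: report the measured vessel pressure."""
    return true_pressure

def control_algorithm(measurement: float) -> float:
    """Controller: compute a valve position (0-100%) from the error."""
    error = measurement - SETPOINT_PSIG
    valve_pct = 50.0 + GAIN * error          # open further when pressure is high
    return max(0.0, min(100.0, valve_pct))   # clamp to valid valve travel

def drive_control_valve(valve_pct: float) -> None:
    """Control element: send the output to the valve."""
    print(f"Valve command: {valve_pct:.1f}% open")

if __name__ == "__main__":
    measurement = read_pressure_sensor(262.0)   # vessel running a bit high
    drive_control_valve(control_algorithm(measurement))
```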

Finally, a high criticality instrument could be described as one whose failure to function would either cause a hazardous condition or fail to prevent one. In the diagram above, a high criticality instrument would fall in the “safety instrumented system” or “physical relief” layers.
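The three definitions above amount to a simple classification rule. The sketch below encodes it; the attribute names (safety layer, control loop, alarm) are assumptions chosen for illustration rather than fields from any particular asset register.

```python
# Sketch of the low/medium/high criticality rules described above.
# Attribute names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Instrument:
    tag: str
    in_safety_layer: bool   # safety instrumented system or physical relief
    in_control_loop: bool   # sensor, controller or control element in a loop
    drives_alarm: bool      # triggers alarms / operator intervention

def criticality(inst: Instrument) -> str:
    if inst.in_safety_layer:
        return "High"       # failure causes, or fails to prevent, a hazard
    if inst.in_control_loop or inst.drives_alarm:
        return "Medium"     # part of a control or decision-making group
    return "Low"            # information only; run-to-fail is acceptable

if __name__ == "__main__":
    print(criticality(Instrument("XV-204.STATUS", False, False, False)))  # Low
    print(criticality(Instrument("PT-101", False, True, False)))          # Medium
    print(criticality(Instrument("PSV-110", True, False, False)))         # High
```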

Once an organization has appropriately classified its assets and prioritized its data, the question remains: what comes next? One thing many organizations are not doing today is proactively analyzing historical data to help avoid repeating past failures. Collecting and monitoring data in the short term allows process engineers to identify failures when they occur, but it does not help predict or prevent repeat failures. APM allows users to model the critical elements of the process and the interactions between variables, risks and previous failures, interactions that are missed by the typical communication-network approach used to convey instrument health in the process control system.
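As one hedged example of what proactively analyzing historical data might look like in its simplest form, the sketch below learns a warning threshold from the readings recorded just before past failures and then flags current behavior that resembles that pre-failure pattern. The vibration values, failure indexes and window size are invented for illustration; a production APM model would be far richer.

```python
# Illustrative sketch: learn a pre-failure signature from historical data,
# then flag current readings that look similar. Data values are invented.

from statistics import mean
from typing import List

def learn_warning_threshold(history: List[float],
                            failure_indexes: List[int],
                            window: int = 3) -> float:
    """Average the readings seen in the `window` samples before each past failure."""
    pre_failure = []
    for idx in failure_indexes:
        pre_failure.extend(history[max(0, idx - window):idx])
    return mean(pre_failure)

def at_risk(recent: List[float], threshold: float) -> bool:
    """Flag the asset when recent readings approach the learned pre-failure level."""
    return mean(recent) >= threshold

if __name__ == "__main__":
    vibration = [2.1, 2.0, 2.3, 3.8, 4.6, 5.1, 6.0,   # rises, then fails
                 2.2, 2.1, 2.4, 3.9, 4.8, 5.9]        # repaired, rises, fails again
    failures = [6, 12]                                # samples where failures occurred
    threshold = learn_warning_threshold(vibration, failures)
    print(round(threshold, 2))                        # learned pre-failure level
    print(at_risk([4.4, 4.9, 5.0], threshold))        # True: resembles past failures
```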

APM is key to breaking down organizational silos because it simply and accurately depicts the health of the assets being managed. From a control system perspective, the process control professional can measure things like the number of alarms, alarm rates, and the number or percentage of controls that are in their “normal” mode. What the control system cannot easily do is translate these measurements into actionable information. For example, if five alarms are currently active and 9 percent of the controls are not operating in their normal modes, what should be worked on first? APM makes this type of decision making easier because it allows the user to see the information from the perspective of the risks being mitigated and the assets being protected. Additionally, the asset health view contains a more comprehensive, holistic view of the asset than the control system does. Breaking down the silos happens at the intersection of the asset, the strategy used to manage the asset and the data describing how the asset is operated today as well as how it was operated in the past.
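One illustrative way to turn those raw control-system measurements into a risk-ranked work list is sketched below; the criticality weights and the scoring formula are assumptions for the example, not a prescribed APM calculation.

```python
# Sketch: weight raw control-system measurements by asset criticality to
# produce a ranked work list. Weights and scoring are illustrative assumptions.

from typing import Dict, List

CRITICALITY_WEIGHT = {"Low": 1, "Medium": 3, "High": 10}

def rank_actions(assets: List[Dict]) -> List[Dict]:
    """Order assets by a simple risk score: criticality weight x health deviation."""
    for a in assets:
        deviation = a["active_alarms"] + a["pct_loops_not_normal"] / 10.0
        a["score"] = CRITICALITY_WEIGHT[a["criticality"]] * deviation
    return sorted(assets, key=lambda a: a["score"], reverse=True)

if __name__ == "__main__":
    work_list = rank_actions([
        {"asset": "Feed pump P-12",  "criticality": "Medium",
         "active_alarms": 3, "pct_loops_not_normal": 9},
        {"asset": "Reactor R-1 SIS", "criticality": "High",
         "active_alarms": 1, "pct_loops_not_normal": 0},
        {"asset": "Utility header",  "criticality": "Low",
         "active_alarms": 5, "pct_loops_not_normal": 20},
    ])
    for item in work_list:
        print(f'{item["asset"]}: score {item["score"]:.1f}')
```

Note that the low criticality asset with the most alarms ranks last, which is the point: alarm counts alone do not convey risk until they are viewed through the assets being protected.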

Ultimately, companies that standardize processes and proactively leverage historical data will be able to predict asset failure more accurately, minimizing downtime and maximizing productivity. As the Industrial Internet of Things (IIoT) continues to accelerate and emerging technologies become more readily available, asset-intensive organizations need to disrupt the status quo to transform both their operations and their culture around data analytics and reliability.
