[Illustration: computer-generated big data icons over a process facility]

Can big data solve the energy crisis?

Oct. 6, 2022
Avoiding or foreseeing calamities that can shut down a plant is one of the promises of big data analysis

“Show me the suction pressure just before the compressor tripped,” said the refinery’s East Side process superintendent. Even in the blast-fortified administration building, the rumble of the flare could be heard and felt as operations prepared the somewhat routine lockout/tagout of the coker compressor for repairs.

The unit engineer clicked her mouse in search of the instrument tags of interest, overlaying trends of the last four times the compressor tripped. While one trip was clearly caused by a false high-level indication in the suction drum, her trends revealed scarcely anything interesting preceding the more recent events. The staff had several decades (gigabytes) of process history at their disposal. But sadly, sampling intervals were too infrequent to reveal much in the seconds leading up to the trip. Data compression algorithms created in the last century and scarce hard drive space also interfered with the investigation.
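
As a minimal sketch (not the refinery’s actual historian configuration, and with made-up numbers), here is how an exception or deadband filter combined with a slow scan rate can erase the seconds that matter:

```python
# Illustrative only: a deadband (exception) filter plus a slow scan rate can
# erase the seconds before a compressor trip. All values here are made up.

def deadband_compress(points, deadband):
    """Archive a point only when it moves more than `deadband` from the last archived value."""
    archived = [points[0]]
    for t, v in points[1:]:
        if abs(v - archived[-1][1]) > deadband:
            archived.append((t, v))
    return archived

# Ten minutes of suction pressure at one-second resolution: steady at ~32 psig,
# with a three-second dip just before the trip.
raw = [(t, 32.0) for t in range(600)]
raw[597:600] = [(597, 30.5), (598, 28.0), (599, 25.0)]  # the transient of interest

scanned = raw[::10]                             # a 10-second scan misses the dip entirely
archived = deadband_compress(scanned, deadband=0.5)
print(len(raw), "raw points ->", len(archived), "archived point(s)")
# The archived trend is a flat line; the pre-trip excursion never reached the data lake.
```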

Was there any movement in bearing temperatures prior to the shutdown? Never mind; the trends were flatlined thanks to the “ancient” (decades-old) tuning of the compression algorithm. Since the historian was now managed by an IT organization eight time zones away, getting a process-savvy individual involved in any thoughtful adjustment of sampling and compression settings was a significant challenge.

Politicians are bludgeoning the industry with statements such as, “We need more refining capacity!” One way to maximize capacity is not extremely esoteric: stay running. Meanwhile, avoiding or foreseeing calamities that can shut down a plant is one of the promises of “big data” analysis, and a refinery or any process plant with optimization and reliability investments spanning decades has huge data lakes. To our dismay, data acquisition that was once configured to track the relatively slow changes in massive vessels, reactors and distillation columns under steady-state operations is most likely incapable of capturing some of the variables of interest that might cause an unplanned outage.

We might find the machinery monitoring tools focused on rotating equipment reliability have faster sample times, but that data lake isn’t connected to the process history lake. Try using OPC, famously renamed “oh, please communicate” by end users, and you may find that the recent fortification of DCOM in Microsoft operating systems has broken it. The fortunate may find someone capable of implementing the tweaks and hacks (registry editing) that restore connections, but then how long until it breaks again? Yet another potential treasure trove of reliability data is the device (instrument) health monitoring system. One instrument reliability specialist simply wanted a list of all the now-obsolete smart valve positioners on that side of the refinery, but extracting it in a form another database could consume was frustrating, even for individuals savvy enough to manipulate XML or PRN files in Excel.
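
As a sketch of the kind of extraction that specialist was after, suppose (hypothetically) the asset-management system exports an XML file whose device elements carry tag, model and firmware attributes; a few lines can flatten that into a CSV another database will accept:

```python
# Hypothetical sketch: flatten a device-health XML export into a CSV that
# another database can consume. The file, element and attribute names
# ("device", "tag", "model", "firmware") are assumptions, not a vendor schema.
import csv
import xml.etree.ElementTree as ET

OBSOLETE_MODELS = {"ModelA", "ModelB"}      # placeholder positioner models

tree = ET.parse("positioner_export.xml")    # hypothetical export from the asset system
with open("obsolete_positioners.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["tag", "model", "firmware"])
    for device in tree.getroot().iter("device"):
        model = device.get("model", "")
        if model in OBSOLETE_MODELS:
            writer.writerow([device.get("tag", ""), model, device.get("firmware", "")])
```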

Meanwhile, the machinery reliability folks like to tell operations, “You’re maxing out our equipment, that’s why it’s breaking down.” But if the turbine says it can sustain 10,000 rpm, we need it to run as promised. The market is sold out and aching for increased supply. How hard can I push the kit before breakdowns outweigh the increased production? Maybe big data could provide some answers, if only the data lakes were joined.
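
Joining the lakes can be as unglamorous as lining up two time series on their timestamps. A minimal pandas sketch, assuming the historian and the machinery monitor each export a CSV with a timestamp column (the file and column names here are hypothetical):

```python
# Hypothetical sketch: align process-historian and machinery-monitor exports on
# time so one trend can show both. File and column names are assumptions.
import pandas as pd

process = pd.read_csv("historian_suction_pressure.csv", parse_dates=["timestamp"])
machine = pd.read_csv("machinery_monitor_vibration.csv", parse_dates=["timestamp"])

# merge_asof pairs each process sample with the nearest earlier machinery sample,
# tolerating the two systems' different scan rates.
joined = pd.merge_asof(
    process.sort_values("timestamp"),
    machine.sort_values("timestamp"),
    on="timestamp",
    direction="backward",
    tolerance=pd.Timedelta("5s"),
)
print(joined.head())
```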

Insight into machinery health and unanticipated process interruptions could point to an optimum operating point, but it’s only one aspect of plant availability. Ambient conditions have an impact when cooling water isn’t cool, or an unanticipated freeze reveals failed heat tracing or steam traps. An investment in electric heat trace monitoring can be useful, but the data lives on its own server.

Despite the challenges, our peers are joining these lakes and, one way or another, visualizing and analyzing them with the goal of improving plant availability. Any near-term increase in refining capacity can only come from minimizing downtime and rate cuts, along with optimization derived from controls and process professionals dredging the lakes of big data.

About the Author

John Rezabek | Contributing Editor

John Rezabek is a contributing editor to Control.
