British Columbia refinery succeeds with Seeq's self-service analytics
Everyone can feel their own pulse, but it takes a cardiologist to interpret all the data the beats contain, diagnose whatever physiological conditions are present, and prescribe what to do about them. Likewise, many process industry players are calling in data scientists to find deeper details in their process signals, and adopting analytics software to streamline the extraction of useful intelligence that can improve decisions and add value.
For instance, Parkland Corp.’s Burnaby refinery near Vancouver, B.C., Canada, recently launched a self-service analytics program using Seeq’s data analytics software. Its engineers have been using the software for about a year and a half for daily monitoring and incident investigation of its main air blower (MAB), sitewide inferential modeling, boilers, wet gas compressor and flare stack. The software helps their colleagues save time, offloads low-value data-cleaning tasks, and reduces the organizational friction that analytics efforts can cause.
The 88-year-old refinery produces about 55,000 barrels per day, and meets about 25% of the province’s gasoline and diesel needs. It’s also working on expanding its range of bio-feedstocks such as tallow and canola oil to reduce its greenhouse gas (GHG) emissions and carbon footprint.
Filtering datasets
Siang Lim, data scientist at Parkland, reports that Burnaby’s first self-service analytics project was conditional filtering of large process datasets. Lim and Sarah Mahmoud, process engineer at Parkland, presented “Self-service analytics for processing hydrocarbons” at Seeq’s 2022 Conneqt event in Austin, Tex.
“We’re trying to find and count the blips in our flow measurements across multiple years. It sounds easy, but this is a surprisingly time-consuming and tedious process to do in Excel,” says Lim. “The solution was to use Seeq’s Value Search and Chain View functions to apply the right filters and hide irrelevant data to make it easier to gather the results we needed. By doing this in Seeq, we managed to complete the analysis in an hour, while previous attempts took more than 40 hours. The time saved also allowed us to do higher-value work on improving the quality of our investigation.”
Lim reports the refinery’s engineering team needed to know more about the blips and anomalies in its flow measurements because an earlier hazard and operability (HazOp) study found risks due to the MAB’s low-flow trip point being set too low. The engineers recommended a higher setpoint and tried to raise it. However, their operations colleagues pushed back because they’d already observed random dips in flow measurements, and were concerned that a higher setpoint could cause spurious trips in the unit or cause a process upset.
“Was that true? We didn’t know, so we had to do due diligence to find out how often these flow anomalies were happening before making further changes,” says Lim. “We wanted to find times where our feed rate was above a certain value, indicating the unit was running normally, and when airflow was below a certain value, indicating those flow anomalies.”
To find dips that might only last a few seconds and only occur every few weeks or months over five or 10 years, Lim states his team tried to use Excel, but it was too slow to apply its formulas and filters over millions of rows in such huge files. He adds the team sometimes found it difficult to build Aveva (formerly OSIsoft) PI System queries because they weren’t familiar with some of the query language, SQL or Python scripts. Plus, the initial spreadsheet attempt took 40 hours due to historian setting adjustments, questions about the data and resolution, and hitting Excel’s row limit, which required creating multiple files.
Fortunately, while conducting a trial with Seeq, Lim adds it was easy for the team to define composite conditions they needed, and adjust their investigation range to however many months or years they needed. This avoided the need to tweak historian retrieval settings, run if-else formulas in Excel, and filter millions of rows and multiple files—and enabled the conditional filtering attempt to succeed in just one hour.
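Seeq’s GUI does this work interactively, but the underlying logic is easy to picture. The pandas sketch below is purely a hypothetical illustration: the tag names, thresholds and injected dips are invented, not Parkland’s data. It applies the same kind of composite condition Lim describes (feed rate above a threshold and airflow below one), then groups consecutive matching samples into discrete dip events:

```python
import numpy as np
import pandas as pd

# Illustrative stand-in for a historian export; all tags, thresholds
# and values are hypothetical, not the refinery's actual data.
np.random.seed(0)
rng = pd.date_range("2020-01-01", periods=100_000, freq="10s")
df = pd.DataFrame({
    "feed_rate": np.random.normal(100, 2, len(rng)),
    "airflow": np.random.normal(50, 1, len(rng)),
}, index=rng)
# Inject two short airflow dips so there is something to find.
df.iloc[1_000:1_003, df.columns.get_loc("airflow")] = 30.0
df.iloc[50_000:50_002, df.columns.get_loc("airflow")] = 28.0

# Composite condition: unit running normally AND airflow anomalously low.
anomaly = (df["feed_rate"] > 90) & (df["airflow"] < 40)

# Group consecutive anomalous samples into discrete dip events,
# roughly what Seeq's Value Search returns as capsules.
event_id = (anomaly != anomaly.shift()).cumsum()
dips = df[anomaly].groupby(event_id[anomaly])
print(dips.ngroups)  # number of distinct dip events
for _, g in dips:
    print(g.index[0], g.index[-1], g["airflow"].min())
```

The point is not the code itself, but that a point-and-click Value Search replaces exactly this kind of scripting, which the team noted it wasn’t always comfortable writing.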
“With Seeq, we rapidly and correctly identified all time periods with anomalies by using Chain View to hide the irrelevant datasets, and its analysis and results gave us the confidence to proceed with setpoint changes,” says Lim. “This is important because a spurious trip would have cost about $1 million per day in downtime, while not proceeding with changes would’ve resulted in unacceptable safety risks.”
Inferential performance assessment
The Burnaby refinery’s second self-service analytics project was assessing inferential models—also known as soft sensors—which estimate process variables without online analyzers or measurements. Model performance must be monitored to catch equipment or process changes, but it’s difficult to align the timestamps of predictions with those of lab values.
In this case, Seeq’s Capsules function easily realigns timestamps for all samples and automatically calculates prediction errors. This avoids manually aligning sample timestamps, which is also very tedious in Excel. The team also used Seeq’s Asset Trees and Asset Scripts to scale its calculations to multiple inferential models.
Lim reports the simplest way to measure model performance is the residual method, which uses the absolute difference between predictions and lab results, known as the absolute error or residual.
“This sounds simple, but it’s hard because of the data cleaning part. It may take several hours to get lab results, and then what if you have to do it 10 or 100 times or for the whole year? This is where it becomes tedious,” says Lim. “Plus, how do you know the time shift? We have an indicator function in the DCS, so when we take a sample, the function goes from zero to one, and goes back when we get results. We convert that indicator function into Seeq’s Capsules, use its properties to score those values, and align predictions with lab values. We display raw data in three lanes, including lab samples, predictions from the model and indicator functions.” (Figure 1)
Once lab and prediction signals are converted to scalars bounded by the indicator function’s capsules, Lim reports that differences in predictions versus lab results are easy to calculate as a Seeq Formula. This aligns the predictions and lab results, and presents them on a cleaner, two-line “after-realigning” graph. This means the lab and prediction data is clean and ready for residual calculations, which Seeq displays as a simple, one-line bar graph.
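The capsule-based alignment Lim describes can be sketched in pandas as a rough, hypothetical illustration. All timestamps and values below are invented: a 0/1 indicator rises when a sample is drawn and falls when the lab result arrives, contiguous runs of 1s become “capsules,” and each capsule pairs the prediction at sampling time with the lab value that lands when the indicator drops:

```python
import pandas as pd

# Hypothetical data: a 0/1 DCS-style indicator marks the window between
# a lab sample being drawn and its result coming back.
idx = pd.date_range("2023-01-01", periods=12, freq="h")
indicator = pd.Series([0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0], index=idx)
prediction = pd.Series(range(10, 22), index=idx, dtype=float)  # soft sensor
lab_results = {idx[4]: 13.5, idx[9]: 17.2}  # keyed by result-arrival time

# Build "capsules": contiguous runs where the indicator equals 1.
run_id = (indicator != indicator.shift()).cumsum()
active = indicator == 1
capsules = [g.index for _, g in indicator[active].groupby(run_id[active])]

# For each capsule, pair the prediction at sampling time with the lab
# value that arrives when the indicator drops, then take the residual.
step = idx[1] - idx[0]
residuals = {}
for cap in capsules:
    sample_time, result_time = cap[0], cap[-1] + step
    residuals[sample_time] = abs(lab_results[result_time]
                                 - prediction.loc[sample_time])
print(residuals)
```

Doing this manually per sample in Excel is the “several hours” Lim mentions; the capsule properties make the realignment automatic.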
“The results of doing inferential calculations in Seeq are that we avoided manually aligning timestamps in Excel by leveraging capsule properties,” adds Lim. “We also used these calculations as a template. This makes it easy to scale them to other inferentials by defining them as Asset Trees groups. This was a huge win for us to do these calculations in a more automated way.
“Simple tasks can be tedious without the right tools. Depending on the problem they’re trying to solve, users often don’t need or want machine learning or advanced algorithms. They just need the ability to quickly access, filter and clean their data. Seeq makes this easy, which empowers our engineers to quickly get the results they need.”
Three more case studies
Mahmoud reports that the Burnaby refinery’s process engineering group also uses Seeq’s software to help with its day-to-day operations.
For example, the refinery’s boilers produce its high-pressure steam, burning sweet fuel gas from more than 16 sources in the plant as well as purchased natural gas. However, when the engineering group was seeking the cause of a flooded boiler, it was time-consuming to use historians and Excel to pull data from all those sources for the investigation.
Consequently, they used Seeq to extract and trend large amounts of data faster, and employed its Value Search function to identify parameters outside normal operating ranges. “This investigation took a day instead of a week, and revealed the trends and events before the flood, which indicated that one of the larger fuel gas producers had flooded the boiler,” says Mahmoud. “This input also led to improved start-up procedures to prevent similar incidents in the future.”
Similarly, the refinery has a wet-gas compressor driven by a steam turbine, but the steam flowmeter had been broken for about two years despite several repair attempts. This situation was bearable because operators could simply watch the control valve opening for directional changes, but recent efforts to reduce steam consumption required a better baseline. So, the engineering group used Seeq’s XY Plot and Model & Predict functions to build a correlation between the turbine governor valve position and steam flow using data from 2016-18.
“This correlation was used for a steam reduction test run and daily, real-time monitoring of approximate steam consumption using the live governor position,” says Mahmoud. “We created the plot in about five minutes, and found the relationship had an R² of 0.973, so we could be very confident in this correlation. Rather than generating coefficients in Excel, submitting an IT ticket to PI for a calculation tag, and monitoring on PI/XHQ, we used Seeq to easily create a live-monitoring tool and calculation value in its energy-tracking dashboard. Now when we do test runs, we can see a result and directly relate it to a cost savings. We also got these results in one day instead of weeks, and saved an IT resource and a PI Tag slot.”
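The correlation itself is an ordinary least-squares fit. As a hypothetical sketch only (the real 2016-18 data isn’t published, so the valve and steam values below are invented), fitting governor position against steam flow and computing R² looks like this:

```python
import numpy as np

# Invented valve-position (%) and steam-flow (t/h) pairs standing in
# for the 2016-18 historical data while the flowmeter still worked.
valve = np.array([20, 30, 40, 50, 60, 70, 80], dtype=float)
steam = np.array([11, 16, 20, 26, 30, 36, 40], dtype=float)

# Ordinary least-squares line, the kind of fit an XY-plot tool produces.
slope, intercept = np.polyfit(valve, steam, 1)

# Coefficient of determination (R²) for the fit.
predicted = slope * valve + intercept
ss_res = float(np.sum((steam - predicted) ** 2))
ss_tot = float(np.sum((steam - steam.mean()) ** 2))
r_squared = 1.0 - ss_res / ss_tot

def steam_estimate(valve_pct: float) -> float:
    """Soft estimate of steam flow from the live governor position."""
    return slope * valve_pct + intercept

print(round(r_squared, 3), round(steam_estimate(55.0), 1))
```

Once the coefficients exist, the live governor signal plugged into `steam_estimate` gives the soft steam-flow reading Mahmoud describes, without a new historian calculation tag.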
Finally, the engineering group wanted to determine if the refinery was flaring above its design purge requirements, wasting energy and increasing greenhouse gas (GHG) emissions, but it needed a baseline to establish targets. So, the group used Seeq’s Value Search function to find times when all related equipment was on and operating at high rates, and used Seeq’s Signal from Condition function to find average baseline flows, minimal actual rates and optimal flaring periods.
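As a hypothetical illustration of that baseline logic, the sketch below mimics a Value Search (keep only periods when the unit runs at high rates) followed by a Signal from Condition (statistics over those periods). The signals, units and thresholds are invented, not the refinery’s figures:

```python
import pandas as pd

# Invented daily flare-flow and unit-rate signals for illustration.
idx = pd.date_range("2022-01-01", periods=8, freq="D")
unit_rate = pd.Series([95, 96, 40, 97, 98, 96, 30, 95], index=idx, dtype=float)
flare_flow = pd.Series([1.2, 1.1, 3.0, 1.0, 0.9, 1.3, 4.0, 1.1], index=idx)

# "Value Search" equivalent: keep only high-rate operating periods.
high_rate = unit_rate > 90

# "Signal from Condition" equivalent: statistics over those periods give
# the average baseline flow and the best (minimal) observed flaring rate.
baseline_avg = flare_flow[high_rate].mean()
baseline_min = flare_flow[high_rate].min()
print(baseline_avg, baseline_min)
```

Comparing actual flaring against such a baseline over ten years of data is what let the group put a dollar figure on the excess.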
“We quickly analyzed more than 10 years’ worth of data, and identified and mitigated $600,000 of excess flaring in 2022,” concludes Mahmoud.