In 2019, Chevron’s control systems team began a complex, multi-million-dollar migration from legacy systems at its Southern California refinery. While the pandemic delayed initial plans, as it did across much of the world, it also provided some opportunities for project additions. Rickie Ohiri, control systems team lead at Chevron’s El Segundo refinery and team lead for the migration project, presented an overview of the migration plans, schedule, challenges and lessons learned this week at Honeywell Users Group (HUG) Americas in Orlando, Fla.
The migration project was funded to upgrade the blender system and the Experion servers at Chevron’s El Segundo refinery. “The initial scope of the project primarily was to migrate the Experion system from R431 to R511,” Ohiri said. The project was driven by the need to get off legacy operating systems running Windows 7 and Windows 2000 and onto the IT-compliant Windows 10 version. The project also planned to upgrade the blender software from PBM (Profit Blending and Movement) version R430 to the latest PBM R510. Finally, the project would update the graphics. The refinery uses graphics integration from Lin & Associates with the ALTIUS package and wanted to upgrade from 5.2 to 5.6.
“Some of our stretch goals, as part of the upgrade to the Experion system and servers, were to virtualize as part of our lifecycle efforts,” Ohiri said. The virtualization for this project included the tank farm network and the local control network (LCN) associated with the tank farm. “We also wanted to consider doing more of a three-cluster setup to support the entire refinery,” Ohiri said. While the project was triggered by IT-compliance software issues, the team also wanted to fit in as much as it could because the upgrade would require significant downtime.
“Lifecycle virtualization helps to improve deployment. It helps you to become fault tolerant, and it also helps to reduce the costs of having the equipment in hardware,” Ohiri said. The upgrade would, however, impact production with a 72-hour window of downtime. The specific time for the three-day window would ultimately be dictated by the supply chain team based on market demand. In the end, the dates floated not because of the market, but because of the pandemic.
Project schedule: pandemic demands
Other plans would change too as the pandemic progressed. Originally, the team planned to do factory acceptance testing (FAT) in Toronto, including on-site commissioning and hands-on training with the operators to get them acclimated to the new system.
The project held its kickoff meeting in January 2020 with Honeywell on-site to conduct preparation for the blender migration. By February, the team had confirmed the Experion release and prepared dates for taking images and database backups. “So we were getting there. As you can see with those first two steps alone, we’re pretty much ready for commissioning,” Ohiri said. The team was pressing the supply chain team to approve the downtime window when, on March 19, California issued its first statewide stay-at-home order, and the project was put on hold.
By May, the project team was getting pressure from leadership, Ohiri said. “Market demand was completely curtailed, and a lot of projects were being asked to slow down and work to postpone in order to reduce expenses at the refinery,” he added. “However, we got support from our local leadership to continue.”
Operations at the refinery had been restricted to only essential personnel, and the project team needed to find a way to continue within the new pandemic restrictions. Ultimately, because other projects were delayed, additional resources were freed up, and that is what allowed the team to add the refinery virtualization infrastructure to the scope as well. “This added a lot of complexity to the project and a lot of pressure for us to continue and then also execute,” Ohiri said.
Also in May, the team had to rethink the FAT because of restrictions on flights and travel to Toronto. By August, the team had received all the servers on-site. They were loaded, wired and tested for quality, then driven about 10 miles to a Honeywell facility in Torrance, Calif., for the FAT. The team practiced social distancing and wore gloves and masks to keep everyone safe and allow the test to be executed locally.
In September, the team conducted the integrated FAT, testing not only the blender software, but also the Experion system on the virtual machines and the graphics. After successful testing, the project got the go-ahead for commissioning in December. “I can tell you, we worked long hours,” Ohiri said. What were typically 10- to 12-hour days became 18-hour days, as the team ran into a few challenges along the way. Even with those challenges, the team was able to go live within the 72-hour window. “It was a huge win for the team,” Ohiri said.
Challenges and lessons learned
“One of the first challenges was we had some domain issues,” Ohiri said. During installation, PBM disabled W3SVC, the World Wide Web Publishing Service that Internet Information Services (IIS) relies on. “We had to have a domain exception to overcome this challenge,” he added. Another big challenge involved local group policy. “This was something that PBM requires, and what we found was that, because of our standard group policy, we had blend services and Experion services that just stopped running,” Ohiri said. “So one question that we continue to ask is, how do we get to the point where Honeywell makes visible which default group policies are recommended?”
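For teams chasing similar group-policy surprises, a simple pre-commissioning check can confirm that the Windows services a PBM or Experion node depends on are still running after policy is applied. The sketch below is illustrative only; apart from W3SVC, the service names are placeholders and would need to match the site's actual installation.

```python
# Hypothetical post-install health check: flag required Windows services that
# are not running after group policy is applied. Service names other than
# W3SVC are placeholders for site-specific blend/Experion services.
import psutil  # pip install psutil; win_service_get() is Windows-only

REQUIRED_SERVICES = [
    "W3SVC",                # World Wide Web Publishing Service (IIS)
    "ExampleBlendService",  # placeholder: site-specific PBM/Experion services
]

def stopped_services(names):
    """Return (name, status) pairs for required services that are not running."""
    problems = []
    for name in names:
        try:
            status = psutil.win_service_get(name).status()
        except psutil.NoSuchProcess:
            status = "not installed"
        if status != "running":
            problems.append((name, status))
    return problems

if __name__ == "__main__":
    for name, status in stopped_services(REQUIRED_SERVICES):
        print(f"WARNING: {name} is {status} -- review group-policy or domain exceptions")
```

Running a check like this on a development node first would catch a policy-disabled service before it surfaces during commissioning.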
The team also had a challenge after commissioning Matrikon OPC for PLC-to-SCADA communications. “We would lose view and control of those SCADA points on the graphics,” Ohiri said. They discovered that the initial license was temporary, valid for a 30-day period. “We didn’t have the license activated, so that was a quick fix, but it did cause some pains for operations,” Ohiri said.
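A lightweight watchdog that periodically reads a few SCADA points and alerts on bad quality would have surfaced the expired license before operators lost view. The sketch below uses the open-source OpenOPC package purely as an illustration; the server name and tags are assumptions, not the site's configuration.

```python
# Illustrative OPC DA watchdog: poll a few points and alert when read quality
# goes bad (as it would when an OPC server's temporary license expires).
# Server and tag names are placeholders, not the refinery's actual setup.
import time
import OpenOPC  # pip install OpenOPC-Python3x (Windows, OPC DA over DCOM)

WATCH_TAGS = ["TANKFARM.PLC1.LEVEL", "TANKFARM.PLC1.TEMP"]  # placeholders

def watch(server_name="Matrikon.OPC.Server", interval_s=60):
    opc = OpenOPC.client()
    opc.connect(server_name)
    try:
        while True:
            for tag in WATCH_TAGS:
                value, quality, timestamp = opc.read(tag)
                if quality != "Good":
                    print(f"ALERT: {tag} quality={quality} at {timestamp}; "
                          "check OPC server health and license status")
            time.sleep(interval_s)
    finally:
        opc.close()

if __name__ == "__main__":
    watch()
```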
After installing thin clients for the console virtual machines, “we had some blue screens of death,” Ohiri said. They found that the thin-client firmware hadn’t solidified after install, which took some troubleshooting to discover.
Some old legacy gauges for tank level and temperature relied on outdated hardware that wasn’t fully supported, so communication with those gauges would drop. The fix was installing Moxa serial-to-Modbus TCP/IP converters. “That made the now redundant system reliable,” Ohiri said.
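Once the gauges sit behind a serial-to-Modbus TCP/IP converter, they can be polled like any other Modbus TCP device. The sketch below, built on the pymodbus library, is a hypothetical illustration; the gateway address, unit ID, register addresses and scaling are placeholders, not the refinery's actual mapping.

```python
# Minimal sketch: poll a legacy tank gauge through a serial-to-Modbus TCP
# gateway (e.g., a Moxa MGate). All addresses below are illustrative only.
from pymodbus.client import ModbusTcpClient  # pip install pymodbus

GATEWAY_IP = "192.168.10.50"  # placeholder: converter's address on the tank farm network
UNIT_ID = 1                   # placeholder: serial gauge address behind the gateway
LEVEL_REGISTER = 0            # placeholder holding-register address
REGISTER_COUNT = 2            # assume level and temperature in consecutive registers

def read_gauge():
    client = ModbusTcpClient(GATEWAY_IP, port=502)
    if not client.connect():
        raise ConnectionError(f"Cannot reach gateway at {GATEWAY_IP}")
    try:
        rr = client.read_holding_registers(LEVEL_REGISTER, count=REGISTER_COUNT, slave=UNIT_ID)
        if rr.isError():
            raise IOError(f"Modbus read failed: {rr}")
        level_raw, temp_raw = rr.registers  # raw counts; engineering-unit scaling is site-specific
        return level_raw, temp_raw
    finally:
        client.close()

if __name__ == "__main__":
    level, temp = read_gauge()
    print(f"tank level (raw): {level}, temperature (raw): {temp}")
```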
“One of our biggest lessons learned is the importance of being able to build these virtual machines ahead of time,” Ohiri said. “That would have helped because then we could have tested and played around with it earlier.”
Another key lesson learned was to use Linux OS universal thin clients, as opposed to Windows-based ones, because they are easier to configure and maintain. And finally, be sure to keep the development system updated for pre-commissioning tests. “The dev system will allow us to test these things ahead of time and make commissioning go smoother,” Ohiri said.
Among the project successes for the team was that all 15 team members involved delivered the project within the 72-hour window without any health or safety incidents. “No one contracted the coronavirus, so that was huge,” Ohiri said. “We were really concerned. It was a highly sensitive time, so kudos to the team for that.”
The editors of Control, Control Design and Smart Industry are reporting live from 2022 Honeywell Users Group in Orlando, Florida, to bring you the latest news and insights from the event. When the event comes to a close, the best, most important coverage will be compiled into a report by the editors.