
Safety Lifecycle Prevents Systemic Failures

Oct. 9, 2014
Safety Instrumented Systems: Why Do They Fail?
About the Author: Jim Montague
Jim Montague is the Executive Editor at Control, Control Design and Industrial Networking magazines. Jim has spent the last 13 years as an editor and brings a wealth of automation and controls knowledge to the position. For the past eight years, Jim worked at Reed Business Information as News Editor for Control Engineering magazine. Jim has a BA in English from Carleton College in Northfield, Minnesota, and lives in Skokie, Illinois.


Having a safety instrumented system (SIS) doesn't make a process control application safe. Adopting it intelligently and managing it vigilantly does.

"All systems fail at some point in time," said Rahul Bhojani, technical authority for downstream at BP. "SISs can have random or systematic failures. Random failures are usually the result of degradation mechanisms in the hardware, such as corrosion or thermal shock. Systematic failures are due to human error during the lifecycle of the SIS or process, so they can occur during any phase of that lifecycle."

Bhojani and Len Laskowski, principal technical consultant at Emerson Process Management's Midwest Engineering Center, presented "Safety Instrumented Systems: Why Do They Fail?" at the Emerson Global Users Exchange this week in Orlando, Florida.

"All systems fail at some point in time." BP's Rahul Bhojani, together with co-presenter Len Laskowski of Emerson, discussed a range of "gotchas" that can torpedo the best-laid SIS plans.

"The good news is that failures can be learned from and help produce process safety standards, such as OSHA PSM 1910.119, as well as SIS standards, such as ISA 84/IEC 61511, that have evolved over time," explained Bhojani. "Some of these standards have requirements, while others have recommended good practices. Either way, it's important that applicable requirements are understood and followed."

"Details are important in managing safety systems," Bhojani continued. "You have to get a lot right in safety instrumented functions (SIFs) to get them to perform properly." So how can you spot the issues? Bhojani advises taking several essential steps:

  • Conduct a thorough hazard and operability (HAZOP) study;
  • Verify the layers of protection analysis (LOPA);
  • Have a complete safety requirements specification (SRS);
  • Install new, functioning hardware;
  • Install new, tested software;
  • Conduct regular proof tests;
  • Train world-class operators; and
  • Use engineered trip setpoints or process delay times.

Getting all of these right, he added, would save everyone a lot of headaches in the future.
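The LOPA verification step above amounts to simple arithmetic: multiply the initiating-event frequency by the probability of failure on demand (PFD) of each independent protection layer, compare the result to the tolerable frequency, and any remaining gap is the risk reduction the SIF must deliver. A minimal sketch, with all frequencies and PFDs invented for illustration rather than taken from the presentation:

```python
# Illustrative LOPA gap check -- every number here is hypothetical.

def lopa_gap(initiating_freq, ipl_pfds, tolerable_freq):
    """Return (mitigated_freq, required_sif_pfd).

    initiating_freq: initiating-event frequency, events/year
    ipl_pfds:        PFD of each protection layer (credit only truly
                     independent layers)
    tolerable_freq:  tolerable frequency for the consequence, events/year
    required_sif_pfd is None when existing layers already meet the target.
    """
    mitigated = initiating_freq
    for pfd in ipl_pfds:
        mitigated *= pfd  # each credited IPL multiplies the frequency down
    if mitigated <= tolerable_freq:
        return mitigated, None
    return mitigated, tolerable_freq / mitigated

# Example: 0.1/yr initiating event, a BPCS alarm (PFD 0.1) and a relief
# valve (PFD 0.01) credited, and a 1e-5/yr target for the consequence.
mitigated, required_pfd = lopa_gap(0.1, [0.1, 0.01], 1e-5)
# The SIF must then close the remaining gap with a PFD of about 0.1,
# which under IEC 61511 falls in the SIL 1 range.
```

The same function makes Bhojani's point about validating the LOPA: if one of the credited layers turns out not to be independent, its PFD drops out of the list and the required SIF performance can jump by an order of magnitude.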

"However, you have to be careful here as well, because you can negate an SIF by selecting the wrong trip setpoint," said Laskowski, who further recommended adopting a three-part safety lifecycle approach:

  • Analysis: performing a process hazard and risk analysis, allocating safety functions to protection layers and drafting the SIS safety requirements specification;
  • Realization: designing and engineering the SIS; building, integrating and factory acceptance-testing it; and installing, commissioning and validating it; and
  • Operation: operating and maintaining the SIS, modifying it as needed and decommissioning it at the end of its lifecycle.

"Unfortunately, safety lifecycles can fail when all initiating causes aren't identified, such as when all fuel sources and thermal oxidizers aren't identified," added Laskowski. "Likewise, during overfills, all inlet lines need to be identified as closing on high level, not just the big lines. Also, loss of utilities like power, steam, cooling water and instrument air can lead to initiation and needs to be identified. Finally, other consequences may have been under- or overestimated."

To seek a stable safety lifecycle, Laskowski also suggested implementing an "interaction matrix," which lists all raw materials, end products and other materials and equipment in a process application on an X-Y axis, and then cross-references their potential interactions with each other. "If two of these materials come in contact they could decompose, polymerize or become flammable," said Laskowski. "After one big explosion, the affected R&D department said it hadn't reported that the two materials involved could possibly explode because they were never supposed to be heated. In fact, they were cooled in this process. However, during start-up or shutdown, they did become heated, and that caused an accident."
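The interaction matrix Laskowski describes is easy to mechanize: treat each material pairing as an unordered pair and look up its known hazard, making sure every pairing gets reviewed, including ones "never supposed" to occur. A minimal sketch, with the materials and hazards invented for illustration:

```python
# Sketch of an interaction matrix -- materials and hazards are made up
# for illustration, not taken from the talk.

MATERIALS = ["monomer A", "peroxide initiator", "caustic wash", "steam condensate"]

# Unordered material pairs mapped to the hazard if the two come in contact.
INTERACTIONS = {
    frozenset({"monomer A", "peroxide initiator"}): "runaway polymerization when heated",
    frozenset({"peroxide initiator", "caustic wash"}): "decomposition, gas evolution",
}

def check_contact(m1, m2):
    """Return the recorded hazard for a material pair, or None."""
    return INTERACTIONS.get(frozenset({m1, m2}))

# Walk every pairing -- including contacts that only happen during
# start-up or shutdown upsets, as in Laskowski's explosion example.
for i, a in enumerate(MATERIALS):
    for b in MATERIALS[i + 1:]:
        hazard = check_contact(a, b)
        if hazard:
            print(f"{a} + {b}: {hazard}")
```

Using an unordered pair as the key ensures the matrix is symmetric by construction: checking A against B and B against A can never give different answers.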

"Many independent protection layers (IPLs) aren't as independent as they're described," Laskowski continued. "One research study reported that 44% of failures are engineered into their application's specifications; this is why it's important to validate your LOPA early. Further up in the process stream, the LOPA may not be as stringent, the IPLs aren't as valid as they should be, and this little bit of wiggle room can cause some real problems. So users need to look at all possible modes of failure, and also do complete testing."
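The cost of that lost independence can be quantified. If two credited layers share a component, say the same level transmitter feeding both the alarm and the trip, their PFDs cannot simply be multiplied, because the shared part fails both layers at once. A rough back-of-the-envelope sketch, with all numbers hypothetical:

```python
# Why non-independent IPLs overstate risk reduction -- numbers invented.
pfd_alarm = 0.1           # operator response to a high-level alarm
pfd_trip = 0.01           # automated high-level trip
pfd_shared_sensor = 0.05  # level transmitter feeding BOTH layers

# Credit a naive LOPA would take, treating the layers as independent:
naive_combined = pfd_alarm * pfd_trip  # 1e-3

# With the common-cause sensor: both layers fail whenever the sensor
# does, so its PFD puts a floor under the combined figure.
actual_combined = pfd_shared_sensor + (1 - pfd_shared_sensor) * pfd_alarm * pfd_trip
# actual_combined is roughly 0.051 -- about 50x worse than the
# independence assumption suggested.
```

With these example figures, the shared sensor alone dominates: the two "layers" together are only about as good as one SIL 1 layer, which is exactly the kind of wiggle room Laskowski warns about.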

Bhojani added, "It's difficult to quantify direct project savings, but from a moral perspective, providing employees a safe workplace is the right thing to do, and it's also a legal requirement. Properly designed and operating SISs and other IPLs are fundamental to maintaining a license to operate a facility. This is why proper SIS lifecycle management is required: SISs must be designed, operated and maintained correctly. This is best addressed by auditing projects and facilities, which will also reduce the user's total cost of ownership. It's better to have fewer well-managed IPLs than numerous unmanaged ones."
