By Angela Summers, SIS-TECH
Following a careful review of a wide variety of product safety manuals, it appears that many field devices are claiming higher safety integrity levels (SIL) than process industry data can support. Appendix F.1.3 of the CCPS Guidelines for Safe and Reliable Instrumented Protective Systems (2) states that “a sampling of data for pressure transmitters from various manufacturers report theoretical mean time to failure dangerous (MTTFD) values that are three to ten times better than owner/operator prior use data.” Some manufacturers (3) have openly validated this claim.
Unfortunately, changes now being considered by the IEC 61508 committee are unlikely to improve the situation. The committee appears intent on piling on additional requirements instead of addressing serious structural weaknesses. The only sensible option is for users to take control of the situation by refusing to install any field device in a safety application that has not demonstrated its required integrity and reliability in a similar non-safety operating environment. Users should demand that manufacturers stop making exaggerated performance claims, manipulating the safe failure fraction (SFF), and shifting responsibility for safe operation to the production operator when products behave unreliably. Users must also demand that safety manuals provide complete proof test procedures that comply with IEC 61511 (5) and OSHA process safety management (PSM, 6) requirements.
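To see why the SFF is so open to manipulation, consider a minimal sketch (Python; the failure rates and the re-labelling scenario are illustrative assumptions, not vendor data). Because safe and dangerous-detected failures both sit in the numerator, declaring more benign conditions to be “safe failures” raises the fraction without reducing the dangerous undetected rate at all:

```python
def sff(lam_safe, lam_dd, lam_du):
    """Safe failure fraction: the share of all failures that are either
    safe or dangerous-but-detected (per the IEC 61508 definition)."""
    return (lam_safe + lam_dd) / (lam_safe + lam_dd + lam_du)

# Illustrative failure rates in failures per hour (not vendor data).
base = sff(lam_safe=1.0e-6, lam_dd=0.0, lam_du=1.0e-6)

# Re-labelling benign annunciation conditions as "safe failures" pads
# both numerator and denominator; lam_du is untouched.
padded = sff(lam_safe=9.0e-6, lam_dd=0.0, lam_du=1.0e-6)

print(f"SFF base = {base:.2f}, padded = {padded:.2f}")
```

With these assumed numbers the SFF climbs from 0.50 to 0.90 while the dangerous undetected rate is identical in both cases, so the padded device crosses the 90% threshold in the architectural constraint tables without being one bit safer. This is exactly the kind of claim users should question.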
The following sections highlight a few of the issues associated with safety manuals and their performance claims.
1. Exaggerated performance claims
Prior to the release of IEC 61508, many manufacturers provided in-service and accelerated-test failure data. Following its approval, manufacturers increasingly began claiming compliance based on a shelf-state analysis that assumes a nearly ideal operating environment. IEC 61508 allows manufacturers to make SIL claims based on predictive analysis without any burden of later substantiating the claims with actual field data, so technically manufacturers are not doing anything wrong. However, the theoretical dangerous failure rate, safe failure rate, and probability of failure on demand (PFD) values declared in analysis reports are far better than can be achieved in actual field applications. The gap between the theoretical analysis and real-world performance is egregious and pervasive.
With rare exception, these analysis reports do not provide enough information to fully illuminate the disparity between manufacturers’ claims and user experience, exactly the point made by Thomas et al. (4) in stating that “quality and consistency in safety manuals is lacking.” The analysis reports do not provide a boundary description, installation and configuration assumptions, or a listing of failure modes and their distribution. Instead, the reports provide a summary table of the failure class distribution. The problem is that while failure modes and effects are product-related and can be independently evaluated by the manufacturer, the failure classification is application-dependent.
There are many ways that a field device can be installed and configured, making the failure classification difficult for manufacturers, especially for commodity products. A manufacturer cannot properly assess whether a failure should be classified as safe or dangerous without first acquiring knowledge of the intended application.
For example, in a typical demand-mode operation where a solenoid-operated valve controls the pneumatic supply to a valve actuator, solenoid coil burn-out is safe in a de-energize-to-trip application and dangerous in an energize-to-trip application. All failures of the solenoid-operated valve are likely dangerous in a continuous-mode application.
Users should be provided with the failure modes and effects results, not just a failure classification summary. Armed with this information, the user can then classify the failures according to their intended application and calculate an application-specific PFD and spurious trip rate.
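As a rough sketch of that workflow (Python; the failure modes, their rates, and the simplified single-channel approximation PFDavg = lam_DU x TI / 2 are illustrative assumptions, not vendor data), the user-side classification and calculation might look like:

```python
# Hypothetical failure-mode listing for a solenoid-operated valve.
# Rates are illustrative placeholders, in failures per hour.
FAILURE_MODES = [
    ("coil burn-out",     2.0e-6, "vents"),  # valve de-energizes and vents
    ("vent port plugged", 5.0e-7, "holds"),  # valve fails to vent
    ("spool stuck",       8.0e-7, "holds"),
]

def classify(effect, trip_action):
    """A failure is safe when its effect matches the trip action."""
    return "safe" if effect == trip_action else "dangerous"

def rates(trip_action):
    """Sum the safe and dangerous rates for the intended application."""
    lam_s = sum(r for _, r, e in FAILURE_MODES
                if classify(e, trip_action) == "safe")
    lam_du = sum(r for _, r, e in FAILURE_MODES
                 if classify(e, trip_action) == "dangerous")
    return lam_s, lam_du

def pfd_avg(lam_du, test_interval_h):
    """Simplified single-channel approximation: PFDavg = lam_du * TI / 2."""
    return lam_du * test_interval_h / 2

# De-energize-to-trip service: venting is the safe direction, so coil
# burn-out is safe; in energize-to-trip service the same failure flips
# into the dangerous column.
lam_s, lam_du = rates("vents")
print(f"lam_S = {lam_s:.1e}/h (drives spurious trip rate), "
      f"PFDavg = {pfd_avg(lam_du, 8760):.1e} at an annual proof test")
```

Re-running `rates("holds")` for an energize-to-trip service moves coil burn-out into the dangerous total, which is exactly why a summary failure-class table from the manufacturer cannot substitute for the underlying failure modes and effects data.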
Most reports do not clearly define the analysis boundary or describe what is included or excluded from the analysis. For a variety of reasons, many in-service failures are excluded from the product analysis reports. Some failures are deemed to occur due to product “wear out” and excluded from the useful life analysis. Operating environment impacts, such as plugging, corrosion and electrical interference, are considered application issues that are the user’s responsibility to analyze and estimate. The restricted view of the product and its environment is a significant source of disparity between the theoretical analysis and real-world performance, but it is not the only problem.
Excessive diagnostic coverage claims are routinely made for programmable electronic field devices. Claims in excess of 90% are very common, even with the restricted boundary and operating environment assumptions. High diagnostic coverage translates directly into a high SIL Claim Limit and a low reported PFD. That makes sense when the credited diagnostic actually yields safer operation and is periodically proven to work, the same rule applied to any safety device. Diagnostics must be verifiable and auditable.
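A back-of-the-envelope sketch shows why the diagnostic coverage (DC) number matters so much (Python; the assumed dangerous failure rate and the simplified single-channel approximation PFDavg = lam_DU x TI / 2 are illustrative, not vendor data). Only undetected dangerous failures accumulate between proof tests, so a 90% coverage claim cuts the reported PFD by an order of magnitude, whether or not the diagnostic can ever be verified in the field:

```python
def undetected_dangerous_rate(lam_d, dc):
    """lam_DU = (1 - DC) * lam_D: the credited diagnostic removes the
    detected share of dangerous failures from the PFD calculation."""
    return (1.0 - dc) * lam_d

def pfd_avg(lam_du, test_interval_h):
    # Simplified single-channel approximation.
    return lam_du * test_interval_h / 2

lam_d = 1.0e-6  # assumed total dangerous failure rate, per hour
ti = 8760       # annual proof test interval, in hours

for dc in (0.0, 0.90, 0.99):
    print(f"DC = {dc:.0%}: PFDavg = "
          f"{pfd_avg(undetected_dangerous_rate(lam_d, dc), ti):.1e}")
```

With these assumed numbers, the claim moves from roughly 4.4e-3 at no coverage to 4.4e-4 at 90% coverage, a full SIL band; every point of coverage that cannot be verified in service is integrity claimed on paper only.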
Unfortunately, many manufacturer-supplied diagnostics cannot be tested in compliance with IEC 61511 Clause 11.3 (requirements for system behavior on detection of a fault) and Clause 16.3.1.1 (periodic proof tests shall be conducted using a written procedure to reveal undetected faults that prevent the SIS from operating in accordance with the safety requirements specification). Additionally, analysis reports do not state what integrity the product retains if the diagnostics are not configured per the safety manual or fail during operation.