When my son stopped over to help us with some chores around the house, he popped open the hood of his 2001 Toyota Corolla. I asked about his engine oil, and he informed me that it wasn't due for an oil change, and his low oil pressure indicator light wasn't coming on. Ah, the lowly oil pressure switch, which I was taught to interpret as "pull over now because your engine is about to self-destruct," is a lot to trust when you're on a limited income like my college-student son. When I pulled the Toyota's dipstick, it came out completely clean. The engine was more than two quarts low, and my son needed a reminder about using other "measurements and diagnostics."
Back in the plant, when we evaluate an instrumented system, i.e. an interlock serving as a vital layer of protection, there is more than one path to achieve the required safety integrity level (SIL). When we chose to employ two automated valves in series as the final control elements, our layer of protection analysis (LOPA) and SIL consultant recommended an improved, later-revision transmitter to detect the condition of concern, because the newer transmitter had better diagnostic coverage.
Safety instrumented function (SIF) analyses educate us about the veracity—the truthfulness and trustworthiness—of the measurements used in automation. A simple limit or pressure switch offers us little in the way of data validation; a loose or broken wire looks the same as a vote to trip. If there's a short circuit or a defective switch, the hazardous condition will never be revealed. If the red light doesn't come on, the engine is transformed into a smoking hunk of scrap metal. When the consequences, and hence the budget, allow, we much prefer a continuous (analog) indication to a simple on-off switch. We can see a gradual decrease in pressure, and we can see evidence that the measurement is flat-lining or faulty—by nature, its failure modes are more self-revealing.
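The contrast can be sketched in a few lines. This is an illustrative example, not anything from a particular transmitter's firmware: it assumes a standard 4-20 mA loop and the NAMUR NE 43 convention that signals below about 3.6 mA or above about 21 mA indicate a fault. The function names and thresholds are mine.

```python
def check_analog_signal(ma: float) -> str:
    """Classify a 4-20 mA transmitter signal.

    A healthy loop lives between 4 and 20 mA. An open circuit, dead
    transmitter, or short drives the signal outside that band, so the
    fault reveals itself -- unlike a switch contact.
    """
    if ma < 3.6:
        return "fault: open circuit or failed transmitter"  # NE 43 low
    if ma > 21.0:
        return "fault: short circuit or saturated sensor"   # NE 43 high
    return "in range"


def check_switch(contact_closed: bool) -> str:
    """A switch yields one bit: a broken wire is indistinguishable
    from a genuine vote to trip, and a welded contact hides the
    hazard entirely."""
    return "trip" if not contact_closed else "normal"
```

The point of the sketch: the analog signal carries enough information to separate "process is fine" from "measurement is broken," while the switch folds both into the same two states.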
The degree to which a fault is self-revealing reduces the probability of failure on demand (PFD) of the measurement. When we duplicate or triplicate transmitters and make them vote for a trip, we can configure another simple diagnostic—a deviation alarm. In a two-out-of-three voting architecture, a measurement drifting away from the other two by more than a certain tolerance can alert us to an impending failure and an otherwise-unseen degradation of the protection system. But if the alert or alarm goes off and it's simply silenced—no one takes action, no work order is written—we're only as good as the "degraded" system's integrity level.
So, when our plant upgraded three transmitters because the increased diagnostic coverage was needed to achieve the required SIL, did we bother to see what those diagnostics were? What if we just assumed they were SIL-capable out of the box? As Herman Storey, chairman of the ISA108 committee for intelligent device management, pointed out in his paper presentation at Emerson Global User Exchange, if you take credit for diagnostic coverage but have no programs or procedures in place to monitor the diagnostics, report on them, and take corrective action, then your installed, achieved SIL is probably lower than what your calculations claim.
Most of our plant's SIL calculations assume a 72-hour mean time to repair (MTTR), an allowance that we probably exceed if we never look at any diagnostics. An alert can come and go, and we'd miss it without any monitoring system or discipline. If the asset management system (AMS) gathers dust for weeks, the transmitter might as well be a pressure switch.
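A back-of-the-envelope calculation shows what ignoring the diagnostics costs. This sketch uses the common simplified approximation PFDavg ≈ λDU·TI/2 + λDD·MTTR, where detected dangerous failures contribute only for the repair time, and undetected ones persist for half the proof-test interval on average. All the rates and intervals below are made-up illustrative numbers, not our plant's data.

```python
HOURS_PER_YEAR = 8760.0


def pfd_avg(lam_d: float, dc: float, ti_h: float, mttr_h: float) -> float:
    """Simplified average PFD for a single transmitter.

    lam_d:  dangerous failure rate, failures per hour
    dc:     diagnostic coverage, 0 to 1
    ti_h:   proof-test interval, hours
    mttr_h: mean time to repair, hours
    """
    lam_du = lam_d * (1.0 - dc)  # dangerous undetected
    lam_dd = lam_d * dc          # dangerous detected (by diagnostics)
    return lam_du * ti_h / 2.0 + lam_dd * mttr_h


lam_d = 1.0e-6              # illustrative: 1e-6 dangerous failures/hour
ti = HOURS_PER_YEAR         # annual proof test

watched = pfd_avg(lam_d, dc=0.9, ti_h=ti, mttr_h=72.0)      # alerts acted on
ignored = pfd_avg(lam_d, dc=0.9, ti_h=ti, mttr_h=ti / 2.0)  # alerts never seen
```

With these numbers, honoring the 72-hour MTTR keeps the PFD in the 1e-4 band, while letting detected faults sit until the next proof test pushes it an order of magnitude worse—exactly the "transmitter might as well be a pressure switch" effect, expressed in arithmetic.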
My son needed a new procedure for checking the oil in his old Toyota to avert a seized engine. Likewise, safeguards that rely on high diagnostic coverage need systems and procedures in place to act on diagnostics in an intelligent fashion. If your interlocks' required SIL hinges on high device diagnostic coverage, put the procedures in place to monitor those diagnostics and act on them.