Voices from Safecomp 2009

Sept. 16, 2009

[Image: Safecomp 2009 Programme]

Here are some notes from several of the talks at Safecomp:

In one talk, data was presented that indicates that decision support systems (whoops! all the rage among automation vendors at the moment) can seriously degrade the performance of operators:

Why are people's decisions sometimes worse with computer support?

good computer support does not always improve human decisions
help may even systematically degrade human performance
more often, it may degrade decisions in some categories of situations and/or for some operators

automation surprises have been known for a long time, and yet new systems and occurrences may still hold surprises

automation bias research-- perturbation of reasoning due to automation (decision support)

operator may be dependent on automation and vulnerable to its errors: takes unnecessary action because "the computer raised an alarm" or fails to take action because "the computer says everything is okay."

users grow lazy
novices overestimate the automated tool
computer support comes with greater workload

there are more possibilities...including abdicating responsibility to the tool.

perhaps the user's "self tuning" (for false alarm rate) is responsible and it probably cannot be "turned off."

In another talk, the speaker pointed out that the result of a successful violation of safety rules is often the belief that it is okay to keep violating that rule, and such violations then get embedded in the ad-hoc expert system that operators build in their heads. He pointed out that this very issue was the root cause of the Chernobyl nuclear accident. In fact, it was the root cause of the BP Texas City accident too, since the Baker Report noted that the exact same procedure had been performed at least 16 times before.

A discussion of PFD and some other ways to calculate SIL:

PFD-- probability of failure to perform its design function on demand (low demand mode)

PFH--probability of a dangerous failure per hour (high demand mode)

Both quantities have their use and both are used to define a safety integrity level (SIL)
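
To make the two measures concrete, here is a minimal Python sketch (my own illustration, not from the talk; the function names are invented) that maps a PFD or PFH value onto the order-of-magnitude SIL bands given in the IEC 61508 target-failure-measure tables:

```python
def sil_from_pfd(pfd):
    """SIL band for a low-demand-mode average PFD, or None if outside SIL 1-4."""
    bands = {4: (1e-5, 1e-4), 3: (1e-4, 1e-3), 2: (1e-3, 1e-2), 1: (1e-2, 1e-1)}
    for sil, (lo, hi) in bands.items():
        if lo <= pfd < hi:
            return sil
    return None

def sil_from_pfh(pfh):
    """SIL band for a high-demand/continuous-mode PFH (dangerous failures per hour)."""
    bands = {4: (1e-9, 1e-8), 3: (1e-8, 1e-7), 2: (1e-7, 1e-6), 1: (1e-6, 1e-5)}
    for sil, (lo, hi) in bands.items():
        if lo <= pfh < hi:
            return sil
    return None

print(sil_from_pfd(5e-4))   # -> 3 (low demand mode)
print(sil_from_pfh(2e-8))   # -> 3 (high demand mode)
```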

The PES (programmable electronic system) is an additional safety device put on top of the system.

safety-related systems are necessary to reduce risk, which is given as damage (number of fatalities, injuries, material losses) per time unit. This risk must be compared with the PFD...

THR-- tolerable hazard rate

IR = THR * Pa * Pi, where

IR = individual risk
Pa = probability that the hazard leads to an accident (i.e., the accident is not prevented)
Pi = probability that the specific individual is killed in the accident

and the system is continuously in use...

now, according to the PFD philosophy,

IR = λ * PFD * Pa * Pi, where λ is the rate of demands on the safety function

In order to fulfil both requirements,

THR = λ * PFD
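
As a worked illustration (the numbers below are invented, not from the talk): given a demand rate λ on the safety function and a tolerable hazard rate THR, the required PFD follows from THR = λ * PFD, and the individual risk then follows from the formula above.

```python
# Illustrative numbers only (not from the talk).
lam = 1e-1      # demand rate on the safety function, demands per hour
thr = 1e-8      # tolerable hazard rate, hazards per hour

# THR = lambda * PFD  =>  required PFD of the protection system
pfd_required = thr / lam
print(pfd_required)          # 1e-07

# Individual risk: IR = lambda * PFD * Pa * Pi
p_accident = 0.1             # probability the hazard leads to an accident
p_individual = 0.01          # probability the specific individual is killed
ir = lam * pfd_required * p_accident * p_individual
print(ir)                    # 1e-11, i.e. the same as THR * Pa * Pi
```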

IEC 61508 considers only the PES approach.

When using the PFD, care is needed: the safety integrity level can change under different circumstances. In EN 50129 the PFD is not present, which makes the use of these standards simpler.