Having prepared the Instrument Engineers' Handbook for some 50 years, I have observed that automation can be both helpful and harmful, depending on how it is used. If ignorant programmers are allowed to prepare fancy software that operators do not understand, while the operators are told to trust "what is in the box," this excessive dependence on something that can be wrong in the first place creates a mess. On the other hand, if we understand the proper role of automation, it makes our industries better and safer! The key is to clearly understand what I call overrule safety control (OSC).
During the past few years, I studied seven major accidents and found that the main cause of one was bad design, another was operator inaction due to excessive dependence on automation, and the remaining five were caused by various degrees of manual operation of the process without OSC. One example of this "manual operation" occurred at Three Mile Island, when the operator sent water into the instrument air supply, and for hours nobody even realized what had happened. This culture, in an age of poor training and potential for terrorism, needs to change.
Understanding what OSC is, is critical! OSC is like a rail barrier: it does not prevent the driver from visiting his mother-in-law, but it does prevent him from causing an accident by driving too fast to get there to taste her excellent cooking. OSC is like automatically keeping the vehicle's doors closed while it is moving and preventing the driver (the operator) from "overruling" that safety automation. OSC is the "red line" that neither the manual operator nor the autopilot must be allowed to cross.
So why is OSC absolutely safe?
- Because it overrules not only the unsafe actions of the driver, but also those of the autopilot. In other words, OSC is totally independent of both, and it overrules all unsafe instructions, regardless of whether they come from the operator or from the computer.
- Can the OSC fail? Naturally it can, even if it has triple-redundant backups using the very best sensors.
- But, if the OSC becomes inoperative for any reason, both the operator and the autopilot continue functioning just as if it did not exist. It is like the safety locks on the car doors or the red light on the street corner. If it fails, you are simply back to normal control.
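The three properties above (it overrules unsafe commands from either source, it is independent of both, and if it fails the system simply reverts to normal control) can be sketched in a few lines. This is a hypothetical illustration only; the function name `osc_filter`, the safe envelope limits and the valve-opening example are my assumptions, not any actual flight or process control code.

```python
# Hypothetical sketch of an OSC layer sitting between any command source
# (operator or autopilot) and the actuator. All names and limits are
# illustrative assumptions, not real control-system code.

SAFE_MIN, SAFE_MAX = 0.0, 100.0   # assumed safe envelope (e.g., % valve opening)

def osc_filter(command: float, osc_healthy: bool) -> float:
    """Pass the command through unless it crosses the safety "red line".

    - Overrules unsafe commands regardless of their source.
    - If the OSC itself is inoperative, the command passes unchanged;
      the system is simply back to normal (unprotected) control.
    """
    if not osc_healthy:
        return command                                # OSC failed: as if it did not exist
    return min(max(command, SAFE_MIN), SAFE_MAX)      # clamp to the safe envelope

# The same filter applies no matter who issued the command:
manual_command = 130.0      # unsafe operator request
autopilot_command = -20.0   # unsafe automation request
print(osc_filter(manual_command, osc_healthy=True))     # 100.0 (overruled)
print(osc_filter(autopilot_command, osc_healthy=True))  # 0.0 (overruled)
print(osc_filter(manual_command, osc_healthy=False))    # 130.0 (back to normal control)
```

Note that the override logic never originates commands of its own; it only blocks unsafe ones, which is why its failure leaves both the operator and the autopilot functioning normally.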
So what does this mean for Air France 447? It means only two things:
- Bad sensors should not be used. Pitot tubes can freeze up, and static pressure altimeters can give false information when air density changes (cold fronts, etc.). So forget such ancient sensors, and use redundant radar with GPS backup.
- OSC must be on all the time, no matter whether the autopilot drops out, and no matter how ignorant or careless the pilot is or what he believes the autopilot is doing. OSC simply prevents both the pilot and the autopilot from attempting to land at unsafe speeds.
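The two prescriptions above, trusting only redundant sensor readings and keeping the OSC active regardless of who is flying, might be combined as follows. The median-voting scheme and the speed limit are illustrative assumptions of mine, not real avionics values.

```python
# Illustrative sketch only: the limit, the voting scheme and the function
# names are assumptions, not actual avionics logic or certified values.
import statistics

MIN_SAFE_LANDING_SPEED_KT = 140.0   # assumed stall-margin limit, for illustration

def voted_airspeed(readings: list[float]) -> float:
    """Median-vote redundant sensors (e.g., radar with GPS backup) so a
    single bad reading, such as a frozen pitot tube, cannot fool the OSC."""
    return statistics.median(readings)

def osc_landing_speed(requested_kt: float) -> float:
    """Always-on OSC: whether the request comes from the pilot or the
    autopilot, a landing-speed target below the safe minimum is overruled."""
    return max(requested_kt, MIN_SAFE_LANDING_SPEED_KT)

# One frozen sensor (reading 0.0) is outvoted by the two healthy ones:
print(voted_airspeed([0.0, 152.0, 151.0]))   # 151.0
print(osc_landing_speed(120.0))              # 140.0 (unsafe request overruled)
```

The point of the median vote is that the OSC's decision never rests on any single sensor, which addresses the first bullet, while the clamp itself runs unconditionally, which addresses the second.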
In the broader sense, our process control professionals must have a total understanding of the processes they control, must totally separate OSC from the regular operational controls, and during the design phase, they must control the software developers, not the other way around!
For examples of my proposed OSC designs, you can refer to my previous articles about eliminating the possibility of nuclear accidents by using automated underwater nuclear power plants (February 2014, bit.ly/1gLErMK), or you can read my article in the November 2013 issue (bit.ly/1r9s95F) about how OSC would have prevented the BP accident.