I enjoyed the "50 Years" article in the September issue of Control, and I agree with almost everything Mr. Lipták said. My concern is that the conclusions are being drawn by looking backward at events that have already occurred. It's easy to say what automation would have done had it been in place.
As I see it, the process (and automation) designer's challenge is to foresee abnormal situations that might occur in the future and make provisions for them. Granted, not every abnormal situation can be foreseen, so the question becomes: where do you draw the line? For those unforeseen abnormal situations on the other side of the line, there must be provisions for human (operator) intervention.
Now I'll move on to a slightly different topic, which is automated, driverless automobiles. Some manufacturers predict that they will be available by 2018. However, I predict that this will result in mayhem on the highways.
Stages of driver automation have been appearing for years, starting with elimination of the crank start, then automatic transmissions, cruise control, anti-lock brakes, etc. But none of these has removed the ultimate responsibility from the driver, who must pay attention in order to take over in the event of an abnormal situation. Complete driver automation, however, goes beyond this.
I was with a group of 20-somethings recently, and this subject came up. The consensus among them was that this couldn't come soon enough. Now they would be able to set their destination on a GPS, and the automobile would do the rest, leaving them free to do other things. They assume that if a ball suddenly rolls out into the street, the automated car will anticipate that a small child will likely follow and will take the proper action.
Balderdash! If they were in command of the vehicle, they would have only seconds, or fractions thereof, to make an intelligent decision as to what action to take.
I doubt that the National Highway Traffic Safety Administration has even begun to consider this problem, nor have any of our state regulatory agencies.
Should we, as a group of automation and safety experts, be expressing our opinion on this subject now? Or are we going to wait for the inevitable "arms race" among vehicle manufacturers to add more and more levels of automation, considering only the additional profits to be earned, in the absence of any intelligent (?) regulation?
I say, if you like Microsoft Word, which is always trying to guess what you want to do next and do it for you, and which almost invariably guesses wrong, then you will love automated driving.
[Béla Lipták responds.]
I completely agree that there should be no interference with the operator's ability to respond to unforeseen events, including those yet to come that will be caused by cyberattacks and other forms of terrorism. However, what about the situation when the operator is the terrorist, or is just stupid or asleep?
I said that safety automation is the airbag of industry. Airbags do not interfere with the operator's actions, but respond automatically to evolving disasters and can't be turned off by anybody. From the BP, Fukushima and other accidents, we learned what some of these essential airbags are for those processes, and we should use that knowledge to prevent anybody from repeating those accidents, and to prevent anybody from turning the airbags off.
As to self-driving cars, I see nothing wrong with adding more "airbags" to our vehicles. I see nothing wrong with preventing the driver from running red lights, exceeding safe speed limits or following the car ahead too closely. Automatic parking and the rest can come later.