Faulty sensor data, automation software cited for Boeing 737 Max crashes

While getting ready to go to the office the other day, I popped on The New York Times’ podcast “The Daily,” as I normally do. Admittedly, I had been pretty ignorant of the October Lion Air Flight 610 crash, but I was curious about the plane that crashed last week in Ethiopia. Lucky for me, The Daily covered the incident in an episode titled “Two Crashes, a Single Jet: The Story of Boeing’s 737 Max.”

As I listened to the in-depth discussion of background information and current news, it became more and more apparent that these two crashes are a prime example of why an operator needs a complete and ongoing understanding of the automation system they’re controlling, especially as new digital elements and software are added.

When it comes down to it, bad sensor data resulted in an automated correction that had disastrous results. According to a Washington Post article titled “Sensor cited as potential factor in Boeing crashes draws scrutiny,” the plane’s angle-of-attack sensors, which monitor the lift on the wings, were likely malfunctioning, at least in the Lion Air flight. The Washington Post reports that these sensors are now getting new scrutiny after the Ethiopia crash.

The Daily explained that when the faulty data was delivered to anti-stall software, the software detected the need to automatically adjust the nose of the plane. However, the pilots, or operators, weren’t aware of the anti-stall software and its corrective action. Thus, when the software analyzed the faulty data, it took over to adjust the angle of the plane’s nose, doing exactly what it was designed to do. It seems that each time the pilot tried to take corrective action as he had previously been trained to, the software doubled down on its correction.
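The dynamic described above can be sketched as a toy control loop. This is purely illustrative: every name, threshold, and step size below is a hypothetical stand-in, not Boeing’s actual MCAS logic. The point it shows is that when a sensor reading is stuck high, the automation re-applies its nose-down correction on every cycle, so the correction can accumulate faster than the pilot’s counter-input.

```python
import math

# Illustrative sketch only: a toy loop showing how a stuck-high
# angle-of-attack (AoA) reading drives repeated automated trim.
# All names and values are hypothetical, not real flight software.

STALL_THRESHOLD_DEG = 15.0  # hypothetical AoA limit that triggers correction
TRIM_STEP = 1.0             # hypothetical nose-down trim applied per cycle


def run_cycles(aoa_reading_deg, pilot_correction, cycles):
    """Return net nose-down trim after `cycles` control cycles.

    aoa_reading_deg: the (possibly faulty) sensor value, unchanged each cycle.
    pilot_correction: nose-up input the pilot applies each cycle.
    """
    trim = 0.0
    for _ in range(cycles):
        if aoa_reading_deg > STALL_THRESHOLD_DEG:
            trim += TRIM_STEP        # automation pushes the nose down again
        trim -= pilot_correction     # pilot pulls back, partially undoing it
    return trim


# A stuck sensor reading of 25 degrees keeps triggering the correction:
# even with the pilot countering, net nose-down trim keeps growing.
print(run_cycles(25.0, 0.4, 10))  # trim accumulates to about 6.0
print(run_cycles(10.0, 0.4, 10))  # healthy reading: automation stays idle
```

With a healthy sensor the automation never fires; with a faulty one, each pilot correction is answered by another automated step, which matches the “doubled down” behavior described in the episode.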

We have dedicated a lot of words to helping operators determine the appropriate corrective action for various processes. That’s because we know that continuing to expand your knowledge and understanding of the many ways a process can go wrong, especially as new elements come into play, is essential not only to the success of a company or the production of a product, but sometimes even to keeping people alive.

Could the sensors be at fault? They could be; I’m certainly not expert enough to make that determination. However, it’s clear that had these operators been aware of and trained on the background software, they might have been able to properly compensate for the faulty sensor data and override the software.

This is a grim lesson in why we must continuously train operators as new software, sensors, and components are added to any process.