
Transparency, supervision still needed for mission-critical AI

June 14, 2022

With artificial intelligence (AI), machine learning, digital twins and soft-robots inserting themselves into most of the systems we interact with today, it’s important to realize that, while logic-enabled, pattern-recognizing algorithms demonstrate some aspects of intelligence, they’re not yet a panacea for mission-critical decision-making.

According to Pete Diffley, long-time veteran of process manufacturing’s front lines and current global partnerships lead for VTScada by Trihedral, such algorithms are the right tools for many jobs, and can bring new, automated efficiency to formerly manual tasks. But in some situations, unforeseen biases can skew results in a number of unanticipated directions. Control sat down with Diffley to discuss some of the sources of these biases, and what steps can be taken to ensure that our mission-critical systems don’t lose their way.

Q: Are the terms AI and machine learning being overused?

Pete Diffley, Senior Manager, Global Partnerships, Trihedral Engineering

A: Especially in a broader, non-industrial context, if any tool or application includes logic in it these days, someone, somewhere is trying to say, “That’s AI or that’s machine learning,” and usually it isn’t either. Often, it’s a sensor and an output related by a simple algorithm. It’s automation. AI and machine learning are real phenomena, of course, and aspects of intelligence are creeping into all kinds of systems, but real intelligence is much bigger than an algorithm with a bit of logic attached to it. Algorithms that appear to “learn,” adjusting their response based on the data sets they’ve seen (their “experience” of the world), are, I think, one of the reasons the term AI is so overused. We identify learning with entities that have intelligence, be they people or guinea pigs.

Q: Bias is another very anthropomorphic term, and it affects not only people but also the AI or machine-learning algorithms they create. Can you talk a bit about the sources of bias that can assert themselves?

A: Bias can be introduced at the very start, when it comes to selecting the best algorithm for a particular application from the proliferating number of commercially available and open-source options. Familiarity with a particular algorithm, or its presence in one’s existing toolkit, may well crowd out other, more suitable alternatives. Algorithm training presents another opportunity to introduce bias. If the data sets presented to the algorithm don’t represent the full range of expected process conditions, the algorithm may find itself on uncertain ground. Imagine training an algorithm to recognize dogs, but only using Chihuahuas. Will it mistake the first Great Dane it sees for a horse?
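As a rough, hypothetical illustration of that training-gap problem (the temperature values and ranges below are invented, not from the interview), a simple coverage check might look like this:

```python
# Minimal sketch (hypothetical values): compare the range of conditions seen
# during training against the range expected in operation, and flag gaps.

# Process conditions observed in the training data (e.g., inlet temperature, degC)
training_temps = [55, 58, 60, 61, 63, 64, 65]

# Range the process is actually expected to cover in service
expected_min, expected_max = 40, 90

seen_min, seen_max = min(training_temps), max(training_temps)

gaps = []
if seen_min > expected_min:
    gaps.append(f"no training data below {seen_min} degC (expected down to {expected_min})")
if seen_max < expected_max:
    gaps.append(f"no training data above {seen_max} degC (expected up to {expected_max})")

if gaps:
    print("Training set does not cover the expected operating range:")
    for g in gaps:
        print(" -", g)
else:
    print("Training data spans the expected operating range.")
```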

AI or machine-learning systems are also subject to the same sort of selective blindness that affects humans at times. For example, if I present a person with a data set consisting of the numbers 2, 4, 6, 8 and 10, a logical person will likely jump to the conclusion that the data set is incrementing by two. But what if it’s simply a sample from the positive integers, a set that 5 and 7 also belong to? As humans, we tend to seek out patterns so we can process things more quickly. An algorithm, like a person, may erroneously identify a pattern that then becomes hard to “unsee” in the data.
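A small worked example makes that ambiguity concrete: both of the rules below explain the observed numbers equally well, and they only diverge on values the data never showed.

```python
# The observed sequence
observed = [2, 4, 6, 8, 10]

# Hypothesis A: the data increments by two
def fits_increment_by_two(xs):
    return all(b - a == 2 for a, b in zip(xs, xs[1:]))

# Hypothesis B: the data is simply drawn from the positive integers
def fits_positive_integers(xs):
    return all(isinstance(x, int) and x > 0 for x in xs)

print(fits_increment_by_two(observed))   # True
print(fits_positive_integers(observed))  # True -- both rules explain the data

# The hypotheses only diverge on values never seen in the data, such as 5 or 7
print(fits_increment_by_two([2, 4, 5]))   # False
print(fits_positive_integers([2, 4, 5]))  # True
```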

Q: What steps can be taken to minimize the possibility of AI or machine-learning systems acting in unpredictable ways?

A: A number of high-profile incidents have demonstrated just how important it is that the decision-making criteria of these systems be well-understood by their human operators. It’s obviously not a huge issue if Amazon gets its purchase recommendations wrong, but when we’re talking about mission-critical systems—say, a high-speed turbine or an offshore oil rig—the stakes are much higher.

The crashes that resulted from problems with the Boeing 737 MAX 8 flight control system, for example, were at least partly attributable to the pilots’ poor understanding of how the system worked. The pilots were confused and didn’t understand what steps the system was taking or why. The lesson to be learned from this is that you can’t operate mission-critical AI as a black box, at least not yet. Human operators need to have full transparency into the criteria the algorithm is using to make a particular decision or recommendation.

In another aircraft example with a happier ending, the triple-redundant pitot tubes used for airspeed measurement on an Airbus model had to be covered while the aircraft was on the ground to keep the local bee population from making themselves at home in them. Somehow, multiple work processes and inspections by numerous critical personnel failed to ensure that the caps were removed before take-off, confirming a human tendency toward selective blindness. As the plane accelerated down the runway with the caps still in place, its systems alerted the pilots that there was no airspeed measurement. Presuming a fault in the airspeed sensors, the plane’s systems guided the pilots to its GPS readings instead, and also used stresses on the wings to estimate the plane’s speed. It turns out the plane didn’t need airspeed indication from the pitot tubes; it had other redundant systems that could fill in the gaps. Fortunately, the pilots landed the plane safely, and the event was recorded as an incident rather than a tragedy.
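A minimal sketch of that kind of fallback logic, with entirely hypothetical function names, readings and thresholds, might look like:

```python
# Minimal sketch (hypothetical values and thresholds): prefer the primary sensor,
# but fall back to redundant estimates when its reading is implausible.

def estimate_airspeed(pitot_kts, gps_ground_speed_kts, wing_load_estimate_kts):
    """Return an airspeed estimate and a note on which source was used."""
    # A near-zero reading while other sources show significant speed suggests
    # the primary sensor is faulty (e.g., blocked), not that the aircraft is slow.
    if pitot_kts < 30 and gps_ground_speed_kts > 80:
        fallback = (gps_ground_speed_kts + wing_load_estimate_kts) / 2
        return fallback, "pitot implausible; using GPS and wing-load estimates"
    return pitot_kts, "using primary pitot measurement"

speed, source = estimate_airspeed(pitot_kts=2, gps_ground_speed_kts=150, wing_load_estimate_kts=145)
print(round(speed), "kts --", source)
```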

These two examples show that nothing is totally foolproof, and that even experienced personnel can become blind to things that should be obvious in certain situations. A friend of mine who has an amazing solution in the cybersecurity sector rightly reminded me that it’s all about framing when it comes to algorithms and AI: knowing when it’s appropriate to use them, the maturity level to which they’re adopted, and the type of algorithms being used. It’s also essential not to allow “incremental creep” to set in over time. Something can deviate minutely over days, weeks, months or even years, and wind up a long way from where it was intended to be. Having a frame of reference of where you are versus where you thought you should be is extremely important, especially with AI.
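As a hedged illustration of watching for incremental creep (the baseline, readings and limit below are invented), the key idea is to measure drift against a fixed frame of reference rather than only against yesterday’s value:

```python
# Minimal sketch (made-up numbers): keep a fixed frame of reference and check
# accumulated deviation against it, not just the day-to-day change.

baseline = 100.0          # where the process was validated to run
daily_readings = [100.1, 100.3, 100.2, 100.6, 100.9, 101.4, 101.8]

creep_limit = 1.5         # maximum acceptable drift from the baseline

for day, value in enumerate(daily_readings, start=1):
    drift = value - baseline
    step = value - daily_readings[day - 2] if day > 1 else 0.0
    status = "ALERT: cumulative drift exceeds limit" if abs(drift) > creep_limit else "ok"
    print(f"day {day}: reading={value:.1f}  day-to-day change={step:+.1f}  "
          f"drift from baseline={drift:+.1f}  {status}")
```

Each day-to-day change looks harmless, but by the last reading the cumulative drift has crossed the limit, which is exactly the pattern a fixed frame of reference is meant to catch.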

Q: What lessons can process manufacturers and other operators of mission-critical infrastructure learn from these examples?

A: I’d say the biggest takeaway is not to leave AI or machine-learning algorithms operating as black boxes with everything hidden inside. You can keep saying “train the system,” but there’s still unavoidable bias. Have the system share the logic it used to reach a particular decision. That’s transparency, and of course it’s complicated to deliver, which is why you don’t see it often.
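One hedged sketch of what “sharing the logic” could mean in practice (the factors, weights and threshold here are hypothetical, not any particular product’s method): return the contributing factors alongside the verdict, so an operator is in a position to challenge them.

```python
# Minimal sketch (hypothetical factors and weights): instead of returning only a
# verdict, the system also reports the factors and how much each contributed.

def score_with_explanation(readings, weights):
    contributions = {name: readings[name] * weights[name] for name in weights}
    total = sum(contributions.values())
    decision = "shut down pump" if total > 1.0 else "continue operating"
    return decision, contributions

readings = {"vibration": 0.8, "bearing_temp": 0.6, "flow_deviation": 0.2}
weights  = {"vibration": 1.0, "bearing_temp": 0.5, "flow_deviation": 0.3}

decision, contributions = score_with_explanation(readings, weights)
print("Decision:", decision)
print("Based on these factors:")
for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {value:+.2f}")
```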

AI decisions themselves aren’t necessarily any better, or worse, than those made by humans. People often make decisions based on the wrong way of looking at data, because everything is open to interpretation. When operations are mission-critical, both systems and people need to be able to say, “I’m basing my decision on these factors.” Then someone should be in a position to challenge it, and maybe AI could challenge that in turn. So, perhaps you have several layers of AI: one layer disputes the conclusion of another, a further layer provides artificial arbitration, and now and then the result is presented to a human, who checks in and says, “Okay. We haven’t quite reached the Skynet singularity.”
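As a closing sketch of that layered idea (the models, thresholds and escalation rule below are all hypothetical), two independent layers score the same reading, and disagreement is referred to a human rather than resolved silently:

```python
# Minimal sketch (hypothetical models and threshold): one layer challenges another,
# an arbitration step reconciles them, and a human is pulled in when they disagree.

def model_a(reading):
    return "anomaly" if reading > 0.7 else "normal"

def model_b(reading):
    # A second, independently built layer with its own threshold
    return "anomaly" if reading > 0.9 else "normal"

def arbitrate(reading):
    verdict_a, verdict_b = model_a(reading), model_b(reading)
    if verdict_a == verdict_b:
        return verdict_a, "layers agree"
    # Layers disagree: escalate to a human rather than silently picking one
    return "refer to operator", f"disagreement (A={verdict_a}, B={verdict_b})"

for reading in (0.5, 0.8, 0.95):
    verdict, note = arbitrate(reading)
    print(f"reading={reading}: {verdict} ({note})")
```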