While innovation happens frequently in process automation, safety is sometimes “out of scope.” There are many reasons, but a primary one is simply that, for many operators, trusting artificial intelligence and machine learning to do jobs that humans historically handled can be a scary proposition. And, with barriers such as cybersecurity concerns and a daunting number of software tools from which to choose, the prospect of handing safety and security in process applications over to software often ends up pushed to the background. However, even where there is reluctance to use software for process safety applications, tomorrow it will be a reality. Chris Stogner, the Triconex safety and critical control leader at Schneider Electric, talked with Control’s editor in chief, Len Vermillion, on a recent episode of the Control Amplified podcast about the barriers to adoption and how industry can overcome them.
Q: While innovation is taking place in process automation, safety tends to be out of scope. Why, and how does it affect the adoption rate of software tools?
A: Among the barriers to adopting new innovations is a fear that they will cause the safety system to fail. Also, safety is not always at the forefront of people's minds when they talk about potential modernization. Safety is still not thought of as a driver of productivity; it's considered a necessary evil, kind of like having car insurance. However, there is ample evidence that companies with good process safety practices are ultimately more profitable.
Operators interact with a distributed control system (DCS) as part of daily plant life. They sit in front of DCS consoles all day, monitoring what's going on in the process, what's happening in vessels and how much product flows through pipes, and ultimately measuring the productivity of the plant. The safety system is the black box that nobody thinks about, and it may not get the attention it should. So, should safety be out of scope? Perhaps not always.
Q: Another barrier is trust. People are afraid to apply new technologies to their safety systems. So why is there this uneasiness about trusting new software tools for safety?
A: It goes back to a fear of introducing into the safety system something that is not considered “proven in use.” But sometimes “proven in use” philosophies can make us less safe if they prevent us from adopting innovations that can make our plants safer and our safety functions operate more reliably. There is also a reluctance to turn over to a tool what we think a human should be doing. Technologies like artificial intelligence can be scary, and they scare me in some ways, but they can be put to productive use.
Imagine a world of autonomous vehicles. Let's say everyone had autonomous vehicles and they worked properly. You would almost eliminate anyone ever dying in a car accident again, so the adoption of this technology would make us safer. In the same way, software tools can make safety systems work better. For example, we have a software tool that automates the testing of logic. Traditionally, logic is tested manually, which is a time-intensive process, but customers who put this software to use have seen enormous benefits. One user started their plant up four days ahead of schedule, which is a huge productivity boon. Another cut 250 man-days from their test effort, which is a big cost reduction. Most important of all, it makes them safer. Because software can do in minutes what takes a human weeks, much greater test coverage can be achieved, which greatly reduces the likelihood of a covert fault being deployed in the field.
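To illustrate why automated testing achieves greater coverage than manual spot-checking, here is a generic sketch (this is not Schneider Electric's actual tool, and the voting function is a hypothetical example) that exhaustively exercises a simple two-out-of-three trip interlock over every possible input combination:

```python
from itertools import product

# Hypothetical example of safety logic: a two-out-of-three voting
# trip function, which trips when at least two of three sensors vote high.
def two_out_of_three_trip(a, b, c):
    """Return True (trip) when at least two of three sensors read high."""
    return sum([a, b, c]) >= 2

# Automated, exhaustive test: every combination of sensor states.
# A manual campaign might only spot-check a handful of these cases.
for a, b, c in product([False, True], repeat=3):
    expected = (a and b) or (a and c) or (b and c)
    assert two_out_of_three_trip(a, b, c) == expected

print("all 8 input combinations verified")
```

Real safety logic has far more inputs and states, which is exactly why a machine that can enumerate cases in minutes outperforms a human working through a test plan by hand.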
Q: Cybersecurity is on everyone's minds these days. How does this risk present a barrier to adoption for these kinds of tools?
A: Whenever I present tools or if we talk about doing something in the cloud or any other types of apps or analytics, I immediately get asked, ‘Is it secure? How do you make it secure?’ These are legitimate concerns but shouldn’t be a complete roadblock.
Cybersecurity really parallels process safety. Many of the processes that exist inside production facilities have the potential to have some type of catastrophic event. But the products that these companies bring to the world make our lives better. They bring us value. As human beings, we accept the risk associated with these processes because they make our lives better, but we don't just ignore the risk either. We identify the risk, and we do what we can to minimize these risks. Protection layers such as safety instrumented systems are implemented to enable these products to be produced in the safest possible manner.
Software applications and analytics tools have the potential to increase both productivity and safety, but they do require us to make data available. Sometimes it requires us to open more connections and that does create a new cyber-risk. I would be lying if I said that it didn't, but if the value that's provided by these tools is worthwhile, we accept this potential cyber-risk. But just as is the case with process safety, we're going to manage these risks and put in protection systems around them.
Q: How do you manage those risks?
A: When you design a safety instrumented function, you assign protection layers. Every layer has a few small holes in it, the potential failures, commonly depicted by the Swiss Cheese model. The likelihood of a catastrophic event is the mathematical probability of all the holes in all the layers lining up. With cybersecurity, by contrast, there is a bad actor intentionally trying to find where the holes are and make them line up. But securing a system is possible, and following the safety vendor's security guidelines is a good first step.
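The Swiss Cheese arithmetic mentioned above can be sketched in a few lines: if the protection layers fail independently, a hazard only propagates when every layer's hole lines up, so the per-layer probabilities multiply. The layer list and failure probabilities below are hypothetical illustrations, not figures from the interview:

```python
# Sketch of the Swiss Cheese model arithmetic: with independent
# protection layers, a catastrophic event requires every layer to
# fail at once, so the probabilities of failure multiply.
def probability_all_layers_fail(layer_failure_probs):
    """Probability that every independent protection layer fails on demand."""
    result = 1.0
    for p in layer_failure_probs:
        result *= p
    return result

# Hypothetical probabilities of failure on demand for three layers:
# basic process control, alarm/operator response, safety instrumented system.
layers = [1e-1, 1e-1, 1e-2]
print(probability_all_layers_fail(layers))  # roughly 1e-4
```

This is why adding even a modestly reliable independent layer reduces overall risk by orders of magnitude, and why an attacker who can deliberately align the holes breaks the independence assumption the multiplication rests on.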
Many companies conduct cyber-HAZOPs where risks are identified, and protection layers defined. These can include methodologies such as using zone and conduit networking with firewalls, data diodes, or any other type of protective device. What it ultimately does is make information available so that insights can be gathered without providing direct access to a safety controller itself.
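The idea of exposing information without granting access to the controller can be sketched conceptually. The class and data below are hypothetical, intended only to illustrate the one-way, data-diode-style pattern, not any real product's API:

```python
import copy

# Conceptual sketch (hypothetical class and data): information flows
# out of the safety zone for analytics, but no write path leads back in.
class SafetyZoneExporter:
    """One-way export of controller data; offers no inbound write method."""

    def __init__(self, controller_data):
        self._data = controller_data

    def snapshot(self):
        # Outbound only: hand analytics a copy, never a live reference.
        return copy.deepcopy(self._data)

exporter = SafetyZoneExporter({"trip_count": 0, "proof_tests_passed": 12})
view = exporter.snapshot()
view["trip_count"] = 999                   # tampering with the copy...
print(exporter.snapshot()["trip_count"])   # ...never reaches the source: prints 0
```

In practice this separation is enforced in hardware and network architecture (data diodes, firewalled conduits between zones) rather than in application code, but the principle is the same: insights flow out, commands cannot flow in.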
Q: Even if people are reluctant to use software for process safety, they're going to need to in the future. Why?
A: I think they must, whether they realize it or not. The world's energy needs are growing. The population of the world is still growing. New plants are being brought online, existing plants are being expanded, and the industrial workforce is being reduced as more people retire than enter it. The Wall Street Journal reports that since 2014 the oil and gas industry has seen a 75% reduction in undergraduates who are majoring in petroleum engineering.
People will have to do more with less and they're not going to be able to do it unless they adopt new ways of working. To make sure safety can be maintained, there must be this adoption of software, not to replace humans, but to augment them and be able to manage safety in a way they currently don't have the bandwidth or the capacity to do. It's going to make people more productive. At the end of the day, the adoption of these tools will make sure that the plants keep running and everyone goes home safely.