While innovation happens frequently in process automation, safety is sometimes “out of scope.” There are many reasons, but a primary one is that trusting artificial intelligence (AI) and machine learning (ML) to do jobs humans have historically handled can be a scary proposition for many operators. And, with barriers such as cybersecurity concerns and a daunting number of software tools to choose from, the prospect of handing over security and safety in process applications sometimes ends up being pushed to the background. However, even if there’s reluctance to use software for process safety applications today, tomorrow it will be a reality.
Chris Stogner, Triconex safety and critical control leader at Schneider Electric, talked with Control’s editor-in-chief, Len Vermillion, for a recent episode of the Control Amplified podcast to discuss the barriers to adoption and how industry can overcome them.
Len Vermillion: While innovation is taking place in process automation, safety tends to be out of scope. Why, and how does it affect the adoption rate of software tools?
Chris Stogner: The world of automation, in general, is going through the biggest metamorphosis since we started replacing relay-based controls with PLCs and DCSs. New technologies such as big data, ML, open automation and cloud-based control are going to make future control systems look much different than today. However, when these new initiatives are discussed in the context of safety, it’s true more often than not that safety is “out of scope.” While there’s much validity to this way of thinking, we must be open to applying these new technologies to help make safety systems more effective.
Among the barriers to adopting new innovations is a fear they’ll cause the safety system to fail. Also, safety isn’t always at the forefront of people's minds when they discuss potential modernization. Safety is still not thought of as a driver of productivity. It's really considered a necessary evil, kind of like having car insurance. However, there’s much evidence showing that companies with good process safety practices are more profitable.
Operators interact with the DCS as part of daily plant life. They sit in front of DCS consoles all day, monitoring what's going on in the process, what's happening in vessels and how much product flows through pipes, and ultimately measuring the plant’s productivity. The safety system is the black box nobody thinks about, and it may not get the attention it should. So, should safety be out of scope? Perhaps not always.
Len Vermillion: Another barrier is trust. People are afraid to apply new technologies to their safety systems. So why is there uneasiness about trusting new software tools for safety?
Chris Stogner: It goes back to a fear of implementing something for safety that isn’t considered “proven-in-use.” But sometimes proven-in-use philosophies can make us less safe if they prevent us from bringing in innovations that can make our plants safer and make our safety functions operate more reliably. There’s a lack of trust in turning over to technology what we think a human should be doing. Technologies like AI can be scary, and it scares me in some ways, but it can be put to productive use.
Imagine a world of autonomous vehicles. Let's say everyone has one and they all work properly. You’d almost eliminate car-accident fatalities. So, adopting this technology would make us safer. In the same way, software tools can make safety systems work better. For example, we have a software tool that automates logic testing. Traditionally, logic is tested manually, which is a time-intensive process. But customers who put this software to use have seen enormous benefits. One user started up their plant four days ahead of schedule, which is a huge productivity boon. Another cut 250 man-days off their test effort, a big cost reduction. Most important, it made them safer. Because software can do in minutes what takes a human weeks, much greater test coverage can be achieved, which greatly reduces the chance of a covert fault being deployed in the field.
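The coverage point can be illustrated with a toy sketch. This is not Schneider Electric's tool, just an assumed example: exhaustively checking a simple two-out-of-three (2oo3) voting function against its specification, something a script does in milliseconds while a manual test campaign typically samples only a handful of cases.

```python
from itertools import product

def two_out_of_three(a: bool, b: bool, c: bool) -> bool:
    """Trip when at least two of three sensors vote to trip (2oo3 voting)."""
    return (a and b) or (a and c) or (b and c)

def exhaustive_test(logic, spec, n_inputs: int) -> int:
    """Check the logic against its specification for every input combination.

    Returns the number of cases checked; raises AssertionError on any
    mismatch (a "covert fault" that sampled manual testing could miss).
    """
    cases = 0
    for inputs in product([False, True], repeat=n_inputs):
        assert logic(*inputs) == spec(*inputs), f"covert fault at {inputs}"
        cases += 1
    return cases

# Specification: trip whenever a majority of the three sensors demand it.
spec = lambda a, b, c: sum((a, b, c)) >= 2
print(exhaustive_test(two_out_of_three, spec, 3))  # 8 cases, all pass
```

Real safety logic has far more inputs, timers and state, which is exactly why automated tools achieve coverage that manual testing cannot.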
Len Vermillion: Cybersecurity is on everyone's minds these days. How does this risk present a barrier to adoption for these tools?
Chris Stogner: Whenever I present tools or if we talk about doing something in the cloud or any other types of apps or analytics, I immediately get asked, “Is it secure? How do you make it secure?” These are legitimate concerns but shouldn’t be a complete roadblock.
Cybersecurity really parallels process safety. Many of the processes that exist inside production facilities have the potential to have some type of catastrophic event. But the products these companies bring to the world make our lives better. They bring us value. As human beings, we accept the risk associated with these processes because they make our lives better, but we don't just ignore the risk either. We identify the risk, and we do what we can to minimize it. Protection layers such as safety instrumented systems (SIS) are implemented to enable these products to be produced in the safest possible manner.
Software applications and analytics tools can increase productivity and safety, but they require us to make data available. Sometimes that means opening more connections, and that does create new cyber-risk. I’d be lying if I said it didn't. But if the value these tools provide is worthwhile, we accept the potential cyber-risk and, just as with process safety, we manage it and put protection systems around it.
Len Vermillion: How do you manage those risks?
Chris Stogner: When you design a safety instrumented function (SIF), you assign protection layers. Every layer has a few small holes in it, the potential failures, as commonly depicted by the Swiss cheese model. The likelihood of a catastrophic event is the mathematical probability of all the holes in all the layers lining up. With cybersecurity, by contrast, a bad actor is intentionally trying to find the holes and make them line up. But securing a system is possible, and following the safety vendor’s security guidelines is a good first step.
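The "holes lining up" arithmetic can be sketched in a few lines, in the style of a layer-of-protection analysis (LOPA): the residual event frequency is the initiating frequency multiplied by each layer's probability of failure on demand (PFD). The frequencies and PFD values below are purely illustrative assumptions, not figures from the interview.

```python
def mitigated_frequency(initiating_events_per_year: float, pfds: list[float]) -> float:
    """LOPA-style estimate: the event propagates only when every protection
    layer fails on demand, so the residual frequency is the initiating
    frequency times the product of each layer's PFD (the holes lining up)."""
    freq = initiating_events_per_year
    for pfd in pfds:
        freq *= pfd
    return freq

# Hypothetical numbers: one initiating event per year, three independent
# layers (alarm + operator response, SIS, relief device) with assumed PFDs.
layers = [0.1, 0.01, 0.01]
print(mitigated_frequency(1.0, layers))  # ≈ 1e-05 events/year
```

The model assumes independent layers; a cyber-attacker deliberately correlating failures is precisely what breaks that assumption, which is why cybersecurity needs its own protection layers.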
Many companies conduct cyber-hazard and operability (cyber-HAZOP) studies, where risks are identified and protection layers defined. These can include methodologies such as zone-and-conduit networking with firewalls, data diodes or other protective devices. What this ultimately does is make information available, so insights can be gathered without providing direct access to a safety controller.
Len Vermillion: Even if people are reluctant to use software for process safety, they're going to need to in the future. Why?
Chris Stogner: I think they must, whether they realize it or not. The world's energy needs are growing. The population of the world is still growing. New plants are coming online, existing plants are expanding, and the industrial workforce is shrinking as more people retire than enter it. The Wall Street Journal reported that the oil and gas industry has seen a 75% reduction since 2014 in undergraduates majoring in petroleum engineering.
People will have to do more with less, but they won’t be able to do it without adopting new ways of working. To make sure safety can be maintained, there must be adoption of software, not to replace humans, but to augment them and manage safety in ways they currently don't have the bandwidth or capacity to handle. It's going to make people more productive. At the end of the day, the adoption of these tools will make sure that the plants keep running and everyone goes home safely.