
Break the crisis-cybersecurity cycle

June 25, 2021
Intrusions and cyber-attacks aren't letting up, but plentiful protections are effective—if they're deployed and maintained

The recent JBS meatpacking and Colonial Pipeline ransomware attacks and shutdowns are grabbing all the headlines. However, they're just the latest in a long and probably endless parade of cybersecurity crises. Of course, the low points for the process industries include Stuxnet in 2010, Triton in 2017 and several others over the years, along with a litany of named malware threats and related incidents. They haven't been as numerous (or well-publicized) as breaches in the mainstream IT and consumer realms. However, they're all symptoms of computing, software and networking inexorably driving formerly separate technical disciplines, users, organizations and businesses into closer proximity than at any time in their respective histories.

After every attack and crisis, the question for process control operators, engineers, managers and everyone else is, "What do we do now?" Fortunately, the answers are plentiful, and they remain much the same from one crisis to the next.

It's become a given that multiplying network connections and high-speed communications via Ethernet and the Internet bring multiplying vulnerabilities and a growing risk of cyber-probes, intrusions and potential attacks. The good news is that effective cybersecurity hygiene and best practices (like turning on passwords and segmenting networks) have remained remarkably constant, and have been joined by newer tools and methods (network monitoring and anomaly detection) that are just as dependable, as long as they're uniformly applied and consistently maintained.
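
As one small illustration of the newer tools mentioned above, anomaly detection at its simplest can be a rolling statistical check on a monitored value. The Python sketch below is purely hypothetical; the window size, threshold and sample data are arbitrary illustrations, not any vendor's method:

    import collections
    import statistics

    def make_detector(window=60, sigmas=4.0):
        """Flag values more than `sigmas` standard deviations away
        from the rolling mean of the last `window` samples."""
        history = collections.deque(maxlen=window)

        def check(value):
            anomalous = False
            if len(history) >= 10:   # wait for a minimal baseline
                mean = statistics.fmean(history)
                stdev = statistics.pstdev(history)
                anomalous = stdev > 0 and abs(value - mean) > sigmas * stdev
            history.append(value)
            return anomalous

        return check

    check = make_detector()
    for reading in [20.1, 20.3, 19.9, 20.2] * 5 + [45.0]:
        if check(reading):
            print(f"anomaly: {reading}")

Real network-monitoring products are far more sophisticated, but the principle is the same: learn what normal looks like, then alarm on deviations.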

Unfortunately, most breaches and attacks occur when basic cybersecurity isn't implemented or is neglected—in short, humans not doing what they should be doing.

Long-distance defenses

Just as more network connections create more vulnerabilities, adding increasingly remote processes to networks can also create more avenues for intrusions and possible cyber-attacks.

Spanning 1,850 kilometers across Turkey, the recently completed Trans-Anatolian Natural Gas Pipeline (TANAP) supplies more than 5% of Europe's natural gas, and relies on ABB's Process Automation Division for its control infrastructure, security and telecommunications. ABB, in turn, uses Skkynet's DataHub software to support its secure, redundant communications of real-time and alarm data. To move 16-30 billion cubic meters of natural gas per year, TANAP uses four metering stations and two compressor stations connected to a main control center. The system monitors and controls operations and equipment, including leak detection, and stores and transmits data between the remote stations and the control center (Figure 1).

Hundreds of thousands of data points and values are sent via OPC DA and OPC A&E protocols, tunneled by DataHub over the pipeline's redundant, fiber-optic network with VSAT satellite backup, and integrated with TANAP's SCADA systems. Because it isolates the OPC links from the networked tunnel connection, DataHub can transmit OPC A&E alarm data across the tunnels just as easily as OPC DA real-time data. Since it supports OPC server-to-server bridging, DataHub can also link OPC servers at the control centers with servers in the field. In addition, DataHub's built-in redundancy support lets ABB's team provide TANAP with a reliable system using multiple redundancy layers.
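
To make the tunneling idea concrete, here's a minimal Python sketch of how a tunnel might frame tag updates so that only a simple byte stream, never OPC/DCOM traffic, crosses the network. The message fields and length-prefix format are illustrative assumptions, not Skkynet's actual DHTP wire protocol:

    import json
    import struct

    def frame(tag, value, quality, ts):
        """Length-prefix one tag update so the byte stream can be
        re-split reliably on the receiving end."""
        payload = json.dumps(
            {"tag": tag, "value": value, "quality": quality, "ts": ts}
        ).encode()
        return struct.pack("!I", len(payload)) + payload

    def read_frames(sock):
        """Yield complete updates from the tunnel socket, one frame
        at a time, buffering partial frames until they finish."""
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                return                      # peer closed the tunnel
            buf += chunk
            while len(buf) >= 4:
                (length,) = struct.unpack("!I", buf[:4])
                if len(buf) < 4 + length:
                    break                   # wait for the rest of the frame
                yield json.loads(buf[4:4 + length])
                buf = buf[4 + length:]

Because each end talks OPC only to a local server and the network carries only this framed stream, alarm and real-time data can share the same tunnel, which is essentially the isolation property described above.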

"Almost every DataHub function gets used here," says Sam Harrasi, project technical lead and PCS engineer at ABB. "We're using the script capability to monitor the health of DataHub and connected software across the pipeline, not only for our own systems, but for other suppliers as well. Code in each DataHub application on ABB's side sends out a continuous 'heartbeat' and listens to heartbeats for all the servers, including ABB and others. And, we've created a diagnostics webpage that lets us see all DataHubs at once across the entire pipeline."

Andrew Thomas, CEO of Skkynet, adds that, "The trend toward remote plant access has grown from individual remote desktop (RDP) connections to RDP over virtual private networks (VPNs), to multilayer RDPs, and then to data aggregation on central servers. With COVID-19, the need for remote access to plant data has become even greater as secure remote access has evolved from a matter of efficiency to a critical necessity. Outdated mechanisms like VPN and RDP have been pressed into service, even though they're not ideal for cybersecurity. Any computer exposed to the Internet is dealing with a constant stream of probes, intrusions and attacks. Even in 2015 it was estimated that a computer newly connected to the Internet was attacked within seven minutes. The obvious advice is not to expose a computer to the Internet at all, particularly if that computer is part of a sensitive control system. Secure remote access to data in systems that are isolated from the Internet poses a challenge that requires innovative approaches and protocols."

Managing cybersecurity is a broad topic covering everything from network security to personnel training, so Thomas reports that Skkynet concentrates on giving users access to data without exposing their control networks. "VPNs are directly or indirectly exposed to outside networks, which is not ideal. Users know that zero-day exploits and phishing attempts can compromise their networks, so our approach is to avoid exposing networks and plants at all, and to reveal only the data needed by remote users," says Thomas. "However, any time there's an authorized way into a network, for example via OPC UA or RDP, there is also a potential for unauthorized access through exploits in those same systems. The answer is, if access to a process or plant network is not absolutely necessary, then the plant should be completely isolated. It's not good enough for somebody to claim they have a good cybersecurity product and then require an open inbound port in the plant firewall—the whole notion is disingenuous; one open firewall port is one too many. Our message is simple: don't open any ports if you don't want to be attacked. That's possible with the right software and protocols."
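
A rough Python sketch of the outbound-only pattern Thomas advocates follows, with a hypothetical hostname, port and tag names. The plant side initiates the connection to a relay in the DMZ and pushes data over it, so the plant firewall can deny all inbound traffic:

    import json
    import socket
    import time

    DMZ_RELAY = ("dmz-relay.example.com", 8443)   # hypothetical DMZ endpoint

    def read_tags():
        """Stand-in for reading live values from the control system."""
        return {"flow_rate": 42.7, "pressure_kpa": 101.3, "ts": time.time()}

    def push_loop():
        while True:
            try:
                # The plant side initiates the connection; nothing on
                # the plant network listens for inbound traffic, so no
                # inbound firewall port is ever opened.
                with socket.create_connection(DMZ_RELAY, timeout=10) as conn:
                    while True:
                        conn.sendall((json.dumps(read_tags()) + "\n").encode())
                        time.sleep(1.0)
            except OSError:
                time.sleep(5.0)       # back off, then reconnect

    push_loop()

A production version would also wrap the socket in TLS and authenticate the relay, but the essential security property is the connection's direction: no listener, and no open inbound port, on the plant network.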

Rather than exposing a process-control network, Thomas explains, remote users should have access only to the data. Skkynet's DataHub software, its open DHTP protocol and its SkkyHub cloud-computing service let process data reach a DMZ and stream to whoever needs it, without exposing the inner network itself. "This is similar to publish-subscribe protocols like MQTT. However, MQTT has a problem because it can't do a second hop after reaching the DMZ broker, and only guarantees quality of service for one transmission, so there's no way for a PLC to know if a SCADA system got its message. And, if a message is dropped, the PLC and SCADA system won't notice because MQTT can't resynchronize or recover it. However, DataHub can make multiple hops, and preserves the quality of service regardless of how many hops are required. This is critical for secure network topologies involving a DMZ. Transmitters and receivers using DataHub stay synchronized with delays only slightly higher than network latency."
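
The multi-hop and resynchronization behavior Thomas contrasts with MQTT can be illustrated with a toy relay. This hypothetical Python sketch (not Skkynet's implementation; the ports and newline-delimited JSON format are assumptions carried over from the sketches above) keeps a current-value cache in the DMZ, so any consumer that connects or reconnects immediately receives the complete latest state before live updates resume:

    import json
    import socket
    import threading

    PLANT_PORT = 8443      # first hop: plant pushes here (assumed)
    CONSUMER_PORT = 9443   # second hop: remote consumers connect here (assumed)

    cache = {}             # current value of every tag seen so far
    consumers = []
    lock = threading.Lock()

    def serve_plant():
        """First hop: receive tag updates pushed outbound from the plant."""
        srv = socket.create_server(("", PLANT_PORT))
        while True:
            conn, _ = srv.accept()
            for line in conn.makefile():
                update = json.loads(line)
                with lock:
                    cache.update(update)
                    for c in list(consumers):   # second hop: fan out
                        try:
                            c.sendall(line.encode())
                        except OSError:
                            consumers.remove(c)

    def serve_consumers():
        """On connect or reconnect, replay the cache so the consumer
        resynchronizes to the full current state, then stream updates."""
        srv = socket.create_server(("", CONSUMER_PORT))
        while True:
            conn, _ = srv.accept()
            with lock:
                conn.sendall((json.dumps(cache) + "\n").encode())
                consumers.append(conn)

    threading.Thread(target=serve_plant, daemon=True).start()
    serve_consumers()

The cache-replay step is the point of the sketch: it's what a single-hop, per-transmission quality-of-service scheme can't do, because a dropped or missed message is simply gone.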

About the Author

Jim Montague | Executive Editor

Jim Montague is executive editor of Control.