Best Practices in Securing Process Automation Networks

April 9, 2008
Maturing Practice of Evaluating and Managing Control System Risks.

Once upon a time, began Nate Kube, CTO of Wurldtech Security, process control networks were isolated, or “air-gapped,” from enterprise networks, and they used proprietary communications and products. Differing technologies prevented cross-pollination, and stakeholders perceived zero risk of outside intrusion.

“You are not secure because you did vulnerability analysis, and you are not secure because of the latest patch.” Wurldtech’s Nate Kube discussed the maturing practice of evaluating and managing control system risks.

“Today, however, we see rapid adoption of industrial Ethernet and distributed control systems based on open networks,” Kube said. “There is interoperability, and real-time data access is a key functionality. There is an increase in the frequency, severity and complexity of cyber threats. Security in the process industries is not well understood, and the ownership of security issues is unclear.

“Tomorrow, we’ll see industrial organizations outsourcing security solutions. We will see the emergence of sophisticated, multi-vendor ‘blended threats.’ And without ICS-specific data, threats will slip through legitimate openings in perimeter devices like firewalls and IPS/IDS. Security,” he added, “will be a major product differentiator.”

“We must first understand the risk and then implement and maintain solutions,” Kube said, reducing control system security to its simplest terms. “IT governance and assessment methodologies do not expose shop floor risk, and in the absence of clear risk data, companies are often making critical mistakes—either doing too much or doing too little—and current vulnerability test methodologies generate too many false positives and negatives.”

Industrial cyber security, like safety, is best quantified by the impact of losing control of a given process, Kube continued. “Vulnerability discovery and disclosure have always had an element of cloak-and-dagger and contribute to a ‘hair on fire’ syndrome. Every time a new vulnerability comes out, a new patch is required, generating an alarmist response, which is nothing more than chasing your tail.”

“The key is to understand how resilient each device is and to make it more so,” Kube continued. “Device testing and vulnerability analysis can complement each other to reveal the exploitable threats against the device.”
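
The kind of robustness testing Kube describes can be sketched simply. The fragment below (Python; the device address and the choice of Modbus/TCP port 502 are assumptions for illustration only) fires malformed frames at a device and checks after each one that the device still answers. Commercial test platforms are far more protocol-aware and systematic; this shows only the core idea of resilience testing.

    import random
    import socket

    # Hypothetical target: a controller speaking Modbus/TCP on the plant network.
    DEVICE = ("192.0.2.10", 502)
    TIMEOUT = 2.0  # seconds before the device is considered unresponsive

    def device_alive():
        """Health check: can we still open a TCP connection to the device?"""
        try:
            with socket.create_connection(DEVICE, timeout=TIMEOUT):
                return True
        except OSError:
            return False

    def malformed_frame():
        """Build a deliberately invalid frame: random length, random bytes."""
        return bytes(random.randrange(256) for _ in range(random.randrange(1, 300)))

    def robustness_test(iterations=1000):
        """Send malformed frames, verifying the device survives each one."""
        for i in range(iterations):
            frame = malformed_frame()
            try:
                with socket.create_connection(DEVICE, timeout=TIMEOUT) as s:
                    s.sendall(frame)
            except OSError:
                pass  # a refused connection is fine; a dead device is not
            if not device_alive():
                print(f"Device unresponsive after frame {i}: {frame.hex()}")
                return frame
        print("Device survived all malformed frames")
        return None

    if __name__ == "__main__":
        robustness_test()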

Engineers design with the intended purpose of the system in mind. They consider failure in terms of entropic events, such as safety incidents or machine failures, where other compensating controls maintain a safe state. “As engineers, we assume that the process will fail safe,” Kube said. “Hackers, on the other hand, don’t just send one bad packet—they research for weeks or months. Intentional attackers will bypass safety systems and other systems.”

Kube discussed a case study in which the CIO of a power plant asked Wurldtech to attack it, even though it had just passed several NERC CIP audits. In four hours, the Wurldtech team disabled all generation systems; caused erratic behavior on sensor networks; interrupted communications to the historian, the shift office and real-time traders; demonstrated a theoretical attack against dam spillway management; disabled the firewall; and demonstrated numerous malware- and virus-based attacks.

“We need to develop models that are based on threat vectors, not on threat agents,” Kube said. “In other words, we need to figure out what can happen and prevent that, rather than try to figure out who might attack. We need vulnerability analysis, robustness and resilience testing for industrial environments. Then we can develop and implement mitigation strategies against these broad classes of threats.”
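
To make the distinction between threat vectors and threat agents concrete, here is a minimal sketch of a vector-centered model, in Python; every vector name, entry point and mitigation below is hypothetical. The catalogue describes what can happen to the process and which controls cover it, with no reference to who the attacker might be.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ThreatVector:
        """What can happen to the process, independent of who might do it."""
        name: str
        entry_point: str   # where the bad traffic can get in
        consequence: str   # impact of losing control of the process
        mitigations: List[str] = field(default_factory=list)

    # Hypothetical catalogue for a small generating unit
    VECTORS = [
        ThreatVector(
            name="malformed protocol traffic crashes a controller",
            entry_point="industrial Ethernet segment",
            consequence="loss of generation control",
            mitigations=["robustness testing at acceptance", "protocol-aware firewall"],
        ),
        ThreatVector(
            name="unauthorized setpoint write from the enterprise side",
            entry_point="historian link to the business network",
            consequence="erratic sensor and actuator behavior",
            mitigations=[],  # not yet covered: this drives a design decision
        ),
    ]

    def unmitigated(vectors: List[ThreatVector]) -> List[ThreatVector]:
        """Return the vectors that no current control addresses."""
        return [v for v in vectors if not v.mitigations]

    for v in unmitigated(VECTORS):
        print(f"UNMITIGATED: {v.name} via {v.entry_point} -> {v.consequence}")

Unmitigated vectors surface directly as design work, which is what makes this kind of model useful early in the life cycle rather than after an incident.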

The benefits of this approach are significant, Kube contended. Greater upfront design costs are offset by fewer touch points later in the life cycle. It gives the owner greater confidence in the installed architecture. It improves the performance of the overall system, not just its security, and sets the stage for benefit-driven security. Finally, it resembles the safety-driven models with which the controls industry is already familiar.

Unfortunately, this model also requires a level of device testing that is not yet in common use. Operators and owners can either “roll their own” approach or wait on the ISA99.04 standard and the ISA Security Compliance Institute (ISCI). “The models under development do not address every vulnerability,” Kube said, “nor could they, because of the fundamental limitations of device testing. The technique should address security in terms of design constraints and how to design a resilient process. It does not guarantee fault-free operation, but it does provide a robust, fault-tolerant system. This model still requires business continuity and incident response plans.”

"In summary," Kube said, "you are not secure because you did vulnerability analysis; you are not secure because of the latest patch. Understanding your risk level requires a greater level of understanding than is often available. The emerging trends in security are towards system resilience over patching. Resilience testing is essential to identifying risk, and it can be implemented by anyone today.”