Protecting Our Cyber Infrastructure

Cyber-Covering Your Assets Is More Than Fending Off Terrorists, and That’s Not Good News


By Walt Boyes and Joe Weiss

“We have information from multiple regions outside the United States of cyber intrusions into utilities, followed by extortion demands,” said CIA executive Tom Donahue, in a written statement released at the SANS Security Conference held in January in New Orleans. “We suspect, but cannot confirm, that some of these attackers had the benefit of inside knowledge. We have information that cyber attacks have been used to disrupt power equipment in several regions outside the United States. In at least one case, the disruption caused a power outage affecting multiple cities. We do not know who executed these attacks or why, but all involved intrusions through the Internet.”

While the CIA may not be much more forthcoming for fairly obvious reasons, there are lots of clear signs that our infrastructure is being menaced by more than rust and corrosion. In process plants, in water, wastewater, power, nuclear power, pipelines and in transportation, the trend over the past 20 years has been interconnection—interconnection of devices, of subsystems, of control systems; interconnection to government systems, to business partners, and of control systems to business and enterprise networks. This has led to a serious problem regarding protection of cyber-connected assets in all those industry verticals.

One of the very largest problems is that the control systems in plants and the SCADA systems that tie decentralized facilities like power, oil and gas pipelines, and water distribution and wastewater collection systems together were designed to be open, robust and easily operated and maintained—but not necessarily to be secure.

For example, at the ACS Cyber Security Conference in August 2008, Nate Kube of Wurldtech and Bryan Singer of Kenexis demonstrated that a TÜV-certified, safety-instrumented system could be hacked very easily. The unidentified system failed in an unsafe condition in less than 26 seconds after the attack commenced. Operating “cyber-securely” was not a design criterion.

Schweitzer Engineering Laboratories (SEL) had a utility on its website that allowed its Internet-enabled relays to be programmed via a Telnet client by any user. Recently, several security researchers found and acted on it, and SEL has taken the utility down to protect its users.
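Part of what made that exposure so easy to find is that Telnet listens on a well-known TCP port and accepts connections in cleartext, so discovering an exposed device takes only a connection attempt. As a minimal sketch (the host address is hypothetical; real port scanning should only ever be done against systems you own or are authorized to test), this is all a researcher needs to confirm a reachable Telnet service:

```python
import socket

def telnet_port_open(host: str, port: int = 23, timeout: float = 3.0) -> bool:
    """Return True if the host accepts a TCP connection on the given port
    (23 is Telnet's well-known port). A successful connect is enough to
    show the service is exposed; no credentials are needed to find it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical check against a device on your own network:
# telnet_port_open("192.168.1.50")
```

Because Telnet carries no encryption and, as configured here, no meaningful access control, anything reachable this way is effectively programmable by whoever connects first.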

These cyber incidents have happened in many process industry verticals, whether they've been admitted to or not. And it's clear that the threat isn't just terrorists.

Although the CIA’s Donahue says terrorists and gangsters have struck outside the U.S., in North America, cyber accidents have occurred more often than deliberate attacks. Mike Peters, of, but not speaking for, the Federal Energy Regulatory Commission (FERC), says, “It’s been 10 years since the various domestic and foreign terrorists started playing in cyberspace. They’ve gotten better and better at it. The ‘middle managers,’ who are much more current with cyber, have not yet risen to leadership roles where they can order something done. They’re collecting information, and they’re planning. It’s just a matter of time.”

In-House Screw-Ups

It may be a matter of time, but history shows that it is much more likely to be an internal screw-up that produces the problem.

In 1999, an operator for the Olympic Pipeline Co. in Bellingham, Wash., was working on his pipeline SCADA system. Unbeknownst to him, the scan rate of the SCADA system slowed to the point where critical process data failed to reach the SCADA HMI until after the pipeline ruptured, causing three deaths and numerous injuries. This is a classic cyber accident.
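The Bellingham lesson is that a slowed scan can silently serve stale values to the HMI as if they were current. A minimal sketch of one common defense, a staleness watchdog that flags readings older than a limit (the tag structure and the five-second limit are illustrative assumptions, not part of the actual Olympic system):

```python
import time

STALE_AFTER_S = 5.0  # hypothetical limit; real limits depend on the process

class TagReading:
    """A process value paired with the time it was actually sampled."""
    def __init__(self, value: float, timestamp: float):
        self.value = value
        self.timestamp = timestamp

def is_stale(reading: TagReading, now: float, limit: float = STALE_AFTER_S) -> bool:
    """Flag data the HMI should distrust rather than display as current."""
    return (now - reading.timestamp) > limit

# A pressure reading sampled 12 seconds ago should be flagged, not trusted.
old_reading = TagReading(value=725.0, timestamp=time.time() - 12.0)
assert is_stale(old_reading, now=time.time())
```

The design point is that timestamps must travel with the data from the field device, so the display layer can distinguish "unchanged" from "not updated."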

On March 7, 2008, the Southern Co.’s Hatch Unit 2 nuclear power station near Baxley, Ga., was operating at approximately 100% power. An engineer was testing a software change on the plant’s Chemistry Data Acquisition System (CDAS) server. The engineer did not realize that the vendor software automatically synchronizes data tags between connected computers running the software. When the local tag values were updated by his code, the changes were synchronized with the software running on the condensate demineralizer control PC. The updated values were sent to the PLC operating the demineralizers. Because the values being written were zeros, the PLC switched to manual control with 0% flow demand, and closed all seven condensate demineralizer outlet valves, resulting in an automatic scram of the plant.
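The failure mode above, a local test write propagating plant-wide through automatic tag synchronization, can be sketched in a few lines (the class and tag names are hypothetical; the vendor's actual software is not public):

```python
class TagDatabase:
    """Shared tag store: a write from any connected node is replicated to
    every peer, mimicking the vendor software's automatic synchronization."""
    def __init__(self):
        self.nodes = []

    def connect(self, node):
        self.nodes.append(node)

    def write(self, tag: str, value: float):
        for node in self.nodes:
            node.tags[tag] = value  # replicated everywhere, silently

class Node:
    def __init__(self, name: str, db: TagDatabase):
        self.name = name
        self.tags = {"flow_demand_pct": 100.0}
        db.connect(self)

db = TagDatabase()
test_server = Node("CDAS test server", db)
demin_pc = Node("demineralizer control PC", db)

# The engineer's test write of zeros on the CDAS server...
db.write("flow_demand_pct", 0.0)

# ...reaches the control PC, which would drive the PLC to 0% flow demand.
assert demin_pc.tags["flow_demand_pct"] == 0.0
```

The hazard is architectural: once a test machine and a control machine share a synchronized tag space, there is no such thing as a purely local experiment.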

On Aug. 19, 2006, operators at TVA’s Browns Ferry Unit 3 nuclear power plant, in northern Alabama, manually scrammed the unit following a loss of both reactor recirculation pumps. The initial investigation found that the recirculation pump VFD controllers were nonresponsive, and that the condensate demineralizer controller had also failed. The condensate demineralizer primary controller is a dual-redundant PLC system connected to the plant-integrated, Ethernet-based computer system network. The VFD controllers are also connected to this same plant-integrated control system network. TVA determined that the root cause of the event was the malfunction of the VFD controller because of excessive traffic on the plant-integrated control system network. TVA could not conclusively establish whether the failure of the PLC caused the VFD controllers to become nonresponsive, or the excessive network traffic, originating from a different source, caused both to fail. However, information received from the PLC vendor indicated that the PLC failure was a likely symptom of the excessive network traffic.

Lest you think this is all about the power and oil and gas industries, there is the case of Vitek Boden. Boden worked for Hunter Watertech, a firm that installed a SCADA system for the Maroochy Shire Council in Queensland, Australia. Later, Boden applied for a job with the council, but the council decided not to hire him. Consequently, Boden decided to get even with the council and his former employer. He packed his car with stolen radio equipment attached to a possibly stolen computer, and drove around the area on at least 46 occasions from Feb. 28 to April 23, 2000, issuing radio commands to the sewage equipment he probably helped install, causing raw sewage to spill into local parks, rivers and even the grounds of a Hyatt Regency hotel. Boden was caught and sentenced to two years in jail and ordered to reimburse the council for cleanup.

There’s evidence of more than 100 cyber incidents, whether intentional, malicious or accidental, in co-author Joe Weiss’ Real-Time ACS database. These include the 2003 Northeast power outage and the 2008 Florida power outage. Neither incident has been described as a cyber event by the power companies and transmission companies involved. In fact, these companies continue to state that they have very few critical cyber assets, with most claiming they have no power plants that qualify as critical.

Now What?

So what do we know, and what do we do about it? Industrial control systems (ICS) are an integral part of the industrial infrastructure supporting the nation’s livelihood and economy. They aren’t going away, and starting over from scratch to secure them isn’t an option. ICSs are “systems of systems,” and need to be operated in a safe, efficient and secure manner.

The sometimes-competing goals of reliability and security are not just a North American issue, but truly a global one. A number of North American control system suppliers have development activities in countries with dubious credentials. A large North American control system supplier has a major code-writing office in China, and a European RTU manufacturer has code written in Iran.

While sharing basic constructs with enterprise IT business systems, ICSs are very different systems. Vulnerability disclosure philosophies are different, and applying the wrong one can have devastating consequences.

A major concern is the dearth of a workforce educated to cope with the problem. There are probably fewer than 100 living control system cybersecurity experts and currently no university curricula or ICS cybersecurity personnel certifications. Efforts to secure these critical systems are too diffuse, and do not specifically target the unique ICS aspects. The lack of ICS security expertise extends into the government arena, which has focused on repackaging IT solutions.

However, the convergence of mainstream IT and ICS systems requires that both mainstream and control system experts acknowledge the operating differences and accept the similarities. ICS cybersecurity is where mainstream IT security was 15 years ago—in the formative stage and needing support to leapfrog the previous IT learning curve. Regulation, regulatory incentives and industry self-interest are necessary to create an atmosphere for adequately securing critical infrastructures.

What can you and your company do to protect yourselves? The following recommendations, taken from a report to the bipartisan commission producing position papers for the incoming U.S. administration, provide steps to improve security and reliability of critical systems, and most are adoptable by any process industry business unit:

  • Develop a clear understanding of ICS cybersecurity;
  • Develop a clear understanding of the associated impacts on system reliability and safety on the part of industry, government and private citizens;
  • Define cyber threats in the broadest possible terms, including intentional, unintentional, natural and other, such as electromagnetic pulse (EMP) and electronic warfare against wireless devices;
  • Develop security technologies and best practices for field devices based upon real-world scenarios and actual and expected ICS cyber incidents;
  • Develop academic curricula in ICS cybersecurity;
  • Leverage appropriate IT technologies and best practices for securing workstations using commercial off-the-shelf operating systems;
  • Establish standard certification metrics for ICS processes, systems, personnel and cybersecurity;
  • Promote/mandate adoption of the NIST Risk Management Framework for all critical infrastructures or at least the industrial infrastructure subset;
  • Establish a global, non-governmental cyber incident response team (CIRT) for control systems staffed with control system experts for vulnerability disclosure and information sharing;
  • Establish a means for vetting ICS experts rather than using traditional security clearances;
  • Provide regulation and incentives for cybersecurity in critical infrastructure industries;
  • Establish, promote and support an open demonstration facility dedicated to best practices for ICS systems;
  • Include subject-matter experts with control system experience at high-level cybersecurity planning sessions;
  • Change the culture of manufacturing in critical industries so that security is considered as important as performance and safety;
  • Develop guidelines for adequately securing ICS environments similar to those in the Sarbanes-Oxley Act.

Like process safety, process security is itself a process and must become part of the culture of inherent safety and security we all must develop.

Walt Boyes is Control’s Editor in Chief. Joe Weiss is president of Applied Control Solutions and author of ControlGlobal’s “Unfettered” blog.
