If They Can, Why Haven’t They?

Sept. 5, 2008
It’s the Time Between When the Vulnerability Is Found and When It Is Fixed by the Vendor and the Fix Is Installed by the End User That Is Scary
By Walt Boyes, Editor in Chief

Our critical infrastructure is very fragile. Our power plants, natural gas pipelines, chemical plants and refineries need to be better protected from the threats we know are aimed at them. This was the point made by U.S. Rep. Jim Langevin (D-R.I.) in his keynote speech at the ACS Control System Cyber Security Conference in August. (If you’re interested, you can watch Langevin’s keynote, “Rep. James R. Langevin (D-R.I.) on Cybersecurity.”) The congressman is absolutely correct.

One of the problems, though, is the same one we’ve faced in ensuring plant safety all these years: the use of risk analysis to decide whether, and how much, to invest in safety systems. Risk analysis in process safety is often based on the number of reported workplace accidents, which has little or nothing to do with whether a particular process is designed or operated safely. So we still have plants blowing up.

I’m hearing that kind of risk analysis rear its ugly head in industrial cybersecurity too. CEOs and CIOs are saying, “Well, it’s been 10 years since we started to get these warnings. Obviously, if they could strike us they would have already.” Others are saying, “This is just another Y2K! Nothing happened then, and nothing will happen now. All you want is a bigger budget.”

Mike Peters of the Federal Energy Regulatory Commission (FERC), speaking for himself and not for the commission, gave me an answer as to why we haven’t been hit with a major attack yet. “It’s been 10 years since the various domestic and foreign terrorists started playing in cyberspace. They’ve gotten better and better at it. But the leadership, even though it may be technically trained, is acting as spiritual leaders, and the middle managers, who are much more current with cyber, have not yet succeeded to leadership roles where they can order something done. They’re collecting information, and they’re planning. It’s just a matter of time.”

However, what we need is not only more time, but also a defined way to protect our infrastructure, and especially the end users who operate it. It takes time for operators to make sure that patches work. Sometimes it can take months, and even then there can be problems.

Vendors create software. Their testing protocols are all different. Security researchers find vulnerabilities in that software. Their methodologies differ, and so do their motivations. Some researchers are sterling, upright people, and some are no different from the hackers they associate with at the Defcon or Black Hat conferences. Luckily, we have many dedicated, upright and professional security researchers, who have found many vulnerabilities in industrial control software, including SCADA, DCS, PLC and HMI systems.

But it is the Wild West out there. I helped establish ISCI, the ISA Security Compliance Institute, and I serve on ISA99 because there needs to be a standard for control system cybersecurity, and there needs to be a way to make sure that products conform to that standard. But there’s still a need for a global (and that means non-governmental) way, designed specifically for control systems, to centralize the reporting of vulnerabilities and patches, and to test those patches on a live, open test bed, so that end users and companies can reduce the time-to-fix of a security vulnerability.

Disclosure isn’t bad or wrong. It is the time between when the vulnerability is found and when it is both fixed by the vendor and installed and tested by the end user that is scary. The longer that time, the more likely that vulnerability will be exploited.

So who’s going to step up and help create that global non-governmental entity and make it work?