Thoughts on DHS Metrics Primer

The June 2009 DHS report, Primer Control Systems Cyber Security Framework and Technical Metrics, is meant to address a critical missing link: metrics for control system cyber security. It is a good start. My comments come from the perspective of how well this Primer addresses actual control system cyber incidents. That statement leads to my first concern: most control system cyber events are incidents, not attacks. Many of these actual incidents have caused significant damage and yet did not violate IT security policies. The Primer, however, is focused on malicious IT-type attacks.

Another concern is security knowledge. According to the Primer, “The security group represents those people in an organization who are directly responsible for the cyber security of the control systems.” Many security groups are staffed by IT-trained security experts. Very few people actually understand control system cyber security, and most of them are not in the security group. There have already been numerous cases where the security organization CAUSED the control system cyber incident. Not only does the metric not account for this, but having the wrong people doing the wrong things should lead to a NEGATIVE metric.

The final concern is that the Primer simply does not recognize the unique issues with legacy control systems. Many systems cannot take complex passwords. Many systems simply cannot be patched expeditiously, if at all.

I am simply not seeing much coming out of the DHS Control Systems Cyber Security Program to address legacy control system issues or the actual incidents that have occurred.

Joe Weiss



  • <p> Attacks (or the remote possibility of them) SELL. Take the average cyber incident that happens every other day somewhere around the globe, where systems fail due to network overload, accidental misconfiguration, random malware, or sloppy change management, and nobody wants to hear about it. There must be some kind of link established to attackers, preferably from Russia or China, or to Islamist terrorists at best: they COULD have exploited the demonstrated vulnerability and COULD have caused even more damage. Factual damage worth $50k doesn't count if it was caused accidentally. It matters only if an attacker COULD have produced even $500k worth of damage. So we ignore the reasons (or threats, if one prefers that term) that account for the vast majority of incidents, whether measured by number of occurrences or by extent of damage (2003 East Coast power blackout: $4 billion; Bellingham pipeline accident: 3 dead, 8 injured). Accidents don't sell. On the other hand, unsubstantiated rumors about foreign hackers penetrating systems of the US power grid with zero recognizable damage (if there was any fact behind this story at all) do sell; they even make it into a presidential speech. </p> <p> Basically, the obsession with cyber attacks may easily be explained by economics. In the research market, there are buyers (mainly the US government and associated organizations) who buy "cyber attacks", i.e. theories on what COULD happen and how to defend against it. The state of the art has reached a point where it could be considered science fiction. There is one (1) FACTUAL, convicted attacker (you all know his name), and there are thousands of FICTIONAL attackers, zero-day exploits, denial-of-control attacks and so forth. </p> <p> Why would any researcher, security expert, or control system guru invest valuable time in working on non-intentional incidents if there is no demand for such effort? After all, you get what you pay for. If you don't pay for (information about) non-intentional incidents, you won't get any. Come up with wild speculations about Chinese hackers, and you make it into the Washington Post. Our control system installations are so fragile that it takes only a well-intentioned, trained maintenance engineer doing a policy-compliant network scan or installing an OS update to bring down a facility, but that's not noteworthy, and few organizations invest money to do something about it. My prediction is that the obsession with fictitious attacks will last only a few more years, until the major players (i.e. investors) and the general public are saturated with this type of science fiction. My hope is that thereafter, we can focus on real-life problems. </p>


  • <p> Ralph – I wholeheartedly agree. That is why we will have a demonstration at the October ACS Conference where one of the National Labs will use a PLC and a variable speed drive to demonstrate how sensitive these systems are to electronic issues, including scanning the networks. The reason for performing this demonstration is that at least one nuclear plant and several process facilities have ALREADY been shut down because of these systems being impacted electronically. This is a reliability and economic issue of maintaining the end-user’s most critical assets. </p> <p> We need to get the discussion back to where it belongs – these are reliability issues that can have security impacts, not the other way around. Or, as Walt Boyes says, these are functional security issues. </p> <p> Joe Weiss </p>


  • <p> Joe,<br><br> I'm glad you are giving the primer some blog time. As a contributor to this work (now able to publicly respond) I feel obliged to speak up here. The two professionals who took on the metrics task at INL for DHS are intelligent professionals whom you ought to meet.<br><br> First let me provide some background on the work. The DHS (reservedly) sponsored two researchers to come up with a set (the original idea was five) of metrics to measure ICS security. The researchers visited several conferences in 2006 and 2007 – including PCSF – to gauge interest and request input. Response to the topic in general was less than spectacular.<br><br> Measurement is not an engaging field. Not nearly as engaging as finding vulnerabilities, discussing attacks, or criticizing someone else's work. Moreover, security is not easy to measure. There are a number of books out there on measuring security. One had over 1,000 ways to measure security attributes of a system.<br><br> So the first problem was how to come up with a manageable number of metrics (10 or fewer) capable of covering a wide range of security issues. Using a remarkably intuitive approach, the researchers came up with an ideal-driven framework; hence the seven ideals. When I saw the ideals I immediately liked the idea because they help tell a security manager how to improve system security using understandable concepts. To me that is the great contribution of this work to the state of the practice.<br><br> The researchers then set out to determine how to measure these ideals. The understanding was that no set of ten metrics would ever be complete or perfect. The researchers scoured the literature to identify proposed security metrics to cover the most important attributes of a system – with at least one metric per ideal.<br><br> After the literature review, the researchers identified and defined 13 technical metrics. A technical metric means one that can be obtained from direct control system observation.
(Meaning, for example, that the number of control systems staff trained in security [although an important consideration] would not be included in the set of metrics because it is not part of the technical system per the definition used.) The researchers also tried to make the metrics easy to understand – meaning that the metrics deal with whole numbers rather than ratios or percentages.<br><br> The 13 metrics were then applied to two ICS environments. This helped the researchers refine the definitions. Finally, the metrics were compared with the findings of about 10 control systems security assessments. This is where I entered the picture. The seven ideals covered all the findings, with at least one finding per ideal. The 13 metrics covered about 86% of the findings, and the 10 core metrics covered about 84%.<br><br> Though the 10 metrics are far from perfect and not without limitations (note that in the Primer each of the recommended metrics includes a description of its weaknesses), the fact that the researchers created a usable framework to identify metrics, made some effort to validate and apply the metrics, and obtained positive results indicates that the framework is, precisely as you said, “a good start.”<br><br> Regardless of approach, I would love to see asset owners implement ICS security metrics programs to track their own progress over time. </p> <p> If you would like to learn more about the research without contacting the researchers themselves, you can check out the Ideal Driven Technical Metrics paper from S4 2008. </p>
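The coverage figures described in the comment above come down to a simple tally: for each assessment finding, check whether at least one metric in the candidate set would have flagged it. A minimal sketch of that calculation follows; the finding names, metric names, and the mapping between them are hypothetical placeholders, not the actual INL data.

```python
# Hypothetical sketch of the coverage check described above: what fraction
# of assessment findings are addressed by at least one metric in a set?
# All finding/metric names below are illustrative, not real assessment data.

def coverage(findings_to_metrics, metric_set):
    """Fraction of findings flagged by at least one metric in metric_set."""
    covered = sum(
        1 for metrics in findings_to_metrics.values()
        if any(m in metric_set for m in metrics)
    )
    return covered / len(findings_to_metrics)

# Illustrative mapping: assessment finding -> metrics that would flag it.
findings = {
    "weak-passwords": ["password-strength"],
    "open-ports":     ["attack-surface", "port-count"],
    "unpatched-hmi":  ["patch-latency"],
    "flat-network":   ["network-segmentation"],
    "no-logging":     [],  # not covered by any metric in this sketch
}

core_metrics = {"password-strength", "attack-surface",
                "patch-latency", "network-segmentation"}

print(f"{coverage(findings, core_metrics):.0%}")  # prints 80%
```

Comparing a candidate metric set against a body of real assessment findings this way is one plausible reading of the validation step the commenter describes; the Primer itself should be consulted for the actual methodology.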

