By Michael Peters, Infrastructure and Cybersecurity Advisor
A recent U.S. Government Accountability Office (GAO) report [GAO-10-628, "Critical Infrastructure Protection: Key Private and Public Cyber Expectations Need to Be Consistently Addressed," www.gao.gov/products/GAO-10-628] reveals that a key expectation from industry is for actionable cyber-threat information from the federal government.
This expectation for tactical-level information has not been fully met (see "Threat" vs. "Tactical" Information). Because of this gap, a company may choose not to implement cybersecurity defenses because it believes no threat exists. I believe this reliance on tactical threat information is a misreading of the environment, and a major impediment to securing our critical infrastructures from attack.
I do not believe this tactical level of information is necessary for a critical infrastructure company to implement cybersecurity defenses. The federal government has provided strategic-level, cyber-threat information to the various critical infrastructures, and this type of information sharing can easily continue because the strategic threat is the information that the government most likely will be able to acquire and distribute.
However, even this level of threat information really isn't necessary to justify and implement cybersecurity defenses. Many threat actors capable of impacting the security of a control system exist today—traditional hackers, criminals, disgruntled insiders, terrorists and nation-states. All have a range of capabilities and intents, though the common assumption is that the nation-state is the most technically sophisticated and the hacker the least. Many of these adversaries are capable of both structured and unstructured operations. What is crucial is that the level of sophistication, structure and capability varies within every adversary type. A security professional should never assume that a particular type of adversary has fixed traits.
Wobbly Threat Leg?
Understanding these adversaries and determining their capabilities and intents is a very difficult problem and often results in less-than-complete information. Yet this information forms the basis of the "threat leg" of the traditional risk equation: Risk = Threat x Vulnerability x Consequences. The lack of threat information often reduces the perceived risk to the system. What every critical infrastructure company should assume instead is that one or more of these adversaries eventually will attack it. Companies should assume the threat level is "1," meaning a viable cyber threat to their control systems exists. Which threat actor attacks them is immaterial. What companies and their customers should care about is that their system has been exploited, and the services/products the company provides are unavailable.
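The argument can be shown directly in the equation: once the threat term is pinned at 1, risk collapses to the product of vulnerability and consequence, two quantities a company can assess for itself. A minimal sketch of that arithmetic, using hypothetical 0-10 scores rather than real assessment data:

```python
# The traditional risk equation: Risk = Threat x Vulnerability x Consequences.
# Setting Threat = 1 (a viable cyber threat is assumed to exist) leaves risk
# driven entirely by vulnerability and consequence scores. The 0-10 ratings
# below are hypothetical examples, not real assessment data.

def risk(threat: float, vulnerability: float, consequence: float) -> float:
    return threat * vulnerability * consequence

THREAT = 1.0  # assume a viable threat always exists

# With the threat term fixed, only the self-assessable terms matter:
print(risk(THREAT, vulnerability=7, consequence=9))   # 63.0
print(risk(THREAT, vulnerability=3, consequence=10))  # 30.0
```

The point of fixing the threat term is that the ranking of risks no longer depends on adversary intelligence the company may never receive.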
A frequent counter-argument from critical infrastructure companies is that they can't afford to address everything, and that without this threat information they don't know what to fix or how to spend their resources. While this is true, I believe there is a better way to allocate scarce cybersecurity dollars than waiting for tactical cyber-threat information that may never arrive and that would be constantly changing even if it were readily available.
Two Security Perspectives
I think critical infrastructure companies should examine themselves from two main perspectives, and not rely on threat information.
The first is most directly tied to the mission of the company, whether that is providing electricity, making potable water, refining gasoline or manufacturing televisions. Companies should create "tiger teams" of specialists, including their most knowledgeable operators, control system experts and IT personnel, and charge them with developing scenarios for causing the most harm, destruction or danger to company personnel or to the public. These people have detailed, intimate knowledge of the company's systems and processes, and they will often know exactly how to cause the most damage to operations. They can then build on this knowledge to determine how best to mitigate the attack vectors they developed.
The second perspective is the traditional vulnerability assessment/evaluation arena. Critical infrastructure companies need to examine their systems looking for vulnerabilities and, for each vulnerability found:

1. Determine the consequences/impacts to the company's operations of a successful exploitation of the vulnerability.
2. Determine the capabilities necessary to successfully exploit the vulnerability and cause the identified consequences.
3. Determine whether those capabilities currently exist, and whether they are easy to use.
4. Determine how to mitigate the vulnerability and minimize the impact of a successful exploitation.

The company should also answer all of these questions for the scenarios developed by its internal tiger team.
Now the company can prioritize what it fixes by working through the results of the above analysis. Vulnerabilities with high/major impacts, where the capabilities to successfully exploit currently exist and are easy to use, should be fixed first.
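The prioritization step can be sketched as a simple scoring pass over the assessment results. Everything here is a hypothetical illustration: the field names, the weighting scheme and the sample findings are assumptions for the sketch, not a standard methodology.

```python
# Rank vulnerability findings: a high impact combined with an existing,
# easy-to-use exploit capability floats to the top of the fix list.
# All fields and weights below are illustrative assumptions.

findings = [
    {"id": "V1", "impact": 9, "exploit_exists": True,  "easy_to_use": True},
    {"id": "V2", "impact": 9, "exploit_exists": False, "easy_to_use": False},
    {"id": "V3", "impact": 4, "exploit_exists": True,  "easy_to_use": True},
]

def priority(finding: dict) -> int:
    # An existing exploit capability, and one that is easy to use,
    # each raise the urgency of fixing the vulnerability.
    weight = 1 + finding["exploit_exists"] + finding["easy_to_use"]
    return finding["impact"] * weight

ranked = sorted(findings, key=priority, reverse=True)
print([f["id"] for f in ranked])  # ['V1', 'V3', 'V2']
```

Note that the high-impact finding with no known exploit (V2) ranks below a lower-impact finding that is readily exploitable today (V3), which matches the article's fix-first criterion.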
The overall goal is to improve the security of the system, and this methodology uses only vulnerabilities and consequences—information that is most likely known—rather than threat information, which is typically unknown. (Threat information is definitely unknown at the tactical level and often considered not detailed enough at the strategic level.)
Learning from Accidents
One other area where critical infrastructure companies can gather information to convince senior executives to authorize cybersecurity defenses is real-world industrial incidents/accidents: examine them and see whether a purely cyber scenario can be extrapolated that results in the same consequences. Most industrial accidents involve three legs: a physical issue/problem; some form of human error; and a cyber issue, such as a cyber system not running, a cyber system running but on incorrect data, or a malicious cyber attack (currently rare).
For some industrial accidents, it is quite simple to extrapolate a purely cyber vector that causes the same consequences as the original accident. Doing so normally requires two main assumptions. The first is that an electronic pathway exists from the targeted control system to the outside world; a disgruntled insider must be considered as well. The second is that this electronic pathway is exploitable, and the likelihood of this is very high. One could simply assume a supply chain compromise that allowed the adversary to implant malicious access at an earlier stage.
I believe that by undertaking the above three efforts, any critical infrastructure company will have developed or acquired enough information to convince its senior executives that cybersecurity defenses must be implemented to ensure the company can continue to carry out its mission safely, reliably and securely. None of this requires tactical cyber-threat information from the government before the company acts to adequately secure its control systems.
There is one arena where tactical, actionable cyber-threat information is genuinely needed: during an attack. Mechanisms must be developed and deployed that allow information to be shared while an attack is occurring, so that companies not yet under attack can ramp up their defenses and prevent the current attack from succeeding. This assumes, however, that the companies have already implemented cybersecurity defense measures and have developed the plans and procedures to rapidly increase their cybersecurity defense posture.
The Bottom Line
Critical infrastructure companies should not depend on tactical cyber-threat information to deploy cybersecurity defenses. Instead, they should assume that the cyber threat is "1," and focus on understanding their vulnerabilities and the consequences of a successful exploitation of them. Waiting for tactical cyber-threat information could delay them from examining their systems from a mission perspective and implementing appropriate defenses. The discussions concerning tactical cyber threats and the resulting expectations (and the need for clearances for industry personnel) are primarily a distraction, and are being used to justify inaction on implementing cyber defenses. The government and the critical infrastructures need to get past this self-imposed roadblock.
Michael Peters is an energy infrastructure and cybersecurity advisor for the Federal Energy Regulatory Commission's Office of Electric Reliability. He specializes in analyzing cybersecurity issues, including those affecting control systems. This article is personal opinion and does not represent the opinion or position of the Federal Energy Regulatory Commission or the federal government.