About how risk management works…and doesn’t work

ISA SP99 is working on the Part II standard. The current discussion is on risk. I am including my response here, looking for discussion on this subject. My premise is that the traditional risk methodology (frequency * consequence) does not apply to control system cyber security. My reasoning is as follows:

- I do not believe we will ever get enough control system cyber incident data to have a statistical basis for frequency.

- In the control system cyber world, frequency is temporal. Until there is an incident, the frequency can be whatever has been hypothesized; after the event, the frequency is 1 until there is confirmatory mitigation. What's more, if the exploit was due to a vulnerability in the control system or network design, it could then affect any user of that control system or network design, and those users may not know their frequency just went from very small to 1.

- To be conservative, the consequence should be the worst-case design basis, because if the control system is compromised, the attacker could perform a wide range of exploits. In actuality, the design basis may not even be conservative enough, because it assumes systems fail in a fail-safe manner. At the August Control System Cyber Security Conference in Chicago, we will be demonstrating the hack of a safety system that prevents the system from failing in a fail-safe manner.

Consequently, I believe the risk section should simply state that the frequency is 1 and the consequence is the worst-case design basis. This approach will also impact the risk assessment methodology for NERC CIP-002. Many utilities are using the N-1 deterministic criterion to justify eliminating most assets from being considered critical. Based on my premise and common sense, that doesn't work.

Joe
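To make the contrast concrete, here is a minimal sketch in Python; the dollar figure and the 1-in-10,000-year frequency are hypothetical numbers for illustration only, not taken from the SP99 discussion.

```python
# Hypothetical illustration only: the dollar figure and the 1-in-10,000-year
# frequency are made up to show the contrast, not taken from the post.

def traditional_risk(frequency_per_year: float, consequence: float) -> float:
    """Classic risk score: expected loss = frequency * consequence."""
    return frequency_per_year * consequence


def conservative_cyber_risk(worst_case_consequence: float) -> float:
    """Treatment argued for above: once a viable exploit exists, assume the
    frequency is 1 and take the worst-case design-basis consequence."""
    return 1.0 * worst_case_consequence


# A hypothesized rare event looks negligible on paper...
print(traditional_risk(frequency_per_year=1e-4, consequence=50_000_000))   # 5000.0
# ...but after the exploit is demonstrated, every user of the same design
# has to treat it as if it will happen.
print(conservative_cyber_risk(worst_case_consequence=50_000_000))          # 50000000.0
```

The point of the contrast is that a hypothesized rare event looks negligible on paper, yet once a working exploit exists, the same design-basis consequence has to be treated as if it will occur.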

What are your comments?


Comments

  • The fundamental problem I have with security risk analysis is that it looks backward at what exploits there have been.

    In a broad sense, this works. For example, if you know that the majority of threats come from the inside, you'll design your system accordingly. However, if you know that the majority of threats came from RPC hacks on unpatched Microsoft OSs, then the validity of that data diminishes once you patch those vulnerabilities.

    With safety systems, we don't assume malice; safety violations are usually not the result of creative hacks. It's just the opposite for security, where the untried attack vector is usually the most dangerous. In safety, random walks would stumble across the most likely events; in security, people actively seek out the vulnerabilities.

    That is why I feel the use of statistical approaches to determine where to spend one's effort to secure a system is particularly futile. We can know the broad outlines of a security problem, but the specifics will always be shrouded in the dim light of our proverbial hacker's basement.


  • Rather than blindly using rear-view-mirror probabilities in the risk analysis, DOE and others emphasize computing risk as a more inside-out function of susceptibility and consequences, i.e. R = f(S, C). Susceptibility (or vulnerability) reflects technical exposure, while consequences are more application specific (control, safety, etc.). A self-assessment methodology based on this could provide graded results and let you apply "what ifs" to see improvements; a rough sketch of such a scheme follows below.

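As a rough illustration of the graded self-assessment idea in the comment above, here is a minimal sketch; the 1-5 grades, the multiplicative combination, and the thresholds are assumptions for illustration, not a published DOE method.

```python
# Hypothetical sketch: the 1-5 grades, the multiplicative combination, and the
# thresholds below are illustrative assumptions, not a published DOE method.

def risk_score(susceptibility: int, consequence: int) -> int:
    """R = f(S, C): combine a 1-5 susceptibility grade with a 1-5 consequence grade."""
    return susceptibility * consequence


def grade(score: int) -> str:
    """Map the raw score to a graded result."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"


baseline = risk_score(susceptibility=4, consequence=5)  # exposed network path, safety impact
what_if = risk_score(susceptibility=2, consequence=5)   # "what if" we reduce the exposure
print(grade(baseline), "->", grade(what_if))            # high -> medium
```

Running a baseline against a "what if" where the technical exposure is reduced shows the kind of graded improvement the comment describes.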
