Regarding “DP level in a purged tank”: I never knew that reactor differential pressure (DP) level measurement may have contributed to the Fukushima events.
All U.S. boiling water reactors (BWRs) fixed this problem in 1993 (Nuclear Regulatory Commission (NRC) Bulletin 93-03). I was the first to identify this issue, in 1991 at Pilgrim, and confirmed it at Millstone Unit 1. Later testing by EPRI quantified the error at more than 25 ft. The NRC ordered all U.S. BWRs to fix it, at a cost of millions of dollars per plant, but apparently Japan ignored this warning.
For this effort, I was awarded “Engineer of the Year” by Westinghouse, and was then retaliated against by Northeast Utilities and the NRC (as confirmed by a federal criminal investigation). I was forced to leave the nuclear industry in 1993, and lived happily ever after.
Paul Blanch
I read Joe Weiss’ March 3 “Unfettered” blog post on the need to consider cybersecurity anytime a plant trips. Joe, this rings true with your prior discussions about the logging facilities built into ICS devices, and the need for training that goes beyond control system engineers. Root cause analysis generally involves subject matter experts from the supplier, as it did in the Trisis incident/near-miss.
Perhaps the system logs were inadequate or not persisted. Even an indication of a core dump could be enough for a supplier to suspect a cyber-related cause, but even that might not have been available. ISA/IEC 62443-4-2 has provisions addressing the adequacy of logs. As with other shared responsibilities, it is less clear where and how to address training needs for cyber root cause analysis.
Bryan Owen PE
On Joe Weiss' March 3 "Unfettered" blog post: while I agree that root cause analysis is critical, I disagree with the premise that the cause doesn't matter, and I disagree that the impacts are the same. The time and resources required to mitigate the issues are vastly different.
An accident or misconfiguration may require some additional safeguards, a new policy, a set of procedures and some retraining; even so, the technical mitigation actions are likely to be somewhat localized. A malicious incident, on the other hand, may require deep forensic analysis of the entire enterprise, because you must identify every potential point of access that was used or newly created, and compare the systems' current state against the baseline.
By understanding the threat (capability + intent), be it a malicious insider or a remote actor, we can determine how far we need to carry our mitigation efforts. Is this actor capable of corrupting firmware, or merely exploiting a publicly disclosed vulnerability? Is the intent to continue to do us harm, or was this a one-time action? Do we need to keep an eye on this person or group to see whether their capabilities and intent evolve? The technical and other resources required to properly address malicious incidents are vastly different from those required for accidents.
Darin Harris