For the purists among us, IEC 61511 began life as ISA S84. S84 was harmonized with IEC 61511 in 2000. At the time of harmonization, S84 retained a "grandfather" clause. The concept of the "grandfather clause" in ISA-84.01-2004-1 originated with OSHA 1910.119.
The grandfather clause's intent is to recognize prior good engineering practices (e.g., ANSI/ISA-84.01-1996) and to allow their continued use with regard to existing SIS. The grandfather clause (ISA-84.01-2004-1 Clause 1.0 y) states: "For existing SIS designed and constructed in accordance with codes, standards, or practices prior to the issuance of this standard (e.g., ANSI/ISA-84.01-1996), the owner/operator shall determine that the equipment is designed, maintained, inspected, tested and operated in a safe manner."
The grandfather clause establishes that the owner/operator of an SIS designed and constructed prior to the issuance of the standard should demonstrate that the "equipment is designed, maintained, inspected, tested and operated in a safe manner." There are two essential steps:
Confirm that a hazard and risk analysis has been done to determine qualitatively or quantitatively the level of risk reduction needed for each SIF in the SIS.
Confirm that an assessment of the existing SIF has been performed to determine that it delivers the needed level of risk reduction.
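The two steps above boil down to a screening calculation: determine the risk reduction needed, then check what the SIF actually delivers. As a minimal sketch (the function name is my own, and it assumes a low-demand-mode SIF, where the required risk reduction factor is the reciprocal of the target PFDavg), the IEC 61511 SIL bands can be expressed as:

```python
def required_sil(rrf: float) -> int:
    """Map a required risk reduction factor (RRF) to a SIL band using
    the IEC 61511 low-demand-mode table (target PFDavg = 1/RRF)."""
    if rrf < 10:
        return 0            # below SIL 1; other safeguards may suffice
    if rrf < 100:
        return 1            # target PFDavg in (1e-2, 1e-1]
    if rrf < 1_000:
        return 2            # target PFDavg in (1e-3, 1e-2]
    if rrf < 10_000:
        return 3            # target PFDavg in (1e-4, 1e-3]
    if rrf < 100_000:
        return 4            # target PFDavg in (1e-5, 1e-4]
    raise ValueError("RRF beyond SIL 4; redesign the process instead")

# Illustrative numbers: unmitigated event frequency 2e-2/yr vs. a
# tolerable frequency of 1e-5/yr gives RRF = 2,000
print(required_sil(2e-2 / 1e-5))   # -> 3
```

If the existing SIF cannot be shown to deliver the SIL this screening demands, the grandfather clause offers no shelter; the risk gap must be closed.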
You are allowed to leave the system as is if you can determine that it is operating safely. Some users cling to the notion that the comparison is against the practices at the time the plant was built, something akin to building codes. Their thinking is that if they don't change anything, they are fine. That's incorrect.
The ALARP (as low as reasonably practicable) concept requires that risk be driven lower whenever the cost of doing so is practicable. New practices sometimes include practical things, such as very affordable SIS solutions. The civil court and regulatory systems also seem to want them. So, there are both cost and moral arguments for moving forward with partial upgrades as they become practical and feasible.
Technically, the S84 committee documented in TR84.00.04 that the determination had to be based, at a minimum, on a risk assessment of the current design and management system to determine the risk reduction required and to verify that the installed systems are capable of achieving it.
Practically, the equipment performance is estimated for the purposes of the design calculation. Then the performance is monitored in the field and when the performance does not match expectations, the assumptions have been invalidated and the risk gap must be addressed. This involves root cause analysis to understand whether the frequency of failure can be reduced. In some cases, this will result in the replacement of the existing equipment with better-performing models.
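To make that monitoring step concrete, here is a minimal sketch (the function name, the example numbers and the simple point-estimate comparison are my own illustration, not from the standard) of checking field performance against a design assumption:

```python
def failure_rate_check(failures: int, fleet_hours: float,
                       assumed_lambda: float) -> tuple[float, bool]:
    """Compare the observed dangerous-failure rate (simple point
    estimate: failures / operating hours) against the failure rate
    assumed in the original SIL verification calculation."""
    observed = failures / fleet_hours          # failures per hour
    return observed, observed <= assumed_lambda

# 3 dangerous failures across 50 transmitters at ~35,000 h each
observed, ok = failure_rate_check(3, 50 * 35_000, assumed_lambda=1e-6)
print(f"observed lambda = {observed:.2e}/h, assumption holds: {ok}")
```

When the check fails, as it does in this made-up example, the design assumption is invalidated and the root cause analysis described above is triggered.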
Ultimately, each SIS solution is likely to organically evolve as problems are found or when better technology becomes available that has advantages that outweigh its costs.
The key principles of both IEC standards are the:
• safety life cycle;
• safety integrity levels (SIL).
The safety life cycle is just what you imagine: a continuous review-and-improvement cycle designed specifically to address the safety system from its initial design to its eventual retirement.
We've already discussed SIL (safety integrity level) so we won't rehash it here.
To understand Gomer's comment "…how OSHA feels about IEC…" we need to look at two items.
The first is the U.S. National Technology Transfer and Advancement Act of 1995. This act requires that all federal agencies (e.g., EPA, FDA, OSHA) recognize existing consensus standards, such as IEC 61511 and IEC 61508. That means all government agencies have been instructed to accept the premise of consensus standards and abide by the standards' requirements.
Second, in 2000, OSHA sent a letter to ISA. In that letter OSHA acknowledged that S84 (now IEC 61511) had been officially recognized and generally accepted as good engineering practices for SIS.
Additionally, though OSHA's 1910.119 (Process Safety Management, or PSM) regulation does not include specific requirements for safety systems, it does require that facilities perform a process hazard analysis (PHA) and take measures to mitigate identified risks. OSHA's mention of safety systems is simply: "The employer shall document that equipment complies with recognized and generally accepted good engineering practices." When we consider that simple statement alongside the 1995 National Technology Transfer and Advancement Act, we can only conclude that IEC 61508 and IEC 61511, or something very similar, must be followed.
What Gomer was subtly reminding Mr. Barns was that if the plant had an incident that resulted in an OSHA investigation, the investigators would quickly realize that the plant was not conforming to the IEC safety standards, fines would almost certainly be levied, and someone might even end up in jail.
AND HERE IT COMES
"What would really be helpful is if we replaced our old SIS with one from the same vendor as our BPCS; that way everything would be 'smart'."
"Ah" thinks Mr. Barns, "all of this gibberish was just a guise to get me to buy some new toys for the boys."
We've already discussed SIS and BPCS, so let's take a quick peek at what Gomer meant by "smart."
Each new automobile year brings us ever more technologically advanced vehicles. As you touch the door latch of one of today's luxury vehicles, it recognizes who you are and begins adjusting the seat, mirrors and sound system to your preferences. When you put the key in the ignition and turn it to start, or in some cases simply push the "smart" start button, you witness a series of automated checks and diagnostics being performed. Anti-lock brakes – OK; fluid levels – OK; front air bags – OK; side air bags – OK; light bulbs – OK; navigation system – OK; tire pressure – OK, …you get the idea.
All of this and more is the result of digital technologies and a high-speed digital communication network that is enabling advanced levels of diagnostics designed to make 21st-century automobiles more reliable and safer.
Not surprisingly, process control and related safety instrumented systems are also taking advantage of similar digital technologies. No longer do analog transmitters (e.g., pressure, temperature, level) provide only a measured value. They are also capable of running diagnostics to determine if the process sensing lines are plugged, or if the sensor itself is drifting out of range. Final elements (e.g., pumps, motors, valves) can also be loaded with digital technology. For example, onboard motor diagnostics can detect failing bearings and higher-than-normal temperatures, and valve diagnostics can tell if a valve is sticking or not fully closing when it should. Inside microprocessor-based controllers and logic solvers, a host of checks for memory errors, unauthorized changes, and changes that might prove harmful given the current process state are constantly taking place. Our digital communication networks are constantly verifying the presence of other devices and the validity of data sent and received. In short, the list of available diagnostics in 21st-century control and safety systems made possible by digital technologies is quite impressive and growing every year.
So when Gomer said, "…that way everything would be 'smart,'" he was talking about taking advantage of all the advanced diagnostics that have become a part of every 21st-century digital control and safety system on the market. Yes, there are differences among manufacturers, but thanks to open standards the differences are not that great.
It comes down to this: Why wouldn't you want your plant to be at least as capable of self-diagnosing as the car you drive?
Note: There are ongoing arguments about the pros and cons of integrating the SIS and BPCS. Here's the simple truth. The IEC safety standards insist that the ability of the SIS to perform its actions on demand never be compromised. That makes perfect sense. However, the safety standards do permit the SIS to share what it is doing with "outsiders." I like to refer to that sort of sharing as "observational integration," meaning that the SIS information is displayed on the same operator interface used to interact with the BPCS, but the operators are not permitted to change anything in the SIS. It's somewhat akin to watching the instrument panel of your Chevrolet Impala as it goes through its pre-start checks. You are made aware of what's going on, but you can't change things. That same form of "observational integration" is permitted with the SIS.
SEEK INTEGRATED FEATURES
When evaluating integrated BPCS/SIS solutions, here are the major features to look for:
Secure Separated Databases - Separate databases securely store the safety and control strategies and make use of separate and unique software modules using dedicated tools. Maintaining separate tools with separate databases prevents unauthorized changes or corruption, decreases safety risks and reduces the possibility of common cause failures.
Database Integrity and Security - Seek pre-configured modules that are protected from viruses and harmful hacking by built-in protection mechanisms that check the integrity of the software before installation, after installation and during run time. Seek solutions that ensure the integrity of all data accessed through the SIS engineering workstation, and that protect the application software residing in the SIS logic solver against unauthorized changes during the entire SIS life cycle.
Managed and Protected Database Environment - Seek a secure, multi-level login scheme that protects the SIS solution from inadvertent and unauthorized changes. Such a login scheme will use a dedicated protection mechanism with several access levels for the engineering application, loading of the application in the controller and forcing points in the SIS logic solver. It will also include an automated user password expiration and automated logoff after a pre-defined period of inactivity, thus protecting applications from accidental or unauthorized changes.
Dedicated Software and Hardware - Seek solutions that use dedicated SIS hardware and software that has been intentionally designed and third-party-certified according to the IEC 61508 safety standards. Additionally, verifying that the BPCS and SIS hardware and software are separate and diverse minimizes the risk of common mode failures. During implementation, ensure that safety and process control strategies are developed and tested by different groups using dedicated methods.
"Of course that means we really need to install exida- or TÜV-certified sensors and final elements."
Each time we hear that something needs to be certified, we also see dollars going out the door. That can be true for SIS devices.
When Gomer referenced exida and TÜV, he was talking about two independent third-party certification organizations.
When it comes to specifying SIS devices, the IEC safety standards give you two options:
• Self-prove each device; or
• Purchase certified devices ("purchased certified").
An owner/operator's decision to self-prove SIS devices requires a robust self-certification process that captures and documents the information and performance of the various devices that are being self-proven. The information about devices that the self-certification process must document includes:
a clear description of each device's design revision information;
reliability data for identical or very similar applications, including applicable conditions and/or restrictions for use of each device;
results demonstrating that the operating software complies with IEC 61508-3;
procedures in place to verify that each device meets functional requirements, is qualified (rated) for use in the expected environment, and that the materials of construction are suitable for expected process conditions, including actual test results from use in similar, but non-safety critical applications;
acknowledged competency to review the design aspects of both mechanical and/or electrical components, including component failure modes, fail-safe vs. fail-danger, any claimed automatic diagnostics and internal redundancy in order to produce a quantitative failure rate. (This number will eventually be used in calculations that determine if a particular design meets its defined SIL requirements);
acknowledged competency capable of combining sophisticated design analysis processes, tools and testing methods with a thorough review of both the device's original design and all subsequent modifications to the electrical, mechanical and software aspects of each device, with the intent of uncovering design errors;
regularly conducted audits of each device manufacturer's change-management processes for each device being used or being considered for use in an overfill protection system; and
a documented "safety case" describing, in significant detail, how each manufacturer's device meets each requirement of IEC 61508.
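The quantitative failure rate mentioned above feeds directly into the SIL verification calculation. As a hedged illustration (this is the textbook simplified formula for a single device; it ignores common-cause factors, proof-test coverage and repair time, which a real verification must address):

```python
def pfd_avg_1oo1(lambda_du: float, test_interval_h: float) -> float:
    """Simplified average probability of failure on demand for a
    single (1oo1) device: PFDavg ~= lambda_DU * TI / 2, where
    lambda_DU is the dangerous undetected failure rate and TI is
    the proof-test interval in hours."""
    return lambda_du * test_interval_h / 2

# Illustrative: a valve with lambda_DU = 2e-7/h, proof-tested yearly
pfd = pfd_avg_1oo1(2e-7, 8760)
print(f"PFDavg = {pfd:.2e}")   # 8.76e-04, inside the SIL 3 band
```

Get the failure rate wrong at the self-certification stage, and every downstream calculation like this one inherits the error.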
Because a self-certification program must capture actual operating experience, IEC 61508 does provide minimum operating experience guideline hours.
While meeting the above requirements is onerous, that is not all that you must do. You must also be able to show that you were able to detect and record each and every dangerous failure that occurred during these time periods. In short, your self-certification process must be almost 100% effective at capturing device failures.
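For a numerical sense of what capturing operating experience involves, here is a sketch of the standard zero-failure confidence bound (the function name and the 70% confidence choice are illustrative; IEC 61508's actual proven-in-use criteria are more detailed than this):

```python
import math

def zero_failure_lambda_bound(fleet_hours: float,
                              confidence: float = 0.7) -> float:
    """One-sided upper confidence bound on a constant failure rate
    when NO failures were observed over `fleet_hours` of fleet
    operation: lambda_upper = -ln(1 - C) / T (the zero-failure
    special case of the chi-squared bound)."""
    return -math.log(1.0 - confidence) / fleet_hours

# Illustrative: 100 devices, one year each, no dangerous failures
bound = zero_failure_lambda_bound(100 * 8760)
print(f"lambda upper bound (70%) = {bound:.2e}/h")
```

Note what the math assumes: every dangerous failure in those fleet hours was actually detected and recorded, which is exactly the near-100% capture requirement described above.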
Your engineers may argue that your company has really great operating experience with this or that device. That may be true, but you need to ask them two questions:
1) Can they show you the documentation about this or that "self-proven" device?
2) Who verified that the "self-certification" process used to produce that documentation meets IEC requirements?
The importance of asking those two questions and being comfortable with the answers is summed up by asking yourself: "If we get it wrong, who is most likely to go to jail?"
The alternative to establishing a self-certification process is to utilize certified devices from any one of the growing number of manufacturers offering devices certified for SIS applications.
When you look closely at how manufacturers are certifying devices for SIS applications you find three different approaches:
• Self-developed device certification processes;
• Self-developed and third-party audited device certification processes;
• Independent third-party device certification.
The first is self-explanatory. The manufacturer develops the certification process per its interpretation of IEC standards and certifies that its devices are tested and proven against those processes.
The second is similar to the first, with the caveat of having an independent third party verify that the manufacturer-developed procedures are consistent with IEC standards.
Manufacturers choosing the third approach submit their devices to an independent third party (e.g., exida or TÜV), which then uses its own certification processes to test and certify that the device meets the manufacturer's safety system claims.
Regardless of which of the three "purchased certified" approaches is used, when you purchase a certified device, it should come with a copy of the device certification and a detailed user safety manual that includes such things as restrictions on where and/or how the device may be used.
Yep, regardless of how you go about it, certification equates to dollars. In the case of conforming to the IEC safety standards, you can either spend dollars to establish your own self-certification process, or you can pay the slightly higher per-device cost and purchase third-party-certified SIS devices. Doing neither and having a major incident will undoubtedly cost even more.
With that, I would like to leave you with this thought: a shutdown initiated by the safety instrumented system (SIS) is most likely the result of one of three things:
1. A sensor (input) provided a false signal to the logic solver;
2. Some form of human error occurred;
3. There was an unsafe process condition, and the SIS did exactly what it was designed to do.
If you are told the cause of the shutdown was a false signal (#1), you need to review the maintenance and testing procedures for the sensor, logic solver and final element of that SIF (safety loop). It shouldn't take too long to review one SIF, and the findings will likely reveal that your procedures are inadequate or, more likely, aren't being followed. Either way, the time spent finding and correcting the cause of that unscheduled shutdown will pay huge dividends in the future.
If the cause of the shutdown was human error (#2), you need to review training and operational procedures. One common cause is technicians performing scheduled, routine testing of the SIS while the process remains operational. For example, manually conducted partial-stroke testing is a fairly complex procedure requiring the proper application of mechanical travel stops. Sometimes the travel stops are improperly installed. The result is an unscheduled shutdown.
If the cause of the shutdown was a detected unsafe condition (#3), you need to review what was going on with the process and the BPCS and, perhaps more important, why your operators didn't notice and take appropriate action before the process reached an unsafe condition.
Regardless of the findings, every unscheduled shutdown should be viewed as an opportunity to improve production performance.