This article was printed in CONTROL's December 2009 edition.
By John Rezabek, Contributing Editor
Ten years ago, early adopters concerned about the real reliability of fieldbus technology devised guidelines for device criticality to optimize segment loading. In detailed design, the process control team assessed the potential of each device to cause an untimely process interruption. Those that would immediately shut down the process were assigned the highest criticality, and those whose failure would be relatively inconsequential, the lowest. The critical devices were segregated on lightly loaded segments and H1 interface cards, and those with low criticality were loaded on segments to the maximum practical number, with considerations for physical device location and function. The number of control valves on critical segments was capped, sometimes at just one. At least we felt better knowing that manipulating or maintaining a non-critical device carried less risk of adversely affecting a critical control loop.
Experience has borne out, however, that H1 is so reliable that some users now load all segments more or less equally (e.g., aiming for 12 devices per segment). Transmitters and valves from non-critical services are combined on segments with the most critical ones where proximity makes it practical. The latest draft of the Fieldbus Foundation AG-181 systems engineering guide has incorporated this as a recommended practice.
Criticality ranking may still have a use, however: getting the best value from device diagnostics. As instruments and systems are released that support NAMUR NE107 diagnostic alarm prioritization and routing, we're seeking a method for determining which device alarms get enabled and given a high priority. For example, setting the alerts for low instrument air supply on every valve positioner sounds like a great idea, until the night the whole header slumps, and your operator has to deal with potentially hundreds of redundant alerts. Some experienced practitioners are using the old criticality rankings to devise alerts and pare the potentially vast number of device diagnostics down to the few that may be of real value.
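The idea of using criticality rankings to gate device alerts can be sketched in a few lines. The tags, rank scale and dispositions below are illustrative assumptions, not any NE107 or vendor standard:

```python
# Illustrative mapping from a device's legacy criticality rank to an
# NE107-style alert disposition. Tags, ranks and thresholds are made up.
CRITICALITY = {"FV-101": 1, "PT-204": 2, "TV-310": 3, "LT-415": 4}  # 1 = most critical

def alert_priority(tag: str, enable_below: int = 3) -> str:
    """Decide how a diagnostic alert from this device should be routed."""
    rank = CRITICALITY.get(tag, 4)   # unknown devices default to low criticality
    if rank == 1:
        return "high"                # annunciate to the operator
    if rank < enable_below:
        return "advisory"            # route to maintenance, not the board
    return "suppressed"              # log only, to avoid alarm floods

print(alert_priority("FV-101"))  # high
print(alert_priority("LT-415"))  # suppressed
```

In a header-slump scenario, only the handful of rank-1 positioners annunciate, while the rest of the flood is logged for maintenance review.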
For most reliability-focused users, the task of actually doing this ranking is a little daunting. It can prove challenging to assemble the right resources and people and get them to devote their time to it. So consultants have stepped in to help.
For example, I have found the PlantWeb Services division of Emerson Process Management useful. I think users will find its methodology for arriving at a "maintenance priority index" (MPI) compellingly logical.
To rank a device, begin by dividing the plant into functional systems, for example, "steam system" or "boiler 1." Apply such metrics as the system's impact on safety, environmental compliance, product quality and throughput to get a "system criticality ranking." So, for example, one determines that boiler 1, which has no spare, has a relatively high system criticality compared to the instrument air system fed by redundant compressors.
Next, operations specialists assess the importance of the assets that enable them to keep the system on-line, in effect asking, "If I lose this, what will be the effect on the process?" So, in the case of the boiler, operations may assign a high "operational criticality ranking" (OCR) to boiler feedwater pumps or steam drum level instruments.
Following the derivation of OCR, the asset's "failure probability factor" is applied, which I'd read as "mean time to fail." So, an unreliable level instrument on the critical boiler will end up with the highest MPI. Such community-derived prioritization has some side benefits, among them the mutual acknowledgement of maintenance that can be deferred to planned maintenance.
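The ranking steps above can be sketched as a small calculation. The article doesn't spell out how the factors combine, so this assumes a simple multiplicative MPI; the factor scales and asset data are illustrative, not Emerson's actual method:

```python
# Sketch of a maintenance priority index (MPI), assuming MPI is the
# product of system criticality (SCR), operational criticality (OCR)
# and a failure probability factor (FPF). All values are hypothetical.
from dataclasses import dataclass

@dataclass
class Asset:
    tag: str
    scr: int   # system criticality ranking, 1 (low) .. 5 (high)
    ocr: int   # operational criticality ranking, 1 .. 5
    fpf: int   # failure probability factor, 1 (reliable) .. 5 (failure-prone)

    @property
    def mpi(self) -> int:
        return self.scr * self.ocr * self.fpf

assets = [
    Asset("boiler-1 drum level LT", scr=5, ocr=5, fpf=4),  # unreliable and critical
    Asset("boiler-1 BFW pump",      scr=5, ocr=5, fpf=2),
    Asset("instr. air dryer dP",    scr=2, ocr=3, fpf=3),  # redundant compressors
]

# Highest MPI first: the unreliable level instrument on the critical
# boiler tops the work list, as the article describes.
for a in sorted(assets, key=lambda a: a.mpi, reverse=True):
    print(f"{a.tag}: MPI={a.mpi}")
```

The value of the exercise is less in the arithmetic than in the shared ranking it produces: operations and maintenance agree up front on which work jumps the queue and which can wait for a planned outage.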
If you're aiming to improve the usefulness of your digitally integrated intelligent field devices, there's help available to get you moving down this road. Getting it right can make a measurable difference in the effectiveness of your maintenance efforts.