This article was printed in CONTROL's April 2009 edition.
Back in the 1980s, a mainstay of any important project was a “calibration trailer.” Every transmitter, switch, valve and thermometer passed through it before being released to the assigned craft for installation. Inside, each device was subjected to raw signals spanning its calibrated range, and its outputs were tweaked to bring it into specification. Besides checking and fine-tuning the calibration, this effort also caught DOA (dead-on-arrival) devices, inappropriate materials or ratings, and devices delivered with insufficient capability for the specified service, and it verified temperature element type and length, as well as the fail position and bench set of control valves.
These practices started to die out when digital devices gained robust and stable factory calibrations that exceeded the capability of most end users’ instrument departments. Many process plants also allowed their instrument departments to shrink and stopped supporting their employees’ calibration skills. Driven by these factors, our practices have quietly migrated to being largely “plug-n-play.” Or is it “plug-n-pray”?
The vast majority of control valves can now go straight from the shipping pallet into the line; once instrument air and signals are hooked up, a thorough, largely unattended setup, calibration and optimization (positioner tuning) can be performed in place, typically from the control house.
My site doesn’t even own a vise large enough to hold half our valves. We used to station someone at the valve to verify a “50%” setting for linearity. Our local representative kept suggesting, “You could just click ‘default.’” And that’s what we do now. At the end of the process, we check 50%, 98%, 100%, 2% and 0% with a person nearby watching. What used to be a man-day or two of effort is now measured in minutes. The plug-n-play way is relatively risk-free because things such as fail position are verified in the process of commissioning.
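The spot check described above reduces to a simple pass/fail comparison of commanded versus observed travel at each point. A minimal sketch follows; the tolerance value and feedback readings are illustrative assumptions, not my site’s actual acceptance criteria.

```python
# Sketch of the post-commissioning valve stroke spot check described above.
# The 1%-of-travel tolerance and the example readings are assumptions
# for illustration, not real acceptance criteria.

SETPOINTS = [50.0, 98.0, 100.0, 2.0, 0.0]  # % of travel, in the order checked
TOLERANCE = 1.0  # allowable deviation in % of travel (assumed)

def stroke_check(observed):
    """Compare observed travel to each commanded setpoint.

    observed: dict mapping commanded % travel -> % travel reported by
    the person watching the valve. Returns pass/fail per setpoint.
    """
    return {sp: abs(observed[sp] - sp) <= TOLERANCE for sp in SETPOINTS}

# Hypothetical readings from the field:
readings = {50.0: 50.4, 98.0: 97.8, 100.0: 100.0, 2.0: 2.3, 0.0: 0.0}
print(stroke_check(readings))
```

The dictionary of results makes it easy to log which points, if any, need a second look before the valve is returned to service.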
Transmitters have steadily improved to the point that Emerson Rosemount now offers a five-year stability guarantee. The company promises that if one of its premium transmitters fails to hold its factory calibration to within some fraction of a percent for five years, you get a new one. I haven’t taken much advantage of this offer because over nine years of quarterly tests on their “old” transmitters, our technicians rarely find a device sufficiently far out of tolerance (1% in our case) to warrant a correction. They use a hazardous-area-approved Fluke 725 Ex Intrinsically Safe process calibrator, while our neighbors across the fence use self-documenting Beamex calibrators with similar results. Except for checks mandated by contract or those that are part of maintaining safety instrumented functions (SIF), you can probably cut the frequency of checks on such non-drifting instruments.
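The 1%-of-span acceptance test our technicians apply is plain arithmetic: error at each test point is expressed as a percent of the calibrated span and compared against the tolerance. A minimal sketch, with the range values and readings assumed purely for illustration:

```python
# Sketch of an as-found transmitter check: error as a percent of
# calibrated span, compared to a tolerance (1% of span, per the text).
# The example range (0-250 inH2O) and readings are assumptions.

def percent_of_span_error(applied, reading, lrv, urv):
    """Error at one test point, as % of calibrated span (URV - LRV)."""
    span = urv - lrv
    return 100.0 * (reading - applied) / span

def within_tolerance(applied, reading, lrv, urv, tol_pct=1.0):
    """True if the as-found error is inside the acceptance tolerance."""
    return abs(percent_of_span_error(applied, reading, lrv, urv)) <= tol_pct

# Hypothetical 0-250 inH2O DP transmitter: 125.0 applied, reads 125.9.
err = percent_of_span_error(125.0, 125.9, 0.0, 250.0)
print(f"{err:.2f}% of span")          # well inside a 1% tolerance
print(within_tolerance(125.0, 125.9, 0.0, 250.0))
```

Expressing error against span rather than reading is what makes a single tolerance figure meaningful across transmitters with very different ranges.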
If you’re still pulling devices out of the line and taking them back to the shop for calibration checks, a portable calibrator with certified pressure modules could save you some time and effort. As for new instruments, whether arriving onesy-twosy to replace broken ones or en masse for a new project, we have yet to check a single device’s calibration before installing it in the field. Projects have requested a “calibration certificate” from the supplier when sufficiently motivated to pay the $25-$35 markup. Usually, however, the default or standard factory calibration is trusted, and the device goes directly to the instrument enclosure in the field. We may perform a “zero” in place, which should be done on most pressure devices anyhow (under line pressure for DP cells), or check full scale where practical.
Faced with diminished manpower and budgets, and under pressure to minimize or eliminate overtime, end users can exploit the excellent accuracy and stability of digitally integrated field devices to reduce or eliminate unnecessary calibrations rooted in decades-old instrument technology.