Ensuring Accuracy in Analytical Instrumentation Systems

A New Three-Part White Paper by Swagelok Focuses on Understanding and Measuring Time Delay, Calibrating the Analyzer, and Maintaining Representative Samples


By Jim Montague

Securing and analyzing process samples has never been a walk in the park, but making sure those samples reflect precisely what's going on in process applications is even harder. There are endless ways for samples to be taken incorrectly, drawn at the wrong time, mishandled or otherwise compromised, and there are apparently even more ways to goof up analyses, misinterpret results and make bad decisions as a result.

To help users overcome these challenges, Swagelok Co. (www.swagelok.com) has published a three-part white paper, "Ensuring an Accurate Result in an Analytical Instrumentation System," by Doug Nordstrom and Tony Waters. All three sections are available in our white paper library at www.controlglobal.com/AnalyticalInstrumentation.

Understanding and Measuring Time Delay

The first paper reports that process measurements are instantaneous, but there's always a cumulative time delay from the tap to the analyzer. Because this delay is often underestimated, unaccounted for or misunderstood, it's the most common cause of inappropriate results from process analyzers in sample systems. "The potential for delay exists in the following sections of an analytical instrumentation (AI) system: process line, tap and probe, field station, transport line, sample conditioning system, stream switching system, and analyzer," states Part 1 (Figure 1).

To help reduce this delay and get better results, Part 1 adds that it's best to locate the tap as close to the analyzer as possible. "For example, the tap should be located upstream of sources of delay, such as drums, tanks, dead legs, stagnant lines or redundant or obsolete equipment. Further, the tap location should provide enough pressure to deliver the sample through the transport lines or fast loop without a pump."

To calculate time delay in the transport lines, fast loop or process line, Part 1 says to employ this formula:

  • Fluid velocity = volume flow rate/line volume per unit length
  • Time delay = line length/fluid velocity.
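To put numbers to these two formulas, here's a minimal sketch in Python. The function name and units (cm for line diameter, cm³/min for flow, meters for length) are illustrative choices of mine, not from the white paper; note that "line volume per unit length" is simply the line's cross-sectional area.

```python
import math

def time_delay_liquid(flow_rate_cc_per_min, line_id_cm, line_length_m):
    """Estimate sample transport delay (in minutes) for an incompressible fluid.

    flow_rate_cc_per_min: measured volume flow rate (cm^3/min)
    line_id_cm: inside diameter of the line (cm)
    line_length_m: tap-to-analyzer line length (m)
    """
    # Line volume per unit length = cross-sectional area (cm^3 per cm of line)
    area_cm2 = math.pi * (line_id_cm / 2) ** 2
    # Fluid velocity = volume flow rate / line volume per unit length
    velocity_cm_per_min = flow_rate_cc_per_min / area_cm2
    # Time delay = line length / fluid velocity
    return (line_length_m * 100) / velocity_cm_per_min
```

For example, 300 cm³/min through 30 m of 4-mm-ID tubing gives a delay of roughly 1.26 minutes, which shows why long, wide transport lines at low flow rates are the first place to look.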

But because gas is compressible, the formula for calculating time delay for a gas in any section of its line contains an added variable for pressure. The higher the pressure, the slower the flow, so:

  • Gas velocity = (volume flow rate/line volume per unit length) x (pressure at flow meter/pressure in the process line)
  • Time delay = line length/gas velocity.

Also, pressure must be measured at the same point as the flow rate. As a result, the flow meter is usually positioned near the disposal.
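The gas version of the calculation can be sketched the same way. Because the flow meter usually sits near the disposal at low pressure, its reading is scaled by the ratio of absolute pressures to get the velocity back in the process line. Again, the function name and units are illustrative assumptions, not from the white paper:

```python
import math

def time_delay_gas(flow_rate_cc_per_min, line_id_cm, line_length_m,
                   p_meter_abs, p_line_abs):
    """Estimate transport delay (in minutes) for a compressible gas.

    p_meter_abs: absolute pressure where the flow rate is measured
    p_line_abs:  absolute pressure in the line section of interest
    (Both in the same units, e.g. bar absolute.)
    """
    area_cm2 = math.pi * (line_id_cm / 2) ** 2
    # Gas velocity = (volume flow rate / line volume per unit length)
    #                x (pressure at flow meter / pressure in the process line)
    velocity_cm_per_min = (flow_rate_cc_per_min / area_cm2) * (p_meter_abs / p_line_abs)
    # Time delay = line length / gas velocity
    return (line_length_m * 100) / velocity_cm_per_min
```

With the same 300 cm³/min, 4-mm-ID, 30-m line but the meter at 1 bar absolute and the line at 5 bar absolute, the delay is five times longer than the liquid case, which is the point of Part 1's pressure caution.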

Part 1 adds that probes also add to time delay in AI systems, so they should only be long enough to reach into the middle third of the process line. In addition, Part 1 covers:

  • Field stations in transport lines or fast loops, where time delay can be cut by lowering absolute pressure.
  • Stream-switches that purge old sample material, then move new samples to the analyzer. These often consist of double block-and-bleed (DBB) valves in cascading configurations, which can switch streams with minimal dead legs and cross-stream contamination from leaking valves.
  • Sample-conditioning systems that are usually small, modular, top-mounted components manufactured to comply with the ANSI/ISA 76.00.02 standard, in keeping with the New Sampling Sensor Initiative (NeSSI).
  • Analyzers that include slower gas chromatographs or faster infrared and ultraviolet analyzers, and whose processing time also needs to be added in.

"Remember, it's the total time from the last step in the process line to the analyzer that matters, and the delays of all components along this path must be added to the total," states Part 1.

Calibrating the Analyzer

Because analyzers in many AI systems don't give absolute measurements, they must rely on calibration. Part 2 reports that users must understand how error or contamination can be introduced through calibration; when calibration can or can't address a performance issue with the analyzer; how atmospheric pressure or temperature fluctuations can undo the work of calibration; and when to calibrate or not.

"One common problem in calibration is incorrect system configuration. Calibration fluid often is mistakenly introduced downstream of the stream selection valve system and without the benefits of a DBB configuration," states Part 2. "A better place to introduce the calibration fluid would be through the sample stream selection system."

To explain the limits of calibration, Part 2 adds that analyzers must first be precise and yield repeatable results, and then they can be calibrated for accuracy. However, even precise and calibrated analyzers can still be inaccurate when they get fooled by positive interference—reading substances that should be excluded—or by negative interference—not reading substances they should. "The solution is to remove the source of interference by introducing a buffer solution," adds Part 2.

In addition, Part 2 reports the best method for calibration employs an automated system of regular validation along with statistical process control. Validation means taking and recording readings to help find trends and provide alerts.
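As a rough illustration of the validation idea, a minimal statistical-process-control check might flag any validation reading that falls outside control limits around the historical mean. This sketch is my own, not from the white paper; in practice the limits would come from a longer calibration history, not from the batch being checked:

```python
from statistics import mean, stdev

def validation_alerts(readings, n_sigma=3.0):
    """Return (index, value) pairs for readings outside control limits.

    readings: recorded validation readings from the analyzer
    n_sigma:  width of the control band around the mean
    """
    mu = mean(readings)
    sigma = stdev(readings)
    # Flag any reading more than n_sigma standard deviations from the mean
    return [(i, r) for i, r in enumerate(readings)
            if abs(r - mu) > n_sigma * sigma]
```

Trending the flagged readings over time is what turns simple validation into the early-warning system Part 2 describes.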

