This article was originally published in our sister publication Chemical Processing.
By Robert N. Dubois, consulting analytical specialist
"When men got structural steel, they did not use it to build steel copies of wooden bridges," wrote Ayn Rand in her book "Atlas Shrugged." Today process sampling systems can benefit from advances due to the New Sampling/Sensor Initiative (NeSSI) — so, we should ponder whether we're really taking advantage of these innovations or just building steel copies of wooden bridges.
The Center for Process Analytical Chemistry (CPAC) at the University of Washington, Seattle, launched NeSSI in 2000. This ambitious undertaking aimed to address the reliability problems (and, yes, bad reputation) of process analytical systems. Many people associate NeSSI exclusively with the miniature mechanical footprint adopted from an International Society of Automation (ISA) SP76 committee standard. That's Generation I, which already is well established. Today there's much more. Generation II, now under a full head of steam, automates the sample system — and sets the stage for Generation III, widespread adoption of microanalytical devices (Figure 1).
Automating a sample system always has been a struggle. The first continuous analyzers and their "evil" accessory, the sample system, appeared in pre-World War II Germany. Today the analyzers themselves have become modern marvels of automation. However, little has changed with the sampling system. We still rely on spring-and-diaphragm regulators, on/off thermostats, manually adjustable needle valves and visual indicators for monitoring and control. We invariably need to do routine field checks and adjustments. Indeed, it's not unusual for analyzer technicians to make daily rounds. Process analytical has never caught up with the automation used by our instrumentation and distributed control system (DCS) associates. Sampling systems are one of the last bastions of manual operation left in a modern processing facility. Why does process analytical remain an anachronism in a sea of automation?
Figure 1. Development Roadmap
In one company where I worked, some process automation folks called analyzers "the technology of last resort." But these folks also were part of the problem because they didn't want to handle the multiple diagnostic inputs needed to adequately monitor the performance of an extensive process analytical system. Typical analyzer-to-DCS connections include component concentration signals and an analyzer fault contact (sometimes with a flow switch in parallel) to give the ubiquitous "analyzer trouble" alarm. However, most of the diagnostic elements that contribute to overall analytical system reliability typically aren't monitored: sample take-off pressures, sample disposal pressures, sample flows, heat-tracing temperatures used to maintain sample dew points, filter performance, calibration-system check flows, analyzer shelter environmental alarms and analyzer utilities. We generally remain analyzer-centric in predicting or reporting a failure to the process operator. This hurts reliability because an analyzer is only a small part (and in many cases maybe the most reliable one) of a multi-element system. To make things worse, the signals sent usually are discrete, don't predict the problem and only give an alarm when it's too late to do anything — by that time the plant may be down. When we attempt to become system-centric and send multiple signals to and from the control room, the cost of sensors, actuators, wiring and additional input/output (I/O) automation points (especially for conventional 4–20-mA signals) becomes very steep.
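To illustrate the system-centric approach argued for above, here is a minimal sketch in Python. All signal names, units and alarm limits are hypothetical examples, not taken from any NeSSI or ISA standard; the point is simply that aggregating many diagnostic readings can flag a drifting element before the single discrete "analyzer trouble" contact ever trips.

```python
from dataclasses import dataclass

@dataclass
class Diagnostic:
    """One monitored element of the sampling system (hypothetical)."""
    name: str
    value: float
    low: float    # below this limit, flag the element
    high: float   # above this limit, flag the element

def system_health(diags):
    """Return ('OK' or 'WARN', list of out-of-range element names)
    for the whole sampling system, not just the analyzer."""
    bad = [d.name for d in diags if not (d.low <= d.value <= d.high)]
    return ("OK" if not bad else "WARN"), bad

# Hypothetical readings from a Generation II sample system
readings = [
    Diagnostic("sample_takeoff_press_kPa", 310.0, 250.0, 400.0),
    Diagnostic("heat_trace_temp_C",         92.0,  90.0, 110.0),
    Diagnostic("cal_check_flow_Lmin",        0.1,   0.4,   0.8),  # drifting low
]
status, faults = system_health(readings)
print(status, faults)  # the low calibration flow is flagged before outright failure
```

A real Generation II system would push such a health summary to the DCS over a digital bus rather than discrete contacts, but the aggregation logic is the same idea.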
Compounding the problem, doing closed-loop control and adding logic functions (e.g., stream-switching routines) in the DCS invokes another layer of automation that the process automation folks perceive as overkill. Although many sampling-system automation tasks (or applets) could be standardized across the process analytical discipline, we as an industry have yet to come up with an open, modular solution. So even if we get our input and output signals serially to the DCS, programming costs tend to ramp up because of the need for custom programming.
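A stream-switching routine is a good example of an applet that could be standardized. The sketch below is a hypothetical sequencer, not any published NeSSI applet: it purges the shared sample line before each stream is analyzed, which is the core of most multi-stream switching logic.

```python
import itertools

def stream_sequencer(streams, purge_cycles=1):
    """Endlessly yield (action, stream) steps for a multi-stream
    sample system: purge the shared line, then analyze the stream."""
    for stream in itertools.cycle(streams):
        for _ in range(purge_cycles):
            yield ("purge", stream)
        yield ("analyze", stream)

# Hypothetical two-stream application
seq = stream_sequencer(["reactor_feed", "reactor_out"])
steps = [next(seq) for _ in range(4)]
# steps alternate: purge then analyze reactor_feed, then reactor_out
```

Packaged once as an open, modular applet, logic like this would not need to be re-programmed from scratch in every plant's DCS.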
An Ugly Secret
Many sampling systems handle hazardous fluids (such as hydrocarbons and hydrogen) and are packaged in enclosures. Yet the electrical engineer going out to map the plot plan for electrical hazardous area ratings doesn't classify the inside of an enclosure as a Division 1/Zone 1 environment — but that's what it is. We've learned to design and package our sample systems using a potpourri of protective methods and wiring techniques to allow at least some degree of automation — e.g., explosion-proof enclosures, air and inert purging, hydrocarbon gas detector interlocks, equipment encapsulation, intrinsic safety, filled conduit seals, rigid conduit and armored cables. Having to meet the exacting requirements of various global electrical certification agencies intensifies the problem. It's difficult and expensive to automate a sample system to meet the requirements of a Division 1/Zone 1 area.
There are instances where we have used 4–20-mA analog signals to send pressure, temperature and flow signals from our sample systems to the DCS. (In NeSSI-speak, we call this Generation 1.5.) However, this requires extensive wiring in a confined space; significant cost and effort go into the design of cabling, intrinsically safe barriers, wiring and conduit to meet the electrical classification. In some cases, as many as 30 I/O points may be required to adequately monitor and control a system.
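Each of those 4–20-mA loops carries one value by linear scaling: 4 mA maps to the bottom of the instrument's range and 20 mA to the top, so the DCS must convert every loop current back to engineering units. A minimal sketch of that conversion (the transmitter range and the 3.8-mA under-range cutoff are illustrative assumptions, not from any standard):

```python
def ma_to_engineering(ma, lo, hi):
    """Convert a 4-20 mA loop current to engineering units by
    linear interpolation over the instrument range [lo, hi]."""
    if ma < 3.8:  # illustrative cutoff: well under 4 mA suggests an open loop
        raise ValueError("loop current under-range: possible open circuit")
    return lo + (ma - 4.0) / 16.0 * (hi - lo)

# e.g., a hypothetical 0-600 kPa sample take-off pressure transmitter at 12 mA
print(ma_to_engineering(12.0, 0.0, 600.0))  # -> 300.0 (kPa)
```

Multiply this by 30 loops — each with its own barrier, cable run and conduit — and the wiring burden of Generation 1.5 becomes clear; digital buses in Generation II collapse those loops onto shared pairs.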