IN OUR JULY 2005 issue, we noted that the servers are coming (“Remote Control,” CONTROL, July 2005). Soon, you will be able to control remote plants, pipelines and offshore platforms from afar, and you’ll be dealing with IT, condition monitoring, loop tuning, and process modeling software that runs in remote servers. In new systems, none of the enterprise-level software has to be in your plant. It can all reside on a server at your central engineering center, a vendor’s support center, or at a specialty supplier.
And, for truly remote systems, you won’t even need a local control room or an HMI/SCADA system. A portable laptop will suffice as an HMI when you or a tech has to visit the site.
All this requires segregating the real-time, field-based part of a process control system from the enterprise side, and making the real-time portion of the control system into a fortress that’s immune from attack by Internet creatures.
|FIGURE 1: CONTINUING CONTROL|
A Fisher control valve with Fieldvue is installed on a petroleum feedstock flow at a Texas oil refinery. It can control PID loops all by itself, thanks to fieldbus. If the fieldbus link to the host is lost, loop control continues. Source: Emerson Process Management
This is not a far-out concept. Some plants have been doing it for years. "Why not rely on field devices for control?” asks John Rezabek, control engineer at ISP Lima BDO Manufacturing in Lima, Ohio. “We're relying on them anyhow. Our site has logged five years with 80% of PID being solved in valve positioners. H1 fieldbus is much more robust than we anticipated."
In this article, we’ll look at some of the distributed intelligence that makes it all possible.
A Mighty Powerful System
Mighty River Power’s hydroelectric operation in New Zealand is a perfect example of distributed control, a single server in a central location, and remote, unmanned operation. At Mighty River, nine hydroelectric stations are each monitored and controlled by individual Honeywell Experion PlantScape PKS servers with the Distributed System Architecture option. The systems provide data via microwave and a leased fiber optic link to twin, redundant SCADA systems.
A single Experion central server in Hamilton, NZ, accesses whatever data it needs from the SCADA database, and runs enterprise software for the plants. This includes plant optimization, water optimization, inflow prediction, outage management, electronic dispatch and performance management enterprise software.
According to Mark Harvey, controls engineer at Mighty River, the Automation and Remote Control (A/RC) project was originally justified by estimated savings in labor costs and the increased efficiency of water usage. “The benefits considered at the time have proven to be wildly underestimated,” says Harvey. “Prior to the A/RC project, each of our nine power stations had 20 to 30 staff, with the larger stations having considerably more.” They also had seven people in Hamilton and 24 people in a centralized maintenance station. “Today, all maintenance is outsourced, and most stations have no staff on site for days at a time.”
What’s more, in case of a system failure, each power station is able to fully control any other power station via its local Experion server, the distributed network, and Honeywell’s Downtime Option. As a final advantage, by using a central server, Mighty River has to pay only one license fee for the Downtime Option and the Network Server. Doing it the “old way” would require separate license fees for those functions in all 10 locations.
They plan to introduce tools such as an alarm/event manager and equipment health monitoring, both made possible by the distributed architecture.
Moving to server-based, remotely controlled, unmanned systems like Mighty River’s is the wave of the future. But to ride the wave, you’ll need distributed intelligence to run the actual plant. One way to accomplish this is to push controller intelligence as far down into the process and as far out into the field as possible, making the various nodes, components and systems autonomous, self-sufficient, and immune to network and communications interruptions.
How Low Can We Go?
Steve Garbrecht, product manager at Wonderware, points out that segregating control systems is nothing new. “If we look back to the invention of the DCS by companies like Honeywell, the original architects designed two basic layers to the system: a regulatory control layer and a supervisory layer,” he says. “It would have been easier to design a single layer, so I can only assume that they had a good reason for it.”
Perhaps they envisioned, 30 years ago, the ability to truly distribute a system as low and as far out as possible. Today, microprocessor technology makes it possible to put monitoring, diagnostics and control logic all the way down to the sensor level. “As sensors go digital--and the transmitters that house them contain microprocessors--information, diagnostics and computational abilities become available at the device level,” says Bruce Jensen, manager of systems marketing at Yokogawa.
"Continuing the trend that began more than a decade ago, microprocessor-based intelligence is becoming increasingly more distributed throughout refineries, petrochemical plants, power plants, pulp and paper mills and other continuous process plants,” says Alex Johnson, system architect, Invensys Process Systems. “This has had, and will continue to have, a major impact on the way these process plants are engineered, operated and maintained. Intelligence will clearly keep moving into the field.”
“With today’s semiconductor capabilities, extreme miniaturization, and exponentially increasing processing power, the ability to embed intelligence or decision-making capabilities into smaller and smaller devices is not only an option, but a requirement,” adds Gricha Raether, industrial control and distributed I/O product manager at National Instruments. “There are currently hundreds of smart sensors on the market that feature built-in microprocessors. These sensors acquire the signal, convert it to a digital signal, then transmit it to a controller or central monitoring system through a fieldbus or industrial protocol.”
“In the case of fieldbus, distributed intelligence is really the name of the game,” says John Yingst, Experion product manager at Honeywell. “We presently have fieldbus devices that can do calibration, diagnostics, control, and then alarm when there is a process control or a device problem. Device warnings range from telling us they will soon need maintenance, to loss of an air supply, to overheating, and all the way to letting us know of complete sensor or actuator failure. Running control algorithms, complete with alarming, in fieldbus devices is commonly known in the fieldbus world as ‘control on the wire.’ Control functions include PID, totalization, signal characterization, signal splitting, input selection, and general-purpose math.”
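The real thing runs as a standardized function block inside the device, configured rather than coded; but the arithmetic a fieldbus PID block performs each cycle is the familiar textbook loop. As a rough illustration only (the gains and setpoint below are made up, and this is not any vendor's actual block), a discrete PID looks like this:

```python
class PID:
    """Minimal discrete PID, illustrating the computation a PID
    function block performs locally in a valve positioner."""
    def __init__(self, kp, ki, kd, setpoint, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.dt = dt                # scan interval, seconds
        self._integral = 0.0
        self._prev_error = None

    def update(self, pv):
        """Return a new controller output for process variable pv."""
        error = self.setpoint - pv
        self._integral += error * self.dt
        if self._prev_error is None:
            derivative = 0.0        # no history on the first scan
        else:
            derivative = (error - self._prev_error) / self.dt
        self._prev_error = error
        return (self.kp * error
                + self.ki * self._integral
                + self.kd * derivative)
```

Because the whole calculation lives in the positioner, the loop keeps running even if the host link drops, which is exactly the point Yingst and Rezabek are making.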
Terry Krouth, vice president of PlantWeb Technology at Emerson Process Management, adds that fieldbus is designed to be inherently redundant, and to operate independently of a host. "With a Backup Link Active Scheduler (BLAS) in one of the devices on each segment, a fieldbus system can operate without connections to the main system," he explains. A Link Active Scheduler (LAS) controls all the communications in a segment, ensuring deterministic response within the segment and the system. "If the main LAS fails, the designated BLAS takes over."
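The failover logic Krouth describes can be sketched in a few lines. This is a toy model, not a fieldbus protocol stack: the rule it captures is simply that the first live link-master-capable device on the segment acts as scheduler, so when the primary LAS dies, the backup inherits the role automatically.

```python
class Device:
    """A fieldbus device; link-master-capable devices can act as LAS."""
    def __init__(self, name, link_master=False):
        self.name = name
        self.link_master = link_master
        self.alive = True

class Segment:
    """Toy fieldbus segment: the first live link-master device
    in priority order serves as the Link Active Scheduler."""
    def __init__(self, devices):
        self.devices = devices      # ordered by LAS priority

    def active_scheduler(self):
        for d in self.devices:
            if d.alive and d.link_master:
                return d
        return None                 # segment has no scheduler left
```

A host interface card would normally sit first in the priority list, with a BLAS-capable field device behind it; killing the host hands scheduling to the field device and the segment keeps communicating.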
Krouth agrees with Yingst that fieldbus is perfectly capable of running process loops. "In many cases, PID control in the field can operate without needing a host of any kind."
Smart devices put paid to all the traditional methods we’ve been using to configure control systems. Traditionally, we’ve wired field I/O to a termination rack, connected it to signal conditioners, fed it to multiplexers, and transmitted the collected data via a “home run” network to a central or local control system, which logged all the data, stored it in a process historian or database, ran it through assorted software processing routines, and put it up on an HMI’s display for operators to see.
|FIGURE 2: DAQ AT THE SENSOR|
This Beckhoff Bus Terminal at Shanghai Drainage, Shanghai, China, acquires eight channels of data in the field and sends it to the control system via DeviceNet, eliminating all the usual I/O cabling, terminations and enclosures. Source: Shanghai Drainage
It’s even possible to bypass everything from the field wiring to the central HMI/SCADA system. Today’s sensors and control devices sometimes have their own web servers embedded, so they can jump on the Internet or a plant Intranet, and make their data and control settings available to any legitimate user with a standard Web browser.
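How little it takes to put a web server in a device is easy to demonstrate. The sketch below (tag name, value and units are invented for illustration) serves a process value as JSON to any standard browser or HTTP client, using nothing but the Python standard library; an embedded device would do the same job in a few kilobytes of firmware.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def current_reading():
    # Hypothetical tag and value; a real device would sample its sensor here.
    return {"tag": "FT-101", "value": 42.7, "units": "m3/h"}

class SensorHandler(BaseHTTPRequestHandler):
    """Answers any GET with the device's current process value as JSON."""
    def do_GET(self):
        body = json.dumps(current_reading()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the device's console quiet

# To serve on port 8080:  HTTPServer(("", 8080), SensorHandler).serve_forever()
```

Point a browser at the device's address and the reading appears; no HMI, historian or SCADA package sits in between.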
“Distributed intelligent control systems are giving engineers a new choice to control their projects,” says Terry Lenz, senior product support engineer at Wago. “It’s time to rethink how the large local/remote rack PLCs control I/O.”
Lenz points out that the traditional method for PLCs (and other control systems) was to connect I/O to remote racks and send the data to a central PLC via a proprietary network. “When distributed fieldbus networks were introduced, this let smaller nodes of I/O be placed closer to the devices or process, but it still involves scan time to gather the information to the main PLC, process the data, and reply to the I/O. Decentralized intelligent devices are like remote PLCs controlling the I/O locally. Although they are connected to the network they operate independent from the network scan times. Decentralized control reduces scanner load because it only passes data the main controller needs.”
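The scanner-load point Lenz makes is essentially report-by-exception: the node controls its I/O every local scan, but publishes to the master only when a value moves enough to matter. A minimal sketch of that filtering (the deadband figure is an arbitrary assumption, not a product default):

```python
class LocalNode:
    """Toy decentralized node: it runs its own scan every cycle, but
    reports upstream only when the value leaves a deadband around
    the last reported value, cutting traffic to the main controller."""
    def __init__(self, deadband=0.5):
        self.deadband = deadband
        self._last_reported = None

    def scan(self, value):
        """One local scan; return the value to publish, or None."""
        if (self._last_reported is None
                or abs(value - self._last_reported) >= self.deadband):
            self._last_reported = value
            return value
        return None  # within deadband: nothing for the master to see
```

Ten local scans of a quiet signal might produce one upstream report, which is why the main controller's scanner load drops even as the I/O count grows.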
“Not all field-based intelligence will revolve around PID loops and not all intelligence will utilize the same communications mechanisms,” notes Johnson. “Foundation fieldbus is just one of the many technology enablers behind this trend toward distributed intelligence. Others include industrial Ethernet, OPC, Profibus, HART, Modbus and, increasingly, the emerging wireless communications standards. Each has strengths and weaknesses for different distributed intelligence applications, with the trade off often being cost vs. security. For example, wires are secure but they have a very high cost and, in some implementations, very low bandwidth."
Every month, we report on new products that feature fieldbus, Internet or Web connections. As an example, last month (August 2005) we ran an item about Red Lion’s new Master Controller, which lets you connect 32 PID loops as a single node on Ethernet, view and control loops via a web server and a virtual HMI, or link to the device via normal interfaces, GSM wireless devices and a cellular phone.
None of this is particularly expensive, either. Advantech’s XScale communication controller has a web server, FTP server and Telnet server, plus RS232, RS485, 10/100BaseT Ethernet and Modbus interfaces, and sells for $595. Both Wago and Beckhoff offer fieldbus- and Ethernet-compatible controllers starting in the $150 to $405 range (See Figure 3).
|FIGURE 3: INEXPENSIVE CONTROL|
Distributed fieldbus I/O and controls, such as Wago’s PFC (programmable fieldbus controller), installed in a pharmaceutical plant, start at about $405. Source: Wago
Far Out, Tovarisch
Transneft operates the world’s largest oil pipeline, encompassing more than 30,000 miles of oil pipe that transports 4.2 billion tons of oil per year across Russia, from Siberia to the Baltic (See Figure 4 below). The Trans-Russian oil pipeline has 350 pumping stations and 850 holding tanks supplying 35 refineries. Iconics Genesis 32 software runs the entire system, including 600,000 tags. It is claimed to be the world’s largest PC-based SCADA and dispatch system, although Citect claims one in the same size range in Australia.
As an example of how far out you can go with distributed control, the system has 2,500 PLCs and 500 networked PCs spread out over those 30,000 miles, all linked by 20 satellite data links, microwave and land telephone lines using standard TCP/IP protocol. The 1,500 operator screens have an average response time of 3-5 sec.
Pipelines, offshore platforms and water/wastewater facilities are the most obvious examples of process applications that have widely dispersed control and monitoring equipment, but many other plants are also going distributed, says Todd Stauffer, manager of product marketing at Siemens Energy & Automation. “We are seeing more customers migrate from the centralized architecture of the classical DCS, where all controllers, I/O and field wiring terminations are in a central location, to where the DCS equipment is distributed throughout the facility in remote locations that are chosen to maximize maintainability and minimize installation cost,” he says.
Modern wireless communications technology makes such distribution possible, because it eliminates field wiring altogether. “Myriad new technologies are being developed to increase distributed communications,” says Raether. “GPS and GPRS have been around for years, and there are more and more devices that can talk to and listen through these networks. The latest to join the list are WiFi networks such as 802.11b, a, g, and even n. Even more popular are longer-distance networks such as WiMax and xMax. There are also more and more devices that include the ability to communicate through these networks, and they can be installed anywhere electromagnetic waves can reach.”
Plugging devices into the Internet or a virtual private network over the Internet makes it possible to reach anywhere on the globe that the Internet goes.
Some plants use a combination of communications. Wates GmbH in Meschede, Germany, is a consulting engineering company that specializes in water and wastewater; it designed a control system for the Bruck Sewage Works in Bruck, Germany. “We used the full range of access options, including dedicated lines, dial-up and GSM,” says Henry Sanders, managing director of Wates.
“Making communications technology part of an embedded process control system is fairly easy to accomplish, given the processing power available in such devices,” adds Raether. “One example is a recently released module from S.E.A. Datentechnik GmbH in Germany. This module enables the NI CompactRIO embedded control system to communicate via GPRS, GPS and radio clock frequencies.”
FIGURE 4: AS DISTRIBUTED AS POSSIBLE
The Transneft oil pipeline spans 30,000 miles, has 350 pumping stations and 850 holding tanks supplying 35 refineries. It is controlled by 2,500 PLCs and 500 networked PCs running Iconics software, all linked by satellite data links, microwave, and land telephone lines. Source: Iconics
Watching over the Controls
Building a server-based control system requires that the real-time process control portion be extremely reliable and able to carry on without supervision when the link to servers goes down. Or, as Stauffer puts it, “This approach is only feasible if the DCS equipment has a high level of built-in diagnostics and intelligence that allows it to report faults or potential problems to the operations team.”
“With the advent of intelligent devices, information and diagnostics can be used to predict maintenance behavior,” adds Jensen. “Predictive diagnostics, for example, let a valve tell maintenance personnel that it will need service within a given number of days. The application of the diagnostic engine and the algorithms and methods of prediction are just being developed. This is in its infancy.”
Or maybe you should step up and handle things. "I think it's time the process control department took back fieldbus from the instrument shop,” says Rezabek. “There are many instances where distributed intelligence, control and single loop integrity contribute more to process availability than diagnostics alone."
You may need to fix some problems remotely, says Pat Kennedy, president of OSIsoft. “To be truly remoted, we need remote maintenance of these systems, which means secure access to the configuration information and tools for maintaining the configurations,” he says. “We also need exchange of not just data but also structures, graphics, configurations, and many other kinds of information. For example, if a control system does not share which entities comprise a loop, graphic, advanced control or report, then trying to remotely use the embedded information is not possible.”
“Running software remotely is not a problem nor is viewing it provided that you have a good network and the software is designed either for remote operation or supports tools like Remote Desktop Connection [RDC],” adds Kennedy. “However, to maintain that software you have to have good knowledge of its current state, security issues, potential problems, and configuration changes, plus the ability to fix issues without traveling to the site. It will be a long time -- if ever -- before this is multi-vendor, so look for products that are designed to run remotely with appropriate attention to back up operation, redundancy, fault avoidance for network problems, configuration management, and so on.”
Finally, field equipment with smart diagnostics might overwhelm an operator, points out Lane Desborough, marketing manager at Honeywell. “Consider that today an analog point in a control system can have on the order of ten configured alarms, such as high, low, rate of change, etc.,” he explains. "Abnormal Situation Management consortium research shows that operators are overwhelmed by the volume of alarms that occur during an abnormal situation. Now picture a plant where each device has two hundred alarmable parameters and events. Imagine the amount of additional load this is going to place on an already overtaxed operator. Impending signs of catastrophic events are going to be buried in a sea of spurious alerts from NAGs (nuisance alarm generators).”
Desborough continues, “If the distribution of control actions, alarms and events to thousands of ‘smart’ field devices results in the operator becoming the first line focal point for coordinating activities among these thousands of smart devices, then the operator is going to be a very busy person. Likewise, if the smart devices are automatically sending work orders to the maintenance system, where is the maintenance planner getting the process knowledge and business insight to prioritize these actions for maximum business benefit?”
There is no doubt that remote, unattended control systems pose maintenance problems far different than those in a staffed plant. However, modern software makes it possible to run all device and system alarms through alarm management programs that sift out the spurious from the critical. Also, one feature of a server-based remote control system is that maintenance and operations can be staffed around the clock by experienced engineers and technicians at a central location who can bring vast knowledge and resources to bear on a problem.
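One of the simplest things such alarm management software does is suppress chattering alarms while always letting critical ones through. The sketch below is a bare-bones illustration of that idea, with an arbitrary 60-second chattering window; commercial packages apply far richer rules (shelving, state-based suppression, flood detection).

```python
class AlarmFilter:
    """Toy alarm-management pass: critical alarms always reach the
    operator; lower-priority alarms from the same tag are suppressed
    if they repeat within a chattering window."""
    def __init__(self, window_s=60.0):
        self.window_s = window_s
        self._last_seen = {}        # tag -> time of last admitted alarm

    def admit(self, tag, priority, now):
        """Return True if this alarm should reach the operator."""
        if priority == "critical":
            return True             # never suppress a critical alarm
        last = self._last_seen.get(tag)
        self._last_seen[tag] = now
        return last is None or (now - last) >= self.window_s
```

A transmitter that re-alarms every few seconds then produces one operator annunciation per window instead of hundreds, while a critical alarm on the same tag still comes through immediately.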
As Jensen says, diagnostics of remote systems is in its infancy, and managing the maintenance of unmanned sites may be the biggest problem of all. As we note in this month’s Control Report, maintaining a modern distributed control system, with its myriad embedded processors, can be a nightmare.
The Importance of Being Open
While we’ve concentrated on describing remote, unattended systems, very few control systems operate in true standalone mode. Almost all need to be supervised, monitored, analyzed and fine tuned. As Desborough puts it, “What happens if the business objective suddenly changes from ‘maximize throughput’ to ‘minimize energy consumption’ and, as a result, all the control loops have a sudden change in performance and each take independent action to notify the operator or maintenance guy? How is the business context, the goal, communicated down to the individual controllers in order to suppress alerts?”
Enterprise-level software makes it possible to analyze the workings of a plant and coordinate supervisory changes when a situation like Desborough posed occurs. Such software includes loop analysis, maintenance management, asset management, MES, ERP, SCADA, historians, and a host of specialty software for various industries.
This works best if the hardware and software are not proprietary to a single vendor. As Stauffer puts it, “Being open is absolutely critical. Openness allows users to realize the benefits of distributed intelligence. If intelligent field devices from different vendors cannot coexist and effectively communicate their full set of data, then the potential benefit of distributed intelligence is lost.”
Kennedy and others say that OPC – although far from perfect – is a good answer: “OPC as it stands has fueled a geometric increase in the amount of information that is available in real time – nearly 50% of the interfaces we sell today are OPC,” says Kennedy.
Raether agrees: “Engineers and technicians in charge of choosing and integrating products from hundreds of vendors providing hardware and software are challenged with making sure that they all play nice and talk to each other. There have been multiple efforts in the industry to make equipment from different vendors compatible with each other. One such very successful standard is OPC.”
And then there is fieldbus. "Foundation fieldbus is a vendor-neutral DCS,” says Rezabek. “Function blocks are not as extensive as what may be available on the DCS or PLC level, but that's mainly because too few of us are making extensive use of it. Once users catch on, I think suppliers are likely to respond to the demand for more ‘blockware’."
Distributed vs Standalone
We’ve concentrated on distributed systems here, but standalone control systems still have a place in the sun. Some engineers stand by their traditional standalone control systems. “For our relatively simple process control needs, we have discouraged distributed control, believing that it is better to have a central PLC-based master control that can be monitored and modified as needed,” says Doug Rhodes, manager, electrical power & automation group at Dayton & Knight, North Vancouver, B.C. D&K is a consulting engineering firm that specializes in water and wastewater.
“We want to maintain control of the control system,” continues Rhodes. “Since a loss of communication with the PLC can usually be tolerated for a short period, distributed systems are not deemed necessary.”
Standalone systems can take advantage of all the modern hardware that makes distributed control possible. They can operate unattended as easily as a distributed system can, communicate to far-away enterprise servers, and take advantage of device diagnostics, wireless networking, web servers and redundant architectures. An unattended remote distributed system is a standalone system when communications fail, so the distinction between standalone and distributed is getting blurred.
What is clear, however, is that the face of control is changing. There are many challenges remaining – getting systems from various vendors to talk to each other, making sense out of equipment diagnostics, and managing alarms – but true remote control is becoming more practical every day.