2003's top trends in process control have little to do with new hardware or software technology. Instead, they concern better ways to do your job. They involve acquiring, sharing, moving, and analyzing data, all for the purpose of making products more efficiently and making your plant more competitive. We've identified a number of trends that are ready to help you do just that.
The frameworks concept is new, having been introduced in 2002, but Web Services, Ethernet, wireless, condition monitoring, and performance optimization are not. Some have been around for years. Even so, refinements are making them work increasingly well.
These days, economics are forcing you to do more with less. Therefore, two of the most significant trends we see are condition monitoring, in which you use advanced sensors and data acquisition to track the health and performance of plant equipment, and performance optimization, where you acquire and analyze process data to find ways to make product more efficiently. Each of these techniques offers a tremendous potential return on investment and can help make your company more competitive.
Accordingly, we've grouped our trends into three areas:
* Data everywhere: Trends in obtaining and distributing plant floor data via Ethernet and wireless.
* Information everywhere: Trends in gathering data, organizing it, and moving it to higher-level platforms via Web Services and frameworks.
* Decisions everywhere: Trends in practical ways to do something with all that data, including field-based control, condition monitoring, and performance optimization.
Trend: Industrial Ethernet Absorbs Competition
Remember the 1958 movie "The Blob"? While Steve McQueen tried to warn the good people of Downingtown, Pa., that a nasty creature was eating the town, the Blob slithered along, devouring everything in its path. It was virtually indestructible and impervious to all weapons. Ethernet is like that. Except it's even more adaptable than the Blob was.
Ethernet has proven impervious to new networking technologies, immune to marketing programs, resistant to efforts to lock users into proprietary systems, and able to survive in all environments, including harsh industrial plants. You won't stop Ethernet with a blast of cold air. Raise an objection about what Ethernet can't do and, a few months later, it does it. Ethernet is not so much a trend as it is an irresistible force of nature whose time has come.
Like the Blob, it simply absorbs its opponents in the automation and process control world. Networks such as Foundation fieldbus, Profibus, and Modbus now all have a version that runs on an Ethernet network. Because Ethernet is a multi-protocol network, more than one network protocol can be transmitted over the same set of wires. All the nodes on the network can send and receive all messages, but until recently, only a Modbus/TCP device could understand a message sent from another Modbus/TCP device.
A new software package called OPC Data eXchange (OPC-DX) now makes it possible for any Ethernet device to talk to any other Ethernet device, regardless of the protocol being used. This opens the door to putting Profibus, Modbus, Foundation fieldbus, EtherNet/IP, and all other Ethernet-based protocols, proprietary or not, on the same system. OPC-DX is like the Babel fish in The Hitchhiker's Guide to the Galaxy or the Universal Translator from Star Trek.
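To make the multi-protocol point concrete, here is a minimal sketch of what one of those protocols looks like on the wire: a Modbus/TCP "read holding registers" request built as raw bytes per the published Modbus/TCP framing. The transaction ID, device address, and register numbers below are hypothetical; the point is that this frame rides over ordinary TCP/IP on the same Ethernet cable that carries everything else.

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP 'read holding registers' (function 0x03) request.

    Modbus/TCP wraps the classic Modbus PDU in an MBAP header so it can
    ride over ordinary TCP/IP -- the same Ethernet that simultaneously
    carries HTTP, EtherNet/IP, or any other protocol on the same wires.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)  # function, start, qty
    mbap = struct.pack(">HHHB",
                       transaction_id,  # matches a response to its request
                       0x0000,          # protocol ID (always 0 for Modbus)
                       len(pdu) + 1,    # bytes remaining, incl. unit ID
                       unit_id)         # target device address
    return mbap + pdu

# Hypothetical request: ask device 0x11 for 10 registers starting at 0.
frame = modbus_read_holding_registers(1, 0x11, 0, 10)
```

Sending that 12-byte frame over a TCP socket to port 502 is all a Modbus/TCP read amounts to; an OPC-DX-style translator's job is to map such protocol-specific exchanges onto a common data model.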
The old objections to Ethernet, such as lack of determinism, security, and ruggedness, have faded. Determinism problems are minimized with higher transmission speeds, star networks, switches, and routers. You can configure an Ethernet network with a small number of devices to increase response speed, and you can physically separate various parts of the network for security. Industrial-grade hubs, switches, routers, connectors, cables, and similar equipment are now available for use on the plant floor.
Finally, recent developments to put power into the unused wires of an Ethernet cable should make it possible to run Ethernet into hazardous areas, power two-wire loops, and supply power to sensors. When that happens, Ethernet will have conquered all the networking worlds, from the plant floor to the boardroom.
ARC Advisory Group (www.arcweb.com) agrees. It predicts a phenomenal 84% annual growth rate for device-level industrial Ethernet products over the next five years.
In the movie, the Blob couldn't quite catch the teenage hoodlums because they moved too fast. Ethernet's remaining network competitors are like that: fast, resourceful, and quick to find a niche. Safety networks, SERCOS, and device buses are all in that position. But the Blob keeps pushing them into that little corner of the basement, where there's no place to run. As they say of the Borg in Star Trek TNG, "Resistance is futile."
Trend: The Sky's the Limit for Wireless
Data cannot be efficiently gathered from and distributed to all points as long as data acquisition and control systems are tethered by wires. Many pundits predict that wireless, albeit starting from a low base, will be the most dynamic segment of the control systems market for the foreseeable future.
A September 2002 report from Venture Development Corp. (www.vdc-corp.com) forecasts product shipments for wireless monitoring and control in discrete and process manufacturing applications to increase from $109 million in 2001 to $752 million in 2006. This represents a compound annual growth rate of more than 47%, impressive growth anywhere but especially in the process control market where overall growth is projected at less than 5%.
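The growth-rate arithmetic behind VDC's forecast is easy to check:

```python
# Compound annual growth rate implied by VDC's forecast:
# $109 million in 2001 growing to $752 million in 2006 (5 years).
start, end, years = 109.0, 752.0, 5
cagr = (end / start) ** (1 / years) - 1
# cagr comes out to about 0.471 -- "more than 47%", as the report says
```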
There are two main components of the process control wireless market: data acquisition and wireless HMI. Wireless data acquisition systems gather information from field-based sensors and instruments via wireless signals. These systems have long been used in the electric power, water/wastewater, and oil and gas pipeline industries.
These industries have widely dispersed sites and are a natural fit for wireless. Typical implementations feature a local controller hard-wired to a packaged wireless system operating on a licensed frequency. Some installations work without a local controller by making use of the control functionality inherent to many packaged wireless systems. Two-way communications with a central base station allow for display of remote information on an HMI and for remote control and configuration of the controller.
Another method to establish two-way wireless communications with field instruments is through dedicated single-point radios. These typically have a 4-20 mA input for connection to the field device and a 900 MHz radio signal output.
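A 4-20 mA loop carries the measurement as a current: 4 mA at the bottom of the instrument's span, 20 mA at the top, so recovering the reading is a linear scaling. A minimal sketch, with a made-up 0-150 psi span:

```python
def ma_to_engineering(current_ma, lo=0.0, hi=150.0):
    """Convert a 4-20 mA loop current to engineering units.

    4 mA maps to the low end of the span, 20 mA to the high end.
    The lo/hi span here (0-150, e.g. psi) is an illustrative example.
    Currents well outside 4-20 mA usually indicate a fault: a broken
    loop reads 0 mA, which is one reason the live zero is at 4 mA.
    """
    if not 3.8 <= current_ma <= 20.5:  # typical out-of-range fault band
        raise ValueError(f"loop fault: {current_ma} mA out of range")
    return lo + (current_ma - 4.0) / 16.0 * (hi - lo)
```

So a radio reporting 12 mA on this span corresponds to mid-scale, 75 psi.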
Wireless HMI products use standard wireless appliances such as personal digital assistants and wireless application protocol (WAP) phones. These use the existing commercial infrastructure to establish two-way communications with a centralized HMI, usually via the Internet in a server (central HMI) and browser (wireless appliance) configuration. Most systems allow each interface to be custom-configured so presented information is relevant to that user's job function.
A major drawback of wireless HMI has been the difficulty of viewing web pages with a small screen, even with WAP. According to a recent edition of The Economist, this problem may be solved by Opera (www.opera.com), a Norwegian software firm that has devised a clever way to squeeze web pages onto wireless appliances so they look good and are easy to navigate.
Trend: XML and Web Services Pump Up the Data
Moving information from the plant floor upward to information technology software has always been a difficult problem for control engineers. As we pointed out in our March issue, "Jump Start IT" [CONTROL-Mar. 03, p28], you often threw up your hands, called in the consultants, and let them handle the interface from your control system to higher-level IT software such as ERP, SCM, and similar systems.
Today, however, market forces and technology have united, dropping a perfectly usable interface into your lap. For the first time in the history of systems integration, process control and IT software actually talk the same language and use the same communications method. XML is now the language of choice, and Web Services is the method for communicating.
Just about everybody in the control and automation world is adopting XML, including standards-makers like OPC, ISA, API, and World Batch Forum. These organizations are rapidly developing standardized ways to define batch recipes and data exchanges, while vendors are developing their own XML-based links.
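What such an XML data exchange looks like in practice can be sketched with Python's standard library. The element names below are illustrative only, not drawn from BatchML or any actual published schema:

```python
import xml.etree.ElementTree as ET

# Build a small process-data document. The tag and attribute names are
# invented for illustration -- a real exchange would follow a published
# schema such as the World Batch Forum's BatchML.
root = ET.Element("ProcessData", plant="Unit-7")
meas = ET.SubElement(root, "Measurement", name="TIC-101", units="degC")
meas.text = "187.4"
xml_text = ET.tostring(root, encoding="unicode")

# Any consumer -- ERP, historian, spreadsheet -- can parse it back
# without knowing anything about the control system that produced it.
parsed = ET.fromstring(xml_text)
value = float(parsed.find("Measurement").text)
```

Because the document is self-describing text, neither side needs the other vendor's libraries to read it, which is exactly why vendors can't lock it up.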
The beauty of XML is that vendors can't make it proprietary. This has to be a first in our industry.
XML documents are sent over the Internet via Web Services, using the Microsoft .Net architecture that comes with new PCs. You probably already have .Net in your newer HMI/SCADA packages, historians, and DCSs, which makes it very easy for you to adapt the new technology.
The XML/Web Services express train is rapidly picking up steam:
* SAP recently announced its NetWeaver software will be using XML, Web Services, and .Net technology. When the market leader of ERP adopts a technology, the rest of the world quickly follows.
* ISA held an OPC and .Net technical conference in April, in which the association took two days to explain how XML, OPC, .Net, and Web Services work. ISA only stages conferences for sure-fire topics.
* Invensys announced it has adopted the World Batch Forum's BatchML for use in its Baan Protean ERP and its I/A Series batch control systems, and it demonstrated a working system at the 2002 WBF meeting. Other DCS vendors are coming around.
Like Ethernet, XML and Web Services are unstoppable forces of nature that are going to dominate the process control industry.
Trend: Frameworks Offer Super Structures
Most of the major control systems vendors are promoting their own overarching architecture or framework to ease integration of plant floor and enterprise systems. All of these frameworks use industry standards such as OPC, DNA for Manufacturing, and .Net; and they all add to these standards with operating system extensions that provide functionality relevant to process control applications [CONTROL-Jan. 03, p50].
A framework is not a product available for purchase by an end user, and it is not an operating system. A framework is instead a set of tools, rules, and standards that internal and external developers use to create software applications and hardware products. It is a type of development system, and applications and products created with a framework development system will run on standard operating systems layered with framework operating system extensions.
Each vendor foresees a future in which all software applications and smart hardware products will be developed with its framework. Each of these vendors plans to use its framework for all internal product development. Each vendor would also like to see all third parties use its framework when developing control system applications. There is an obvious conflict here, and there will be winners and losers.
The winner may be the firm that develops the best framework and distributes this framework to others on the best terms. In another scenario, Microsoft's manufacturing practice might add to its operating systems some features needed for process control. If enough features are added and if the features are intelligently implemented, Microsoft could render control system vendor frameworks obsolete.
If Microsoft chooses to stay on the sidelines, framework vendors could still fail if third-party vendors are not convinced to use their frameworks. Frameworks would then be tools used primarily for internal development. In this scenario, the effect on end users would be minimal, although each framework vendor would benefit to the extent that its framework improved its internal software development process.
The worst thing that could happen would be a divided market with no dominant framework. Microsoft would not add process control functionality to its operating systems, and each of the major control system vendors would share the market equally. Each third-party vendor would use one of the competing frameworks. Products developed with a vendor's framework would work best when used with a control system manufactured by that vendor. This would create a type of proprietary system and move the market away from the ideal of a truly open system.
Trend: Control Returns to the Field
This is an old trend that is making a comeback with the advent of small, inexpensive, rugged controllers. The emergence of digital communication bus standards allows these to be networked together in a relatively seamless fashion, thus creating a truly distributed network of field-based controllers.
Most field-based controllers are fairly new products. Good examples are smart valves with discrete and analog inputs and outputs along with an advanced communications protocol such as Foundation fieldbus. These valves can be used for local loop control and for control of related components.
A more familiar type of local controller is a micro PLC. These small PLCs have been around for a few years, but early models were supplied with a rudimentary communication port such as RS-232 or with a low-end proprietary port. It is now possible to buy an Ethernet-equipped micro PLC for less than $400. This can make micro PLCs attractive for many field-based control applications.
Yet another type of field-based controller is available from Phoenix Contact (www.phoenixcon.com) and others. These controllers can be field-mounted in most process environments without a control cabinet. They execute ladder logic, flowcharts, or other control strategies configured on a PC and downloaded to the controller. Smart I/O from firms such as Opto 22 (www.opto.com) and Rockwell Automation (www.ab.com/devicelogix) offers another example of field-based control.
The technology has arrived, but is there a real need for field-based control? Process control end users say yes, for a variety of reasons. Field-based controllers are faster and more deterministic because all processing power is dedicated to local control. This can be critical for fast-acting reactions and control loops.
Field-based controllers can also save wiring costs, even when compared to remote I/O. This is because less information needs to be transmitted from a field-based controller as compared to remote I/O, so communication networks can be simpler and less expensive.
Many users say reliability is the main reason for using field-based controllers. Although these controllers are typically connected to a central controller and HMI via a digital fieldbus, most can continue to operate or at least fail-safe if the communications link is lost or if the central controller fails.
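The "keep operating or fail safe on lost comms" behavior boils down to a watchdog on the supervisory link. Here is a minimal sketch of that logic; the class, timeout, and setpoint values are hypothetical, not any vendor's implementation:

```python
import time

class FieldController:
    """Illustrative sketch of a field controller's comms watchdog.

    The local loop keeps running on its own; the watchdog only decides
    what to do about the *remote* link. If no message arrives from the
    central system within the timeout, drive the output to a safe state
    rather than acting on a stale supervisory command.
    """
    def __init__(self, timeout_s=5.0, failsafe_output=0.0):
        self.timeout_s = timeout_s
        self.failsafe_output = failsafe_output
        self.remote_setpoint = failsafe_output
        self.last_heartbeat = time.monotonic()

    def on_message(self, setpoint):
        """Called whenever the central controller sends a setpoint."""
        self.remote_setpoint = setpoint
        self.last_heartbeat = time.monotonic()

    def effective_setpoint(self, now=None):
        """Setpoint the local loop should actually track right now."""
        now = time.monotonic() if now is None else now
        if now - self.last_heartbeat > self.timeout_s:
            return self.failsafe_output   # comms lost: fail safe
        return self.remote_setpoint       # link healthy: follow remote
```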
Ease of troubleshooting is a close cousin of reliability, and it is usually easier to diagnose faults in a subsystem with one loop and a handful of I/O as opposed to solving a problem in a huge, monolithic, centralized controller. Safety can also be enhanced because critical and non-critical interlocks and alarms can be zoned according to the controllers' physical distribution.
Trend: Condition Monitoring Goes Comprehensive
The path from incipient failure to preventive maintenance is becoming shorter and more automated thanks to self-monitoring smart field devices and standardized networking addresses for diagnostic messages, artificial intelligence-based monitoring and decision-making software, and computerized maintenance management systems that write and prioritize work orders. These increasingly accessible efficiencies are major tools in the perennial battle for improved asset utilization, uptime, and productivity.
The revolution started several years ago in the field with increasing numbers of smart transmitters capable of storing and serving up trouble codes via HART and/or Foundation fieldbus, but has been limited by a lack of user-friendly, standardized, system-supported messaging. Increased levels of interoperability and integration with control and information systems are becoming reality thanks to recent and ongoing efforts by protocol problem-solvers including the Fieldbus Foundation, HART Communication Foundation, Profibus users group (PNO), and the Field Device Tool (FDT) Joint Interest Group, in alliance and cooperation with various vendors and major end users.
"[Foundation] fieldbus has taken much too long to be embraced fully, and the primary reason is its seeming complexity and very poor tools," says Scott Bump, director, fieldbus program, Invensys Foxboro (www.invensys.com). "FDT will help us to solve those problems and let us move on to the next great set of technologies." Soon we will be able to presume systems will have access to all the smart transmitter data, all the time.
From handheld calibrators to asset management packages, tools formerly limited to occasional configuration, calibration, and diagnostic applications are becoming "health monitors," says Louis Szabo, vice president of marketing & sales at Meriam Instrument (www.meriam.com). "When you look at the plant lifecycle cost of instrumentation and final control elements, 20% is setup and commissioning, 80% is while the plant is running." The focus has been on setup and commissioning. "We're working with the DCS companies to get integrated to give them the lifecycle piece." Testing can be done in place; devices are not taken out of service and brought into the repair shop, reducing downtime.
Online condition monitoring capabilities are being enhanced by increasing applications of vibration, pressure, sound, voltage/current, and other sensors that, along with their monitoring systems, detect deteriorating equipment from crispy commutators to brinelled bearings.
Standing watch over entire operations, monitoring packages use artificial intelligence to analyze normal operating patterns, detect anomalies, and alert operators or maintenance personnel who, with the click of a few keys, can authorize a work order through a computerized maintenance management system (CMMS).
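At its simplest, the "learn normal, flag departures" step these packages perform is a statistical control limit. A minimal k-sigma sketch, with invented vibration readings (real condition-monitoring software uses far richer models, but the principle is the same):

```python
from statistics import mean, stdev

def find_anomalies(readings, baseline, k=3.0):
    """Flag readings more than k standard deviations from the baseline.

    'baseline' is a sample of the signal under known-good operation.
    Returns (index, value) pairs for readings outside the k-sigma band.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    return [(i, x) for i, x in enumerate(readings)
            if abs(x - mu) > k * sigma]

# Hypothetical vibration amplitudes (arbitrary units): a healthy
# baseline, then a live stream containing one obvious spike.
baseline = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.1, 0.9]
alerts = find_anomalies([1.0, 1.02, 4.7, 0.98], baseline)
# alerts contains only the spike at index 2
```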
Snuggled in tight with compliant suppliers, CMMS software collaborates with inventory control and supply chain systems to secure necessary parts and materials, prioritizing and scheduling work order executions to minimize costs from shipping to downtime to overtime.
The capabilities emerging in condition monitoring and asset management systems call some of the most basic process control precepts into question. For example, "Why redundancy?" asks Szabo. "The traditional response is, 'stuff fails.' But the fact is different devices in different applications (for example, different safety integrity levels) have different maintenance requirements. Monitoring allows optimization of maintenance resource allocation, from as-needed to preventive." At its purest, successful condition monitoring can mean the "stuff" you rely upon for profitable operations no longer fails.
Trend: Engineering and Finance Converge on Performance Optimization
Data becomes information that supports decisions, and the trends discussed so far promise to dramatically increase the quantity and quality of fuel for decision-making. The role of performance optimization is to use that fuel to be sure decisions are based on the organization's true priorities and made in consideration of all the relevant parameters.
A recently released report, "Enterprise Applications Outlook for 2003: The Performance-Driven Enterprise," by AMR Research (www.amrresearch.com) suggests companies turn themselves into "performance-driven enterprises that lower product and overhead costs, improve asset utilization, and bring a higher return on invested capital."
According to the report, in an otherwise flat IT market, enterprise performance management spending in 2003 will be strong, supporting market growth in business intelligence and analytics of approximately 23%. By 2006, this market is expected to grow to $12.2 billion.
To be accepted, performance measurements have to be in terms your shareholders, stock analysts, and MBA CFOs can understand, which usually involve dollar signs. "Performance monitoring is coming out of engineering into finance," says Peter Martin, vice president, performance management, Invensys. "Until it does, it will not be given the value it deserves."
Based on methods and measures that haven't changed in 100 years, today's finance systems are not able to accurately reflect many important nuances of operations that can make the small difference between profit and loss in highly marginal process plants. Monthly data is not enough. Companies need real-time data at the plant level tied into accounting information.
But don't expect any non-engineers to be interested in traditional process optimization terms or energy or material balances, Martin says. "In effect, CFOs tell me, 'If one more engineer invents one more key performance indicator [KPI] without going through the finance system, we'll kill them.'"
Instead of engineering KPIs, think in terms of profit points at intermediate steps. Understand the costs of each step (raw materials, energy, production time, equipment maintenance, etc.) and measure the performance of individual operations in terms of profit in dollars. Provide those measures to operators, and show them how the way they run the plant can affect the results.
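A profit point for a single production step is just revenue minus the summed costs of that step. A worked example, with all numbers invented for illustration:

```python
def step_profit(units, price_per_unit, costs):
    """Profit for one production step: revenue minus the summed costs."""
    return units * price_per_unit - sum(costs.values())

costs = {                    # dollars for one shift, all hypothetical
    "raw_materials": 18_000,
    "energy":         4_500,
    "maintenance":    1_200,
    "labor":          6_300,
}
profit = step_profit(units=5_000, price_per_unit=6.40, costs=costs)
# 5,000 units at $6.40 against $30,000 in costs: $2,000 of profit.
# An operator who sees this number fall as energy use climbs has a
# dollar-denominated reason to retune, not just an engineering KPI.
```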
The synergy is in the convergence of finance and engineering disciplines. Noted author and management consultant Peter Drucker asserts that most innovation takes place when you take two well-established disciplines and converge them. Process control engineers need to work with finance people to create measures and systems that make sense to both, claims Martin, but so far, such communication is rare. On the occasions where he has met with both camps at the same time, he says, "They don't even know each other's names. They have to introduce themselves to each other."
Once you work with Finance to understand what's needed, "there's nothing hard about it," says Martin. "You just need to have and provide accounting information to let engineers and operators understand how their activities affect the bottom line. In many ways, we just haven't been telling them what 'good' is."
Trend: Your Changing Role
Of course, many other trends exist in our world and are having a profound effect on your job. For example, we are all aware that customer service, support, and engineering functions are rapidly leaving the U.S. and are being outsourced to other countries, where engineers are paid a fraction of what U.S. engineers make. Some engineering and construction firms are outsourcing work to Mexico, and much software development, engineering, and IT work is going to companies in India. SAP Labs India, for example, will soon be SAP's biggest development center.
Procter & Gamble, Cincinnati, announced in June 2002 that it was outsourcing its IT functions to outside companies by the end of the year, one of which is in India. Control engineers at P&G may be talking to IT programmers in Bangalore, India's answer to Silicon Valley.
On average, 60% of IT spending goes to outsourcing companies. This means process companies are laying off their IT staffs. As a result, you may be called upon to handle more IT interfacing duties in the future. Control engineers doing IT work such as ERP, SCM, or CRM may be one of our big trends next year.
We've also noticed that process companies are outsourcing much of their instrumentation and control design, networking applications, maintenance and other functions to the vendor community these days. We attribute this to economics, engineer layoffs, and higher workloads for instrument and control engineers at process plants.
As we've pointed out several times, new technologies such as Web Services and networking are not all that difficult for a control engineer. However, there is just so much a small plant engineering staff can tackle, so you are farming much of the work out to specialists.
Those specialists are getting overworked, too. One reason many new devices come with an Ethernet port and an embedded Web server is so vendors, OEMs, and systems integrators can access information remotely for diagnosis, troubleshooting, adding upgrades, configuration, and tuning from afar. It lets specialists do their jobs at a server in a central location instead of making field service trips.
Remote maintenance is a definite trend, but it's spotty right now. Perhaps the widespread acceptance of Web Services will give remote maintenance the push it needs to emerge as a major trend next year.
Colocation (server farms) is another trend waiting in the wings. High-level software such as ERP, SCM, supervisory control, process historians, and many other packages will someday run on servers in remote locations instead of being installed in a process plant. The software could be 6,000 miles away in a secure building and work just as well and as fast as the same software in a computer next door. You'll rent time on the software instead of buying it, so you'll get all the capabilities at a fraction of the price.
We see CAD companies offering such packages via remote servers, and some process companies offering services such as process monitoring, advanced loop tuning, and asset management via servers, but the practice is still very limited. Telecommunications and IT companies have been using colocation for several years but, as is our wont, the control industry is slow to follow. End users are wary of security problems and don't like to give up control. Economics may force users to colocation one of these days because many companies cannot afford the luxury of advanced hardware and software.
There is one ongoing and overarching technology trend that helps all of us do our jobs better and makes our lives easier. Better, cheaper, smaller, faster is a fact of life for all electronic devices. For example, a simple price/performance comparison of a $2,000 PLC available today with a $2,000 PLC available just two years ago reveals startling advances in performance. It is easy to take this trend for granted, but it is perhaps the most important and far-reaching positive technology trend affecting the process control industry.
Taken together, this year's technology trends offer enormous power and great opportunities for process control professionals who want to visibly and measurably improve plant performance.