Data into information

When designing information transfer systems, Editor in Chief Walt Boyes believes it is critical to be able to weigh properly the value of each piece of information.

By Walt Boyes, Editor in Chief


“Most of us are drowning in oceans of data,” management process guru Dr. Eli Goldratt writes in his book The Haystack Syndrome, “so why does it seem we seldom have sufficient information? This maddening dilemma of our technological age is a factor in every important decision, and an issue we expect to have addressed by modern-day information systems.”

Goldratt’s dilemma is going to be made considerably worse in the very near future by the coming widespread adoption of mesh networks and “mote” and “i-Bean” type sensors (see Distributed Intelligence, Developing Your Potential, and Technically Speaking). Although these new sensors are making it possible to increase the amount of data we can collect by orders of magnitude, the Zigbee protocol and mesh networks and low-power sensors do not, in themselves, do anything to produce “information” out of all this data.

Drowning in a Sea of Data
For those of you already drowning in a sea of data, the prospect of getting orders of magnitude more data must be truly daunting. As data analysis consultant Diana Bouchard notes, “For most of the industrial era, we had too little data. Now we have more than we know what to do with.” Bouchard continues, “Once we sit down and figure out how to get the real information content out of the data, we should be able to run our plants a lot more effectively.”

This is the key, according to Stefano Angioletti, principal of SoftBrasil, a system integrator located in Sao Paulo. The most important improvement in data collection, he says, “is the development of statistical methods to collect and sample data to reflect the reality of process in all perspectives.”

“The big bucks lie,” says CONTROL columnist and industry consultant Greg McMillan, “in sustaining the benefits from process control improvement. This requires online performance indicators that show the trajectory and history of incremental benefits from increased process efficiency and capacity.”

David Lee, a process engineer at Millennium Chemical Co., concurs. “We collect data on the plant floor to enable manipulation of that data, both near real-time and historical, to assist in the operation of plant equipment. The ability to do this within a controller environment is, at best, limited.”

Angioletti says the most important reason to collect data on the plant floor is, “visibility to support the necessity of controlling. Asset management and good control of equipment and process is the same. Once again, the development of low cost solutions and more reliable systems make it feasible to implement many important management strategies.”

Lee points out, “Applications include statistical process and quality control, KPI (Key Performance Indicators –ed.) calculation, performance monitoring, and so on. In addition, the presentation of data to non shop floor personnel such as production management and engineering is becoming essential for both process optimization and troubleshooting purposes.”
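Statistical process control, the first application Lee names, can be reduced to a very small core: establish control limits from in-control baseline data, then flag values that fall outside them. The sketch below is illustrative only; the baseline numbers and the plain 3-sigma Shewhart rule are assumptions, not anything from the plants quoted here.

```python
# Minimal Shewhart-style control chart check: compute 3-sigma limits
# from a baseline of in-control samples, then test new values.

def control_limits(samples):
    """Return (lower, upper) 3-sigma control limits for a baseline."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((x - mean) ** 2 for x in samples) / n
    sigma = variance ** 0.5
    return mean - 3 * sigma, mean + 3 * sigma

def out_of_control(value, limits):
    """True if the value falls outside the control limits."""
    lo, hi = limits
    return value < lo or value > hi

baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.05, 9.95, 10.1]  # invented data
limits = control_limits(baseline)

print(out_of_control(10.02, limits))  # in control -> False
print(out_of_control(11.5, limits))   # well outside 3 sigma -> True
```

Real SPC packages add run rules, subgrouping and rational sampling on top of this, but the kernel is no more than the arithmetic above.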

In the accompanying supplement from the HART Communication Foundation, A. J. Lambert, of BP’s PTA (Purified Terephthalic Acid) plant in Wando, S. C., notes that his plant is making up for reduced process control funding by using previously underutilized information from field instruments to improve maintenance response and uptime. “This BP plant is saving hundreds of thousands of dollars per year in just maintenance and production costs,” he says. That’s a big bottom-line reason to collect more data and then use it.

How Data is Collected is Changing
Data collecting in process industries grew like Topsy, rather than being integrally designed. When many process plants were constructed, there was no concept of “data” coming from the process. In the 1970s, distributed control began with transmission of analog process variables in real time to analog controllers and control stations. Trend and total values were collected, if required, using analog recorders and totalizers.

The drive toward continuous process improvement awakened plant managements to the possibilities of using data other than setpoints and actual real-time values, which might be collected from the control system in digital form. Contrast this with the aerospace industry and research labs in many process industries, where the very concept of data collection was important from the beginning.

David Lee comments, “Good data collection allows ever-improving diagnostic and monitoring capabilities on the plant floor. With the trend towards greater plant reliability requirements to increase throughput by decreasing unexpected downtime, this is becoming extremely important. Predictive maintenance through continuous monitoring of key process and derived parameters is a key tool, as is incident investigation following a failure.” Maurice Wilkins, chairman of the World Batch Forum, and also at Millennium, says, “I agree with Lee. Predictive maintenance is a key benefit. For instance, we’ve used our data historian here for a while to measure valve movements in certain critical areas to ensure continued operation.”
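The valve-monitoring idea Wilkins describes can be sketched in a few lines: accumulate total valve travel from historian position samples and flag a valve for inspection when travel passes a threshold. The tag values, the travel limit, and the function names below are all invented for illustration, not Millennium’s actual historian logic.

```python
# Hypothetical predictive-maintenance check: total valve travel
# accumulated from historian position samples (% open), with an
# inspection flag when cumulative travel exceeds a made-up limit.

def total_travel(positions):
    """Sum of absolute position changes between consecutive samples."""
    return sum(abs(b - a) for a, b in zip(positions, positions[1:]))

def needs_inspection(positions, travel_limit=500.0):
    """True once cumulative travel reaches the maintenance threshold."""
    return total_travel(positions) >= travel_limit

history = [40, 42, 41, 45, 44, 60, 58, 59]  # % open, from the historian
print(total_travel(history))                 # 27
print(needs_inspection(history))             # False under the 500% limit
```

The same pattern (accumulate a derived parameter, compare to a limit) covers reversal counts, pump run-hours, and similar continuous-monitoring KPIs.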

Most process plants start out with a simple add-on to their control system: batch reports. Most modern SCADA systems have the capacity to output data, but their reports are often crude and cumbersome. “They are easy to configure and to maintain,” says Cliff Speedy, a consultant in SCADA and automation issues, “but they lack the flexibility to venture past simple ingredient lists.”

Rather than beef up the reporting functions of these software systems, some manufacturers have standardized on SQL (structured query language) queries to pull data from the control system and push it into a database such as Oracle or Access, where the reporting and data-manipulation tools are of a very high order.
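The pull-into-a-database pattern looks roughly like this. Here sqlite3 stands in for Oracle or Access, and the tag names, timestamps and values are invented; the point is only that once readings land in a relational table, reporting is a one-line query instead of a custom controller function.

```python
# Sketch: control-system readings pushed into a relational database,
# then summarized with SQL. sqlite3 stands in for Oracle or Access.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (tag TEXT, ts TEXT, value REAL)")
rows = [
    ("FIC-101", "2004-07-01 08:00", 42.1),
    ("FIC-101", "2004-07-01 08:01", 43.0),
    ("TIC-205", "2004-07-01 08:00", 180.5),
]
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)

# Reporting query: average value per tag -- the kind of manipulation
# that is awkward inside a controller but trivial in a database.
for tag, avg in conn.execute(
        "SELECT tag, AVG(value) FROM readings GROUP BY tag ORDER BY tag"):
    print(tag, round(avg, 2))
```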

Optimization vs. Loop Control

There has been a significant change in the thinking of many process plant automation engineers with the advent of the concept of process optimization, and its obvious follow-on, asset optimization.

Originally, process control focused on the basic control loop: sensor, controller, final control element. Now, as revealed in the sidebar, “After the Loop, What?” the metaphor of control loops may be changing to something else. “Perhaps a network of control inputs and feedback responses similar to a spider web,” suggests Mark Albert, sales manager of Logic Beach, a datalogger manufacturer. “When a fly enters the web there are dozens of feedback paths to the control point which can be anywhere on the web at any particular time.”

What is making this concept possible is the development of fast, digitally connected sensors and sensor networks, and the data analysis tools to go with them. Diana Bouchard notes, “The wide dissemination of integrated more-or-less-COTS (commercial, off the shelf) process information systems means that you no longer have to keep and feed a database programmer on site so you can collect and process your data.”

Wilkins points out another development. “Way before asset management became an ‘in-vogue’ term,” he says, “refineries were using predictive maintenance and continuous monitoring on compressors, and not without significant expense. The current technology, especially Fieldbus, allows smaller companies such as ours to have some of the same benefits.”

Tim Donaldson, marketing manager for Iconics Inc., agrees. “The two biggest things that come to mind are OPC and .NET technology. With OPC, this has enabled a standard protocol to be used for connecting software to plant devices, breaking the barriers of having to have point-to-point proprietary connections.” In regard to Microsoft’s .NET architecture, Donaldson points out that “This new platform lays the groundwork for applications to be built for bridging data and viewing data anywhere, anytime. Data bridging can happen between production, process control, business systems, ERP, SCM, CRM and legacy systems. It also allows viewing from across the Web and with wireless devices.”

The Future of Data Collection
The cost of monitoring more parameters and collecting more data is dropping drastically. Wiring costs dwarf the cost of the sensor itself, and as wireless networks become simpler and more robust, the idea of a completely wireless plant control system may not be so far off. Millennial Net co-founder Sokwoo Rhee’s i-Beans, for example, are hardware modules that let gauges, sensors and actuators communicate over self-organizing, self-healing wireless networks, running for years on a coin-sized 3-V battery thanks to extremely low power consumption. The network architecture, not the sensor, is the real news: self-organizing, self-healing networks make it possible to implement wireless even in many critical function areas of the process. The U.S. Department of Energy (DOE) just let a contract to GE and a pair of wireless network developers to create mesh networks robust enough to monitor industrial motor efficiency in real time.

But Is It Information?
Data isn’t “information” until it is received by somebody who can use it to make decisions. Jonathan Pollet, founder and CEO of PlantData Technologies, notes, “Often we still find that data is being collected in inefficient ways, and some that have deployed high-end SCADA and DCS systems are still doing a lot of manual data collection in parallel with the data that the SCADA and DCS systems are already collecting. Being able to step back and understand what are the important pieces of information to collect, and which tools are best to use in the process of acquiring and normalizing the data, requires the ability to understand industrial automation systems, work flow processes, and the capabilities of today’s modern data acquisition and normalization techniques. The industry standards built around Ethernet TCP/IP, OPC, and XML have allowed us to build a five-layer model that we call ‘STAND’ for streamlining data collection from any field device using any communication method, and moving that data efficiently through to the end user, reporting tool, or web page. (see Figure)”

When designing information transfer systems, it is critical to be able to weigh properly the value of each piece of information. Nowhere is this more evident than in alarm management. An operator who has to deal with 300 alarms per hour will simply turn off the alarms rather than try to discern the pattern they form. Good information system design is not enough: the delivery and presentation of the information (human factors; see “Designing Control Rooms for Humans,” CONTROL, July ’04, p. 47) is critical to the ability to use it.
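One common tactic behind the weighting idea is alarm rationalization: keep every high-priority alarm, but show each repeating low-priority condition to the operator only once. This sketch is a generic illustration of that tactic; the tags, priorities and function name are all invented, not drawn from any system mentioned in the article.

```python
# Illustrative alarm-flood filter: high-priority alarms always pass,
# low-priority alarms are deduplicated by tag so the operator sees
# each nuisance condition once instead of hundreds of times.

def rationalize(alarms):
    """Keep every high-priority alarm; show each low-priority tag once."""
    seen_low = set()
    kept = []
    for tag, priority in alarms:
        if priority == "high":
            kept.append((tag, priority))
        elif tag not in seen_low:
            seen_low.add(tag)
            kept.append((tag, priority))
    return kept

flood = [("PT-3", "low"), ("PT-3", "low"), ("LT-7", "high"),
         ("PT-3", "low"), ("LT-7", "high"), ("FT-1", "low")]
print(rationalize(flood))
# [('PT-3', 'low'), ('LT-7', 'high'), ('LT-7', 'high'), ('FT-1', 'low')]
```

A six-alarm burst shrinks to four items here; at 300 alarms per hour against a handful of root causes, the reduction is what keeps the operator from switching the annunciator off.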

The real issue, though, is how to produce product to the customer’s requirements. It isn’t about control parameters at all. We’ve progressed from analog loop monitoring to plant optimization and asset management on the strength of increasingly robust and less costly tools and strategies. Now it only remains to make it all pay.



“AFTER THE LOOP, WHAT?”

We are taught to think in terms of classical control loops (sensor, controller, control element), but recent developments in advanced process control seem to indicate that Mark Albert’s metaphor of a spider web may be more accurate than the conventional loop metaphor we’ve been accustomed to. Should we be thinking up a new metaphor for control other than loops? What would it be?

Greg McMillan suggests, “‘unit operation control.’ Ultimately what you want to do is improve the control of a unit operation such as a heat exchanger, evaporator, crystallizer, fermentor, reactor and distillation column, which are all unit operations.” This is the difference between the way a process engineer thinks and the way a control engineer does. “For a process engineer it is natural to think in terms of these unit operations but control engineers are focused on individual measurements and loops.”

Diana Bouchard says that there are already metaphors in use that we might adopt more widely. “One metaphor that is already in existence,” she says, “is feed-forward model control. This will become important because many processes are evolving towards faster throughput, minimal delay time in tanks, and faster response to upsets in the interest of faster throughput and meeting more demanding quality specifications. In these processes, you will not have time for a pure feedback loop to work. You are not going to be allowed to make 70 tons of bad paper before you find out you had a problem.”
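Bouchard’s point, that feedback alone is too slow when a disturbance must not reach the product, can be shown in one expression: a feed-forward term acts the moment a disturbance is measured, while the feedback term only reacts after the measurement deviates. The gains and signal names below are invented for illustration.

```python
# Minimal feed-forward-plus-feedback controller sketch. The feedback
# term reacts to error after the fact; the feed-forward term counters
# a measured disturbance before it shows up in the measurement.

def controller_output(setpoint, measurement, disturbance,
                      kp=0.8, kf=-0.5):
    feedback = kp * (setpoint - measurement)   # zero until error appears
    feedforward = kf * disturbance             # acts as soon as measured
    return feedback + feedforward

# A +10 disturbance is countered immediately, even while the measurement
# still sits at setpoint and pure feedback would output nothing.
print(controller_output(setpoint=50.0, measurement=50.0, disturbance=10.0))
# -5.0
```

In Bouchard’s paper-machine example, this is the difference between trimming the stock flow when the feed changes and discovering the problem 70 tons later.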

Jonathan Pollet, founder and CEO of PlantData Technologies, says, “Instead of looking at process control one loop at a time, it would be better if we could call it Total Process Management. Operations is not really concerned about watching the condition of one particular PID loop, but more importantly, how all of the contributing process control loops have an effect on the overall process. Logging data out of the control system into database systems allows the control system data to be aggregated with data from other sources, like inventory levels, raw material usage, maintenance systems, and more. When operations teams can see the ‘Big Picture,’ then we find that companies are able to really make strides in removing wasted materials and time out of their processes. We need to remove the notion of silo applications like SCADA, DCS, PLC, Maintenance Systems, Inventory Systems, Sales/Marketing Systems, and come up with a better way to integrate all of these into an intelligent Process Management System.”
