This article was printed in CONTROL's June 2009 edition.
By Paul Miller
Perhaps because distributed software objects play well in a common network and technology environment, their growing role in process automation seems to follow a logical evolution.
In the beginning, there were simple data tags, such as pressure, level, flow or temperature process variables. With tag data, dedicated connections needed to be created to get the data from the source, typically a PLC or process controller, to the consumer, typically an HMI, alarming, trending or historian application. Then, as object-oriented programming (OOP) developed, tags evolved into objects. Objects feature multiple attributes, including both behavior and state, in addition to the process variable data. Objects also support inheritance, which allows users to replicate and modify them for re-use. Standards such as Microsoft’s Distributed Component Object Model (DCOM) enabled connections between objects distributed across a common computing platform.
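The tag-to-object evolution can be sketched in a few lines of code. This is an illustrative example only; the class and tag names (AnalogTag, PressureTag, PT-101) are invented, not drawn from any specific automation product.

```python
# A sketch of a tag that has evolved into an object: it carries state
# (value, units, alarm limit) and behavior (alarm checking), not just
# a raw process-variable value.

class AnalogTag:
    """A generic analog tag object."""

    def __init__(self, name, units, hi_alarm):
        self.name = name          # state: tag identity
        self.units = units        # state: engineering units
        self.hi_alarm = hi_alarm  # state: high-alarm limit
        self.value = 0.0          # state: current process variable

    def update(self, new_value):
        """Behavior: accept a new reading and report alarm status."""
        self.value = new_value
        return self.value > self.hi_alarm


class PressureTag(AnalogTag):
    """Inheritance: a pressure tag replicates AnalogTag and modifies it."""

    def __init__(self, name, hi_alarm):
        super().__init__(name, units="psi", hi_alarm=hi_alarm)


reactor_pt = PressureTag("PT-101", hi_alarm=150.0)
print(reactor_pt.update(162.5))  # True: reading exceeds the high-alarm limit
```

The consumer no longer needs a dedicated connection per data point; it works with one object that bundles the value, its units and its alarm behavior.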
Now, templates, services and the architectures they fit into are emerging and taking on and extending some of the capabilities first explored by OOP.
Templates Simplify Engineering for Power
“The concept of templates helped me out quite a bit,” says Hal Allen, SCADA analyst at Santee Cooper Power (www.santeecooper.com) in Moncks Corner, S.C. Allen recently implemented an InFusion SCADA system from Invensys Process Systems (www.ips.invensys.com) to monitor and manage 56 electrical substations for the state-owned utility (Figure 1). “When you get an InFusion system, you have many devices created from pre-defined analog and digital objects provided with the system. A template can be derived from another template, or an object can be derived from a template, and then you can go back and make wholesale changes to everything that has been derived from that.
“In the initial phase of the project, I created my devices from the generic discrete and analog device templates, so that now I have a hierarchical view of how the devices, breakers for example, work in the field. Once you have all these devices built as a template, then you can easily create your real-world objects. If you’ve created a couple of hundred devices and then realize you left something out or need to change something, then you can go back and make a change in one place and have all the hundreds of objects created from that template updated automatically.
“A good example of a device that I created from the analog device object is a transformer, which is an integral part of a substation. We have transformers at different voltage levels, and some are older than others, but they all have some common attributes. My philosophy was to create a base transformer object template having common transformer attributes, such as a fan or a megawatt value. Then I just modify the template as needed to represent the actual transformer.”
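The template behavior Allen describes, where a change made in one place propagates to every derived object, can be sketched using Python class attributes. The names here (TransformerTemplate, HVTransformer, oil_temp_alarm) are hypothetical and not taken from the InFusion product.

```python
# A sketch of template-style engineering: a "template" is a class whose
# attributes flow down to every template and object derived from it.

class TransformerTemplate:
    # Common attributes shared by every transformer
    has_fan = True
    megawatt_rating = 0.0

class HVTransformer(TransformerTemplate):
    # A template derived from another template, modified as needed
    megawatt_rating = 100.0

# Real-world objects derived from the template
t1 = HVTransformer()
t2 = HVTransformer()

# Later, you realize you left something out of the base template...
TransformerTemplate.oil_temp_alarm = 95.0  # one change, in one place

# ...and every object derived from it picks up the change automatically.
print(t1.oil_temp_alarm, t2.oil_temp_alarm)  # 95.0 95.0
```

With a couple of hundred devices built this way, a single edit to the base template updates all of them, which is the workflow Allen credits with simplifying his engineering.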
SOAs Enable Cross-Platform Communication
Distributed object technology simplifies data access across a network, so it’s used in many advanced industrial automation systems. It’s based on standards such as Microsoft’s DCOM and the Object Management Group’s Common Object Request Broker Architecture (CORBA), which is widely used on UNIX platforms. The OPC Foundation’s (www.opcfoundation.org) Data Access (DA) specification is one popular example. OPC DA servers and clients, built on Microsoft’s COM/DCOM technology, provide data connectivity between different vendors’ intelligent devices, such as PLCs, and software applications, such as HMIs, alarm packages, trending packages and historians.
Distributed object technology represented a big leap forward for the industry, but neither DCOM nor CORBA functions well in a cross-platform environment or through network firewalls. To overcome such cross-platform interoperability issues, the IT world developed the concept of standard “services,” including web services. These services are now making their way into the industrial world where they are helping to break down artificial functional barriers to enable software to map to actual work processes more closely.
“The idea with services is that if you define a service that represents an interface between two systems, then the manifestation of the plant itself and the interface are independent of each other,” says Mike Brooks, staff technologist in global manufacturing at Chevron Refining (www.chevron.com) in San Ramon, Calif. “There probably will still be objects within the scheme, but instead of communicating from within DCOM or CORBA, where the objects are instantiated from the hub, now they’re decoupled with a service.”
Dave Hardin, a system architect at Invensys, explains that, “Services define the interface and communication mechanisms between objects and other software components. These interfaces and mechanisms become the communication contract to which each system must adhere. Some services, web services for example, support widely used protocols, platforms and systems. Other services support specific communication functionality, such as fast data transfer, strong security, failure resilience or event messaging. The contract forms the basis for interoperability and helps decouple the functions performed by the service from the technology used to implement the service.
“SOA means different things to different people. However, the industry is converging on the concept of service ‘contracts’ for interactions between systems. These service contracts provide the core of an SOA. They provide a loose coupling between applications and objects to minimize the effect of changes within objects.”
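The service-contract idea Hardin describes can be sketched as an interface that consumers depend on, with interchangeable implementations behind it. All names below (HistorianService, LocalHistorian, FT-201) are invented for illustration.

```python
# A sketch of a service "contract": consumers are written against the
# interface only, so the implementation can change without breaking them.

from abc import ABC, abstractmethod

class HistorianService(ABC):
    """The contract: what the service promises, not how it's built."""

    @abstractmethod
    def latest_value(self, tag: str) -> float: ...

class LocalHistorian(HistorianService):
    """One implementation honoring the contract."""
    def __init__(self, data):
        self._data = data

    def latest_value(self, tag):
        return self._data[tag]

class CachingHistorian(HistorianService):
    """A different implementation honoring the same contract."""
    def __init__(self, backend: HistorianService):
        self._backend = backend
        self._cache = {}

    def latest_value(self, tag):
        if tag not in self._cache:
            self._cache[tag] = self._backend.latest_value(tag)
        return self._cache[tag]

def display(svc: HistorianService, tag: str):
    # The consumer knows only the contract; either implementation
    # can be swapped in without changing this code.
    return f"{tag} = {svc.latest_value(tag):.1f}"

print(display(LocalHistorian({"FT-201": 42.0}), "FT-201"))  # FT-201 = 42.0
```

This is the loose coupling Hardin refers to: changes inside an object affect its consumers only if the contract itself changes.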
Chevron Bases IT on Services
Brooks explains that, “I came to Chevron three years ago to look at what we needed to do strategically for our IT infrastructure for our refineries worldwide. The key space that we want to fill is the space between the business systems and the DCS systems. I call that space the production management space, although some in the industry call it MES (Figure 2).
“We have a whole smorgasbord of different applications that fill that space. They include maintenance systems, historians, reliability systems, lab systems, the DCS, planning systems, compliance systems and others, which all came to be owned by various departments in the refinery. Some of these applications were very good, and they worked well for 10 to 15 years or so. The problem is that they are owned by different departments, and the overarching business processes don’t just belong to one department.
“The example I often give is that of a guy walking around the site. This guy, an outside operator, hears noises coming from a piece of equipment. Could be a compressor. He looks at the instruments, and they indicate an impending failure. What happens then is that the maintenance department wants to take the compressor out before there’s a big failure, but the operations department wants to keep it running because they’re in the middle of a key production run. So what’s the right thing to do? There’s no perfect answer, and the way it gets answered right now is usually in the morning meeting, probably by the guy who shouts loudest.
“What we see with so many initiatives and technologies—real-time, lean, virtualization, etc.—is they’re all artifacts of something higher, and that’s what we’re trying to concentrate on at Chevron. The thing that’s higher is the work process—something that stems from the whole business process and the business-process modeling, which then drops down into the tasks that need to be done—the interactions between people. So as we build our next generation of IT infrastructure in this space, we’re looking to use work processes as the key driver, so that whenever anyone looks at a screen display or a report, it’s because it’s part of a business process.”
Taking Care of Business Too
Brooks adds that Chevron wants to manage its business process so that all information required for collaboration and decision-making at any point in time is automatically inserted into that business process. “We believe that using services is the appropriate way to get that done,” he says. “The big opportunity here is to make sure that we can decouple the business process from the IT underpinnings. An enterprise service bus can give you a tidier way to do point-to-point integration, but it’s still point-to-point integration. We’re looking to go higher than that by using services.
“You have to consider a services layer through which the people who really know the business can build a process workflow independent of the IT underpinnings. We have to do this for eight refineries, so we want common processes that can be shared, even if the implementation of the IT underpinnings is a little different in each one. We want to be able to deploy the processes in the same way, so that when a process is dropped inside a system, it automatically knows how to connect to the external systems.”
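What Brooks describes can be sketched as a workflow written once against named services, then bound to each refinery's own systems at deployment time. The workflow, service names and site bindings below are all hypothetical, invented to illustrate the architecture rather than to depict Chevron's systems.

```python
# A sketch of a common work process bound to site-specific IT underpinnings.

def equipment_health_workflow(services):
    """One shared work process: gather condition and maintenance data,
    then decide whether to flag the equipment for review. It knows only
    service names, never the concrete systems behind them."""
    vibration = services["condition_monitoring"]()
    overdue = services["maintenance_overdue"]()
    return "flag for review" if vibration > 0.8 or overdue else "keep running"

# Site A binds the workflow to its own systems...
site_a = {
    "condition_monitoring": lambda: 0.9,   # e.g., reads its local historian
    "maintenance_overdue": lambda: False,  # e.g., queries its CMMS
}
# ...and Site B binds the same workflow to different underpinnings.
site_b = {
    "condition_monitoring": lambda: 0.2,
    "maintenance_overdue": lambda: False,
}

print(equipment_health_workflow(site_a))  # flag for review
print(equipment_health_workflow(site_b))  # keep running
```

The same process definition runs at both sites; only the bindings differ, which is the decoupling of business process from IT underpinnings that Brooks is after.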
Paul Miller is a former contributing editor for Control. He now works for ARC Advisory Group.