Advances in IT Improve Process Automation

Virtualization, Thin Clients and Other Technologies Borrowed From IT Data Centers Are the Next Big Thing in Process Automation

By Dan Hebert

Process plants and companies adopt IT trends much more cautiously than their commercial counterparts, since safety and availability must always take priority over cost and space savings. But when new IT trends such as virtualization and thin clients promise to increase safety and availability while lowering costs, process industry firms are compelled to take a closer look.

Virtualization and thin clients were initially adopted in data centers and other commercial IT applications to save money and space. Virtualization allows multiple operating systems and their associated applications to run simultaneously on one PC, so fewer PCs are required, yielding obvious savings in both up-front and operating costs (Figure 1), as well as in space requirements.
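For readers who want to see what that consolidation looks like in practice, the short Python sketch below lists the guest machines running on a single physical host. It assumes the open-source libvirt management library and its Python bindings purely for illustration; the article itself doesn't tie virtualization to any particular hypervisor or vendor. Each "domain" in the listing is a complete operating system plus its application, all sharing one set of hardware.

# Illustrative sketch only: enumerate the guest OS instances consolidated on one host.
# Assumes the libvirt Python bindings and a local QEMU/KVM hypervisor.
import libvirt

conn = libvirt.open('qemu:///system')        # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():        # each domain is one OS + application stack
        state = 'running' if dom.isActive() else 'stopped'
        # info() returns [state, maxMemKiB, memKiB, nrVirtCpu, cpuTimeNs]
        _, max_mem, _, vcpus, _ = dom.info()
        print(f'{dom.name():24s} {state:8s} {vcpus} vCPU(s), {max_mem // 1024} MiB')
finally:
    conn.close()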

For a data center with hundreds of server-level PCs, the lower costs and space savings from virtualization can be very compelling. But for process plants, it's just not worth it to sacrifice reliability for cost savings, particularly as a typical plant has just a few PCs running server-level applications such as HMIs, historians and I/O servers.

A typical end user expresses his doubts about the benefits of virtualization. "Our process control world is many orders of magnitude smaller, more stable and more contained than the IT world," says John Rezabek, process control specialist at Ashland, a specialty chemical company. "I would most likely seek a gradual cutover for our HMIs, so we can compare reliability and performance side-by-side with one-off HMI workstations. The physical and functional redundancy of multiple independent HMIs is arguably worth what we pay for the relatively stable and reliable hardware, assuming licensing costs are roughly equal."

So the burden of proof is on the technology providers, as they must show often skeptical process industry firms that virtualization can do more than save a few bucks by eliminating a couple of PCs.

Is Virtualization More Reliable?

It's certainly counterintuitive that putting multiple operating systems and applications onto a single PC, instead of following the old one-operating-system-per-PC model, could increase reliability, but that is, in fact, the case when virtualization is applied correctly.

"For disaster recovery, since we are not using auto-provisioning (which would provide additional benefits in this area), the improvements are simply this: If we have a proper backup and a server crash, using virtual servers, we can use any brand/version of server upon which to restore the image," explains John Dage, PE, the technical specialist for process controls at DTE Energy, an electric utility.

Without virtualization, a server crash could be much more problematic, as it would require replacing the server with a new PC. The new PC would need to run the same operating system as the old one, which could be a major issue depending on the vintage of that operating system. Even if the version weren't an issue, the configuration could be, as server-level applications generally require custom configuration of the operating system.

"Traditional non-virtual ghost images required lots of hands-on work to deal with incompatibility of firmware and peripherals. Virtualization takes that out of the picture," notes Dage.

Simply put, an operating system and its associated application can be easily mirrored on a second PC in a virtualized environment, and this mirrored instance can be brought online immediately. Even without real-time mirroring, recovery from a PC failure is much quicker with virtualization.
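One hedged illustration of that mirroring idea: with a hypervisor that supports live migration (libvirt/KVM is assumed here for illustration, and the host and guest names are hypothetical), a running guest can be moved to a standby machine while it continues to run.

# Illustrative sketch only: move a running guest to a standby host.
# Host URI and guest name are hypothetical; libvirt/KVM is assumed.
import libvirt

src = libvirt.open('qemu:///system')                    # primary (failing) host
dst = libvirt.open('qemu+ssh://standby-host/system')    # hypothetical standby host
try:
    dom = src.lookupByName('hmi-server')                # hypothetical guest name
    # Live-migrate the running guest; it stays online during the move.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
finally:
    src.close()
    dst.close()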

Mallinckrodt LLC is the pharmaceuticals business unit of Covidien, a global healthcare products company. Mallinckrodt is the world's largest supplier of both controlled substance pain medication and acetaminophen. "When a PC fails, the average time to rebuild a PC has been eight to 10 hours, but the worst case with virtualization is 30 minutes," notes Tom Oberbeck, a senior electrical engineer at Mallinckrodt.

So reliability is increased not by reducing potential PC failures, but by making recovery from a failure much quicker and simpler. And because fewer PCs need to be purchased, users can afford more expensive, more reliable server-class machines with features such as redundant power supplies and hard disks, further increasing reliability.
