By now, many process control practitioners are familiar with virtual machines and their advantages when it comes to the deployment and management of process automation applications. In essence, a virtual machine uses a piece of software called a hypervisor to abstract the application and its operating system from the details of the underlying computer hardware. Multiple application images, or guests, together with their requisite operating systems (even legacy ones no longer supported by their creator) can run on a single host server or PC, often substantially reducing hardware footprint while simultaneously extending an application's useful life.
But it turns out that virtual machines (VMs) were just another step in the continued convergence of operational technologies with those of the rapidly advancing IT world. Containerization and orchestration, next-generation virtualization concepts created to streamline the management of applications in cloud datacenters, are now inspiring new automation architectures that promise to ultimately liberate end users from the painful and disruptive practice of wholesale digital control system migrations that have plagued the process industries for the past 30 years.
Containers aren't VMs
Containers, and containerization, represent in many ways a more granular, lower-overhead approach to virtualization than the virtual machine. Multiple lightweight containers, each of which includes application code packaged up with all of its dependencies (runtime, system tools, system libraries and settings), plug into a container engine that sits atop the operating system infrastructure (Figure 1). In the Linux-based, open-source community, Docker is the most widely known container approach, although a version of Docker for Microsoft Windows has also been developed.
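To make the idea concrete, here's a minimal sketch using Docker's Python SDK (the docker package): a tiny application and its runtime are packaged into an image, then run on the local container engine. The image name, tag and application script are illustrative only, and a running Docker engine is assumed.

```python
# Minimal sketch: package a small "application" together with its runtime
# into a container image, then run it on the local container engine.
# Assumes Docker is installed and the `docker` Python SDK is available.
import pathlib
import tempfile

import docker

DOCKERFILE = """\
# Runtime and system libraries are bundled into the image
FROM python:3.11-slim
# The application code itself
COPY app.py /app/app.py
# What runs when the container starts
CMD ["python", "/app/app.py"]
"""

APP = 'print("Hello from an isolated, self-contained environment")\n'

client = docker.from_env()  # talk to the local container engine

with tempfile.TemporaryDirectory() as build_dir:
    path = pathlib.Path(build_dir)
    (path / "Dockerfile").write_text(DOCKERFILE)
    (path / "app.py").write_text(APP)

    # Build an image that carries the app plus everything it needs to run.
    image, _ = client.images.build(path=build_dir, tag="demo-app:0.1")

# Run the container; its filesystem, libraries and runtime are its own.
print(client.containers.run("demo-app:0.1", remove=True).decode())
```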
Figure 1. Containers represent a more granular, lower-overhead approach to virtualization than traditional virtual machines.
"Everything that an application needs to execute is wrapped up in its container," explains Andrew Kling, senior director, system architecture and cybersecurity, Schneider Electric. "This creates a single, contained environment that's different than the container next to it."
From a systems perspective, each of these "machines" functions individually, and can be connected through a network. "Containers are independent and, like standalone machines, can be selected and deployed as needed to a particular platform, giving you a great deal of flexibility," Kling says. "And, as individual, closed containers, they also give you great security."
Ralf Jeske, ABB product manager for control systems and field device engineering, likens software containers to the familiar intermodal shipping containers that are the backbone of global commerce. "These containers are characterized by standardized dimensions and hooks," he explains. "They can contain anything and be transported and moved with compatible equipment, and there's a unique ID and an inventory list describing the content of the container."
Airlines, too, have developed specialized containers and unit load devices for fast loading and unloading of airplane cargo. The container details may vary by airplane model (target operating system), but like software containers, they peacefully coexist and keep to themselves, regardless of their respective contents.
Similarly, the process automation industry has its own special needs when it comes to software containers, Jeske says. "They need to be able to contain a wide range of applications, such as advanced process control, process optimization and asset management, and to exchange data via a standard interface such as OPC UA."
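As a rough illustration of what such a standard interface looks like from the application side, the sketch below reads a single value over OPC UA using the open-source python-opcua package. The endpoint address and node identifier are placeholders, not part of any vendor's actual offering.

```python
# Illustrative only: read one value from an OPC UA server using the
# open-source python-opcua package. Endpoint and node ID are placeholders.
from opcua import Client

ENDPOINT = "opc.tcp://localhost:4840"    # hypothetical server address
NODE_ID = "ns=2;s=Reactor1.Temperature"  # hypothetical node identifier

client = Client(ENDPOINT)
client.connect()
try:
    node = client.get_node(NODE_ID)
    print("Current value:", node.get_value())
finally:
    client.disconnect()
```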
Further, process automation containers need to be movable among locations and hosting computer hardware, Jeske says. And for larger operations, an orchestration tool such as Kubernetes (created by Google, then released to the open-source community) can be used to maintain, organize and manage one's inventory of containers, automatically rebalancing computing loads according to resource availability.
Orchestration and partitioning
This ability of orchestration tools such as Kubernetes to actively work to restore what's called a declarative distributed system configuration among containers makes for improved system resilience, according to Harry Forbes, research director with the ARC Advisory Group. "Another important advantage of orchestration tools is the ability to readily roll back changes," Forbes says. "If you make a mistake, reverting to an earlier version of the container is fast and straightforward."
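A rough sense of what that declarative approach looks like in practice: with the Kubernetes Python client, an operator declares the desired image version and number of replicas, the orchestrator works continuously to match that state, and rolling back amounts to re-declaring the previous version. The deployment name, namespace and image tags below are hypothetical.

```python
# Sketch of declarative orchestration with the Kubernetes Python client.
# The orchestrator continuously reconciles the cluster toward this spec;
# "rolling back" amounts to declaring the previous image version again.
# Deployment name, namespace and image tags are hypothetical.
from kubernetes import client, config

config.load_kube_config()   # use local kubeconfig credentials
apps = client.AppsV1Api()

def declare(image: str, replicas: int = 3) -> None:
    """Declare the desired state; Kubernetes restores it if pods fail."""
    apps.patch_namespaced_deployment(
        name="apc-optimizer",
        namespace="process-control",
        body={"spec": {
            "replicas": replicas,
            "template": {"spec": {"containers": [
                {"name": "apc-optimizer", "image": image}
            ]}},
        }},
    )

declare("registry.example.com/apc-optimizer:2.1")  # roll forward
declare("registry.example.com/apc-optimizer:2.0")  # quick rollback
```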
Strong partitioning among containers is another important differentiator relative to virtual machines, and in fact inspired their development and naming.
"Most system administrators or UNIX application developers are familiar with the concept of 'dependency hell': making available all of the system resources in order for an application to run, and then coordinating the same as different applications are updated on a server machine," says Tim Winter, chief technology officer for Machfu, a developer of Industrial IoT connectivity solutions.
"It can often be a tricky and tedious exercise to maintain multiple application dependencies across all applications that are provisioned to run on the same server," Winter explains. "Containers allow each application to bundle a controlled set of dependencies with the application, so that these applications can independently have stable execution environments, partitioned and isolated from other containerized applications on the same server. Even application updates are often packaged and deployed as container updates for convenience. Thus, containers provide strong partitioning between application components on a target machine."
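A small illustration of that partitioning, again using Docker's Python SDK: two containers on the same host each see only the runtime baked into their own image, so neither can disturb the other's dependencies. The image tags are examples only.

```python
# Two containers on one host, each with its own isolated runtime.
# Neither sees, or can break, the other's dependencies.
import docker

client = docker.from_env()

for image in ("python:3.8-slim", "python:3.12-slim"):
    output = client.containers.run(
        image,
        ["python", "-c", "import sys; print(sys.version.split()[0])"],
        remove=True,
    )
    print(f"{image} reports Python {output.decode().strip()}")
```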
Dennis Brandl, chief consultant with BR&L Consulting, adds that "Many companies found that the cost and effort involved with implementing a physical machine was much higher than with a virtual one." However, traditional VMs carried with them their own costs. "Sure, you may have one server running multiple applications, but you're also running multiple virtualized operating systems as well as a hypervisor, which is itself a miniaturized operating system," adds Brandl. "Bloat is a big issue, plus all of that software still has to be updated and managed."
Hardware and software decoupled
Even as VMs made their namesake splash at the HMI and application server level, virtualization was starting to creep into other areas of process automation as well. The move to software-configurable I/O, for example, allowed Honeywell Process Solutions to begin considering the broader possibilities afforded when software and hardware development are decoupled, not just at the server level, but for controllers and I/O as well, according to Jason Urso, vice president and chief technology officer.
"The more we deploy these newer technologies, the better we understand their power and capabilities, and that leads to other new ideas," Urso says. One such evolution in Honeywell's offering is the Experion LCN (ELCN), which effectively emulates the company's aging TDC 3000 system as software, and promises "infinite longevity" for its customers' intellectual property investments.
"It's 100% binary compatible and interoperable with the old system," said David Patin, distinguished engineering associate, control systems, ExxonMobil Research & Engineering, at the Honeywell Users Group Symposium in June 2018, when the ELCN was unveiled to the public. "Current TDC code runs unmodified in this virtual environment, greatly reducing the technical risks. Intellectual property such as application code, databases and displays is preserved."
Virtualization of the TDC environment has come with some added benefits, including the ability to use Honeywell's cloud-based Open Virtual Engineering Platform to engineer TDC solutions; lower-cost, smaller-footprint training simulators; peer-to-peer integration of virtualized HPM controller nodes with current-generation C300/ACE nodes; and integration with ControlEdge and Unit Operations Controllers.
For its part, Honeywell has aggressively pursued its virtualization vision since then, launching at the 2019 user group meeting its Experion PKS HIVE, for Highly Integrated Virtual Environment. In short, the solution features virtualization and the decoupling of hardware/software dependencies at the application, controller and I/O levels.
Another container instantiation making inroads in the process industry is the Module Type Package (MTP). MTPs are in essence containers created to ease the integration and automation of modular process plants via pre-automated modular units that can easily be added, arranged and adjusted according to production needs.
Consistent with a standardized methodology and framework, each MTP includes all the information necessary to integrate the module into a modular plant, such as communication services, a human-machine interface (HMI) description and maintenance information.
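As a purely conceptual sketch of what "reading a module's package" might look like, the snippet below walks a hypothetical XML manifest and lists the interfaces it declares. The element names are invented for illustration and do not reflect the actual MTP/AutomationML schema.

```python
# Purely illustrative: walk a (hypothetical) module manifest and list the
# interfaces it declares. Element names here are invented for the sketch
# and do NOT reflect the real MTP/AutomationML schema.
import xml.etree.ElementTree as ET

MANIFEST = """\
<ModuleTypePackage name="DosingUnit-A">
  <CommunicationService protocol="OPC UA" endpoint="opc.tcp://module:4840"/>
  <HMIDescription picture="dosing_unit_overview"/>
  <Maintenance interval_hours="4380"/>
</ModuleTypePackage>
"""

root = ET.fromstring(MANIFEST)
print("Module:", root.get("name"))
for child in root:
    print(f"  {child.tag}: {dict(child.attrib)}")
```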
In ABB's MTP offering, for example, the company's ABB Ability System 800xA operates the process and orchestrates the intelligent modules. An open-architecture backbone links the orchestration layer to the module layer, with communication via OPC UA.
Modular-enabled automation reduces cost, risk and schedule by eliminating non-standard interfaces; integrating an intelligent module into the orchestration system takes hours instead of the days required for traditional package-unit and skid integration.
"All types of projects today, from small to large scale, use more and more prefabricated skids and modules," says ABB's Jeske. "The ability to maintain a process automation container allows for more reuse as well as cheaper and faster plant build-up. Over time, packages, including some highly sophisticated packages and modules, may be leased or paid for by use.
"After a pilot phase, businesses can now start applying MTP in plants for expansions," Jeske says. "And plans for new, 'purely modular' production lines are being made as we speak."
Win-win for developers, users
At an industry level, process automation practice is moving along a path already traversed in adjacent industries, such as finance, telecommunications and healthcare, notes Schneider Electric's Kling. "In an industrial control system within the process automation space, we're now picking up on that, taking their successes and applying them to our industry."
"Today, we don't always realize when we're using containers, like when we're interacting with cloud-based services," Kling adds. "And applications that rely on cloud services may already be taking advantage of containers. Where we are going, those containers will move closer and closer to the edge, on-premise or to the cloud, or to embedded devices that live on the asset as well.
"We have seen these containers used in product-registration platforms and in historian-, engineering- or analysis-in-the-cloud applications. These systems have long been supported by companies like Microsoft, Red Hat and others because they're native to their cloud-based platforms."
Project execution, in particular, has benefitted tremendously from container technology in that automation contractors can now build out entire virtualized control systems, test and verify them against virtualized models of the process, and only then download them to local hardware.
"That's compelling for us because, in a big project, doing this the traditional way could take 18 months," says Honeywell's Urso. "We used to have armies of people working on physical boxes at some remote place. Now, we can have our people working on the project that's hosted in a cloud datacenter, in a set of virtual machines. And when it's complete, we copy the virtual machine to the physical equipment."
"You'll never have to hit Setup.exe again!" exclaims Kling, who clearly has spent some time lugging DVDs (perhaps CDs and maybe floppies, too?) from operator station to operator station. "You're delivering a container with the application set up already, so that entire concept will disappear. Containerized software will also offer new levels of flexibility because you can build a repository of these templated, 'ready to go' applications and deploy them as needed."
And because the container is outside the actual operating system, the OS can be upgraded without having a significant effect on the applications. "We're starting to see the different layers being separated," says Kling. "The software and hardware that were built specifically to work together are becoming more independent, ultimately allowing the hardware, operating system and applications to all evolve independently."
Upgrades and replacements will also be much less painful, notes BR&L's Brandl. "We may not be talking about a 'no shutdown' replacement. However, we are talking about a significantly faster changeover, and that's a good thing from an end-user perspective."
ARC's Forbes adds: "Software containers provide two major values to software developers and end users. First, an automated means to deploy and manage multiple distributed applications across any number of machines, physical or virtual. Second, a container software development process that creates a repository of 'container images': software deliverables that can be created collaboratively, and include the artifacts required for running an application within a specific machine environment."
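That "repository of container images" maps onto everyday container tooling. As one illustrative example using Docker's Python SDK, a locally built image can be tagged and pushed to a shared registry, from which any number of machines can pull and run the identical artifact; the registry address and image names below are placeholders.

```python
# Sketch: publish a locally built image to a shared registry so it can be
# pulled and run on any number of machines. Registry address is a placeholder.
import docker

client = docker.from_env()

REPO = "registry.example.com/automation/demo-app"  # hypothetical registry

image = client.images.get("demo-app:0.1")  # image built earlier in this article's sketch
image.tag(REPO, tag="0.1")                 # name it for the registry
for line in client.images.push(REPO, tag="0.1", stream=True, decode=True):
    print(line)                            # push progress

# Any other machine with access can now deploy the same artifact:
# client.images.pull(REPO, tag="0.1")
```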
Container development, deployment and orchestration software tools have matured phenomenally during the last five to 10 years, Forbes continues. They now far surpass traditional embedded system software technology in their capability to deliver and manage distributed, high-availability applications such as the automation applications of tomorrow's distributed control systems. "This is why the effective use of container deployment and orchestration software is likely to be a critical success factor for future process automation systems," Forbes says.
Do containers = open?
Figure 2. Among the first process automation-specific containers, MTPs are intended to ease the integration of modular process plants.
With the notable emergence of leading open-source, Linux-based container and orchestration options, it's fair to ask whether such technologies will advance the cause of the Open Process Automation Forum (OPAF) to identify a path toward interoperable, plug-and-play process control.
The short answer? It depends.
"The use of containers allows the abstraction of applications from the hardware, or the execution of the same application on different hardware depending on the particulars of the installation," notes Luis Duran, ABB global product line manager, safety. "If the interfaces are properly defined, containerization could enable the portability of applications or interoperability of different technologies, potentially even technologies from different suppliers.
"Containerization also allows a manufacturer to protect intellectual property and domain knowledge as well as maintain productivity even during technology upgrades," Duran continues. "And since hardware is abstracted from the application, one can envision scenarios in which the application is transferred to a more robust platform with minimum downtime."
Brandl notes that while there are real-time versions of Linux available, there currently are no "real-time," open-source containerization/orchestration schemes suitable for deterministic control tasks. "It probably won't be led by one of the majors, but it would be great if some of the second- and third-tier automation companies came together to create a real-time Docker and Kubernetes implementation."
Schneider's Kling predicts that the move to a container-based realization of the OPA vision will begin with standards such as IEC 61499, which supports application-centric automation design independent of the underlying hardware devices.
"IEC 61499 features an event-driven model built around function blocks, which solves the problem of ensuring portability, configurability and interoperability across vendors and, at the same time, software and hardware independence," explains Kling. "This standard allows us to develop containers independently that will function across platforms. Companies will continue to build packages that best suit their offers, but the software will become increasingly interoperable because of standards like IEC 61499."
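As a loose conceptual sketch only (not IEC 61499's actual notation or execution semantics), the event-driven function block idea can be approximated as logic that runs when an input event arrives and then emits an output event to downstream blocks:

```python
# Conceptual sketch of an event-driven function block, loosely inspired by
# IEC 61499. This is not the standard's notation or execution model; it only
# illustrates logic that runs when an input event arrives and then signals
# downstream blocks with an output event.
from typing import Callable, List

class FunctionBlock:
    def __init__(self, algorithm: Callable[[float], float]):
        self.algorithm = algorithm
        self.subscribers: List["FunctionBlock"] = []  # downstream blocks

    def connect(self, downstream: "FunctionBlock") -> None:
        self.subscribers.append(downstream)

    def on_event(self, data: float) -> None:
        """Input event: run the algorithm, then emit an output event."""
        result = self.algorithm(data)
        for block in self.subscribers:
            block.on_event(result)

# Hypothetical chain: scale a raw sensor value, then clamp it to 0..100.
scale = FunctionBlock(lambda raw: raw * 0.1)
clamp = FunctionBlock(lambda x: max(0.0, min(100.0, x)))
out = FunctionBlock(lambda x: print(f"PV = {x:.1f}") or x)

scale.connect(clamp)
clamp.connect(out)
scale.on_event(873.0)  # input event propagates through the chain
```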
So, while containerization and orchestration technologies can advance interoperability and openness, it's also clear that virtualization strategies can be leveraged and advanced in a proprietary fashion, too. ARC's Forbes recognizes this orthogonality, but believes that containerization and orchestration need to be part of industry's approach if it is to liberate its intellectual property from "control languages and idioms that often are not machine readable and are always highly proprietary.
"Standardized container and orchestration tools offer an exit from this dead end."