Defining Ethernet determinism depends on the application

Nov. 22, 2016
With today’s network technology, even Ethernet and wireless are almost always fast enough.

As automation professionals, one concern we have with control loops is ensuring we can support real-time control. Historically, when Ethernet ran at 10 Mb/s over shared media with multiple drops per segment, collisions were a significant concern and an impediment to adoption because we couldn't guarantee delivery of every message, every time, at a repeatable frequency. Ethernet wasn't "real time" enough, and hence not deterministic, or so we believed. Because of that belief, we waited until we had faster, switched networks that all but eliminate the chance of a message not getting where it should be when it should. We still lose packets; however, we can recover quickly enough to satisfy our definitions of determinism and real time.

In fact, what we're really confirming is that the definition of determinism depends on the application. In factory automation or robotics, response times often need to be in milliseconds for each discrete part or movement. Continuous processes, being essentially analog, only need to be scanned at a high enough frequency to let us model the system, with "high enough" generally accepted as six times (6x) the process response frequency, where response time is the process time constant plus the process delay. Many practitioners use a rule of thumb of 10x instead, though I suspect this is simply to provide a margin of error, and because it's easier to move a decimal point than to divide by six.
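To put numbers on that rule, here's a minimal sketch in Python. The loop values (a 30-second time constant and 10-second delay) are hypothetical, chosen only to illustrate the arithmetic:

```python
# Rough scan-period check based on the 6x / 10x rules above.
# The loop values below are illustrative, not from any real application.

def max_scan_period(time_constant_s: float, delay_s: float, factor: float = 6.0) -> float:
    """Slowest acceptable scan period for a loop whose process response
    time (time constant + process delay) must be sampled 'factor' times
    faster than it responds."""
    response_time = time_constant_s + delay_s
    return response_time / factor

# Example: a loop with a 30 s time constant and 10 s process delay.
for factor in (6.0, 10.0):
    period = max_scan_period(30.0, 10.0, factor)
    print(f"{factor:>4.0f}x rule: scan at least every {period:.1f} s")
```

For this hypothetical loop, the 6x rule allows a scan every 6.7 seconds, while the 10x rule tightens that to 4 seconds, which is why either is comfortably met by typical DCS scan rates.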

Another underlying assumption in conventional PID is that control executes on a periodic basis, which implies a regular scan and update rate. Fortunately, the scan period for continuous processes, where flow is likely the fastest-changing loop, is normally on the order of seconds.
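To see where that assumption hides, consider a textbook positional PID. This sketch is generic, not any vendor's implementation, and the gains are placeholders; note that both the reset and rate terms are scaled by a fixed execution period dt, so irregular updates would silently mis-weight them:

```python
# Minimal positional PID, assuming a fixed execution period dt_s.
# Gains and variable names are illustrative placeholders.

class PeriodicPID:
    def __init__(self, kp: float, ti_s: float, td_s: float, dt_s: float):
        self.kp, self.ti, self.td, self.dt = kp, ti_s, td_s, dt_s
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        # Both the reset and rate terms bake in the fixed dt:
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * (error + self.integral / self.ti + self.td * derivative)
```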

Control systems and their networks are complicated enough to design and build without having to derive a determinism requirement for every loop and then design hardware to match. So instead, we configure our systems to scan the I/O at one, or perhaps a few, scan rates selected to suit the applications in the facility. This is one reason why PLC scan rates are in milliseconds (as required by the factory applications from which PLCs evolved), while a DCS, which scans many more points per cycle, can have scan rates of seconds. A continuous process doesn't change significantly that quickly, and if it does, a different system, such as a safety instrumented system (SIS), provides the necessary extra protection.

Wireless sensor networks (WSNs), on the other hand, have typical update rates of 15 seconds or longer and, to preserve battery life, transmit only when the process has changed outside a prescribed "window," so updates arrive on a non-periodic basis, making them far from deterministic. Another complicating factor with WSN applications is that, because they're mesh systems, the signal is retransmitted multiple times, increasing the risk that an individual update is lost, so the control system and algorithm must also be able to handle a loss of communications.
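The reporting logic behind that battery-saving behavior can be sketched in a few lines; the deadband and default refresh interval here are hypothetical placeholders, not values from any particular WSN product:

```python
# Sketch of exception-based ("windowed") reporting as described above.
# DEADBAND and DEFAULT_UPDATE_S are hypothetical placeholder values.

import time

DEADBAND = 0.5           # transmit only if the value moves this far...
DEFAULT_UPDATE_S = 60.0  # ...or if this much time has passed anyway

def should_transmit(value: float, last_sent: float, last_sent_time: float) -> bool:
    moved = abs(value - last_sent) > DEADBAND
    stale = (time.monotonic() - last_sent_time) > DEFAULT_UPDATE_S
    return moved or stale
```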

If this sounds similar to some of the challenges associated with legacy 10 Mb/s Ethernet, where updates could be affected by a collision or a malfunctioning node, perhaps our systems aren't, nor do they need to be, as deterministic as we believe. As long as we have reliable communications with the WSN access point, the control system can easily be made to believe that updates are as regular as necessary to be viewed as deterministic.

Terry Blevins, Mark Nixon and Marty Zielinski published an interesting paper, "Using Wireless Measurement in Control Applications," that describes one approach to modifying the PID algorithm, in particular its reset (integral) component, to handle irregular signal updates while still retaining good control. Other manufacturers are taking different approaches. Even if your system doesn't have a specific solution, the processing power of today's control systems makes it possible to build simple process models that fill in the gaps between the updates confirming actual conditions, much as we've long done with manually analyzed process samples in more complex applications.
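As a rough illustration of the general idea, the sketch below shows a PI controller whose reset contribution is weighted by the actual time elapsed between measurement updates rather than a fixed period. This is my paraphrase of the concept for illustration, with assumed names and tuning values; it is not the authors' published algorithm:

```python
# Simplified sketch of a PI controller whose reset term uses the
# actual elapsed time between (irregular) measurement updates,
# in the spirit of the approach the paper describes.
# A paraphrase for illustration only, not the authors' code.

import math

class IrregularUpdatePI:
    def __init__(self, kp: float, reset_s: float):
        self.kp = kp
        self.reset = reset_s
        self.filter = 0.0   # positive-feedback form of the reset term
        self.output = 0.0

    def on_new_measurement(self, setpoint: float, pv: float, elapsed_s: float) -> float:
        # Weight the reset contribution by how long we actually waited
        # since the last update, instead of assuming one fixed period.
        self.filter += (self.output - self.filter) * (1.0 - math.exp(-elapsed_s / self.reset))
        self.output = self.kp * (setpoint - pv) + self.filter
        return self.output
```

Between updates, the controller output is simply held, which is consistent with the assumption that the process hasn't moved outside the reporting window.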

In the end, as demonstrated above, everyone’s definition of real time and hence determinism depends on the application. Or perhaps we can make the argument that determinism no longer has the same clout as it did when things were slower.
