
Get going with Gigabit

March 24, 2006
Synchronous protocols based on IEEE 1588 deliver workable determinism, but are they necessary given the faster Ethernets now available? Paula Doyle, doctoral researcher, explains their significance.
By Paula Doyle, University of Limerick, Ireland

SINCE ITS standardization, Ethernet's base rate has increased from 10 Mbps to 100 Mbps in 1995, to 1,000 Mbps (1 Gbps) in 1998 and to 10 Gbps in 2002. So what is the significance for latency, response time, and determinism in the network?

The latency of a network is the time it takes for a packet (or a byte) to travel from source to destination. Latency is cumulative, and the total latency from source to destination does not depend solely on base rate.

Determinism describes a system in which time evolution can be predicted exactly. Determinism, like real time, is not necessarily related to base rate, and a 1 kbps system could be more deterministic than a 1 Gbps system. 

At wire level, there are two distinct delays: transmission and propagation. The propagation time depends on both the distance and the speed of the signal through the particular medium. For 10/100/1000BaseT, that speed is the same (~2 × 10^8 m/s), as they all employ copper, so base rate has no bearing on propagation delay. Conversely, the transmission time relates directly to base rate. Typical Ethernet-based office installations employ half-duplex, shared-medium access. These nodes implement MAC-level CSMA/CD with an exponential back-off algorithm to determine medium access. This algorithm introduces random delays along with the potential for transmission failure, making it unsuitable for real-time applications requiring determinism. On top of the unsuitable medium access scheme, half-duplex communication over Gigabit Ethernet (GbE) does not offer the expected 10x speed benefits across the board.
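The distinction can be made concrete with a short calculation. The sketch below assumes the ~2 × 10^8 m/s copper signal speed quoted above; the function names and example frame size are illustrative only. It shows propagation delay staying fixed while transmission delay scales with base rate.

```python
# Illustrative sketch of the two wire-level delay components described above.
# The signal speed (~2e8 m/s in copper) and base rates come from the article;
# function names and example values are for demonstration only.

SIGNAL_SPEED_M_PER_S = 2e8  # approximate propagation speed in copper twisted pair

def propagation_delay_s(distance_m: float) -> float:
    """Time for the signal to travel the cable; independent of base rate."""
    return distance_m / SIGNAL_SPEED_M_PER_S

def transmission_delay_s(frame_bytes: int, rate_bps: float) -> float:
    """Time to clock the frame's bits onto the wire; scales with base rate."""
    return (frame_bytes * 8) / rate_bps

if __name__ == "__main__":
    # A 100 m segment has the same propagation delay at 10 Mbps, 100 Mbps and 1 Gbps...
    print(f"propagation over 100 m: {propagation_delay_s(100) * 1e6:.2f} us")
    # ...while the transmission delay of a 1500-byte frame drops with each rate step.
    for rate in (10e6, 100e6, 1e9):
        print(f"1500-byte frame at {rate / 1e6:.0f} Mbps: "
              f"{transmission_delay_s(1500, rate) * 1e6:.2f} us")
```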

At the gigabit base rate, retaining the original 64-byte minimum frame would reduce the maximum segment length to 20-25 m--too short for most applications. Fiber solutions support much larger segments.

The desire to retain the maximum segment length of 100 m led to carrier extension, which increases the minimum slot size to 512 bytes. A comparison of a 64-byte frame at all three base rates shows transmission times of 51.2 µs at 10 Mbps, 5.12 µs at 100 Mbps and 4.096 µs at 1 Gbps, a transmission time reduction of only 20% when migrating from 100 Mbps to 1 Gbps. Frames equal to or larger than the 512-byte minimum enjoy the expected 10x reduction in transmission time.
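These figures fall out of a simple calculation. The sketch below (constant and function names are illustrative) pads short frames to the 512-byte slot only when carrier extension applies, reproducing the numbers quoted above.

```python
# Reproduces the half-duplex transmission-time figures quoted above.
# The 512-byte minimum slot (carrier extension) applies to half-duplex
# gigabit operation; the function name and constants are illustrative.

GIGABIT_MIN_SLOT_BYTES = 512  # carrier-extended slot for half-duplex 1 Gbps

def transmission_time_us(frame_bytes: int, rate_bps: float,
                         carrier_extension: bool = False) -> float:
    """Transmission time in microseconds, padding short frames to the
    extended slot when carrier extension is in effect."""
    wire_bytes = frame_bytes
    if carrier_extension:
        wire_bytes = max(frame_bytes, GIGABIT_MIN_SLOT_BYTES)
    return (wire_bytes * 8) / rate_bps * 1e6

# 64-byte frame on the three base rates (carrier extension only at 1 Gbps):
print(transmission_time_us(64, 10e6))                          # 51.2 us
print(transmission_time_us(64, 100e6))                         # 5.12 us
print(transmission_time_us(64, 1e9, carrier_extension=True))   # 4.096 us
# A 512-byte frame needs no extension and gets the full 10x gain:
print(transmission_time_us(512, 100e6))                        # 40.96 us
print(transmission_time_us(512, 1e9, carrier_extension=True))  # 4.096 us
```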

Switched Ethernet with full-duplex links, where each node exists in a unique collision domain, is the preferred solution for real-time Ethernet. Because the CSMA/CD medium access scheme is not needed, switched Ethernet can support shorter frames without fear of undetectable collisions. When full duplex is used with Gigabit Ethernet, the transmission time of every frame falls by the full factor of 10, giving 0.512 µs for a 64-byte frame.
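Reusing the illustrative helper from the sketch above with carrier extension disabled gives the full-duplex figure:

```python
# Full-duplex switched GbE needs no carrier extension, so a 64-byte frame
# gets the full 10x reduction (reuses transmission_time_us from the sketch above).
print(transmission_time_us(64, 1e9))   # 0.512 us, vs 5.12 us at 100 Mbps
```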

The additional intermediate switches along the transmission path add to the overall latency. Switches incur transit and buffering delays, the values of which are switch-dependent. Transit delays can be minimized with non-blocking, wire-speed switches, which give every port the capacity to operate simultaneously at wire speed. The choice of switch architecture also affects transit delay: store-and-forward switches incur longer delays than cut-through switches, because the entire frame must be received before forwarding can begin.

The buffering delay increases with network congestion. Well-designed real-time systems should aim to limit congestion and hence minimize this delay. The number of switches between hard real-time nodes should also be minimized.
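A simplified per-hop model illustrates both effects: it assumes a store-and-forward switch waits for the whole frame while a cut-through switch forwards once the 14-byte header is read, and that any backlog already queued on the output port drains first. Real switch delays are implementation-dependent, as noted above, and the values here are illustrative only.

```python
# Simplified per-hop switch latency model for the delays discussed above:
# a store-and-forward switch must receive the whole frame before forwarding,
# a cut-through switch forwards once the header is read, and any backlog
# already queued on the output port adds a buffering delay. Values and
# constants are illustrative, not taken from a particular switch datasheet.

HEADER_BYTES = 14  # Ethernet destination, source and type/length fields

def switch_delay_us(frame_bytes: int, rate_bps: float,
                    queued_bytes: int = 0, cut_through: bool = False) -> float:
    forwarding_bytes = HEADER_BYTES if cut_through else frame_bytes
    return ((forwarding_bytes + queued_bytes) * 8) / rate_bps * 1e6

# One lightly loaded gigabit hop, 1500-byte frame:
print(switch_delay_us(1500, 1e9))                      # 12 us store-and-forward
print(switch_delay_us(1500, 1e9, cut_through=True))    # ~0.11 us cut-through
# The same hop with two 1500-byte frames already queued ahead of ours:
print(switch_delay_us(1500, 1e9, queued_bytes=3000))   # 36 us
```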

Prioritizing traffic according to its real-time demands can help achieve better determinism. The priority standard for Ethernet, IEEE 802.1p, is implemented within the VLAN standard IEEE 802.1Q. IEEE 802.1p offers eight priority levels that switches can use to fast-track time-critical frames. With correct configuration, switches can be deterministic and operate at wire speed with minimal added latency, while eliminating the need for CSMA/CD and the extended minimum slot size.
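The priority travels in the 802.1Q tag itself: a 16-bit TPID (0x8100) followed by a 16-bit TCI carrying the 3-bit priority code point, a 1-bit DEI and a 12-bit VLAN ID. The sketch below builds that four-byte tag; the example priority and VLAN values are illustrative only.

```python
# Sketch of how the IEEE 802.1p priority rides inside the IEEE 802.1Q VLAN
# tag, which is inserted after the source MAC address: TPID 0x8100, then a
# TCI holding the 3-bit priority (PCP), 1-bit DEI and 12-bit VLAN ID.
# The example values (priority 6, VLAN 10) are illustrative only.
import struct

def vlan_tag(priority: int, vlan_id: int, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag carrying the 802.1p priority."""
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 0xFFF
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

print(vlan_tag(priority=6, vlan_id=10).hex())  # '8100c00a'
```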

In Ethernet systems, the greatest part of the cumulative latency lies in the nodes. Hardware latency, measured in microseconds, is no match for software latency, which can reach tens of milliseconds in a system node. Operating systems, protocol stacks, and device drivers all add to the processing time, and even finely tuned software applications will be a far bigger contributor to latency than hardware transmission and propagation times. Much as memory access bottlenecks the processor, Ethernet system developers need to concentrate on the bottleneck at the node and reduce the delay before a frame is presented at the wire. Reducing that delay would give the greatest improvement in system performance; the node bottleneck is also probably the greatest cause of seemingly poor performance when migrating to GbE networks.

  About the Author
Paula Doyle is a doctoral researcher at the Circuits and Systems Research Centre, University of Limerick, Ireland. Contact her at [email protected]. This article originally appeared in the U.K.’s "Industrial Ethernet Handbook" in 2005.