Breaking the laws of science
Intelligent header compression increases Ethernet packet throughput over the air to capacities beyond 1 Gb/s. Because this breakthrough technology breaks the physical laws of science, it finally makes sense to compare air throughput numbers from different microwave system vendors. Until recently, there hasn't been much point in comparing the air throughput capacities of different microwave systems. The amount of information that fits into a radio frequency (RF) channel is governed by the physical and mathematical laws of science, so only very small variations in usable throughput (the number of bits per second transmitted into the air) have been possible. These variations were typically achieved through modulation efficiencies and by reducing the amount of radio overhead transmitted. Any significant difference in the air throughput figures provided by 2 microwave vendors most likely meant they were presenting Ethernet throughput differently: one providing Layer 1 throughput rates, the other providing Layer 2 throughput rates. Now, the intelligent header compression technology used in some microwave packet radios is breaking the laws put forth by Harry Nyquist and Claude Shannon to push more bits per second into the air.
Nyquist: Calculating symbol rate and bit rate
In 1928, Harry Nyquist published a paper called Certain Topics in Telegraph Transmission Theory. In the paper, he showed that:
- Up to 2B independent pulse samples could be sent through a system with bandwidth B.
- Up to B independent Quadrature Amplitude Modulation (QAM) pulses could be sent through a system with bandwidth B.
What does Nyquist's theorem mean when applied to microwave?

Calculating symbol rate

First, it is important to remember that the available bandwidth B cannot be chosen arbitrarily. Bandwidth is allocated by regulatory and standards bodies such as the European Telecommunications Standards Institute (ETSI) and the Federal Communications Commission (FCC) in the United States:
- In ETSI-governed countries, the bandwidth allocated for point-to-point microwave transmission can be 7, 14, 27.5/28, 56 or 112 MHz.
- In the United States, microwave bandwidth is allocated in multiples of 10 MHz, with 30 MHz and 50 MHz being the 2 most popular channel spacings.
With B = 28 MHz channel spacing, it is possible to transmit up to 28 × 10^6 QAM symbols per second in an ideal world. In reality, there is always a slight degradation compared to the ideal number, so that:

B = Symbol Rate × (1 + α)

The factor α is often referred to as roll-off. It takes into account all of the implementation-related impairments that reduce throughput. Today, typical roll-off values are in the range of 0.10 to 0.15. This means that with 28 MHz channel spacing and an α value of 0.12, it is possible to transmit around 25 Msymbols per second. The next question is: how many bits are transmitted with each QAM symbol?

Calculating bit rate

The answer is pretty easy; it depends on the number of QAM points. For example:
- With 4-QAM, 2 bits per symbol are transmitted.
- With 256-QAM, 8 bits per symbol are transmitted.
The general rule is that an M-QAM system transports log2(M) bits per symbol. The 28 MHz channel spacing example above showed that 25 Msymbols per second can be transported. With 256-QAM, that results in 25 × 8 = 200 Mb/s throughput. The only throughput variable is the roll-off factor, and its effect is almost negligible: using the lowest practical value, 0.10 instead of the 0.12 used in the example, the throughput would be about 203.6 Mb/s instead of 200 Mb/s. Now, the question is how much of the throughput over the air is user information and how much is overhead? Answering this question requires looking at the theory of error correction codes to determine the ultimate performance limit of communications systems.
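As a quick check of the arithmetic above, the symbol rate and bit rate can be computed in a few lines of Python. The function name is ours; the formula is the one given in the text:

```python
from math import log2

def air_throughput(channel_bw_hz, rolloff, qam_points):
    """Gross bit rate before error-correction overhead.

    Uses the text's relationship B = SymbolRate x (1 + alpha);
    an M-QAM symbol carries log2(M) bits.
    """
    symbol_rate = channel_bw_hz / (1 + rolloff)   # symbols per second
    return symbol_rate * log2(qam_points)         # bits per second

# 28 MHz channel, roll-off 0.12, 256-QAM: 25 Msym/s x 8 bits = 200 Mb/s
print(round(air_throughput(28e6, 0.12, 256) / 1e6, 3))  # -> 200.0
```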
Shannon: Improving error correction
In 1948, Claude Shannon published a theorem describing the maximum possible efficiency of error-correcting methods in the presence of noise interference and data corruption. Known as Shannon's law, the theorem forms the foundation of the modern field of information theory. It states that, given a noisy channel with channel capacity C and information transmitted at a rate R below C, codes exist that allow the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information almost error-free at any rate below the limiting rate C. Researchers have made many attempts to identify the best way to achieve, or at least approach, Shannon's limit; that is, to reach Shannon's capacity with an arbitrarily low error probability. Reed-Solomon codes, MLC codes and Viterbi codes are examples of error correction codes that have been widely adopted in the microwave industry. In 1993, a major step toward Shannon's limit was taken when Claude Berrou and Alain Glavieux introduced turbo codes in their paper Near Shannon Limit Error-correcting Coding and Decoding: Turbo-codes. A further, and most probably final, step was achieved with the low-density parity-check (LDPC) code. LDPC is the most recent code introduced by the error correction community, and it performs very close to Shannon's limit.
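For an additive white Gaussian noise channel, Shannon's limit takes the familiar form C = B log2(1 + S/N). As an illustrative sketch (the 30 dB signal-to-noise ratio is an assumption, not a figure from the text), the bound for the 28 MHz channel used earlier can be computed as follows:

```python
from math import log2

def shannon_capacity(bandwidth_hz, snr_db):
    """AWGN channel capacity C = B * log2(1 + S/N)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * log2(1 + snr_linear)

# 28 MHz channel at an assumed 30 dB SNR
print(round(shannon_capacity(28e6, 30) / 1e6, 1))  # -> 279.1
```

No practical code reaches this bound exactly; the codes discussed below are measured by how closely they approach it.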
LDPC: Error correction at its best

The LDPC code was first conceived in 1960 by Robert Gallager in his Massachusetts Institute of Technology (MIT) thesis; however, implementing it was not practical at the time. In 1996, David MacKay of Cambridge University and Radford Neal of the University of Toronto developed the first practical LDPC implementation. Today, LDPC remains the most efficient error correction code available.
To approach Shannon's limit, an error correction code must be used. However, by definition, an error correction code always introduces overhead that protects the user information from noise. The higher the overhead, the higher the protection from noise, but the lower the throughput. The microwave industry has adopted codes with overhead in the range of 5% to 15%. While codes such as LDPC are more efficient than others, the impact on overall throughput must always be considered. With error correction, the previous 200 Mb/s becomes 200 × 0.9 = 180 Mb/s of net capacity, that is, the user information bits excluding the overhead added by the code.
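The overhead arithmetic above is simple enough to capture in one helper (the function name is ours; the 5% to 15% overhead range is from the text):

```python
def net_capacity(gross_bps, code_overhead):
    """User throughput after the error-correction overhead is removed."""
    return gross_bps * (1 - code_overhead)

# 200 Mb/s gross with 10% coding overhead, as in the text
print(round(net_capacity(200e6, 0.10) / 1e6, 1))  # -> 180.0
```

At the extremes of the range in the text, the same 200 Mb/s channel yields 190 Mb/s (5% overhead) down to 170 Mb/s (15% overhead).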
Ethernet changed everything
Once Ethernet came along, comparing microwave system capacities became much more complicated. With Time Division Multiplexing (TDM) technology, comparing the air throughput capacities of 2 different radio systems was easy. It was simply a matter of asking: how many E1s, DS1s and STM-1s are transported? With Ethernet, the question became: what is the Ethernet throughput? The answer depends on the packet size and on whether throughput is calculated at Layer 1 or Layer 2.

Layer 2 throughput

In an Ethernet system, the maximum throughput is equal to the data rate, 100 Mb/s for example. However, this throughput can never be fully achieved because the preamble and Interframe Gap (IFG) bytes defined in the Institute of Electrical and Electronics Engineers (IEEE) Ethernet standard are added to every frame and do not count as data throughput. Since this overhead is fixed per frame, smaller frames have a lower effective throughput than larger frames.
Table 1 lists the maximum achievable throughput in a 100 Mb/s system for various frame sizes. In radio systems with integrated Layer 2 switching, preamble and IFG data is stripped from the incoming data stream and not transmitted over the radio link. At the far-end network interface, the radio equipment reinserts these bytes into the data stream.

Layer 1 throughput

At Layer 1, preamble and IFG bytes are included in the throughput, even if they are stripped, so the Ethernet capacity appears higher than it actually is. The relationship between Layer 2 and Layer 1 is defined by the following equation:

Layer 2 Throughput = Frame Length / (Frame Length + Preamble + IFG) × Layer 1 Throughput

For a typical 256-QAM transmission at 56 MHz carrying 64-byte frames, that means:

Layer 2 Throughput = 64 / (64 + 8 + 12) × 446.8 = 0.762 × 446.8 ≈ 340 Mb/s
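The Layer 1/Layer 2 relationship in the equation above is easy to verify numerically. The 446.8 Mb/s Layer 1 figure is the one the text uses for 256-QAM at 56 MHz:

```python
PREAMBLE = 8   # bytes, per the IEEE Ethernet standard
IFG = 12       # interframe gap, bytes

def layer2_throughput(frame_len_bytes, layer1_bps):
    """Layer 2 rate implied by a given Layer 1 rate and frame size."""
    return frame_len_bytes / (frame_len_bytes + PREAMBLE + IFG) * layer1_bps

# 64-byte frames on a 256-QAM, 56 MHz link (446.8 Mb/s at Layer 1)
print(round(layer2_throughput(64, 446.8e6) / 1e6, 1))  # -> 340.4
```

The gap between the two layers shrinks as frames grow: for 1518-byte frames the ratio is 1518/1538, about 98.7%.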
Header compression is key for LTE
As service providers transition to LTE, choosing microwave backhaul systems that can squeeze more bits per second into the air becomes crucial. LTE will increasingly use IPv6, whose addresses occupy an additional 32 bytes of header capacity. This additional overhead must be encapsulated in the Ethernet payload, reducing efficiency when short, multi-protocol packets are transported. Intelligent header compression reduces this protocol overhead. The header size being compressed is constant while the packet payload is variable, so the shorter the packet, the greater the relative gain from compression. Header compression is therefore most beneficial when the network transports small packets and when the IPv4 and IPv6 protocols are used, which makes it particularly useful in mobile backhaul networks where small packets are common. Table 2 summarizes the traffic mix observed on mobile backhaul networks around the world.
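The effect of header compression on small packets can be sketched with a deliberately simplified model. Everything here is an assumption for illustration: real compression algorithms, header sizes and traffic mixes vary, and the model ignores the compression scheme's own signaling overhead. It assumes the radio also suppresses the 20 bytes of preamble and IFG, as described earlier:

```python
PREAMBLE_IFG = 20  # preamble (8) + IFG (12) bytes suppressed over the link

def compression_gain(frame_len_bytes, header_bytes_saved):
    """Illustrative capacity gain: bytes on air without compression or
    suppression, divided by bytes on air with both applied."""
    on_air_before = frame_len_bytes + PREAMBLE_IFG
    on_air_after = frame_len_bytes - header_bytes_saved
    return on_air_before / on_air_after

# Assumed example: a 128-byte frame with 20 bytes of headers compressed away
print(round((compression_gain(128, 20) - 1) * 100))  # -> 37 (percent gain)
```

With these assumed numbers the gain lands in the 30% to 40% range reported below; shorter frames or larger compressed headers push it higher, while large frames dilute it.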
With the traffic mix shown in Table 2, combining intelligent header compression with IFG and preamble suppression delivers field-proven gains of 30% to 40%. That means service providers can transmit up to 40% more data in the same channel spacing, with the same antennas and with the same link availability. To take advantage of these gains, service providers should look for microwave systems that combine LDPC error correction with intelligent header compression:
- LDPC error correction codes enable service providers to transmit the maximum near-Shannon limit throughput in a given channel.
- Intelligent header compression increases channel capacity beyond the Shannon limit.
In microwave, spectrum is a limited resource. Every bit counts. To contact the authors or request additional information, please send an e-mail to firstname.lastname@example.org.
- “Boost Capacity in Microwave Networks with Advanced Packet Compression” by Paolo Volpato
- Based on real-world testing by an Alcatel-Lucent customer