Wednesday, January 20, 2010

MAC Address Authentication

MAC address authentication is not specified in the 802.11 specification, but it is supported by many vendors. MAC address authentication verifies the client's MAC address against a locally configured list of allowed addresses or against an external authentication server, as shown in Figure 4-11. MAC authentication augments the Open and Shared Key authentications provided by 802.11, potentially reducing the likelihood of unauthorized devices accessing the network. For example, a network administrator might want to limit a particular AP to just three specific devices. If all stations and APs in the BSS have the same WEP keys, it is difficult to use Open or Shared Key authentication to facilitate this scenario. The administrator can configure MAC address authentication to augment 802.11 authentication.
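In its simplest locally configured form, MAC authentication is just a lookup against an allow-list. The following is a minimal sketch of the AP-side check; the function name and addresses are illustrative, not a real AP API:

```python
# Hypothetical sketch of AP-side MAC address authentication against a
# locally configured allow-list (addresses are made up for illustration).
ALLOWED_MACS = {
    "00:40:96:a1:b2:c3",
    "00:40:96:d4:e5:f6",
    "00:40:96:07:08:09",
}

def mac_is_allowed(client_mac: str) -> bool:
    """Return True if the client's MAC appears on the allow-list."""
    return client_mac.lower() in ALLOWED_MACS

print(mac_is_allowed("00:40:96:A1:B2:C3"))  # True
print(mac_is_allowed("00:11:22:33:44:55"))  # False
```

Normalizing case before the lookup matters in practice, because MAC addresses are written in both upper- and lowercase hex.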


Security Vulnerabilities in the 802.11 Standard

The prior section detailed how 802.11 authentication and encryption operate. It is no secret that security in the 802.11 specification is flawed. Not long after the ratification of 802.11, a number of published papers pinpointed vulnerabilities in 802.11 authentication and WEP encryption.


Open Authentication Vulnerabilities

Open authentication provides no way for the AP to determine whether a client is valid. This lack is a security vulnerability if WEP encryption is not implemented in a WLAN. Even with static WEP enabled on the client and AP, Open authentication provides no means of determining who is using the WLAN device. An authorized device in the hands of an unauthorized user is just as much a network security threat as providing no security at all!


Shared Key Authentication Vulnerabilities

Shared key authentication requires the client to use a preshared WEP key to encrypt challenge text sent from the AP. The AP authenticates the client by decrypting the shared-key response and validating that the challenge text is the same. The process of exchanging the challenge text occurs over the wireless link and is vulnerable to a known plaintext attack. This vulnerability in Shared Key authentication stems from the mathematical principle behind the encryption. Earlier in this chapter, encryption was defined as plaintext mixed with a key stream to produce ciphertext. The mixing process is a binary mathematical function known as an exclusive OR (XOR). If plaintext is mixed with the corresponding ciphertext, the result of the function is the key stream for the WEP key and IV pair, as shown in Figure 4-12.


An eavesdropper can capture both the plaintext challenge text and the ciphertext response. By simply running the values through an XOR function, an eavesdropper has a valid key stream. The eavesdropper can then use the key stream to decrypt frames matching the same size as the key stream, given that the IV used to derive the key stream is the same as the encrypted frame. Figure 4-13 illustrates how an attacker can eavesdrop on a Shared Key authentication and derive the key stream.
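The attack can be sketched in a few lines. The "key stream" here is a stand-in for real RC4 output, and all values are made up for illustration:

```python
# Sketch of Shared Key key-stream recovery via known plaintext.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

challenge = b"CHALLENGE TEXT"                   # sent in the clear by the AP
key_stream = bytes(range(len(challenge)))       # stands in for RC4(IV || key)
ciphertext = xor_bytes(challenge, key_stream)   # client's encrypted response

# The eavesdropper captures both frames and XORs them together:
recovered = xor_bytes(challenge, ciphertext)
print(recovered == key_stream)                  # True

# The recovered key stream decrypts any frame of equal or shorter length
# that was encrypted with the same IV and key:
victim = xor_bytes(b"SECRET PAYLOAD", key_stream)
print(xor_bytes(victim, recovered))             # b'SECRET PAYLOAD'
```

The XOR identity at work is simple: if C = P XOR K, then P XOR C = K, so observing one plaintext/ciphertext pair for an IV gives away that IV's key stream.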


MAC Address Authentication Vulnerabilities

MAC addresses are sent unencrypted in all 802.11 frames, as required by the 802.11 specification. As a result, WLANs that use MAC authentication are vulnerable to an attacker undermining the MAC authentication process by spoofing a valid MAC address.

MAC address spoofing is possible in 802.11 network interface cards (NICs) that allow the universally administered address (UAA) to be overwritten with a locally administered address (LAA). The UAA is the MAC address that is hard-coded on the NIC by the manufacturer. An attacker can use a protocol analyzer to determine a valid MAC address in the BSS and an LAA-compliant NIC to spoof the valid MAC address.


Static WEP Key Management Issues

The 802.11 specification does not specify key-management mechanisms. Although not a specific vulnerability, WEP is defined to support only static, preshared keys. Because 802.11 authentication authenticates a device and not the user of the device, the loss or theft of a wireless adapter becomes a security issue for the network. This issue presents network administrators with the tedious task of manually rekeying all wireless devices in the network when the existing key is compromised because an adapter was lost or stolen.

This risk might be acceptable for small deployments, where managing user devices is a simple task, but manual rekeying does not scale to medium and large deployments where the number of wireless users can reach into the thousands. Without a mechanism to distribute or generate keys, administrators must keep close tabs on the whereabouts of every wireless NIC.

Monday, January 4, 2010

Authentication Mechanisms in the 802.11 Standard

The 802.11 specification stipulates two mechanisms for authentication of WLAN clients:
  • Open authentication
  • Shared Key authentication

Open authentication is a null authentication algorithm. The AP grants any request for authentication. It might sound pointless at first to have such an algorithm defined, but Open authentication has its place in 802.11 network authentication: its minimal requirements allow devices to gain access to the network quickly.

Access control in Open authentication relies on the preconfigured WEP key on the client and AP. The client and AP must have matching WEP keys to enable them to communicate. If the client and AP do not have WEP enabled, there is no security in the BSS. Any device can join the BSS and all data frames are transmitted unencrypted.

After Open authentication and the association process, the client can begin transmitting and receiving data. If the client is configured with a key that differs from the key on the AP, the client will be unable to encrypt or decrypt data frames correctly, and the frames will be discarded by both the client and the AP. This process essentially provides a means of controlling access to the BSS. It is illustrated in Figure 4-9.


Unlike Open authentication, Shared Key authentication requires that the client station and the AP have WEP enabled and have matching WEP keys. The following summarizes the Shared Key authentication process:

1. The client sends an authentication request for Shared Key authentication to the AP.

2. The AP responds with a cleartext challenge frame.

3. The client encrypts the challenge and responds back to the AP.

4. If the AP can correctly decrypt the frame and retrieve the original challenge, the client is
sent a success message.

5. The client can access the WLAN.


The premise behind Shared Key authentication is similar to that of Open authentication with WEP keys as the access control means. The client and AP must have matching keys. The difference between the two schemes is that the client cannot associate in Shared Key authentication unless the correct key is configured. Figure 4-10 shows the Shared Key authentication process.
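The challenge/response steps can be sketched with a small RC4 implementation. This is a simplified illustration: real Shared Key authentication also carries an IV and ICV in the response frame, both omitted here:

```python
# Simplified sketch of the Shared Key exchange (IV and ICV omitted).
import os

def rc4(key: bytes, data: bytes) -> bytes:
    """RC4 stream cipher: the same function encrypts and decrypts."""
    S = list(range(256))
    j = 0
    for i in range(256):                 # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for byte in data:                    # keystream generation + XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

shared_key = b"\x01\x02\x03\x04\x05"   # 40-bit static WEP key
challenge = os.urandom(128)            # step 2: AP sends cleartext challenge
response = rc4(shared_key, challenge)  # step 3: client encrypts the challenge
# step 4: the AP decrypts the response and compares it with the challenge
print(rc4(shared_key, response) == challenge)  # True
```

Because RC4 is symmetric, decrypting is the same operation as encrypting with the same key, which is what lets the AP validate the response.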

Wednesday, December 23, 2009

Encryption in the 802.11 Standard

The 802.11 specification provides data privacy with the WEP algorithm. WEP is based on the
RC4 symmetric stream cipher. The symmetric nature of RC4 requires that matching WEP keys, either 40 or 104 bits in length, must be statically configured on client devices and access points (APs). WEP was chosen primarily because of its low computational overhead. Although 802.11-enabled PCs are common today, this situation was not the case back in 1997. The majority of WLAN devices were application-specific devices (ASDs). Examples of ASDs include barcode scanners, tablet PCs, and 802.11-based phones. The applications that run on ASDs generally do not require much computational power, so as a result, ASDs have meager CPUs. WEP is a simple-to-implement algorithm that you can write in as few as 30 lines of code, in some cases. The low overhead incurred by WEP made it an ideal encryption algorithm to use on ASDs.

To avoid reusing the same key stream for every frame (the ECB-style weakness described earlier), WEP uses a 24-bit IV, which is concatenated with the key before being processed by the RC4 cipher. Figure 4-5 shows a WEP-encrypted frame, including the IV.
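The per-frame seeding can be sketched as follows. This is an illustrative simplification: the ICV and frame headers are omitted, and drawing a random IV per frame stands in for whatever IV-selection policy a vendor implements:

```python
# Sketch of per-frame WEP encryption: the 24-bit IV seeds RC4 along with
# the static key, and travels in the clear ahead of the ciphertext so the
# receiver can rebuild the same seed.
import os

def rc4(key: bytes, data: bytes) -> bytes:
    S = list(range(256)); j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for b in data:
        i = (i + 1) % 256; j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(static_key: bytes, plaintext: bytes) -> bytes:
    iv = os.urandom(3)                    # 24-bit per-frame IV
    return iv + rc4(iv + static_key, plaintext)

def wep_decrypt(static_key: bytes, frame: bytes) -> bytes:
    iv, ciphertext = frame[:3], frame[3:]
    return rc4(iv + static_key, ciphertext)

key = b"\xaa\xbb\xcc\xdd\xee"             # 40-bit WEP key
frame = wep_encrypt(key, b"hello")
print(wep_decrypt(key, frame))            # b'hello'
```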

The IV must change on a per-frame basis to avoid IV collisions. IV collisions occur when the same IV and WEP key are used, resulting in the same key stream being used to encrypt a
frame. This collision gives attackers a better opportunity to guess the plaintext data by
seeing similarities in the ciphertext. The point of using an IV is to prevent this scenario, so it is important to change the IV often. Most vendors offer per-frame IVs on their WLAN devices.


The 802.11 specification requires that matching WEP keys be statically configured on both
client and infrastructure devices. You can define up to four keys on a device, but you can use only one at a time for encrypting outbound frames. Figure 4-6 shows a Cisco Aironet client configuration screen for WEP configuration.



In addition to data encryption, the 802.11 specification provides a 32-bit integrity check value (ICV) for each frame. This check tells the receiver that the frame has arrived without being corrupted during transmission. It augments the Layer 1 and Layer 2 frame check sequences (FCSs), which are designed to check for transmission-related errors.

The ICV is calculated against all fields in the frame using a cyclic redundancy check (CRC)-32 polynomial function. The sender calculates the values and places the result in the ICV field. The ICV is included in the WEP-encrypted portion of the frame, so it is not plainly visible to eavesdroppers. The frame receiver decrypts the frame, calculates an ICV value, and compares what it calculates against what has arrived in the ICV field. If the values match, the frame is considered to be genuine and untampered with. If they don't match, the frame is discarded. Figure 4-8 diagrams the ICV operation.
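The ICV computation and check can be sketched with Python's standard CRC-32. The little-endian packing here is an illustrative choice, and the encryption step that normally covers the body plus ICV is omitted:

```python
# Sketch of the ICV: CRC-32 over the frame body, appended by the sender;
# the receiver recomputes it and compares.
import struct
import zlib

def add_icv(body: bytes) -> bytes:
    icv = zlib.crc32(body) & 0xFFFFFFFF
    return body + struct.pack("<I", icv)

def check_icv(frame: bytes) -> bool:
    body, icv = frame[:-4], frame[-4:]
    return struct.pack("<I", zlib.crc32(body) & 0xFFFFFFFF) == icv

frame = add_icv(b"payload bytes")
print(check_icv(frame))              # True
tampered = b"X" + frame[1:]          # corrupt the first byte of the body
print(check_icv(tampered))           # False: the frame would be discarded
```

Note that CRC-32 detects accidental corruption but is linear, which is one reason it later proved inadequate against deliberate tampering.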

Wednesday, December 2, 2009

Overview of Encryption

Data encryption mechanisms are based on cipher algorithms that give data a randomized
appearance. Two types of ciphers exist:
  • Stream ciphers
  • Block ciphers

Both cipher types operate by generating a key stream from a secret key value. The key stream is mixed with the data, or plaintext, to produce the encrypted output, or ciphertext. The two cipher types differ in the size of the data they operate on at a time.

A stream cipher generates a continuous key stream based on the key value. For example, a stream cipher can generate a 15-byte key stream to encrypt one frame and a 200-byte key stream to encrypt another. Figure 4-2 illustrates stream cipher operation. Stream ciphers are small and efficient encryption algorithms and as a result do not incur extensive CPU usage. A commonly used stream cipher is RC4, which is the basis of the WEP algorithm.

A block cipher, in contrast, encrypts data in fixed-size blocks, generating a key stream of the same fixed length for each block and padding the final block if it falls short. Figure 4-3 illustrates block cipher operation.

Figure 4-3. Block Cipher Operation

The process of encryption described here for stream ciphers and block ciphers is known as Electronic Code Book (ECB) encryption mode. ECB mode encryption has the characteristic that the same plaintext input always generates the same ciphertext output. This factor is a potential security threat because eavesdroppers can see patterns in the ciphertext and start making educated guesses about the original plaintext.
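A minimal illustration of the ECB characteristic, using a fixed XOR key stream as a stand-in for any cipher operated without an IV:

```python
# Fixed key stream, no IV: identical plaintext yields identical ciphertext.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key_stream = b"\x5a\x13\x27\x8e"        # reused for every block
block1 = xor_bytes(b"DATA", key_stream)
block2 = xor_bytes(b"DATA", key_stream)
print(block1 == block2)                 # True: a pattern an eavesdropper can see
```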

Some encryption techniques can overcome this issue:
  • Initialization vectors
  • Feedback modes

Initialization Vectors

An initialization vector (IV) is a number added to the key, with the end result of altering the key stream. The IV is concatenated to the key before the key stream is generated, so every time the IV changes, so does the key stream. Figure 4-4 shows two scenarios. The first is stream cipher encryption without the use of an IV: the plaintext DATA, when mixed with the key stream 12345, always produces the ciphertext AHGHE. The second scenario shows the same plaintext mixed with the IV-augmented key stream to generate different ciphertext. Note that the ciphertext output in the second scenario differs from that in the first. The 802.11 standard recommends changing the IV on a per-frame basis, so that if the same frame is transmitted twice, it is highly probable that the resulting ciphertext differs for each transmission.

Figure 4-4. Encryption and Initialization Vectors
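The effect shown in Figure 4-4 can be reproduced with RC4 itself: seeding the cipher with two different IVs concatenated to the same key produces different ciphertext for identical plaintext. The IV and key values here are arbitrary:

```python
# Same key, same plaintext, different IVs -> different ciphertext.
def rc4(key: bytes, data: bytes) -> bytes:
    S = list(range(256)); j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for b in data:
        i = (i + 1) % 256; j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = b"\x01\x02\x03\x04\x05"
c1 = rc4(b"\x00\x00\x01" + key, b"DATA")   # IV = 0x000001
c2 = rc4(b"\x00\x00\x02" + key, b"DATA")   # IV = 0x000002
print(c1 != c2)                            # True: the repetition is hidden
```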

Friday, November 13, 2009

802.11 Wireless LAN Security

Wireless Security

Imagine extending a long Ethernet cable from your internal network outside your office and laying it on the ground in the parking lot. Anyone who wants to use your network can simply plug into that network cable. Connecting unsecured WLANs to your internal network has the potential to offer the same opportunity.

802.11-based devices communicate with one another using radio frequencies (RFs) as the carrier signal for data. The data is broadcast from the sender in the hopes that the receiver is within RF range. The drawback to this mechanism is that any other station within range of the RF also receives the data.

Without a security mechanism of some sort, any 802.11 station can process the data sent on a WLAN, as long as that receiver is in RF range. To provide a minimum level of security in a WLAN, you need two components:
  • A means to decide who or what can use a WLAN— This requirement is satisfied by authentication mechanisms for LAN access control.
  • A means to provide privacy for the wireless data— The requirement is satisfied by encryption algorithms.
As Figure 4-1 depicts, wireless security consists of both authentication and encryption. Neither mechanism alone is enough to secure a wireless network.


The 802.11 specification defines Open and Shared Key authentication and WEP to provide device authentication and data privacy, respectively. The Open and Shared Key algorithms both rely on WEP encryption and possession of the WEP keys for access control. Because of the importance of WEP in 802.11 security, the following section focuses on the basics of encryption and ciphers in general.

Thursday, November 5, 2009

802.11g WLANs

The IEEE 802.11g standard, approved in June 2003, introduces an extended rate PHY (ERP) to provide support for data rates up to 54 Mbps in the 2.4 GHz ISM band by borrowing from the OFDM techniques introduced by 802.11a. In contrast to 802.11a, it provides backward compatibility with 802.11b because 802.11g devices can fall back in data rate to the slower 802.11b speeds. Three modulation schemes are defined: ERP-OFDM, ERP-PBCC, and DSSS-OFDM. The ERP-OFDM form specifically provides mechanisms for 6, 9, 12, 18, 24, 36, 48, and 54 Mbps, with the 6, 12, and 24 Mbps data rates being mandatory, in addition to the 1, 2, 5.5, and 11 Mbps data rates. The standard also allows for optional PBCC modes at 22 and 33 Mbps as well as optional DSSS-OFDM modes at 6, 9, 12, 18, 24, 36, 48, and 54 Mbps. This section describes the changes necessary to form ERP-OFDM, ERP-PBCC, and DSSS-OFDM.


802.11g PLCP

The 802.11g standard defines five PPDU formats: long preamble, short preamble, ERP-OFDM preamble, a long DSSS-OFDM preamble, and a short DSSS-OFDM preamble. Support for the first three is mandatory, but support for the latter two is optional. Table 3-16 summarizes the different preambles and the modulation schemes and data rates they support or are interoperable with.


The long preamble uses the same long preamble defined in the HR-DSSS but with the Service field modified as shown in Table 3-17.


The length extension bits resolve ambiguity in the number of octets in the PSDU when the 11 Mbps PBCC mode or the 22 and 33 Mbps ERP-PBCC modes are in use.

The CCK-OFDM Long Preamble PPDU format appears in Figure 3-29. You set the rate subfield in the Signal field to 3 Mbps. This setting ensures compatibility with non-ERP stations because they can still read the length field and defer, despite not being able to demodulate the payload. The PLCP header matches that of the previously defined long preamble, and the preamble is the same as for the HR-DSSS. Both the preamble and the header are transmitted at 1 Mbps using DBPSK, and the PSDU is transmitted using the appropriate OFDM data rate. The header is scrambled using the HR-DSSS scrambler, and the data symbols are scrambled using the 802.11a scrambler.


Much like the DSSS-OFDM long preamble, the short-preamble DSSS-OFDM PPDU format uses the HR-DSSS short preamble and header, transmitted at a 2 Mbps data rate and scrambled with the HR-DSSS scrambler. The data symbols are transmitted using OFDM and scrambled with the 802.11a scrambler.


ERP-OFDM

As previously stated, the ERP-OFDM provides a mechanism to use the 802.11a data rates in the ISM band in a manner that is backward compatible with DSSS and HR-DSSS. In addition to utilizing the 802.11a OFDM modulation under the 2.4 GHz frequency plan, ERP-OFDM also mandates that the transmit center frequency and symbol clock frequency are locked to the same oscillator, which was an option for DSSS. It utilizes a 20 microsecond slot time, but this time can be dropped to 9 microseconds if only ERP devices are found in the BSS.

Saturday, October 17, 2009

802.11a WLANs

At the same time that the 802.11b-1999 amendment introduced the HR-DSSS PHY, the 802.11a-1999 amendment introduced the Orthogonal Frequency Division Multiplexing (OFDM) PHY for the 5 GHz band. It provided mandatory data rates up to 24 Mbps and optional rates up to 54 Mbps in the Unlicensed National Information Infrastructure (U-NII) bands of 5.15 to 5.25 GHz, 5.25 to 5.35 GHz, and 5.725 to 5.825 GHz. 802.11a utilizes 20 MHz channels and defines four channels in each of the three U-NII bands. This section provides you with the details to understand how OFDM is supported.


802.11j

The IEEE 802.11j draft amendment for LAN/metropolitan-area networks (MAN) requirements provides for 802.11a type operation in the 4.9 GHz band allocated in Japan and in the U.S. for public safety applications as well as in the 5.03 to 5.091 GHz Japanese allocation. A channel numbering scheme uses channels 240 to 255 to cover these frequencies in 5 MHz channel increments.


OFDM Basics

Consider the simple QPSK symbol first introduced in the section, "Physical Layer Building Blocks," and then consider the transmission of two consecutive symbols. As these symbols travel through the transmission medium from the transmitter to the receiver, they experience distortions, and various parts of the signal can be delayed. If these delays are long enough, the first symbol might overlap in time with the second symbol. This overlapping is ISI. The time delay from the reception of the first instance of the signal until the last instance is referred to as the delay spread of the channel. You can also think of it as the amount of time that the first symbol spreads into the second. Traditionally, designers address ISI in one of two ways: employing symbols that are long enough to be decoded correctly in the presence of ISI or by equalizing to remove the distortion caused by the ISI. The former method limits the symbol rate to something less than the bandwidth of the channel, which is inversely proportional to the delay spread. As the bandwidth of the channel increases, you can increase the symbol rate, thereby achieving a higher end data rate. The latter method, often used in conjunction with the former, requires the use of ever more complicated and expensive methods to implement channel-equalization schemes to maximize the usable bandwidth of the channel.

Multichannel modulation schemes take a completely different approach. As a multichannel modulation designer, you break up the channel into small, independent, parallel or orthogonal transmission channels upon which narrowband signals, with a low symbol rate, are modulated, usually in the frequency domain, onto individual subcarriers. Similar to how you modulate a FHSS signal onto the appropriate carrier, you break the channel into N independent channels. For a given channel bandwidth, the larger the N that you choose, the longer the symbol period and the narrower the subchannel, so you can see that as the number of subchannels goes to infinity, the ISI goes to zero.

To build these independent symbols, a useful tool is the Fast Fourier Transform (FFT), which is an efficient implementation of a Discrete Fourier transform (DFT) and can convert a time domain signal to the frequency domain and vice versa. In the frequency domain, you generate N 4-QAM (Quadrature Amplitude Modulation) symbols, which are then converted to the time domain using an inverse FFT (IFFT). You should also know that making the size of the FFT a power of two allows for simple and efficient implementations. For that reason, OFDM systems usually pick N such that it is a power of two.

Without going into the intricacies of mathematics that are beyond the scope of this book, it simplifies the processing greatly if everything is done in the frequency domain using FFTs. To enable this processing at the receiver, however, the received signal must be a circular convolution of the input with the channel, as opposed to just a convolution. Convolution is a mathematical mechanism for passing a signal through a channel and determining the output. To ensure this property, you must take the time domain representation of an OFDM symbol and create a cyclic prefix by repeating the final n samples at the beginning. Figure 3-22 shows this process, where n is the length of the cyclic prefix and N is the size of the FFT in use.
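The cyclic-prefix construction is simple enough to show directly. A sketch with an illustrative 8-sample symbol and a prefix length of 2:

```python
# Cyclic prefix: copy the final n time-domain samples of an OFDM symbol
# to its front, turning linear convolution with the channel into circular
# convolution over the symbol body.
def add_cyclic_prefix(symbol: list, n: int) -> list:
    return symbol[-n:] + symbol

symbol = [0, 1, 2, 3, 4, 5, 6, 7]     # N = 8 time-domain samples
print(add_cyclic_prefix(symbol, 2))   # [6, 7, 0, 1, 2, 3, 4, 5, 6, 7]
```

The receiver simply discards the first n samples, which absorb the tail of the previous symbol, before taking the FFT.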


Unlike some other multichannel modulation techniques, OFDM places an equal number of bits in all subchannels. In nonwireless applications such as asynchronous digital subscriber line (ADSL), where the channel is not as time varying, the transmitter uses knowledge of the channel and transmits more bits, or information, on those subcarriers that are less distorted or attenuated.

Wednesday, October 7, 2009

802.11b WLANs

The 802.11b-1999 amendment introduced high-rate DSSS (HR-DSSS), which enables you to operate your WLAN at data rates up to and including 5.5 Mbps and 11 Mbps in the 2.4 GHz ISM band, using complementary code keying (CCK) or optionally packet binary convolutional coding (PBCC). HR-DSSS uses the same channelization scheme as DSSS with a 22 MHz bandwidth and 11 channels, 3 nonoverlapping, in the 2.4 GHz ISM band. This section provides you with the details to understand how these higher rates are supported.

802.11b HR-DSSS PLCP

The PLCP sublayer for HR-DSSS has two PPDU frame types: long and short. The preamble and header in the 802.11b HR-DSSS long PLCP are always transmitted at 1 Mbps to maintain backward compatibility with DSSS. In fact, the HR-DSSS long PLCP is the same as the DSSS
PLCP but with some extensions to support the higher data rates.


802.11b PMD-CCK Modulation

Although the spreading mechanism to achieve 5.5 Mbps and 11 Mbps with CCK is related to the techniques you employ for 1 and 2 Mbps, it is still unique. In both cases, you employ a spreading technique, but for CCK, the spreading code is an 8-chip complex code, whereas 1 and 2 Mbps operation uses an 11-chip code. The 8-chip code is determined by either four or eight data bits, depending upon the data rate. The chip rate is 11 Mchips/second, so with 8 complex chips per symbol and 4 or 8 bits per symbol, you achieve data rates of 5.5 Mbps and 11 Mbps.

To transmit at 5.5 Mbps, you take the scrambled PSDU bit stream and group it into symbols of 4 bits each: (b0, b1, b2, and b3). You use the latter two bits (b2, b3) to determine an 8 complex chip sequence, as shown in Table 3-11, where {c1, c2, c3, c4, c5, c6, c7, c8} represent the chips in the sequence. In Table 3-11, j represents the imaginary number, sqrt(-1), and appears on the imaginary or quadrature axis in the complex plane.

Now with the chip sequence determined by (b2, b3), you use the first two bits (b0, b1) to determine a DQPSK phase rotation that is applied to the sequence. Table 3-12 shows this process. You must also number each 4-bit symbol of the PSDU, starting with 0, so that you can determine whether you are mapping an odd or an even symbol according to the table. You will also note that you use DQPSK, not QPSK, and as such, these represent phase changes relative to the previous symbol or, in the case of the first symbol of the PSDU, relative to the last symbol of the preceding 2 Mbps DQPSK symbol.

Apply this phase rotation to the 8 complex chip symbol and then modulate that to the appropriate carrier frequency.
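The rate arithmetic behind CCK is worth checking explicitly: the chip rate divided by the chips per symbol gives the symbol rate, and the number of data bits carried per symbol gives the data rate:

```python
# CCK rate arithmetic: 11 Mchips/s spread over 8-chip symbols gives
# 1.375 Msymbols/s; 4 or 8 data bits per symbol yield the two rates.
chip_rate = 11e6                 # chips per second
chips_per_symbol = 8
symbol_rate = chip_rate / chips_per_symbol

print(symbol_rate * 4 / 1e6)     # 5.5  (Mbps, 4 bits per symbol)
print(symbol_rate * 8 / 1e6)     # 11.0 (Mbps, 8 bits per symbol)
```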

PBCC Modulation

As already indicated, the HR-DSSS standard also defines an optional PBCC modulation mechanism for generating 5.5 Mbps and 11 Mbps data rates. This scheme is a bit different from both CCK and 802.11 DSSS. You first pass the scrambled PSDU bits through a half-rate binary convolutional encoder, which was first introduced in the section, "Physical Layer Building Blocks." The particular half-rate encoder has six delay, or memory, elements and outputs 2 bits for every 1 input bit. Because 802.11 works under a frame structure and convolutional encoders have memory, you must zero all the delay elements at the beginning of a frame and append one octet of zeros at the end of the frame to ensure all bits are equally protected. This final octet explains why the length calculation, discussed in the section, "802.11b HR-DSSS PLCP," is slightly different for CCK and PBCC. You then pass the encoded bit stream through a BPSK symbol mapper to achieve the 5.5 Mbps data rate or through a QPSK symbol mapper to achieve the 11 Mbps data rate. (You do not employ differential encoding here.) The particular symbol mapping you use depends upon the binary value, s, coming out of a 256-bit pseudo-random cover sequence. The two QPSK symbol mappings appear in Figure 3-19, and the two BPSK symbol mappings appear in Figure 3-20. For PSDUs longer than 256 bits, the pseudo-random sequence is merely repeated.

Monday, September 28, 2009

802.11 Wireless LANs

The original 802.11 standard defined two WLAN PHY methods:
  • 2.4 GHz frequency hopping spread spectrum (FHSS)
  • 2.4 GHz direct sequence spread spectrum (DSSS)

Frequency Hopping WLANs

FHSS WLANs support 1 Mbps and 2 Mbps data rates. As the name implies, a FHSS device changes or "hops" frequencies with a predetermined hopping pattern and a set rate, as depicted in Figure 3-8. FHSS devices split the available spectrum into 79 nonoverlapping channels (for North America and most of Europe) across the 2.402 to 2.480 GHz frequency range. Each of the 79 channels is 1 MHz wide, so FHSS WLANs use a relatively fast 1 MHz symbol rate and hop among the 79 channels at a much slower rate.


The hopping sequence must hop at a minimum rate of 2.5 hops per second and must span a minimum of six channels (6 MHz). To minimize collisions between overlapping coverage areas, the possible hopping sequences are broken down into three sets of length 26 for use in North America and most of Europe. Tables 3-1 through 3-4 show the minimum-overlap hopping patterns for different countries, including the U.S., Japan, Spain, and France.



In essence, the hopping patterns provide a slow path through the possible channels in such a way that each hop covers at least 6 MHz and, when considering a multicell deployment, minimizes the probability of a collision. The reduced set length for countries such as Japan, Spain, and France results from the smaller ISM band frequency allocation at 2.4 GHz.


FHSS PLCP

After the MAC layer passes a MAC frame, also known as a PLCP service data unit (PSDU) in FHSS WLANs, to the PLCP sublayer, the PLCP adds two fields to the beginning of the frame to form a PPDU frame. Figure 3-9 shows the FHSS PLCP frame format.


Direct Sequence Spread Spectrum WLANs

DSSS is another physical layer for the 802.11 specifications. As defined in the 1997 802.11 standard, DSSS supports data rates of 1 and 2 Mbps. In 1999, the 802.11 Working Group ratified the 802.11b standard to support data rates of 5.5 and 11 Mbps. The 802.11b DSSS physical layer is compatible with existing 802.11 DSSS WLANs. The PLCP for 802.11b DSSS is the same as that for 802.11 DSSS, with the addition of an optional short preamble and short header.



802.11 DSSS

Similar to the PLCP sublayer for FHSS, the PLCP for 802.11 DSSS adds two fields to the MAC frame to form the PPDU: the PLCP preamble and PLCP header. The frame format appears in Figure 3-14.


DSSS Basics

Spread-spectrum techniques take a modulation approach that uses a much higher than necessary spectrum bandwidth to communicate information at a much lower rate. Each bit is replaced or spread by a wideband spreading code. Much like coding, because the information is spread into many more information bits, it has the ability to operate in low signal-to-noise ratio (SNR) conditions, either because of interference or low transmitter power. With DSSS, the transmitted signal is directly multiplied by a spreading sequence, shared by the transmitter and receiver.

Friday, September 18, 2009

Physical Layer Building Blocks

To understand the different PMDs that each 802.11 PHY provides, you must first understand the following basic PHY concepts and building blocks:
  • Scrambling
  • Coding
  • Interleaving
  • Symbol mapping and modulation

Scrambling

One of the foundations of modern transmitter design that enables the transfer of data at high speeds is the assumption that the data you provide appears to be random from the transmitter's perspective. Without this assumption, many of the gains made from the other building blocks would not be realized. However, it is conceivable and actually common for you to receive data that is not at all random and might, in fact, contain repeatable patterns or long sequences of 1s or 0s. Scrambling is a method for making the data you receive look more random by performing a mapping between bit sequences, from structured to seemingly random sequences. It is also referred to as whitening the data stream. The receiver descrambler then remaps these random sequences into their original structured sequence. Most scrambling methods are self-synchronizing, meaning that the descrambler is able to sync itself to the state of the scrambler.


Coding

Although scrambling is an important tool that has allowed engineers to develop communications systems with higher spectral efficiency, coding is the mechanism that has enabled the high-speed transmission of data over noisy channels. All transmission channels are noisy, which introduces errors in the form of corrupted or modified bits. Coding allows you to maximize the amount of data that you send over a noisy communication medium. You can do so by replacing sequences of bits with longer sequences that allow you to recognize and correct a corrupted bit. For example, as shown in Figure 3-3, if you want to communicate the sequence 01101 over the telephone to your friend, you might instead agree with your friend that you will repeat each bit three times, resulting in the sequence 000111111000111. Even if your friend mistook some of the bits at his end—resulting in the sequence 100111111000101, with the first and the second-to-last bits corrupted—he would recognize that the original sequence was 01101 via a majority voting scheme. Although this coder is rather simple and not efficient, you now understand the concept behind coding.
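The repeat-three code and its majority-vote decoder from this example can be written directly:

```python
# Repeat-three repetition code with majority-vote decoding.
def encode(bits: str) -> str:
    return "".join(b * 3 for b in bits)

def decode(received: str) -> str:
    groups = [received[i:i + 3] for i in range(0, len(received), 3)]
    return "".join("1" if g.count("1") >= 2 else "0" for g in groups)

print(encode("01101"))             # 000111111000111
print(decode("100111111000101"))   # 01101, despite two corrupted bits
```

Each 3-bit group tolerates one corrupted bit; two errors in the same group would decode incorrectly, which is exactly the weakness interleaving addresses later.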


The most common type of coding in communications systems today is the convolutional coder because it can be implemented rather easily in hardware with delays and adders. In contrast to the preceding code, which is a memory-less block code, the convolutional code is a finite memory code, meaning that the output is a function not just of the current input, but also of several of the past inputs. The constraint length of a code indicates how long it takes in output units for an input to fall out of the system. Codes are often described through their rate. You might see a rate 1/2 convolutional coder. This rate indicates that for every one input bit, two output bits are produced. When comparing coders, note that although higher rate codes support communication at higher data rates, they are also correspondingly more sensitive to noise.
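As a concrete sketch, here is a rate-1/2 convolutional encoder with six delay elements (constraint length 7). The generator polynomials, 133 and 171 octal, are the ones used by 802.11a's OFDM PHY; the bit-ordering convention inside the shift register is an implementation choice made for illustration:

```python
# Rate-1/2 convolutional encoder sketch: constraint length K = 7,
# generators 133 and 171 (octal). Each input bit produces two output bits.
G0, G1 = 0o133, 0o171

def conv_encode(bits):
    state = 0                 # six delay elements, zeroed at frame start
    out = []
    for b in bits:
        reg = (b << 6) | state             # newest bit in the high position
        out.append(bin(reg & G0).count("1") % 2)  # parity over G0's taps
        out.append(bin(reg & G1).count("1") % 2)  # parity over G1's taps
        state = reg >> 1                   # shift the register
    return out

encoded = conv_encode([1, 0, 1, 1])
print(len(encoded))   # 8: two output bits for every input bit
```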


Interleaving

One of the base assumptions of coding is that errors introduced in the transmission of information are independent events. This assumption held in the earlier example, where you were communicating a sequence of bits over the phone to your friend and bits 1 and 14 were corrupted. However, you might often find that bit errors are not independent and that they occur in bursts. In the previous example, suppose a dump truck drove by during the first part of your conversation, thereby interfering with your friend's ability to hear you correctly. The sequence your friend received might look like 011001111000111, as shown in Figure 3-4. He would erroneously conclude that the original sequence was 10101.


For this reason, interleavers were introduced to spread out the bits in block errors that might occur, thus making them look more independent. An interleaver can be either a software or hardware construct; regardless, its main purpose is to spread out adjacent bits by placing nonadjacent bits between them. Working with the same example, instead of just reading the 15-bit sequence to your friend, you might enter the bits five at a time into the rows of a matrix and then read them out as columns three bits at a time, as shown in Figure 3-5. Your friend would then write them into a matrix in columns three bits at a time, read them out in rows five bits at a time, and apply the coding rule to retrieve the original sequence.
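The 3x5 row/column interleaver from this example can be sketched as follows, writing in by rows, reading out by columns, and reversing the process at the receiver:

```python
# Row/column block interleaver: write 5 bits per row into a 3x5 matrix,
# read out by columns; the deinterleaver reverses the process.
def interleave(bits: str, rows: int = 3, cols: int = 5) -> str:
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return "".join(matrix[r][c] for c in range(cols) for r in range(rows))

def deinterleave(bits: str, rows: int = 3, cols: int = 5) -> str:
    columns = [bits[c * rows:(c + 1) * rows] for c in range(cols)]
    return "".join(columns[c][r] for r in range(rows) for c in range(cols))

coded = "000111111000111"            # the repeat-three codeword for 01101
sent = interleave(coded)
print(deinterleave(sent) == coded)   # True: the round trip restores order
```

A burst of errors on the interleaved stream lands in different 3-bit groups after deinterleaving, so the majority-vote decoder can still correct them.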


Symbol Mapping and Modulation

The modulation process applies the bit stream to a carrier at the operating frequency band. Think of the carrier as a simple sine wave; the modulation process can be applied to the amplitude, the frequency, or the phase. Figure 3-6 provides an example of each of these techniques.