Wednesday, December 23, 2009

Encryption in the 802.11 Standard

The 802.11 specification provides data privacy with the WEP algorithm. WEP is based on the
RC4 symmetric stream cipher. The symmetric nature of RC4 requires that matching WEP keys, either 40 or 104 bits in length, be statically configured on client devices and access points (APs). WEP was chosen primarily because of its low computational overhead. Although 802.11-enabled PCs are common today, that was not the case back in 1997, when the majority of WLAN devices were application-specific devices (ASDs). Examples of ASDs include barcode scanners, tablet PCs, and 802.11-based phones. The applications that run on ASDs generally do not require much computational power, so ASDs have meager CPUs. WEP is simple to implement; in some cases, you can write it in as few as 30 lines of code. The low overhead incurred by WEP made it an ideal encryption algorithm for ASDs.

To avoid the ECB mode of encryption, WEP uses a 24-bit IV, which is concatenated to the key
before being processed by the RC4 cipher. Figure 4-5 shows a WEP-encrypted frame, including the IV.
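The seeding described above (the 24-bit IV concatenated to the secret key and fed to RC4) can be sketched in a few lines of Python. This is an illustrative model only; the key-index byte and ICV handling of a real WEP frame are omitted:

```python
# Illustrative sketch of WEP-style encryption: RC4 keyed with IV || secret key.

def rc4(key: bytes, length: int) -> bytes:
    """Generate `length` bytes of RC4 key stream from `key` (KSA + PRGA)."""
    # Key-scheduling algorithm (KSA)
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    # Pseudo-random generation algorithm (PRGA)
    out = bytearray()
    i = j = 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return bytes(out)

def wep_encrypt(iv: bytes, key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the RC4 key stream seeded by IV || key."""
    stream = rc4(iv + key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, stream))

iv = b'\x01\x02\x03'            # 24-bit IV, transmitted in the clear
key = b'\x11\x22\x33\x44\x55'   # 40-bit static WEP key
ct = wep_encrypt(iv, key, b'hello')
# Decryption is the same operation: XOR with the same key stream.
assert wep_encrypt(iv, key, ct) == b'hello'
```

Because the cipher is a simple XOR with the key stream, encryption and decryption are the same function, which is part of why WEP fits on meager CPUs.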

The IV must change on a per-frame basis to avoid IV collisions. IV collisions occur when the same IV and WEP key are used on more than one frame, producing identical key streams. Such collisions give attackers a better opportunity to guess the plaintext data by
seeing similarities in the ciphertext. The point of using an IV is to prevent this scenario, so it is important to change the IV often. Most vendors offer per-frame IVs on their WLAN devices.


The 802.11 specification requires that matching WEP keys be statically configured on both
client and infrastructure devices. You can define up to four keys on a device, but you can use only one at a time for encrypting outbound frames. Figure 4-6 shows a Cisco Aironet client configuration screen for WEP configuration.



In addition to data encryption, the 802.11 specification provides a 32-bit integrity check value (ICV) for each frame. This check tells the receiver that the frame has arrived without being corrupted during transmission. It augments the Layer 1 and Layer 2 frame check sequences (FCSs), which are designed to catch transmission-related errors.

The ICV is calculated against all fields in the frame using a cyclic redundancy check (CRC)-32 polynomial function. The sender calculates the values and places the result in the ICV field. The ICV is included in the WEP-encrypted portion of the frame, so it is not plainly visible to eavesdroppers. The frame receiver decrypts the frame, calculates an ICV value, and compares what it calculates against what has arrived in the ICV field. If the values match, the frame is considered to be genuine and untampered with. If they don't match, the frame is discarded. Figure 4-8 diagrams the ICV operation.
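A minimal sketch of the ICV operation, using Python's `zlib` CRC-32. In a real frame, both the payload and the ICV would then be RC4-encrypted; that step and the exact field layout are simplified here:

```python
import zlib

# Sender appends a CRC-32 of the payload; the receiver recomputes and compares.

def add_icv(payload: bytes) -> bytes:
    icv = zlib.crc32(payload).to_bytes(4, 'little')
    return payload + icv

def check_icv(frame: bytes) -> bool:
    payload, icv = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, 'little') == icv

frame = add_icv(b'some 802.11 payload')
assert check_icv(frame)

# A single flipped bit in transit is caught, and the frame would be discarded:
tampered = bytes([frame[0] ^ 0x01]) + frame[1:]
assert not check_icv(tampered)
```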

Wednesday, December 2, 2009

Overview of Encryption

Data encryption mechanisms are based on cipher algorithms that give data a randomized
appearance. Two types of ciphers exist:
  • Stream ciphers
  • Block ciphers

Both cipher types operate by generating a key stream from a secret key value. The key stream is mixed with the data, or plaintext, to produce the encrypted output, or ciphertext. The two cipher types differ in the size of the data they operate on at a time.

A stream cipher generates a continuous key stream based on the key value. For example, a stream cipher can generate a 15-byte key stream to encrypt one frame and a 200-byte key stream to encrypt another. Figure 4-2 illustrates stream cipher operation. Stream ciphers are small and efficient encryption algorithms and as a result do not incur extensive CPU usage. A commonly used stream cipher is RC4, which is the basis of the WEP algorithm.

Figure 4-3. Block Cipher Operation

The process of encryption described here for stream ciphers and block ciphers is known as Electronic Code Book (ECB) encryption mode. ECB mode encryption has the characteristic that the same plaintext input always generates the same ciphertext output. This factor is a potential security threat because eavesdroppers can see patterns in the ciphertext and start making educated guesses about the original plaintext.

Some encryption techniques can overcome this issue:
  • Initialization vectors
  • Feedback modes

Initialization Vectors

An initialization vector (IV) is a number added to the key that has the end result of altering the key stream. The IV is concatenated to the key before the key stream is generated. Every time the IV changes, so does the key stream. Figure 4-4 shows two scenarios. The first is stream cipher encryption without the use of an IV. In this case, the plaintext DATA, when mixed with the key stream 12345, always produces the ciphertext AHGHE. The second scenario shows the same plaintext mixed with the IV-augmented key stream to generate different ciphertext. Note that the ciphertext output in the second scenario is different from the ciphertext output in the first. The 802.11 world recommends that you change the IV on a per-frame basis. This way, if the same frame is transmitted twice, it is highly probable that the resulting ciphertext is different for each frame.

Figure 4-4. Encryption and Initialization Vectors
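The two scenarios above can be mimicked in a few lines of Python. SHA-256 stands in for a real stream cipher here, purely to show that a changed IV yields a changed key stream and therefore changed ciphertext:

```python
import hashlib

# Toy demonstration of the IV idea: the key stream is derived from IV || key,
# so changing the IV changes the ciphertext even when the plaintext repeats.
# SHA-256 is an illustrative stand-in, not the 802.11 mechanism.

def keystream(iv: bytes, key: bytes, length: int) -> bytes:
    return hashlib.sha256(iv + key).digest()[:length]

def encrypt(iv: bytes, key: bytes, plaintext: bytes) -> bytes:
    ks = keystream(iv, key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b'secret'
ct1 = encrypt(b'\x00\x00\x01', key, b'DATA')   # frame 1, IV = 1
ct2 = encrypt(b'\x00\x00\x02', key, b'DATA')   # frame 2, IV = 2

assert ct1 != ct2                              # same plaintext, different ciphertext
assert encrypt(b'\x00\x00\x01', key, ct1) == b'DATA'   # XOR again to decrypt
```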

Friday, November 13, 2009

802.11 Wireless LAN Security

Wireless Security

Imagine extending a long Ethernet cable from your internal network outside your office and laying it on the ground in the parking lot. Anyone who wants to use your network can simply plug into that network cable. Connecting unsecured WLANs to your internal network has the potential to offer the same opportunity.

802.11-based devices communicate with one another using radio frequencies (RFs) as the carrier signal for data. The data is broadcast from the sender in the hopes that the receiver is within RF range. The drawback to this mechanism is that any other station within range of the RF also receives the data.

Without a security mechanism of some sort, any 802.11 station can process the data sent on a WLAN, as long as that receiver is in RF range. To provide a minimum level of security in a WLAN, you need two components:
  • A means to decide who or what can use a WLAN— This requirement is satisfied by authentication mechanisms for LAN access control.
  • A means to provide privacy for the wireless data— The requirement is satisfied by encryption algorithms.
As Figure 4-1 depicts, wireless security consists of both authentication and encryption. Neither mechanism alone is enough to secure a wireless network.


The 802.11 specification defines Open and Shared Key authentication and WEP to provide device authentication and data privacy, respectively. The Open and Shared Key algorithms both rely on WEP encryption and possession of the WEP keys for access control. Because of the importance of WEP in 802.11 security, the following section focuses on the basics of encryption and ciphers in general.

Thursday, November 5, 2009

802.11g WLANs

The IEEE 802.11g standard, approved in June 2003, introduces an extended rate PHY (ERP) to provide support for data rates up to 54 Mbps in the 2.4 GHz ISM band by borrowing from the OFDM techniques introduced by 802.11a. In contrast to 802.11a, it provides backward compatibility with 802.11b because 802.11g devices can fall back in data rate to the slower 802.11b speeds. Three modulation schemes are defined: ERP-OFDM, ERP-PBCC, and DSSS-OFDM. The ERP-OFDM form specifically provides mechanisms for 6, 9, 12, 18, 24, 36, 48, and 54 Mbps, with the 6, 12, and 24 Mbps data rates being mandatory, in addition to the 1, 2, 5.5, and 11 Mbps data rates. The standard also allows for optional PBCC modes at 22 and 33 Mbps as well as optional DSSS-OFDM modes at 6, 9, 12, 18, 24, 36, 48, and 54 Mbps. This section describes the changes necessary to form the ERP-OFDM, ERP-PBCC, and DSSS-OFDM.


802.11g PLCP

The 802.11g standard defines five PPDU formats: long preamble, short preamble, ERP-OFDM preamble, a long DSSS-OFDM preamble, and a short DSSS-OFDM preamble. Support for the first three is mandatory, but support for the latter two is optional. Table 3-16 summarizes the different preambles and the modulation schemes and data rates they support or are interoperable with.


The long preamble uses the same long preamble defined in the HR-DSSS but with the Service field modified as shown in Table 3-17.


The length extension bits resolve the ambiguity in the number of octets conveyed by the Length field when the 11 Mbps PBCC and the 22 and 33 Mbps ERP-PBCC modes are in use.

The long DSSS-OFDM preamble PPDU format appears in Figure 3-29. You set the rate subfield in the Signal field to 3 Mbps. This setting ensures compatibility with non-ERP stations because they can still read the Length field and defer, despite not being able to demodulate the payload. The PLCP header matches that of the long preamble just described, and the preamble is the same as for the HR-DSSS. Both the preamble and the header are transmitted at 1 Mbps using DBPSK, and the PSDU is transmitted at the appropriate OFDM data rate. The header is scrambled using the HR-DSSS scrambler, and the data symbols are scrambled using the 802.11a scrambler.


Much like the long DSSS-OFDM preamble, the short preamble DSSS-OFDM PPDU format uses the HR-DSSS short preamble and header, transmitted at a 2 Mbps data rate and scrambled with the HR-DSSS scrambler; the data symbols are transmitted with OFDM and use the 802.11a scrambler.


ERP-OFDM

As previously stated, the ERP-OFDM provides a mechanism to use the 802.11a data rates in the ISM band in a manner that is backward compatible with DSSS and HR-DSSS. In addition to utilizing the 802.11a OFDM modulation under the 2.4 GHz frequency plan, ERP-OFDM also mandates that the transmit center frequency and the symbol clock frequency be locked to the same oscillator, which was optional for DSSS. It utilizes a 20-microsecond slot time, which can drop to 9 microseconds if only ERP devices are present in the BSS.

Saturday, October 17, 2009

802.11a WLANs

At the same time that the 802.11b-1999 draft introduced the HR-DSSS PHY, the 802.11a-1999 draft introduced the Orthogonal Frequency Division Multiplexing (OFDM) PHY for the 5 GHz band. It provided mandatory data rates up to 24 Mbps and optional rates up to 54 Mbps in the Unlicensed National Information Infrastructure (U-NII) bands of 5.15 to 5.25 GHz, 5.25 to 5.35 GHz, and 5.725 to 5.825 GHz. 802.11a utilizes 20 MHz channels and defines four channels in each of the three U-NII bands. This section provides you with the details to understand how to support OFDM.


802.11j

The IEEE 802.11j draft amendment for LAN/metropolitan-area networks (MAN) requirements provides for 802.11a type operation in the 4.9 GHz band allocated in Japan and in the U.S. for public safety applications as well as in the 5.03 to 5.091 GHz Japanese allocation. A channel numbering scheme uses channels 240 to 255 to cover these frequencies in 5 MHz channel increments.


OFDM Basics

Consider the simple QPSK symbol first introduced in the section, "Physical Layer Building Blocks," and then consider the transmission of two consecutive symbols. As these symbols travel through the transmission medium from the transmitter to the receiver, they experience distortions, and various parts of the signal can be delayed. If these delays are long enough, the first symbol might overlap in time with the second symbol. This overlapping is ISI. The time delay from the reception of the first instance of the signal until the last instance is referred to as the delay spread of the channel. You can also think of it as the amount of time that the first symbol spreads into the second. Traditionally, designers address ISI in one of two ways: by employing symbols that are long enough to be decoded correctly in the presence of ISI or by equalizing to remove the distortion that the ISI causes. The former method limits the symbol rate to something less than the bandwidth of the channel, which is inversely proportional to the delay spread. As the bandwidth of the channel increases, you can increase the symbol rate, thereby achieving a higher end data rate. The latter method, often used in conjunction with the former, requires ever more complicated and expensive channel-equalization schemes to maximize the usable bandwidth of the channel.

Multichannel modulation schemes take a completely different approach. As a multichannel modulation designer, you break up the channel into small, independent, parallel or orthogonal transmission channels upon which narrowband signals, with a low symbol rate, are modulated, usually in the frequency domain, onto individual subcarriers. Similar to how you can modulate an FHSS signal onto the appropriate carrier, you break the channel into N independent channels. For a given channel bandwidth, the larger the N that you choose, the longer the symbol period and the narrower the subchannel, so you can see that as the number of subchannels goes to infinity, the ISI goes to zero.

To build these independent symbols, a useful tool is the Fast Fourier Transform (FFT), which is an efficient implementation of a Discrete Fourier transform (DFT) and can convert a time domain signal to the frequency domain and vice versa. In the frequency domain, you generate N 4-QAM (Quadrature Amplitude Modulation) symbols, which are then converted to the time domain using an inverse FFT (IFFT). You should also know that making the size of the FFT a power of two allows for simple and efficient implementations. For that reason, OFDM systems usually pick N such that it is a power of two.

Without going into mathematics beyond the scope of this book, the processing is greatly simplified if everything is done in the frequency domain using FFTs. To enable this processing at the receiver, however, the received signal must be a circular convolution of the input with the channel, as opposed to just a convolution. Convolution is a mathematical mechanism for passing a signal through a channel and determining the output. To ensure this property, you must take the time domain representation of an OFDM symbol and create a cyclic prefix by repeating the final n samples at the beginning. Figure 3-22 shows this process, where n is the length of the cyclic prefix and N is the size of the FFT in use.
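The cyclic-prefix property described above can be verified numerically with a small pure-Python DFT. The FFT size, prefix length, and channel taps below are arbitrary illustrative values:

```python
import cmath

# Prepending the last CP time-domain samples makes the channel's linear
# convolution act like a circular convolution, so each subcarrier k sees a
# simple per-subcarrier scaling: Y[k] = H[k] * X[k].

N = 8          # FFT size (a power of two, as the text recommends)
CP = 3         # cyclic prefix length; must cover the channel delay spread

def dft(x, inverse=False):
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / len(x))
               for n in range(len(x))) for k in range(len(x))]
    return [v / len(x) for v in out] if inverse else out

# N QPSK symbols in the frequency domain
X = [complex(1 - 2 * (k & 1), 1 - 2 * ((k >> 1) & 1)) for k in range(N)]
x = dft(X, inverse=True)          # time-domain OFDM symbol (IFFT)
tx = x[-CP:] + x                  # prepend the cyclic prefix

h = [0.9, 0.0, 0.3]               # toy 3-tap multipath channel (delay < CP)
rx = [sum(h[m] * tx[n - m] for m in range(len(h)) if 0 <= n - m < len(tx))
      for n in range(len(tx))]    # linear convolution through the channel

y = rx[CP:CP + N]                 # strip the prefix at the receiver
Y = dft(y)                        # back to the frequency domain
H = dft([h[m] if m < len(h) else 0.0 for m in range(N)])

# One-tap equalization per subcarrier recovers the QPSK symbols exactly
for k in range(N):
    assert abs(Y[k] / H[k] - X[k]) < 1e-9
```

The same check fails if the cyclic prefix is shorter than the channel's delay spread, which is why the prefix length is chosen against the worst-case channel.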


Unlike some other multichannel modulation techniques, OFDM places an equal number of bits in all subchannels. In nonwireless applications such as asynchronous digital subscriber line (ADSL), where the channel is not as time varying, the transmitter uses knowledge of the channel and transmits more bits, or information, on those subcarriers that are less distorted or attenuated.

Wednesday, October 7, 2009

802.11b WLANs

The 802.11b 1999 draft introduced high-rate DSSS (HR-DSSS), which enables you to operate your WLAN at data rates up to and including 5.5 Mbps and 11 Mbps in the 2.4 GHz ISM band, using complementary code keying (CCK) or optionally packet binary convolutional coding (PBCC). HR-DSSS uses the same channelization scheme as DSSS with a 22 MHz bandwidth and 11 channels, 3 nonoverlapping, in the 2.4 GHz ISM band. This section provides you with the details to understand how these higher rates are supported.

802.11b HR-DSSS PLCP

The PLCP sublayer for HR-DSSS has two PPDU frame types: long and short. The preamble and header in the 802.11b HR-DSSS long PLCP are always transmitted at 1 Mbps to maintain backward compatibility with DSSS. In fact, the HR-DSSS long PLCP is the same as the DSSS
PLCP but with some extensions to support the higher data rates.


802.11b PMD-CCK Modulation

Although the spreading mechanism used to achieve 5.5 Mbps and 11 Mbps with CCK is related to the techniques you employ for 1 and 2 Mbps, it is still unique. In both cases, you employ a spreading technique, but for CCK, the spreading code is an 8-complex-chip code, whereas 1 and 2 Mbps operation uses an 11-chip code. The 8-chip code is determined by either four or eight bits, depending upon the data rate. The chip rate is 11 Mchips/second, so with 8 complex chips per symbol and 4 or 8 bits per symbol, you achieve the data rates of 5.5 Mbps and 11 Mbps.

To transmit at 5.5 Mbps, you take the scrambled PSDU bit stream and group it into symbols of 4 bits each: (b0, b1, b2, and b3). You use the latter two bits (b2, b3) to determine an 8 complex chip sequence, as shown in Table 3-11, where {c1, c2, c3, c4, c5, c6, c7, c8} represent the chips in the sequence. In Table 3-11, j represents the imaginary number, sqrt(-1), and appears on the imaginary or quadrature axis in the complex plane.

Now with the chip sequence determined by (b2, b3), you use the first two bits (b0, b1) to determine a DQPSK phase rotation that is applied to the sequence. Table 3-12 shows this process. You must also number each 4-bit symbol of the PSDU, starting with 0, so that you can determine whether you are mapping an odd or an even symbol according to the table. You will also note that you use DQPSK, not QPSK, and as such, these represent phase changes relative to the previous symbol or, in the case of the first symbol of the PSDU, relative to the last symbol of the preceding 2 Mbps DQPSK symbol.

Apply this phase rotation to the 8 complex chip symbol and then modulate that to the appropriate carrier frequency.
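That final step, rotating the 8-complex-chip symbol by a DQPSK phase selected from (b0, b1), looks like the sketch below. The chip sequence and the phase are placeholders, not values from Tables 3-11 and 3-12:

```python
import cmath

# Illustrative DQPSK rotation of an 8-complex-chip CCK symbol: multiply every
# chip by e^(j*phase). Placeholder values only; the real sequences and the
# even/odd phase table come from the standard.

def rotate(chips, phase):
    w = cmath.exp(1j * phase)
    return [c * w for c in chips]

chips = [1, 1j, -1, -1j, 1, -1, 1j, 1]        # example 8-chip sequence
rotated = rotate(chips, cmath.pi / 2)          # e.g. (b0, b1) -> +90 degrees

# A phase rotation preserves chip magnitudes; only the phase changes.
assert all(abs(abs(r) - abs(c)) < 1e-12 for r, c in zip(rotated, chips))
```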

PBCC Modulation

As already indicated, the HR-DSSS standard also defines an optional PBCC modulation mechanism for generating 5.5 Mbps and 11 Mbps data rates. This scheme is a bit different from both CCK and 802.11 DSSS. You first pass the scrambled PSDU bits through a half-rate binary convolutional encoder, which was first introduced in the section, "Physical Layer Building Blocks." The particular half-rate encoder has six delay, or memory, elements and outputs 2 bits for every 1 input bit. Because 802.11 works under a frame structure and convolutional encoders have memory, you must zero all the delay elements at the beginning of a frame and append one octet of zeros at the end of the frame to ensure all bits are equally protected. This final octet explains why the length calculation, discussed in the section, "802.11b HR-DSSS PLCP," is slightly different for CCK and PBCC. You then pass the encoded bit stream through a BPSK symbol mapper to achieve the 5.5 Mbps data rate or through a QPSK symbol mapper to achieve the 11 Mbps data rate. (You do not employ differential encoding here.) The particular symbol mapping you use depends upon the binary value, s, coming out of a 256-bit pseudo-random cover sequence. The two QPSK symbol mappings appear in Figure 3-19, and the two BPSK symbol mappings appear in Figure 3-20. For PSDUs longer than 256 bits, the pseudo-random sequence is merely repeated.

Monday, September 28, 2009

802.11 Wireless LANs

The original 802.11 standard defined two WLAN PHY methods:
  • 2.4 GHz frequency hopping spread spectrum (FHSS)
  • 2.4 GHz direct sequence spread spectrum (DSSS)

Frequency Hopping WLANs

FHSS WLANs support 1 Mbps and 2 Mbps data rates. As the name implies, an FHSS device changes or "hops" frequencies with a predetermined hopping pattern and a set rate, as depicted in Figure 3-8. FHSS devices split the available spectrum into 79 nonoverlapping channels (for North America and most of Europe) across the 2.402 to 2.480 GHz frequency range. Each of the 79 channels is 1 MHz wide, so FHSS WLANs use a relatively fast 1 MHz symbol rate and hop among the 79 channels at a much slower rate.


The hopping sequence must hop at a minimum rate of 2.5 hops per second, and each hop must span a minimum of six channels (6 MHz). To minimize the collisions between overlapping coverage areas, the possible hopping sequences can be broken down into three sets of length 26 for use in North America and most of Europe. Tables 3-1 through 3-4 show the minimum overlap hopping patterns for different countries, including the U.S., Japan, Spain, and France.



In essence, the hopping patterns provide a slow path through the possible channels in such a way that each hop covers at least 6 MHz and, when considering a multicell deployment, minimizes the probability of a collision. The reduced set length for countries such as Japan, Spain, and France results from the smaller ISM band frequency allocation at 2.4 GHz.
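As a toy illustration of such pattern families (the real patterns come from the tables in the standard), multiple non-colliding sequences can be derived from a single base permutation. The base pattern and offset construction below are hypothetical:

```python
# Illustrative hop-sequence generator for the North American 79-channel plan.
# The stride-by-7 base pattern and the offset derivation are placeholders that
# merely show how a family of sequences can visit every channel, keep each hop
# at least 6 MHz wide, and never collide with a sibling sequence in time.

CHANNELS = 79   # 1 MHz channels from 2.402 to 2.480 GHz

def hop_sequence(base, offset):
    """Derive a hop pattern by offsetting a base permutation (hypothetical)."""
    return [(b + offset) % CHANNELS for b in base]

base = [(i * 7) % CHANNELS for i in range(CHANNELS)]  # toy base pattern
seq_a = hop_sequence(base, 0)
seq_b = hop_sequence(base, 1)

# Each sequence visits every channel exactly once...
assert sorted(seq_a) == list(range(CHANNELS))
# ...each hop moves at least 6 channels (6 MHz)...
assert all(abs(seq_a[i + 1] - seq_a[i]) >= 6 for i in range(CHANNELS - 1))
# ...and two co-located cells on different sequences never collide in time.
assert all(a != b for a, b in zip(seq_a, seq_b))
```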


FHSS PLCP

After the MAC layer passes a MAC frame, also known as a PLCP service data unit (PSDU) in FHSS WLANs, to the PLCP sublayer, the PLCP adds two fields to the beginning of the frame to form a PPDU frame. Figure 3-9 shows the FHSS PLCP frame format.


Direct Sequence Spread Spectrum WLANs

DSSS is another physical layer for the 802.11 specifications. As defined in the 1997 802.11 standard, DSSS supports data rates of 1 and 2 Mbps. In 1999, the 802.11 Working Group ratified the 802.11b standard to support data rates of 5.5 and 11 Mbps. The 802.11b DSSS physical layer is compatible with existing 802.11 DSSS WLANs. The PLCP for 802.11b DSSS is the same as that for 802.11 DSSS, with the addition of an optional short preamble and short header.



802.11 DSSS

Similar to the PLCP sublayer for FHSS, the PLCP for 802.11 DSSS adds two fields to the MAC frame to form the PPDU: the PLCP preamble and PLCP header. The frame format appears in Figure 3-14.


DSSS Basics

Spread-spectrum techniques take a modulation approach that uses a much wider spectrum bandwidth than necessary to communicate information at a much lower rate. Each bit is replaced, or spread, by a wideband spreading code. Much like coding, because the information is spread across many more bits, the signal can operate in low signal-to-noise ratio (SNR) conditions, whether caused by interference or low transmitter power. With DSSS, the transmitted signal is directly multiplied by a spreading sequence shared by the transmitter and receiver.

Friday, September 18, 2009

Physical Layer Building Blocks

To understand the different PMDs that each 802.11 PHY provides, you must first understand the following basic PHY concepts and building blocks:
  • Scrambling
  • Coding
  • Interleaving
  • Symbol mapping and modulation

Scrambling

One of the foundations of modern transmitter design that enables the transfer of data at high speeds is the assumption that the data you provide appears to be random from the transmitter's perspective. Without this assumption, many of the gains made from the other building blocks would not be realized. However, it is conceivable and actually common for you to receive data that is not at all random and might, in fact, contain repeatable patterns or long sequences of 1s or 0s. Scrambling is a method for making the data you receive look more random by performing a mapping between bit sequences, from structured to seemingly random sequences. It is also referred to as whitening the data stream. The receiver descrambler then remaps these random sequences into their original structured sequence. Most scrambling methods are self-synchronizing, meaning that the descrambler is able to sync itself to the state of the scrambler.
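A self-synchronizing scrambler of the kind described above can be sketched with the x^7 + x^4 + 1 polynomial that 802.11 DSSS uses. The tap positions below reflect that polynomial, though register conventions vary between descriptions:

```python
# Self-synchronizing scrambler sketch, polynomial x^7 + x^4 + 1. The scrambler
# XORs each input bit with two shift-register taps and feeds the *output* bit
# back; the descrambler feeds the *received* bit in instead, so it locks onto
# the scrambler's state after seven bits regardless of its own initial state.

def scramble(bits, state=None):
    s = state or [0] * 7            # shift register, s[0] = newest bit
    out = []
    for b in bits:
        x = b ^ s[3] ^ s[6]         # taps at delays 4 and 7
        out.append(x)
        s = [x] + s[:-1]            # feed the scrambled bit back
    return out

def descramble(bits, state=None):
    s = state or [0] * 7
    out = []
    for b in bits:
        out.append(b ^ s[3] ^ s[6])
        s = [b] + s[:-1]            # feed the received bit in
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0] * 4
assert descramble(scramble(data)) == data

# Self-synchronizing: a mismatched initial state garbles only the first 7 bits.
rx = descramble(scramble(data), state=[1] * 7)
assert rx[7:] == data[7:]
```

Note how a long run of identical input bits still produces a varied output, which is the "whitening" effect the text describes.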


Coding

Although scrambling is an important tool that has allowed engineers to develop communications systems with higher spectral efficiency, coding is the mechanism that has enabled the high-speed transmission of data over noisy channels. All transmission channels are noisy, which introduces errors in the form of corrupted or modified bits. Coding allows you to maximize the amount of data that you send over a noisy communication medium. You can do so by replacing sequences of bits with longer sequences that allow you to recognize and correct a corrupted bit. For example, as shown in Figure 3-3, if you want to communicate the sequence 01101 over the telephone to your friend, you might instead agree with your friend that you will repeat each bit three times, resulting in the sequence 000111111000111. Even if your friend mistook some of the bits at his end—resulting in the sequence 100111111000101, with the first and the second-to-last bits corrupted—he would recognize that the original sequence was 01101 via a majority voting scheme. Although this coder is rather simple and not efficient, you now understand the concept behind coding.
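The repeat-three, majority-vote scheme from the telephone example, written out:

```python
# Repetition code: repeat each bit three times; decode by majority vote.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

tx = encode([0, 1, 1, 0, 1])
assert tx == [0,0,0, 1,1,1, 1,1,1, 0,0,0, 1,1,1]

# Independent single-bit errors (bits 1 and 14 flipped, as in the example)
# are corrected by the majority vote:
rx = tx[:]
rx[0] ^= 1
rx[13] ^= 1
assert decode(rx) == [0, 1, 1, 0, 1]
```

This is a rate 1/3 code: three channel bits carry one information bit, which is exactly the inefficiency the text notes.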


The most common type of coding in communications systems today is the convolutional coder because it can be implemented rather easily in hardware with delays and adders. In contrast to the preceding code, which is a memory-less block code, the convolutional code is a finite memory code, meaning that the output is a function not just of the current input, but also of several of the past inputs. The constraint length of a code indicates how long it takes in output units for an input to fall out of the system. Codes are often described through their rate. You might see a rate 1/2 convolutional coder. This rate indicates that for every one input bit, two output bits are produced. When comparing coders, note that although higher rate codes support communication at higher data rates, they are also correspondingly more sensitive to noise.
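A rate 1/2 convolutional encoder with six delay elements (constraint length 7) can be sketched as follows. The generator polynomials 133/171 (octal) are the well-known pair used by 802.11a, chosen here purely for illustration; other codes, such as PBCC's, use different generators:

```python
# Rate 1/2 convolutional encoder sketch: six delay elements, two output bits
# per input bit. Each output is the parity of the register bits selected by a
# generator polynomial.

G0, G1 = 0o133, 0o171     # illustrative generator polynomials (802.11a's pair)

def conv_encode(bits):
    state = 0                              # six delay elements, zeroed
    out = []
    for b in bits:
        reg = (b << 6) | state             # current bit plus past six bits
        out.append(bin(reg & G0).count('1') % 2)
        out.append(bin(reg & G1).count('1') % 2)
        state = reg >> 1                   # shift: current bit enters memory
    return out

coded = conv_encode([1, 0, 1, 1, 0, 0, 1, 0])
assert len(coded) == 16                    # rate 1/2: two outputs per input
assert conv_encode([0] * 5) == [0] * 10    # zeroed memory, zero in, zero out
```

Note how the output depends on the current bit and the six previous bits, which is what "finite memory" means, and why a frame-based system must zero the register at the start of each frame.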


Interleaving

One of the base assumptions of coding is that errors introduced in the transmission of information are independent events. This assumption held in the earlier example where you were communicating a sequence of bits over the phone to your friend and bits 1 and 14 were corrupted. However, you might often find that bit errors are not independent and that they occur in batches. In the previous example, suppose a dump truck drove by during the first part of your conversation, thereby interfering with your friend's ability to hear you correctly. The sequence your friend received might look like 011001111000111, as shown in Figure 3-4. He would erroneously conclude that the original sequence was 10101.


For this reason, interleavers were introduced to spread out the bits in block errors that might occur, thus making them look more independent. An interleaver can be either a software or hardware construct; regardless, its main purpose is to spread out adjacent bits by placing nonadjacent bits between them. Working with the same example, instead of just reading the 15-bit sequence to your friend, you might enter the bits five at a time into the rows of a matrix and then read them out as columns three bits at a time, as shown in Figure 3-5. Your friend would then write them into a matrix in columns three bits at a time, read them out in rows five bits at a time, and apply the coding rule to retrieve the original sequence.
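The 3-row by 5-column matrix interleaver from the example can be written directly as index arithmetic:

```python
# Block interleaver: write the 15 coded bits into a matrix in rows of five,
# read them out in columns of three; the deinterleaver reverses the mapping.

ROWS, COLS = 3, 5

def interleave(bits):
    # bits[r * COLS + c] is row r, column c; read out column by column
    return [bits[r * COLS + c] for c in range(COLS) for r in range(ROWS)]

def deinterleave(bits):
    return [bits[c * ROWS + r] for r in range(ROWS) for c in range(COLS)]

coded = [0,0,0, 1,1,1, 1,1,1, 0,0,0, 1,1,1]   # the repeat-three sequence
tx = interleave(coded)
assert deinterleave(tx) == coded

# After interleaving, a burst of adjacent channel errors lands in different
# three-bit code groups once deinterleaved, so majority voting still wins.
```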


Symbol Mapping and Modulation

The modulation process applies the bit stream to a carrier at the operating frequency band. Think of the carrier as a simple sine wave; the modulation process can be applied to the amplitude, the frequency, or the phase. Figure 3-6 provides an example of each of these techniques.

Monday, August 24, 2009

802.11 Physical Layer Technologies

The ratification of the 1999 802.11a and 802.11b standards transformed wireless LAN (WLAN) technology from a niche solution for the likes of barcode scanners to a generalized solution for portable, low-priced, interoperable network access. Today, many vendors offer 802.11a and 802.11b clients and access points that provide performance comparable to wired Ethernet. The lack of a wired network connection gives users the freedom to be mobile as they use their devices. Although standardization has been key, the use of unlicensed frequencies, where a costly and time-consuming licensing process is not required, has also contributed to a rapid and pervasive spread of the technology.

802.11 as a standards body actually defined a number of different physical layer (PHY) technologies to be used with the 802.11 MAC. This chapter examines each of these 802.11 PHYs, including the following:
  • The 802.11 2.4 GHz frequency hopping PHY
  • The 802.11 2.4 GHz direct sequencing PHY
  • The 802.11b 2.4 GHz direct sequencing PHY
  • The 802.11a 5 GHz Orthogonal Frequency Division Multiplexing (OFDM) PHY
  • The 802.11g 2.4 GHz extended rate physical (ERP) layer
802.3 Ethernet has evolved over the years to include 802.3u Fast Ethernet and 802.3z/802.3ab Gigabit Ethernet. In much the same way, 802.11 wireless Ethernet is evolving with 802.11b high-rate direct sequence spread spectrum (HR-DSSS) and 802.11a OFDM standards and the recent addition of the 802.11g ERP. In fact, the physical layer for each 802.11 type is the main differentiator between them.


Wireless Physical Layer Concepts

The 802.11 PHYs essentially provide wireless transmission mechanisms for the MAC, in addition to supporting secondary functions such as assessing the state of the wireless medium and reporting it to the MAC. Because these transmission mechanisms are independent of the MAC, 802.11 can advance the MAC and the PHY separately, as long as the interface between them is maintained. This independence between the MAC and PHY is what has enabled the addition of the higher data rate 802.11b, 802.11a, and 802.11g PHYs. In fact, the MAC layer for each of the 802.11 PHYs is the same.

Each of the 802.11 physical layers has two sublayers:
  • Physical Layer Convergence Procedure (PLCP)
  • Physical Medium Dependent (PMD)
Figure 3-1 shows how the sublayers are oriented with respect to each other and the upper layers.

The PLCP is essentially a handshaking layer that enables MAC protocol data units (MPDUs) to be transferred between MAC stations over the PMD, which is the method of transmitting and receiving data through the wireless medium. In a sense, you can think of the PMD as a wireless transmission service function that is interfaced via the PLCP. The PLCP and PMD sublayers vary based on 802.11 types.

All PLCPs, regardless of 802.11 PHY type, have data primitives that provide the interface for the transfer of data octets between the MAC and the PMD. In addition, they provide primitives that enable the MAC to tell the PHY when to commence transmission and the PHY to tell the MAC when it has completed its transmission. On the receive side, PLCP primitives from the PHY to the MAC indicate when it has started to receive a transmission from another station and when that transmission is complete. To support the clear channel assessment (CCA) function, all PLCPs provide a mechanism for the MAC to reset the PHY CCA engine and for the PHY to report the current status of the wireless medium.

In general, the 802.11 PLCPs operate according to the state diagram in Figure 3-2. Their basic operating state is the carrier sense/clear channel assessment (CS/CCA) procedure. This procedure detects the start of a signal from another station and determines whether the channel is clear for transmitting. Upon receiving a Tx Start request, the PLCP transitions to the Transmit state by switching the PMD from receive to transmit and sends the PLCP protocol data unit (PPDU). It then issues a Tx End and returns to the CS/CCA state. The PLCP invokes the Receive state when the CS/CCA procedure detects the PLCP preamble and a valid PLCP header. If the PLCP detects an error, it indicates the error to the MAC and returns to the CS/CCA procedure.
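The state behavior just described can be captured in a small sketch. The state and primitive names below are illustrative, not taken from the standard's own notation:

```python
from enum import Enum, auto

# Hypothetical model of the PLCP state diagram: CS/CCA as the resting state,
# with transitions to Transmit on a Tx Start request and to Receive on
# detection of a preamble with a valid header.

class State(Enum):
    CS_CCA = auto()
    TRANSMIT = auto()
    RECEIVE = auto()

class Plcp:
    def __init__(self):
        self.state = State.CS_CCA

    def tx_start(self):
        assert self.state is State.CS_CCA
        self.state = State.TRANSMIT        # switch the PMD to transmit, send PPDU

    def tx_end(self):
        self.state = State.CS_CCA          # transmission complete

    def preamble_detected(self, header_valid: bool):
        if self.state is State.CS_CCA and header_valid:
            self.state = State.RECEIVE
        # invalid header: report the error to the MAC, remain in CS/CCA

    def rx_end(self):
        self.state = State.CS_CCA

plcp = Plcp()
plcp.tx_start()
plcp.tx_end()
plcp.preamble_detected(header_valid=True)
assert plcp.state is State.RECEIVE
```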

Sunday, August 16, 2009

Nonstandard Devices

Although the previous section described how 802.11-standards–based devices access the wireless medium, this section discusses devices that fall outside of the 802.11 standard. These devices use the 802.11 technology in a way that violates or extends an area of the standard and that might prove useful in your network. The specific devices under consideration are:
  • Repeater APs
  • Universal clients (workgroup bridges)
  • Wireless bridges
Although each of these devices provides a useful networking tool, remember that none is currently defined in the 802.11 standard, and there are no interoperability guarantees because different vendors may implement these tools in different ways. For the reliability of your network, if you choose to use these devices, ensure that they interface only with devices from the same vendor or with devices for which the vendor guarantees interoperability.


Repeater APs

You might find yourself in situations where it is not easy or convenient to connect an AP to the wired infrastructure, or where an obstruction makes it difficult for an AP on your wired network to directly serve clients in an area of your deployment. In such a scenario, you can employ a repeater AP. Figure 2-18 shows this scenario: Elaine is not directly visible to AP2, but she can see AP3, which is not connected to the wired network but can see AP2.

Much like a wired repeater, a wireless repeater merely retransmits all the packets that it receives on its wireless interface, on the same channel on which each packet was received. The repeater AP has the effect of extending both the BSS and the collision domain. Although it can be an effective tool, you must take care when employing it; the overlapping broadcast domains can effectively cut your throughput in half because the originating AP also hears the retransmission.


Wireless Bridges

If you extend the concept of a workgroup bridge even further to the point where you are connecting two or more wired networks, you arrive at the concept of wireless bridges. Similar to wired bridges, wireless bridges connect networks. You might bridge wirelessly because you need to connect networks that are inherently mobile. Alternatively, the networks to be connected might not be co-located, in which case wireless bridging provides a method for connecting these networks. The main distinction between bridges and workgroup bridges is that the latter are only wirelessly enabling a small network in an office environment, whereas the former can connect larger networks often separated by distances much greater than what is found in the WLAN environment. In fact, many vendors offer products that provide ranges which far exceed the definitions and limitations of 802.11. Figure 2-20 shows a wireless bridging example.

As shown in the figure, one of the bridges assumes the role of the AP in a WLAN network, and the other bridges act as clients. Although the basic 802.11 MAC and PHY sublayer technologies are utilized in wireless bridging, individual vendors have their own proprietary methods for the encapsulation of wired network traffic and for extending the range from a MAC and PHY sublayer perspective. For this reason, once again you should ensure that your wireless bridges are certified to interoperate.

Sunday, August 9, 2009

802.11 Medium Access Mechanisms

802.11-based WLANs use a mechanism similar to Ethernet's CSMA/CD, known as carrier sense multiple access with collision avoidance (CSMA/CA). CSMA/CA is a listen before talk (LBT) mechanism: the transmitting station senses the medium for a carrier signal and waits until the channel is available before transmitting.

Wired Ethernet is able to sense a collision on the medium: two stations transmitting at the same time raise the signal level on the wire, indicating to the transmitting stations that a collision has occurred. 802.11 wireless stations do not have this capability, so the 802.11 access mechanism must make every effort to avoid collisions altogether.

CSMA/CA

CSMA/CA is more ordered than CSMA/CD. To use the same telephone conference call analogy, you make some changes to the scenario:
  • Before a participant speaks, she must indicate how long she plans to speak. This indication gives any potential speakers an idea of how long to wait before they have an opportunity to speak.
  • Participants cannot speak until the announced duration of a previous speaker has elapsed.
  • Participants are unaware whether their voices are heard while they are speaking, unless they receive confirmation of their speeches when they are done.
  • If two participants happen to start speaking at the same time, they are unaware they are speaking over each other. The speakers determine they are speaking over each other because they do not receive confirmation that their voices were heard.
  • The participants wait a random amount of time and attempt to speak again, should they not receive confirmation of their speeches.
The 802.11 implementation of CSMA/CA is manifested in the distributed coordination function (DCF). To describe how CSMA/CA works, it is important to describe some key 802.11 CSMA/CA components first:
  • Carrier sense
  • DCF
  • Acknowledgment frames
  • Request to Send/Clear to Send (RTS/CTS) medium reservation
In addition, two other mechanisms pertain to 802.11 medium access but are not directly tied to CSMA/CA:
  • Frame fragmentation
  • Point coordination function (PCF)

Carrier Sense

A station that wants to transmit on the wireless medium must sense whether the medium is in use. If the medium is in use, the station must defer frame transmission until the medium is not in use. The station determines the state of the medium using two methods:
  • Check the Layer 1 physical layer (PHY) to see whether a carrier is present.
  • Use the virtual carrier-sense function, the network allocation vector (NAV).
The station can check the PHY and detect that the medium is available. But in some instances, the medium might still be reserved by another station via the NAV. The NAV is a timer that is updated by data frames transmitted on the medium. For example, in an infrastructure BSS, suppose Martha is sending a frame to George (see Figure 2-4). Because the wireless medium is a broadcast-based shared medium, Vivian also receives the frame. The 802.11 frames contain a duration field. This duration value is large enough to cover the transmission of the frame and the expected acknowledgment. Vivian updates her NAV with the duration value and does not attempt transmission until the NAV has decremented to 0.


Note that stations only update the NAV when the duration field value received is greater than what is currently stored in their NAV. Using the same example, if Vivian has a NAV of 10 milliseconds, she does not update her NAV if she receives a frame with a duration of 5 milliseconds. She updates her NAV if she receives a frame with a duration of 20 milliseconds.
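This "adopt only a larger duration" rule fits in a single line of code. The following Python sketch uses the microsecond values from the example; the function name is illustrative:

```python
def update_nav(current_nav_us: int, frame_duration_us: int) -> int:
    """Adopt the received duration only when it exceeds the current NAV."""
    return max(current_nav_us, frame_duration_us)

# Vivian's example: a 10 ms NAV ignores a 5 ms duration but adopts 20 ms
nav = 10_000                    # microseconds
nav = update_nav(nav, 5_000)    # still 10_000
nav = update_nav(nav, 20_000)   # now 20_000
```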

DCF

The IEEE-mandated access mechanism for 802.11 networks is DCF, a medium access mechanism based on the CSMA/CA access method. To describe DCF operation, we first define some concepts. Figure 2-5 shows a time line for the scenario in Figure 2-4.

In DCF operation, a station wanting to transmit a frame must wait a specific amount of time after the medium becomes available. This time value is known as the DCF interframe space (DIFS). Once the DIFS interval elapses, the medium becomes available for station access contention.

In Figure 2-5, Vivian and George might want to transmit frames when Martha's transmission is complete. Both stations should have the same NAV values, and both will physically sense when the medium is idle. There is a high probability that both stations will attempt to transmit when the medium becomes idle, causing a collision. To avoid this situation, DCF uses a random backoff timer.

The random backoff algorithm randomly selects a value from 0 to the contention window (CW) value. The default CW values vary by vendor and are stored in the station NIC. Random backoff values start at 0 slot times and increment up to a maximum that is a moving ceiling, starting at CWmin and growing to a maximum known as CWmax. For the sake of this example, assume that CWmin is 7 and CWmax is 255. Figure 2-6 illustrates the CWmin and CWmax values for binary random backoff.

A station randomly selects a value between 0 and the current value of the CW. The random value is the number of 802.11 slot times the station must wait, while the medium is idle, before it may transmit. A slot time is a time value derived from the PHY based on the RF characteristics of the BSS.

Getting back to the example, Vivian is ready to transmit. Her NAV timer has decremented to 0, and the PHY also indicates the medium is idle. Vivian selects a random backoff time between 0 and CW (in this case, CW is 7) and waits the selected number of slot times before transmitting. Figure 2-7 illustrates this process, with a random backoff value of four slot times.
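The backoff selection and the growth of the contention window can be sketched as follows, assuming the usual binary (doubling) growth between CWmin and CWmax implied by Figure 2-6; the slot time value is illustrative:

```python
import random

SLOT_TIME_US = 20        # illustrative slot time (e.g., 802.11b)
CW_MIN, CW_MAX = 7, 255  # values from the example above

def backoff_slots(cw: int) -> int:
    # randomly select a slot count between 0 and the current CW
    return random.randint(0, cw)

def next_cw(cw: int) -> int:
    # the moving ceiling: roughly double the window after a failed
    # attempt, never exceeding CWmax (binary random backoff)
    return min(2 * cw + 1, CW_MAX)

cw = CW_MIN
wait_us = backoff_slots(cw) * SLOT_TIME_US  # time to defer before transmitting
```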


The Acknowledgment Frame

A station receiving a frame acknowledges error-free receipt by sending an acknowledgment frame back to the sender. Because the receiving station must access the medium to transmit the acknowledgment, you might assume that the acknowledgment can be delayed by medium contention. The transmission of an acknowledgment frame, however, is a special case: acknowledgment frames are allowed to skip the random backoff process and wait only a short interval after the frame has been received before transmitting. This short interval is known as the short interframe space (SIFS). The SIFS interval is shorter than a DIFS interval by two slot times, guaranteeing the receiving station the best possible chance of transmitting on the medium before another station does.
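The relationship between SIFS and DIFS can be checked with concrete numbers. The sketch below plugs in 802.11b timing values (a 10 µs SIFS and a 20 µs slot time) as an example:

```python
# Illustrative 802.11b timing values, in microseconds
SIFS_US = 10
SLOT_TIME_US = 20

# The rule from the text: DIFS is two slot times longer than SIFS
DIFS_US = SIFS_US + 2 * SLOT_TIME_US

# A receiver that waits only SIFS to send its acknowledgment seizes the
# medium before any station that must wait a full DIFS plus backoff.
```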

Referring to Vivian's transmission to George, Vivian deferred her transmission attempt for four slot times. The medium was still available, so she transmitted her frame to George, as depicted in Figure 2-9. The AP receives the frame and waits a SIFS interval before sending an acknowledgment frame.


The Hidden Node Problem and RTS/CTS

Vivian might be unable to access the medium because of another station that is within range of the AP yet out of range of her station. Figure 2-10 illustrates this situation. Vivian and George are in range of each other and in range of the AP. Yet neither of them is in range of Tony. Tony is in range of the AP and attempts to transmit on the medium as well. The situation is known as the hidden node problem because Tony is hidden to Vivian and George.

802.11 Frame Fragmentation

Frame fragmentation is a MAC layer function that is designed to increase the reliability of frame transmission across the wireless medium. The premise behind fragmentation is that a frame is broken up into smaller fragments, and each fragment is transmitted individually, as depicted in Figure 2-13. The assumption is that there is a higher probability of successfully transmitting a smaller frame fragment across the hostile wireless medium. Each frame fragment is individually acknowledged; therefore, if any fragment of the frame encounters any errors or a collision, only the fragment needs to be retransmitted, not the entire frame, increasing the effective throughput of the medium.
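A simplified model of fragmentation is just slicing the frame body at a threshold. The sketch below ignores the per-fragment MAC headers and FCS that real 802.11 fragments carry:

```python
def fragment(frame: bytes, threshold: int) -> list[bytes]:
    # slice the frame body into pieces no larger than the fragmentation
    # threshold; each piece is transmitted and acknowledged individually
    return [frame[i:i + threshold] for i in range(0, len(frame), threshold)]

frags = fragment(b"A" * 2500, 1000)   # 1000-, 1000-, and 500-byte fragments
```

If one fragment is corrupted, only that slice is retransmitted rather than the full 2500-byte frame.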

Tuesday, July 28, 2009

WLAN Topologies

802.11 networks are flexible by design. You have the option of deploying three types of WLAN topologies:
  • Independent basic service sets (IBSSs)
  • Basic service sets (BSSs)
  • Extended service sets (ESSs)
A service set is a logical grouping of devices. WLANs provide network access by broadcasting a signal across a wireless radio frequency (RF) carrier. A receiving station can be within range of a number of transmitters. The transmitter prefaces its transmissions with a service set identifier (SSID). The receiver uses the SSID to filter through the received signals and locate the one it wants to listen to.
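The SSID filtering step can be sketched in a few lines of Python; the frame representation and function name here are hypothetical:

```python
def frames_for(ssid: str, received: list[dict]) -> list[dict]:
    # keep only the transmissions tagged with the SSID we listen to
    return [f for f in received if f["ssid"] == ssid]

heard = [
    {"ssid": "lobby", "payload": b"..."},
    {"ssid": "lab",   "payload": b"..."},
]
mine = frames_for("lab", heard)   # only the "lab" frame survives
```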

IBSS
An IBSS consists of a group of 802.11 stations communicating directly with one another. An IBSS is also referred to as an ad-hoc network because it is essentially a simple peer-to-peer WLAN. Figure 2-1 illustrates how two stations equipped with 802.11 network interface cards (NICs) can form an IBSS and communicate directly with one another.

BSS
A BSS is a group of 802.11 stations communicating with one another. A BSS requires a specialized station known as an access point (AP). The AP is the central point of communications for all stations in a BSS. The client stations do not communicate directly with other client stations. Rather, they communicate with the AP, and the AP forwards the frames to the destination stations. The AP might be equipped with an uplink port that connects the BSS to a wired network (for example, an Ethernet uplink). Because of this requirement, a BSS is also referred to as an infrastructure BSS. Figure 2-2 illustrates a typical infrastructure BSS.

ESS
Multiple infrastructure BSSs can be connected via their uplink interfaces. In the world of 802.11, the uplink interface connects the BSS to the distribution system (DS). The collection of BSSs interconnected via the DS is known as the ESS. Figure 2-3 shows a practical implementation of an ESS. The uplink to the DS does not have to be via a wired connection. The 802.11 specification leaves the potential for this link to be wireless. For the most part, DS uplinks are wired Ethernet.

Tuesday, July 14, 2009

Ethernet Technologies

802.3 Ethernet and the OSI Model

Diving deep in the OSI model is not the goal of this chapter, but you do need to focus on
Layer 2, the data link layer, to put Ethernet technologies into perspective. The data link layer has two sublayers, as illustrated in Figure 1-1:
  • Media Access Control (MAC) sublayer— This sublayer focuses on topology-specific implementations. For example, 802.5 Token Ring networks have a different MAC than 802.3 Ethernet networks.
  • Logical link control (LLC) sublayer— Standard across all 802-based networks, this sublayer provides a simple frame protocol for connectionless frame delivery. There is no mechanism to notify the sender that the frame was or was not delivered.

The focus of the subsequent sections surrounds the MAC layer. This layer is unique to 802.3 networks and as such provides a reference point as you progress through the chapters on the wireless MAC.


The 802.3 Frame Format

Figure 1-2 depicts an Ethernet frame.

As Figure 1-2 illustrates, the Ethernet frame consists of the following fields:
  • Preamble— The preamble is a set of 7 octets (an octet is a set of 8 bits) totaling 56 bits of alternating 1s and 0s. Each octet has the following bit pattern: 10101010. The preamble indicates to the receiving station that a frame is being transmitted on the medium. It is important to note that Ethernet topologies subsequent to 10 Mbps Ethernet still include the preamble even though they do not require it.
  • Start of frame delimiter (SFD)— The SFD is an 8-bit field that has a bit pattern similar to the preamble, but the last 2 bits are both 1s (10101011). This pattern indicates to the receiving station that the frame's contents follow this field.
  • Destination MAC address— The destination address field is a 48-bit value that indicates the destination station address of the frame.
  • Source address— The source address field is a 48-bit value that indicates the station address of the sending station.

  • Type/length value (TLV)— The TLV field uses 16 bits to indicate what type of higher-layer protocol is encapsulated in the data or payload field. The value contained in this field is also referred to as the Ethertype value. Table 1-1 lists some common Ethertype values.

Table 1-1. Some Common Ethernet Ethertypes

  Ethertype (Hex)    Protocol
  0x0800             IPv4
  0x0806             ARP (Address Resolution Protocol)
  0x8137             Novell IPX
  0x86DD             IPv6

  • Payload or data— The data or payload field carries upper-layer packets and must be a minimum of 46 bytes and a maximum of 1500 bytes in length. The minimum data or payload size is required to allow all stations a chance to receive the frame. This topic is discussed further in the section, "Ethernet Network Diameter and Ethernet Slot Time." If the data or payload is less than 46 bytes, the sending station pads the payload so it meets the minimum 46 bytes.
  • Frame check sequence (FCS)— The FCS field contains a cyclic redundancy check (CRC) value calculated against the bit pattern of the frame. When the receiving station receives the frame, it calculates a CRC and compares it to what is in the FCS field. If the values match, the frame is considered error free (see Figure 1-3).
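The sender-appends/receiver-verifies flow of the FCS can be sketched with Python's zlib.crc32 as a stand-in for the exact Ethernet CRC-32 procedure (real Ethernet additionally complements the CRC and defines a specific bit ordering):

```python
import zlib

def append_fcs(frame: bytes) -> bytes:
    # sender: compute a CRC over the frame bits and append it as the FCS
    return frame + zlib.crc32(frame).to_bytes(4, "little")

def fcs_ok(frame_with_fcs: bytes) -> bool:
    # receiver: recompute the CRC and compare it with the FCS field
    body, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(body).to_bytes(4, "little") == fcs
```

A flipped bit anywhere in the body changes the recomputed CRC, so the receiver treats the frame as corrupted and discards it.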
Ethernet Addressing

Ethernet addresses are 48-bit values that uniquely identify Ethernet stations on a LAN. Ethernet addresses are issued in part by a global authority, the IEEE, and in part by device vendors. The IEEE assigns unique 24-bit organizationally unique identifiers (OUIs) to vendors. The OUI forms the first 24 bits of the Ethernet address; the vendors themselves assign the remaining 24 bits. This process ensures that every Ethernet address is unique and that any station can connect to any network in the world and be uniquely identified. Because this addressing describes a physical interface, it is also referred to as MAC addressing. For the most part, MAC addresses are expressed in hexadecimal form, with each byte separated by a dash or colon, or with every 2 bytes delimited by a period. For example, the following is an Ethernet address from a Cisco router:

00-03-6b-48-e9-20

You can also represent this value as 00:03:6b:48:e9:20 or 0003.6b48.e920

The IEEE has assigned the first 24 bits, 00-03-6b, to Cisco. The remaining 24 bits, 48-e9-20, have been assigned by Cisco to the device. The OUI of 00-03-6b allows the vendor to assign a range of addresses from 00-03-6b-00-00-00 to 00-03-6b-ff-ff-ff. This provides the vendor a total of 2^24, or 16,777,216, possible addresses.
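The OUI/device split can be demonstrated in a few lines of Python; the helper name is illustrative:

```python
def parse_mac(mac: str) -> tuple[str, str]:
    # split a 48-bit MAC address into the IEEE-assigned OUI (first
    # 24 bits) and the vendor-assigned device portion (last 24 bits);
    # accepts the dash, colon, and dotted notations shown above
    digits = "".join(c for c in mac.lower() if c in "0123456789abcdef")
    if len(digits) != 12:
        raise ValueError("expected 12 hex digits (48 bits)")
    return digits[:6], digits[6:]

oui, device = parse_mac("00-03-6b-48-e9-20")   # ("00036b", "48e920")
```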


CSMA/CD Architecture

The Ethernet networking standard is based on the CSMA/CD architecture. CSMA/CD is a half-duplex architecture, meaning only one station can transmit at a time. You can compare the CSMA/CD architecture to people communicating in a conference-call meeting:
  • Each participant doesn't know when the other person is going to speak.
  • A participant wanting to say something has to wait for the phone line to become quiet before she can start speaking.
  • When the phone line becomes quiet, it is possible for two or more participants to start speaking at the same time.
  • If two people speak at the same time, it is difficult for listeners to understand, so the speakers must stop talking and again wait for the line to become quiet before trying to speak again.
Consider Figure 1-4, where two stations are at extreme ends of the broadcast domain:
  • Station A transmits a frame that is smaller than 512 bits.
  • At the same moment, Station B begins transmitting a frame.
  • Station A transmits the last bit of its frame.
  • Station A does not detect a collision during transmission and discards the frame from its transmit buffer.
  • Station A assumes that the destination station of its frame received the frame.
  • Station A's frame collides with Station B's frame.
  • Station A has already discarded the frame from its transmit buffer, so Station A has no frame to retransmit.

Unicast, Multicast, and Broadcast Frames

A station can address its frames for transmission using one of three methods:
  • Broadcast addressing— The station sends the frame to all stations in the broadcast domain.
  • Group or multicast addressing— The station addresses its frames to a subset of all stations in the broadcast domain that belong to a predefined group.
  • Unicast addressing— The station addresses its frames to a specific station.
Figure 1-5 depicts these addressing types. Ethernet networks use all three methods. No one method is a panacea. Each method has pros and cons for its use.

802.3u Fast Ethernet

As Ethernet became more accepted as a standard for data networking, users began demanding more bandwidth. To calm the screaming masses, the IEEE announced 802.3u, the standard for 100 Mbps Ethernet, in 1995. Although there were a number of 100 Mbps solutions for Ethernet, two have become the most common options: 100BASE-TX and 100BASE-FX (collectively referred to as 100BASE-X). 100BASE-X technology is based on the non-IEEE FDDI standard (ANSI X3T9.5). FDDI was the de facto 100 Mbps standard before Fast Ethernet and had a number of advantages over shared Ethernet.

100BASE-TX applies the 100BASE-X specification to Category 5 twisted-pair cabling. 100BASE-TX is similar to 10BASE-T in many ways, but unlike 10BASE-T, 100BASE-TX requires Category 5 cabling. 100BASE-TX performs a great deal of high-frequency signaling that requires a higher grade of cable than the Category 3 required for 10BASE-T. 100BASE-TX also has the same distance limitation of roughly 100 m that 10BASE-T has, meaning the same cabling infrastructure can be leveraged (assuming it is Category 5 or better).

The network diameter and Ethernet slot time change from legacy Ethernet to 100BASE-X networks. The Ethernet slot time defines the maximum network diameter by stipulating that the diameter should not exceed the distance a 512-bit frame can travel before the transmitting station finishes sending that frame. Fast Ethernet systems retain the 512-bit slot time to maintain backward compatibility with legacy Ethernet systems.

For Ethernet networks, the maximum diameter is 2800 m. With 100BASE-TX, the transmit operations occur 10 times faster than the transmit operations of Ethernet stations. Accordingly, for a sending station to detect a collision after sending the 512-bit frame, the frame can only travel one-tenth the distance. This limit reduces the maximum network diameter from 2800 m to roughly 200 m. The loss of distance does not pose a real issue because most Fast Ethernet deployments use 100BASE-TX, which has a maximum distance of 100 m anyway.
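The one-tenth scaling follows directly from the arithmetic; a quick Python check (the helper name is illustrative):

```python
SLOT_BITS = 512  # slot time, in bits, shared by Ethernet and Fast Ethernet

def slot_time_us(rate_bps: int) -> float:
    # how long the sender spends putting 512 bits on the wire; the
    # network diameter must fit a collision round trip in this window
    return SLOT_BITS * 1e6 / rate_bps

t_ethernet = slot_time_us(10_000_000)    # 51.2 microseconds at 10 Mbps
t_fast = slot_time_us(100_000_000)       # 5.12 microseconds at 100 Mbps
# one-tenth the time on the wire means roughly one-tenth the diameter:
# the 2800 m Ethernet limit shrinks toward the ~200 m cited above
```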

100BASE-FX is a variant of 100BASE-X that uses multimode fiber as the medium to transmit data. The network interface card (NIC) converts electric signals into pulses of light that are sent over the fiber medium to the receiving NIC. The receiving NIC then translates the light pulses back into electrical signals that the receiving station can process.

100BASE-FX uses the same encoding mechanism as 100BASE-TX, but that is where the similarities end. Because 100BASE-FX uses light to carry data through the medium, there is no electromagnetic interference to be concerned with, which allows for a more efficient signaling scheme. The maximum network diameter for 100BASE-FX is roughly 400 m in half-duplex mode. 100BASE-FX can also operate in full-duplex mode. (Duplex modes are discussed next.) Full-duplex operation essentially eliminates the issues surrounding collisions, so 100BASE-FX can safely extend to distances beyond 400 m. In fact, using standard 62.5/125-micron multimode fiber, 100BASE-FX can extend to 2 km while in full-duplex mode. If connectivity requirements dictate distances beyond 2 km, single-mode transceivers are available that allow 100BASE-FX to operate over single-mode fiber at distances up to 40 km. Single-mode transceivers and single-mode fiber are an order of magnitude more expensive than their multimode brethren, but the solution exists if needed.


Full-Duplex Operation

CSMA/CD is the methodology on which half-duplex Ethernet and Fast Ethernet are based. As described earlier, CSMA/CD is like a telephone conference call: each participant must wait until the medium is available before he can speak. In 1997, the IEEE ratified 802.3x, which specifies a new methodology for transmission in Ethernet networks known as full-duplex operation. Full-duplex operation allows a station to send and receive frames simultaneously, allowing greater use of the medium and higher overall throughput (see Figure 1-8). Full-duplex operation significantly changes the requirements placed on end stations, however.


Full-duplex operation works only in a point-to-point environment. There can be only one other device in the collision domain. Stations connected to hubs, repeaters, and the like are unable to operate in full-duplex mode. Stations connected back-to-back or connected to Layer 2 switches (that support full-duplex mode) are able to use full-duplex mode.

The capability to transmit and receive at the same time allows stations to better utilize the network medium. The bandwidth available to the station is theoretically doubled because the station has full access to the medium in both the send and receive directions. In the case of 100BASE-X, this gives each station up to 200 Mbps of maximum bandwidth. For end stations, such as PCs, the truth is that few transmit and receive at the same time. Servers and networking infrastructure such as routers and switches, however, can take advantage of full-duplex mode in a manner that end stations cannot. These devices aggregate sessions and connections from the edge of the network to the core and back; they send and receive traffic distributed in both directions, so these links can truly take advantage of the extra bandwidth that full-duplex operation provides.

Full-duplex operation allows Ethernet topologies to break free from the distance limitations that half-duplex operations impose on them. Ironically, only fiber-based interfaces can take advantage of additional distances (as 100BASE-FX does) because twisted-pair deployments are distance-limited by the physical medium itself and not the network diameter imposed by Ethernet or Fast Ethernet time slots.

Gigabit Ethernet

The jump from Ethernet to Fast Ethernet gave users 10 times more available bandwidth. Gigabit Ethernet, with a data rate of 1000 Mbps, offers the same proportional jump for Fast Ethernet users, but the difference is 900 Mbps of additional bandwidth as opposed to 90 Mbps. This substantial increase in bandwidth places a strain on developers who must solve network diameter and cabling issues. Gigabit Ethernet has two main areas:
  • 1000BASE-T— Like its 10BASE-T and 100BASE-TX brethren, 1000BASE-T supports UTP cabling at a distance of up to 100 m.
  • 1000BASE-X— 1000BASE-X has three subcategories:
1000BASE-SX— A fiber-optic–based medium designed for use over standard multimode fiber for short-haul runs up to 200 m.
1000BASE-LX— A fiber-optic–based medium designed for use over single-mode fiber for long runs of up to 10 km, although it is possible to use mode-conditioned multimode fiber in some cases.
1000BASE-CX— A shielded copper medium designed for short patches between devices. 1000BASE-CX is limited to distances of 25 m.


802.3ab 1000BASE-T

The development of the 1000BASE-T standard stemmed from the efforts of Fast Ethernet development. The search for the ideal Fast Ethernet copper solution drove the adoption of 100BASE-TX. Although not well known, there were two other standards: 100BASE-T4 and
100BASE-T2. 100BASE-T4 was not a popular solution because it required the use of all four pairs of Category 3 or 5 cabling. Some installations wired only two-pair Category 3 or 5 cabling in accordance with the requirements of 10BASE-T. 100BASE-T4 also missed the mark by not supporting full-duplex operation.

100BASE-T2 was a more far-reaching specification, enabling 100 Mbps operation over Category 3 cabling using only two pairs. The problem is that no vendor ever implemented the standard. When the time came to develop the gigabit solution for the Ethernet standard, developers took the best of all the 100 Mbps standards and incorporated them into the 1000BASE-T specification.


802.3z 1000BASE-X

802.3z was ratified in 1998 and incorporated into the 802.3 standard. 1000BASE-X is the specification for Gigabit Ethernet over a fiber-optic medium. The underlying technology itself is not new; it is based on the ANSI Fibre Channel standard (ANSI X3T11). 1000BASE-X comes in three media types: 1000BASE-SX, 1000BASE-LX, and 1000BASE-CX. 1000BASE-SX is the most common and least expensive medium, using standard multimode fiber. The low cost is not without shortcomings; 1000BASE-SX has a maximum distance of 220 m (compared with full-duplex 100BASE-FX at 2 km). 1000BASE-LX generally utilizes single-mode fiber and can span distances up to 5 km.

1000BASE-CX is the oddball of the three media types. It is a copper-based solution that requires precrimped shielded twisted-pair cabling. The connector is not the familiar RJ-45 of 10/100/1000BASE-T. Instead, you use either a DB-9 or HSSDC connector to terminate the two pairs of wire. 1000BASE-CX can span lengths of up to 25 m, relegating it to wiring closet patches. 1000BASE-CX is not all that common because 1000BASE-T provides the same function for a fraction of the price, and four times the cable length, using standard four-pair, Category 5 cabling.

Ethernet has evolved to support new requirements that users and network administrators demand. It continues to evolve beyond Gigabit Ethernet with its next iteration, 10 Gigabit Ethernet, on the horizon. Table 1-3 gives a summary of the Ethernet family of topologies and their media types. Each topology has a place in networking today, determined by requirements such as cost, required data rate, distance, and existing cable plant. Wired Ethernet shows that backward compatibility is what allows new topologies to prosper, develop, and become accepted standards.