4.1 Frame Composition
Figure 4.1 illustrates the general frame composition of Ethernet and IEEE
802.3 frames. You will note that they differ slightly. An Ethernet frame contains
an eight-byte preamble, while the IEEE 802.3 frame contains a seven-byte
preamble followed by a one-byte start-of-frame delimiter field. A second difference
between the composition of Ethernet and IEEE 802.3 frames concerns
the two-byte Ethernet type field. That field is used by Ethernet to specify
the protocol carried in the frame, enabling several protocols to be carried
independently of one another. Under the IEEE 802.3 frame format, the type
field was replaced by a two-byte length field, which specifies the number of
bytes that follow that field as data.
The differences between Ethernet and IEEE 802.3 frames, while minor, make
the two incompatible with one another. This means that your network must
contain either all Ethernet-compatible NICs or all IEEE 802.3–compatible
NICs. Fortunately, the fact that the IEEE 802.3 frame format represents a
standard means that almost all vendors now market 802.3-compliant hardware
and software. Although a few vendors continue to manufacture Ethernet or
dual functioning Ethernet/IEEE 802.3 hardware, such products are primarily
used to provide organizations with the ability to expand previously developed
networks without requiring the wholesale replacement of NICs. Although the
IEEE 802.3 frame does not directly support a type field within the frame, as we
will note in Section 4 in this chapter, the IEEE defined a special type of frame
to obtain compatibility with Ethernet LANs. That frame is referred to as an
Ethernet Subnetwork Access Protocol (Ethernet-SNAP) frame, which enables
a type subfield to be included in the data field. While the IEEE 802.3 standard
has essentially replaced Ethernet, because of their similarities and the fact
that 802.3 was based upon Ethernet, we will consider both to be Ethernet.
Now that we have an overview of the structure of Ethernet and 802.3 frames,
let’s probe more deeply and examine the composition of each frame field. We
will take advantage of the similarity between Ethernet and IEEE 802.3 frames
to examine the fields of each frame on a composite basis, noting the differences
between the two when appropriate.
Preamble Field
The preamble field consists of eight (Ethernet) or seven (IEEE 802.3) bytes of
alternating 1 and 0 bits. The purpose of this field is to announce the frame
and to enable all receivers on the network to synchronize themselves to the
incoming frame.
Start-of-Frame Delimiter Field
This field is applicable only to the IEEE 802.3 standard and can be viewed as
a continuation of the preamble. In fact, the composition of this field continues
in the same manner as the format of the preamble, with alternating 1 and
0 bits used for the first six bit positions of this one-byte field. The last two
bit positions of this field are 11—this breaks the synchronization pattern and
alerts the receiver that frame data follows.
Both the preamble field and the start-of-frame delimiter field are removed
by the controller when it places a received frame in its buffer. Similarly, when
a controller transmits a frame, it prefixes the frame with those two fields (if it
is transmitting an IEEE 802.3 frame) or a preamble field (if it is transmitting a
true Ethernet frame).
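As a concrete illustration, the following Python sketch shows the byte values these synchronization fields take under IEEE 802.3 framing. Because Ethernet transmits each byte least-significant bit first, the alternating 1,0 pattern appears in memory as hex 55 and the delimiter as hex D5; the variable names are assumptions of this sketch.

```python
# A minimal sketch of the synchronization bytes that precede an IEEE 802.3
# frame. Bytes go out least-significant bit first, so the alternating
# 1,0,1,0... pattern is stored as 0x55, and the start-of-frame delimiter,
# whose final two wire bits are 11, is stored as 0xD5.
PREAMBLE = bytes([0x55] * 7)   # seven bytes of alternating 1 and 0 bits
SFD      = bytes([0xD5])       # 10101011 on the wire; the trailing 11 ends sync

sync_header = PREAMBLE + SFD   # prefixed by the controller, stripped on receipt
print(sync_header.hex())       # 55555555555555d5
```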
Destination Address Field
The destination address identifies the recipient of the frame. Although this
may appear to be a simple field, in reality its length can vary between IEEE
802.3 and Ethernet frames. In addition, each field can consist of two or
more subfields, whose settings govern such network operations as the type of
addressing used on the LAN, and whether the frame is addressed to a specific
station or more than one station. To obtain an appreciation for the use of this
field, let’s examine how this field is used under the IEEE 802.3 standard as
one of the two field formats applicable to Ethernet.
Figure 4.2 illustrates the composition of the source and destination address
fields. As indicated, the two-byte source and destination address fields are
applicable only to IEEE 802.3 networks, while the six-byte source and destination
address fields are applicable to both Ethernet and IEEE 802.3 networks.
A user can select either a two- or six-byte destination address field; however,
with IEEE 802.3 equipment, all stations on the LAN must use the same
addressing structure. Today, almost all 802.3 networks use six-byte addressing,
because the inclusion of a two-byte field option was designed primarily
to accommodate early LANs that use 16-bit address fields.
Both destination and source addresses are normally displayed by network
monitors in hexadecimal, with the first three bytes separated from the last
three by a colon (:) when six-byte addressing is used. For example, the source
address 02608C876543 would be displayed as 02608C:876543. As we will
shortly note, the first three bytes identify the manufacturer of the adapter
card, while the following three bytes identify a specific adapter manufactured
by the vendor identified by the first three bytes or six hex digits.
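As a brief illustration, the following Python helper reproduces the display convention just described; the function name is an assumption of this sketch.

```python
def display_address(mac: bytes) -> str:
    """Render a six-byte MAC address the way network monitors display it:
    the three-byte manufacturer identifier, a colon, then the three-byte
    adapter identifier."""
    hexstr = mac.hex().upper()
    return hexstr[:6] + ":" + hexstr[6:]

print(display_address(bytes.fromhex("02608C876543")))  # 02608C:876543
```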
I/G Subfield
The one-bit I/G subfield is set to a 0 to indicate that the frame is destined to
an individual station, or 1 to indicate that the frame is addressed to more than
one station—a group address. One special example of a group address is the
assignment of all 1s to the address field. Hex ‘‘FFFFFFFFFFFF’’ is recognized
as a broadcast address, and each station on the network will receive and
accept frames with that destination address.
An example of the use of a broadcast destination address is the service
advertising packet (SAP) transmitted every 60 seconds by NetWare servers.
The SAP is used to inform other servers and workstations on the network of
the presence of that server. Because the SAP uses a destination address of
FF-FF-FF-FF-FF-FF, it is recognized by every node on the network.
When a destination address specifies a single station, the address is referred
to as a unicast address. A group address that defines multiple stations is
known as a multicast address, while a group address that specifies all stations
on the network is, as previously mentioned, referred to as a broadcast address.
U/L Subfield
The U/L subfield is applicable only to the six-byte destination address field.
The setting of this field’s bit position indicates whether the destination address
is an address that was assigned by the IEEE (universally administered) or
assigned by the organization via software (locally administered).
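The following Python sketch shows how these two subfields can be read from the first byte of a six-byte address. Because bytes are transmitted least-significant bit first, the I/G bit is the low-order bit of that byte and U/L is the next bit up; the function name is illustrative.

```python
def classify_destination(mac: bytes) -> str:
    """Inspect the I/G and U/L subfields of a six-byte destination address.
    The I/G bit is the low-order (first-transmitted) bit of the first byte;
    the U/L bit is the next bit up."""
    if mac == b"\xff" * 6:
        return "broadcast (all stations)"
    ig = mac[0] & 0x01          # 0 = individual address, 1 = group address
    ul = (mac[0] >> 1) & 0x01   # 0 = universally administered, 1 = locally
    kind = "multicast (group)" if ig else "unicast (individual)"
    admin = "locally administered" if ul else "universally administered"
    return f"{kind}, {admin}"

print(classify_destination(bytes.fromhex("02608C876543")))
# unicast (individual), locally administered
```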
Universal versus Locally Administered Addressing
Each Ethernet NIC contains a unique address burned into its read-only memory
(ROM) at the time of manufacture. To ensure that this universally administered
address is not duplicated, the IEEE assigns blocks of addresses to each
manufacturer. These addresses normally include a three-byte prefix, which
identifies the manufacturer and is assigned by the IEEE, and a three-byte
suffix, which is assigned by the adapter manufacturer to its NIC. For example,
the prefix 02608C identifies an NIC manufactured by 3Com, while the prefix
hex 08002B identifies an NIC manufactured by Digital Equipment Corporation,
which was acquired by Compaq Computer.
Although the use of universally administered addressing eliminates the
potential for duplicate network addresses, it does not provide the flexibility
obtainable from locally administered addressing. For example, under locally
administered addressing, you can configure mainframe software to work with
a predefined group of addresses via a gateway PC. Then, as you add new
stations to your LAN, you simply use your installation program to assign a
locally administered address to the NIC instead of using its universally administered
address. As long as your mainframe computer has a pool of locally
administered addresses that includes your recent assignment, you do not have
to modify your mainframe communications software configuration. Because
the modification of mainframe communications software typically requires
recompiling and reloading, the attached network must become inoperative for
a short period of time. Because a large mainframe may service hundreds to
thousands of users, such changes are normally performed late in the evening or
on a weekend. Thus, the changes required for locally administered addressing
are more responsive to users accessing certain types of mainframe computers
than those required for universally administered addressing.
Source Address Field
The source address field identifies the station that transmitted the frame. Like
the destination address field, the source address can be either two or six bytes
in length.
The two-byte source address is supported only under the IEEE 802.3 standard
and requires the use of a two-byte destination address; all stations on
the network must use two-byte addressing fields. The six-byte source address
field is supported by both Ethernet and the IEEE 802.3 standard. When a
six-byte address is used, the first three bytes represent the address assigned
by the IEEE to the manufacturer for incorporation into each NIC’s ROM. The
vendor then normally assigns the last three bytes for each of its NICs.
Table 4.1 lists the NIC identifiers for 85 Ethernet card manufacturers.
Note that many organizations including Cisco Systems, 3Com, IBM, MIPS,
Ungermann-Bass, and Data General were assigned two or more blocks of
addresses by the IEEE. Also note that organizations listed in Table 4.1 range
in scope from well-known communications and computer manufacturers to
universities and even a commercial firm probably best known for its watch
commercials. The entries in Table 4.1 represent a portion of three-byte identifiers
assigned by the IEEE over the past decade and do not include identifiers
currently assigned to all vendors. For a comprehensive list of currently
assigned three-byte identifiers, readers should contact the IEEE. You can
contact the IEEE at:
IEEE Standards Department
445 Hoes Lane
P.O. Box 1331
Piscataway, NJ 08855
Telephone: +1 (732) 562-3813
Fax: +1 (732) 562-1571
Many software- and hardware-based network analyzers include the capability
to identify each station on a LAN, count the number of frames transmitted
by the station and destined to the station, as well as identify the manufacturer
of the NIC used in the station. Concerning the latter capability, this is
accomplished by the network analyzer containing a table of three-byte identifiers
assigned by the IEEE to each NIC manufacturer, along with the name of
the manufacturer. The analyzer then compares the three-byte identifier read
from frames flowing on the network with the identifiers stored in its
identifier table. By providing information concerning
network statistics, network errors, and the vendor identifier for the NIC in
each station, you may be able to isolate problems faster or better consider
future decisions concerning the acquisition of additional NICs.
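The identifier-table technique lends itself to a brief sketch. The table below holds only two entries for illustration (the Digital Equipment prefix shown is an assumption); a real analyzer would carry the complete IEEE list.

```python
# A sketch of the identifier-table technique described above: resolve the
# manufacturer of each source address read from the wire and tally frames
# per vendor.
OUI_TABLE = {
    "02608C": "3Com",
    "08002B": "Digital Equipment",   # assumed prefix, for illustration only
}

def manufacturer(mac: bytes) -> str:
    """Look up the three-byte manufacturer identifier of a MAC address."""
    return OUI_TABLE.get(mac.hex().upper()[:6], "unknown vendor")

frame_counts: dict[str, int] = {}
for source in (bytes.fromhex("02608C876543"), bytes.fromhex("02608C000001")):
    vendor = manufacturer(source)
    frame_counts[vendor] = frame_counts.get(vendor, 0) + 1
print(frame_counts)   # {'3Com': 2}
```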
An example of the use of NIC manufacturer IDs can be obtained by examining
two monitoring screen displays of the Triticom EtherVision network
monitoring and analysis program. Figure 4.3 illustrates the monitoring screen
during the program’s autodiscovery process. During this process the program
reads the source address of each frame transmitted on the segment that the
computer executing the program is connected to. Although obscured by the
highlighted bar, the first three bytes of the first adapter address discovered
are 00-60-8C, which represents a block of addresses assigned by the IEEE to
3Com Corporation. If you glance at the first column in Figure 4.3, you will note
that the second row, fourth row, ninth row, and a few additional rows also
have NIC addresses that commence with hex 00-60-8C. By pressing the F2 key
the program will display the manufacturer of each NIC encountered and for
which statistics are being accumulated. This is indicated in Figure 4.4, which
shows the first three bytes of each address replaced by the name of the vendor
assigned the appropriate manufacturer ID. Thus, rows 1, 4, 9, and a few other rows
commence with ‘‘3Com’’ to indicate the manufacturer of the NIC.
Organizations can request the assignment of a vendor code by contacting
the IEEE Registration Authority at the previously listed address for the IEEE
provided in this section. A full list of assigned vendor codes is obtainable by
FTP at ftp.ieee.org as the file ieee/info/info.stds.oui. Readers should note that
the list is limited to those companies that agreed to make their vendor code
assignment(s) public.
Figure 4.3 The Triticom EtherVision source address monitoring feature
discovers the hardware address of each NIC. At the time this screen was
captured, 16 stations were identified.
Type Field
The two-byte type field is applicable only to the Ethernet frame. This field
identifies the higher-level protocol contained in the data field. Thus, this field
tells the receiving device how to interpret the data field.
Under Ethernet, multiple protocols can exist on the LAN at the same time.
Xerox served as the custodian of Ethernet address ranges licensed to NIC
manufacturers and defined the protocols supported by the assignment of type
field values.
Table 4.2 lists 31 of the more common Ethernet type field assignments.
To illustrate the ability of Ethernet to transport multiple protocols, assume
a common LAN was used to connect stations to both UNIX and NetWare
servers. Frames with the hex value 0800 in the type field would identify the
IP protocol, while frames with the hex value 8137 in the type field would
identify the transport of IPX and SPX protocols. Thus, the placement of an
appropriate hex value in the Ethernet type field provides a mechanism to
support the transport of multiple protocols on the local area network.
Under the IEEE 802.3 standard, the type field was replaced by a length field,
which precludes compatibility between pure Ethernet and 802.3 frames.
Length Field
The two-byte length field, applicable to the IEEE 802.3 standard, defines the
number of bytes contained in the data field. Under both Ethernet and IEEE
802.3 standards, the minimum size frame must be 64 bytes in length from
preamble through FCS fields. This minimum size frame ensures that there
is sufficient transmission time to enable Ethernet NICs to detect collisions
accurately, based on the maximum Ethernet cable length specified for a
network and the time required for a frame to propagate the length of the cable.
Based on the minimum frame length of 64 bytes and the use of six-byte
addressing fields, this means that each data field must be a minimum
of 46 bytes in length. The only exception to the preceding involves Gigabit
Ethernet. At a 1000-Mbps operating rate the original 802.3 standard would
not provide a frame duration long enough to permit a 100-meter cable run
over copper media. This is because at a 1000-Mbps data rate there is a high
probability that a station could complete transmitting a frame before
it becomes aware of any collision that might have occurred at the other end
of the segment. Recognizing this problem resulted in the development of a
carrier extension, which extends the minimum Ethernet frame to 512 bytes.
The carrier extension is discussed in detail in Section 4.6 when we turn our
attention to the Gigabit Ethernet carrier extension.
For all versions of Ethernet except Gigabit Ethernet, if data being transported
is less than 46 bytes, the data field is padded to obtain 46 bytes. However, the
number of PAD characters is not included in the length field value. NICs that
support both Ethernet and IEEE 802.3 frame formats use the value in this field
to distinguish between the two frames. That is, because the maximum length
of the data field is 1,500 bytes, a value that exceeds hex 05DC indicates that
instead of a length field (IEEE 802.3), the field is a type field (Ethernet).
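The rule lends itself to a short sketch. The following Python fragment applies the hex 05DC threshold just described; the function name is illustrative.

```python
def interpret_third_field(value: int) -> str:
    """Apply the rule described above: a value above the 1,500-byte maximum
    data-field length (hex 05DC) cannot be a length, so a dual-format NIC
    treats the field as an Ethernet type; otherwise it is an 802.3 length."""
    if value > 0x05DC:
        return f"Ethernet type field (protocol 0x{value:04X})"
    return f"IEEE 802.3 length field ({value} data bytes)"

print(interpret_third_field(0x0800))  # Ethernet type field -> IP
print(interpret_third_field(0x002E))  # IEEE 802.3 length field (46 data bytes)
```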
Data Field
As previously discussed, the data field must be a minimum of 46 bytes in
length to ensure that the frame is at least 64 bytes in length. This means that
the transmission of 1 byte of information must be carried within a 46-byte
data field; if the information to be placed in the field is less than 46 bytes, the
remainder of the field must be padded. Although some publications subdivide
the data field to include a PAD subfield, the latter actually represents optional
fill characters that are added to the information in the data field to ensure a
length of 46 bytes. The maximum length of the data field is 1500 bytes.
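A short sketch illustrates the padding rule; the helper below is an assumption of this text, using zero-valued fill characters for the optional pad.

```python
MIN_DATA = 46   # minimum data-field length so the frame reaches 64 bytes

def padded(data: bytes) -> bytes:
    """Pad short information to the 46-byte minimum described above. The
    pad is optional fill rather than a separate subfield, and under IEEE
    802.3 it is not counted in the length-field value."""
    if len(data) < MIN_DATA:
        data = data + bytes(MIN_DATA - len(data))
    return data

print(len(padded(b"\x01")))   # 46 -- one information byte plus 45 pad bytes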
Frame Check Sequence Field
The frame check sequence field, applicable to both Ethernet and the IEEE
802.3 standard, provides a mechanism for error detection. Each transmitter
computes a cyclic redundancy check (CRC) that covers both address fields, the
type/length field, and the data field. The transmitter then places the computed
CRC in the four-byte FCS field.
The CRC treats the previously mentioned fields as one long binary number.
The n bits to be covered by the CRC are considered to represent the coefficients
of a polynomial M(X) of degree n − 1. Here, the first bit in the destination
address field corresponds to the X^(n−1) term, while the last bit in the data field
corresponds to the X^0 term. Next, M(X) is multiplied by X^32, and the result of
that multiplication process is divided by the following polynomial:

G(X) = X^32 + X^26 + X^23 + X^22 + X^16 + X^12 + X^11 + X^10 + X^8 + X^7 + X^5 + X^4 + X^2 + X + 1

Note that the term X^n represents the setting of a bit to a 1 in position n. Thus,
part of the generating polynomial, X^5 + X^4 + X^2 + X^1, represents the binary
value 110110.
This division produces a quotient and remainder. The quotient is discarded,
and the remainder becomes the CRC value placed in the four-byte FCS field.
This 32-bit CRC reduces the probability of an undetected error to 1 bit in every
4.3 billion, or approximately 1 in 2^32 − 1 bits.
Once a frame reaches its destination, the receiver uses the same polynomial
to perform the same operation upon the received data. If the CRC computed
by the receiver matches the CRC in the FCS field, the frame is accepted.
Otherwise, the receiver discards the received frame, as it is considered to have
one or more bits in error. The receiver will also consider a received frame to
be invalid and discard it under two additional conditions. Those conditions
occur when the frame does not contain an integral number of bytes, or when
the length of the data field does not match the value contained in the length
field. The latter condition obviously is only applicable to the 802.3 standard,
because an Ethernet frame uses a type field instead of a length field.
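Python's standard zlib.crc32 uses the same degree-32 generator polynomial G(X) given above, so it can serve as a sketch of the transmit and receive computations. The byte ordering of the appended FCS and the standard's bit-reflection and complementation conventions are glossed over here; this is an illustration under those assumptions, not a bit-exact implementation of the 802.3 FCS.

```python
import zlib

def append_fcs(frame_without_fcs: bytes) -> bytes:
    """Compute the 32-bit CRC over the address, type/length, and data
    fields and append it as the four-byte FCS. zlib.crc32 uses the same
    generator polynomial G(X) given above; the byte order chosen here is a
    convention of this sketch."""
    fcs = zlib.crc32(frame_without_fcs) & 0xFFFFFFFF
    return frame_without_fcs + fcs.to_bytes(4, "little")

def fcs_ok(frame: bytes) -> bool:
    """Receiver-side check: recompute the CRC and compare it with the FCS."""
    body, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(body) & 0xFFFFFFFF == int.from_bytes(fcs, "little")

# Broadcast destination, a source address, type 0x0800 (IP), 46 pad bytes.
frame = append_fcs(bytes.fromhex("ffffffffffff02608c8765430800") + bytes(46))
print(fcs_ok(frame))                                        # True
corrupt = frame[:10] + bytes([frame[10] ^ 0x01]) + frame[11:]
print(fcs_ok(corrupt))   # False -- a single flipped bit is detected
```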
Interframe Gap
Under the 10-Mbps versions of the CSMA/CD protocol a 9.6 microsecond
(μs) quiet time occurs between transmitted frames. This quiet time, which
is referred to as an interframe gap, permits clocking circuitry used within
repeaters, workstations, and hub ports to be resynchronized to the known
local clock. Under Fast Ethernet the interframe gap is 0.96 μs, while under
Gigabit Ethernet the gap is reduced to 0.096 μs.
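The three quiet times quoted above all correspond to the same 96 bit times; only the bit duration changes with the operating rate. A few lines of Python confirm the arithmetic.

```python
# The interframe gap is 96 bit times at every rate; the quiet time in
# microseconds shrinks in proportion to the operating rate.
GAP_BITS = 96
for name, rate_bps in [("Ethernet", 10e6), ("Fast Ethernet", 100e6),
                       ("Gigabit Ethernet", 1000e6)]:
    gap_us = GAP_BITS / rate_bps * 1e6
    print(f"{name}: {gap_us:.3f} microseconds")
# Ethernet: 9.600, Fast Ethernet: 0.960, Gigabit Ethernet: 0.096
```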
4.2 Media Access Control
In the first section in this chapter, we examined the frame format by which
data is transported on an Ethernet network. Under the IEEE 802 series of
10-Mbps operating standards, the data link layer of the OSI Reference Model
is subdivided into two sublayers—logical link control (LLC) and medium
access control (MAC). The frame formats examined in Section 4.1 represent
the manner in which LLC information is transported. Directly under the LLC
sublayer is the MAC sublayer. The MAC sublayer, which is the focus of this
section, is responsible for checking the channel and transmitting data if the
channel is idle, checking for the occurrence of a collision, and taking a series
of predefined steps if a collision is detected. Thus, this layer provides the
required logic to control the network.
Figure 4.5 illustrates the relationship between the physical and LLC layers
with respect to the MAC layer. The MAC layer is an interface between user
data and the physical placement and retrieval of data on the network. To better
understand the functions performed by the MAC layer, let us examine the
four major functions performed by that layer—transmitting data operations,
transmitting medium access management, receiving data operations, and
receiving medium access management. Each of those four functions can be
viewed as a functional area, because a group of activities is associated with
each area.
Figure 4.5 Medium access control. The medium access control (MAC) layer
can be considered an interface between user data and the physical placement
and retrieval of data on the network.
Table 4.3 lists the four MAC functional areas and the activities
associated with each area. Although the transmission and reception of data
operations activities are self-explanatory, the transmission and reception of
media access management require some elaboration. Therefore, let’s focus our
attention on the activities associated with each of those functional areas.
Transmit Media Access Management
CSMA/CD can be described as a listen-before-acting access method. Thus,
the first function associated with transmit media access management is to
find out whether any data is already being transmitted on the network and, if
so, to defer transmission. During the listening process, each station attempts
to sense the carrier signal of another station, hence the prefix carrier sense
(CS) for this access method. Although broadband networks use RF modems
that generate a carrier signal, a baseband network has no carrier signal in
the conventional sense of a carrier as a periodic waveform altered to convey
information. Thus, a logical question you may have is how the MAC sublayer
on a baseband network can sense a carrier signal if there is no carrier. The
answer to this question lies in the use of a digital signaling method, known as
Manchester encoding on 10-Mbps Ethernet LANs, that a station can monitor
to note whether another station is transmitting. Although NRZI encoding is
used on broadband networks, the actual data is modulated after it is encoded.
Thus, on a broadband network the presence or absence of a transmission is
directly indicated by the presence or absence of a carrier signal.
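To make the listen-before-acting discipline concrete, the following minimal Python sketch models the transmit flow just described: defer while a carrier is sensed, transmit, and on collision back off a random number of slot times before retrying. The 16-attempt limit and the truncated binary exponential backoff schedule follow standard descriptions of CSMA/CD; the callables channel_busy and transmit and the function names are assumptions of this sketch.

```python
import random

def wait_slot_times(slots: int) -> None:
    """Stand-in for waiting the given number of 512-bit slot times."""

def csma_cd_transmit(channel_busy, transmit, max_attempts: int = 16) -> int:
    """Simplified transmit media access management: listen before acting,
    defer while the channel is busy, and on collision wait a random number
    of slot times (truncated binary exponential backoff) before retrying.
    channel_busy() and transmit() stand in for the physical layer."""
    for attempt in range(1, max_attempts + 1):
        while channel_busy():          # carrier sense: defer to passing traffic
            pass
        collided = transmit()          # True if a collision was detected
        if not collided:
            return attempt             # frame sent successfully
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        wait_slot_times(slots)         # back off before the next attempt
    raise RuntimeError("excessive collisions: transmission abandoned")
```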
Ethernet
One of the key concepts behind Ethernet—that of allocating the use of a shared
channel—can be traced to the pioneering efforts of Dr. Norman Abramson
and his colleagues at the University of Hawaii during the early 1970s. Using a
ground-based radio broadcasting system to connect different locations through
the use of a shared channel, Abramson and his colleagues developed the concept
of listening to the channel before transmission, transmitting a frame of
information, listening to the channel output to determine whether a collision
occurred, and, if it did, waiting a random period of time before retransmission.
The resulting University of Hawaii ground-based radio broadcasting system,
called ALOHA, formed the basis for the development of numerous channel
contention systems, including Ethernet. In addition, the subdivision of transmission
into frames of data was the pioneering work in the development of
packet-switching networks. Thus, Norman Abramson and his colleagues can
be considered the forefathers of two of the most important communications
technologies, contention networks and packet-switching networks.
Evolution
The actual development of Ethernet occurred at the Xerox Palo Alto Research
Center (PARC) in Palo Alto, California. A development team headed by
Dr. Robert Metcalfe had to connect over 100 computers on a 1-km cable. The
resulting system, which operated at 2.94 Mbps using the CSMA/CD access
protocol, was referred to as ‘‘Ethernet’’ in a memorandum authored by Metcalfe.
He named it after the luminiferous ether through which electromagnetic
radiation was once thought to propagate.
During its progression from a research-based network into a manufactured
product, Ethernet suffered several identity crises. During the 1970s, it endured
such temporary names as the ‘‘Alto Aloha Network’’ and the ‘‘Xerox Wire.’’
After reverting to the original name, Xerox decided, quite wisely, that the
establishment of Ethernet as an industry standard for local area networks
would be expedited by an alliance with other vendors. A resulting alliance
with Digital Equipment Corporation and Intel Corporation, which was known
as the DIX Consortium, resulted in the development of a 10-Mbps Ethernet network.
It also provided Ethernet with a significant advantage over Datapoint’s
ARCNet and Wang Laboratories’ Wangnet, proprietary local area networks
that were the main competitors to Ethernet during the 1970s.
The alliance between Digital Equipment, Intel, and Xerox resulted in the
publication of a ‘‘Blue Book Standard’’ for Ethernet Version 1. An enhancement
to that standard occurred in 1982 and is referred to as Ethernet Version 2
or Ethernet II in many technical publications. Although the DIX Consortium
submitted its Ethernet specification to the IEEE in 1980, it wasn’t until 1982
that the IEEE 802.3 CSMA/CD standard was promulgated. Because the IEEE
used Ethernet Version 2 as the basis for the 802.3 CSMA/CD standard, and
Ethernet Version 1 has been obsolete for approximately two decades, we
will refer to Ethernet Version 2 as Ethernet in the remainder of this book.
Network Components
The 10-Mbps Ethernet network standard originally developed by Xerox,
Digital Equipment Corporation, and Intel was based on the use of five hardware
components. Those components include a coaxial cable, a cable tap,
a transceiver, a transceiver cable, and an interface board (also known as
an Ethernet controller). Figure 3.1 illustrates the relationships among Ethernet
components.
Coaxial Cable
One of the problems faced by the designers of Ethernet was the selection of an
appropriate medium. Although twisted-pair wire is relatively inexpensive and
easy to use, the short distances between twists serve as an antenna for receiving
electromagnetic and radio frequency interference in the form of noise. Thus,
the use of twisted-pair cable restricts the network to relatively short distances.
Coaxial cable, however, has a dielectric shielding the conductor. As long
as the ends of the cable are terminated, coaxial cable can transmit over
greater distances than twisted-pair cable. Because the original development of
Ethernet was oriented toward interconnecting computers located in different
buildings, the use of coaxial cable was well suited for this requirement. Thus,
the initial selection for the Ethernet transmission medium was coaxial cable.
Figure 3.1 Ethernet hardware components. When thick coaxial cable is used
for the bus, an Ethernet cable connection is made with a transceiver cable and
a transceiver tapped into the cable.
There are two types of coaxial cable that can be used to form the main
Ethernet bus. The first type of coaxial cable specified for Ethernet was a relatively
thick 50-ohm cable, which is normally colored yellow and is commonly
referred to as ‘‘thick’’ Ethernet. This cable has a marking every 2.5 meters to
indicate where a tap should occur, if one is required to connect a station to the
main cable at a particular location. These markings represent the minimum
distance one tap must be separated from another on an Ethernet network.
The outer insulation or jacket of the yellow-colored cable is constructed using
PVC. A second popular type of 50-ohm cable has a Teflon jacket and is colored
orange-brown. The Teflon-jacketed coax is used for installations in air-handling
spaces, referred to as plenums, to satisfy fire regulations.
When installing a thick coaxial segment the cable should be rolled from a
common cable spool or cable spools manufactured at the same time, referred
to as a similar cable lot, to minimize irregularities between cables. Under
the Ethernet specifications when the use of cable from different lots cannot
be avoided, cable sections should be used that are either 23.4 m, 70.2 m,
or 117 m in length. Those cable lengths minimize the possibility of excessive
signal reflections occurring due to variances in the minor differences
in cable produced by different vendors or from different cable lots from the
same vendor.
A second type of coaxial cable used with Ethernet is smaller and more
flexible; however, it is capable of providing a transmission distance only
one-third of that obtainable on thick cable. This lighter and more flexible cable is
referred to as ‘‘thin’’ Ethernet and also has an impedance of 50 ohms. When
the IEEE standardized Ethernet, the thick coaxial cable–based network was
assigned the designation 10BASE-5, while the network that uses the thinner
cable was assigned the designator 10BASE-2. Later in this chapter we will
examine IEEE 802.3 networks under which 10BASE-5, 10BASE-2, and other
Ethernet network designators are defined.
Two of the major advantages of thin Ethernet over thick cable are its cost
and its use of BNC connectors. Thin Ethernet is significantly less expensive
than thick Ethernet. Thick Ethernet requires connections via taps, whereas
the use of thin Ethernet permits connections to the bus via industry standard
BNC connectors that form T-junctions.
Transceiver and Transceiver Cable
Transceiver is a shortened form of transmitter-receiver. This device contains
electronics to transmit and receive signals carried by the coaxial cable.
The transceiver contains a tap that, when pushed against the coaxial cable,
penetrates the cable and makes contact with the core of the cable. Ethernet
transceivers are used for baseband transmission on a coaxial cable and
usually include a removable tap assembly. The latter enables vendors to
manufacture transceivers that can operate on thick and thin coaxial cable,
enabling network installers to change only the tap instead of the entire device
and eliminating the necessity to purchase multiple types of transceivers to
accommodate different media requirements. In books and technical literature
the transceiver, its tap, and its housing are often referred to as the medium
attachment unit (MAU).
The transceiver is responsible for carrier detection and collision detection.
When a collision is detected during a transmission, the transceiver places
a special signal, known as a jam, on the cable. This signal, described in
Chapter 4, is of sufficient duration to propagate down the network bus and
inform all of the other transceivers attached to the bus that a collision
has occurred.
The cable that connects the interface board to the transceiver is known
as the transceiver cable. This cable can be up to 50 meters (165 feet) in
length and contains five individually shielded twisted pairs. Two pairs are
used for data in and data out, and two pairs are used for control signals in
and out. The remaining pair, which is not always used, permits the power
from the computer in which the interface board is inserted to power the
transceiver.
Because collision detection is a critical part of the CSMA/CD access protocol,
the original version of Ethernet was modified to inform the interface board that
the transceiver collision circuitry is operational. This modification resulted in
each transceiver’s sending a signal to the attached interface board after every
transmission, informing the board that the transceiver’s collision circuitry
is operational. This signal is sent by the transceiver over the collision pair
of the transceiver cable and must start within 0.6 microseconds after each
frame is transmitted. The duration of the signal can vary between 0.5 and
1.5 microseconds. Known as the signal quality error and also referred to
as the SQE or heartbeat, this signal is supported by Ethernet Version 2.0,
published as a standard in 1982, and by the IEEE 802.3 standard. Although
the heartbeat (SQE) is between the transceiver and the system to which it is
attached, under the IEEE 802.3 standard transceivers attached to a repeater
must have their heartbeat disabled.
The SQE signal is simply a delayed response by a few bit times to the
transmission of each frame, informing the interface card that everything is
working normally. Because the SQE signal only flows from the transceiver
back to the interface card, it does not delay packet transmission nor does it
flow onto the network. Today most transceivers have a switch or jumper that
enables the SQE signal, commonly labeled SQE Test, to be disabled. Because
repeaters must monitor signals in real time and cannot use the Ethernet time
gap of 9.6 μs between frames (which we will discuss later in this book), this
means that they are not capable of recognizing a heartbeat signal. It should be
noted that a twisted-pair 10BASE-T Ethernet hub is also a repeater. If you fail
to disable the SQE Test signal, the repeater electronics, including hub ports,
will misinterpret the signal as a collision. This will result in the transmission
of a jam signal on all hub ports other than the port receiving the SQE Test
signal, significantly degrading network performance.
Interface Board
The interface board, or network interface card (NIC), is inserted into an
expansion slot within a computer and is responsible for transmitting frames
to and receiving frames from the transceiver. This board contains several
special chips, including a controller chip that assembles data into an Ethernet
frame and computes the cyclic redundancy check used for error detection.
Thus, this board is also referred to as an Ethernet controller.
Most Ethernet interface boards contain a DB-15 connector for connecting
the board to the transceiver. Once thin Ethernet cabling became popular,
many manufacturers made their interface boards with both DB-15 and BNC
connectors. The latter was used to permit the interface board to be connected
to a thin Ethernet cable through the use of a T-connector. Figure 3.2 illustrates
the rear panel of a network interface card containing both DB-15 and BNC
connectors. With the development of twisted-pair-based Ethernet, such as
10BASE-T, modern Ethernet interface boards, which are commonly referred
to as network interface cards (NICs), also include an RJ-45 connector to
accommodate a connection to twisted-wire-based networks.
Cabling Restrictions
Under the Ethernet standard developed by Xerox, Digital Equipment Corporation,
and Intel Corporation, a thick coaxial cable is permitted a maximum
length of 500 meters (1640 feet). Multiple cable segments can be joined
together through the use of repeaters; however, the maximum cable distance
between two transceivers is limited to 2.5 km (8200 feet), and no more
than four repeaters can be traversed on any path between transceivers.
Each thick trunk cable segment must be terminated with what is known as
an N-series connector on each end of the cable. The terminator ‘‘terminates’’
the network and blocks electrical interference from flowing onto what would
otherwise be exposed cable. One N-series connector also serves as a ground
when used with an attached grounding wire that can be connected to the
middle screw of a dual AC electrical power outlet.
Figure 3.2 Ethernet interface board connectors. The first generation of
Ethernet interface boards (network interface cards) contain both DB-15 and
BNC connectors to support the use of either thick or thin coaxial cable.
A second generation of interface cards included an RJ-45 connector to
accommodate a connection to twisted-wire-based networks.
Figure 3.3 illustrates a thick Ethernet cable segment after an installer fastened
N-series plugs to each cable end. This is normally accomplished after
the desired length of coaxial cable is routed to form the required network bus.
Next, an N-series terminator connector is fastened onto one N-series plug,
while an N-series terminator with ground wire is fastened onto the N-series
plug at the opposite end of the cable segment.
In addition, as previously mentioned, attachments to the common bus must
be separated by multiples of 2.5 meters. The latter cabling restriction prevents
reflections caused by taps in the main cable from adding up in phase and being
mistaken by one transceiver for another’s transmission. For the total network,
up to 1024 attachments are allowed, including all cable sections connected
through the use of repeaters; however, no more than 100 transceivers can be
on any one cable segment.
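The rules summarized above can be expressed as a short validation sketch; the function below simply restates the 500-meter, 100-transceiver, and 2.5-meter constraints, and its name and structure are assumptions of this text.

```python
def check_thick_segment(tap_positions_m, segment_length_m: float) -> list:
    """Check a thick-Ethernet segment against the cabling rules described
    above: 500-meter maximum segment length, no more than 100 transceivers
    per segment, and taps placed only on the 2.5-meter cable markings."""
    problems = []
    if segment_length_m > 500:
        problems.append("segment exceeds the 500 m maximum length")
    if len(tap_positions_m) > 100:
        problems.append("more than 100 transceivers on one segment")
    for pos in tap_positions_m:
        multiples = pos / 2.5
        if abs(multiples - round(multiples)) > 1e-9:
            problems.append(f"tap at {pos} m is not on a 2.5 m marking")
    return problems

print(check_thick_segment([2.5, 5.0, 8.0], 480.0))
# ['tap at 8.0 m is not on a 2.5 m marking']
```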
Networking Standards
Standards Organizations
In this chapter, we will first focus our attention on two national and two
international standards organizations. The national standards organizations
we will briefly discuss in this section are the American National Standards
Institute (ANSI) and the Institute of Electrical and Electronics Engineers
(IEEE). The work of both organizations has been a guiding force in the rapid
expansion in the use of local area networks due to a series of standards they
have developed. Due to the importance of the work of the IEEE in developing
LAN standards, we will examine those standards as a separate entity in the
next section in this chapter. In the international arena, we will discuss the
role of the International Telecommunications Union (ITU), formerly known
as the Consultative Committee for International Telephone and Telegraph
(CCITT), and the International Standards Organization (ISO), both of which
have developed numerous standards to facilitate the operation of local and
wide area networks.
Because of the importance of the ISO’s Open Systems Interconnection (OSI)
Reference Model and the IEEE’s 802 Committee lower layer standards, we
will examine each as a separate entity in this chapter. Because a series of
Internet standards define the manner by which the TCP/IP protocol suite
can transport data between LANs and WANs, we will also discuss what are
referred to as Requests For Comments (RFCs). Because we must understand
the OSI Reference Model before examining the effect of the efforts of the
IEEE and ANSI upon the lower layers of that model and the role of RFCs,
we will look at the OSI Reference Model before examining the role of other
standards.
National Standards Organizations
The two national standards organizations we will briefly discuss are the American
National Standards Institute and the Institute of Electrical and Electronics
Engineers. In the area of local area networking standards, both ANSI and the
IEEE work in conjunction with the ISO to standardize LAN technology.
The ISO delegated the standardization of local area networking technology
to ANSI. The American National Standards Institute, in turn, delegated
lower-speed LAN standards—initially defined as operating rates at and below
50 Mbps—to the IEEE. This resulted in ANSI’s developing standards for the
100-Mbps fiber distributed data interface (FDDI), while the IEEE developed
standards for Ethernet, Token-Ring, and other LANs. Because the IEEE developed
standards for 10-Mbps Ethernet, that organization was tasked with the
responsibility for modifications to that LAN technology. This resulted in the
IEEE becoming responsible for the standardization of high-speed Ethernet
to include isoENET, 100BASE-T, and 100VG-AnyLAN, the latter two representing
100-Mbps LAN operating rates. Another series of IEEE standards
beginning with the prefix of 1000 defines the operation of Gigabit Ethernet
over different types of copper and optical fiber media. In addition, when this
book revision occurred the IEEE was in the process of finalizing a standard for
10 Gbps Ethernet.
Once the IEEE develops and approves a standard, that standard is sent to
ANSI for review. If ANSI approves the standard, it is then sent to the ISO.
Then, the ISO solicits comments from all member countries to ensure that
the standard will work at the international level, resulting in an IEEE- or
ANSI-developed standard becoming an ISO standard.
ANSI
The principal standards-forming body in the United States is the American
National Standards Institute (ANSI). Located in New York City, this nonprofit,
nongovernmental organization was founded in 1918 and functions as the
representative of the United States to the ISO.
American National Standards Institute standards are developed through
the work of its approximately 300 Standards Committees, and from the
efforts of associated groups such as the Electronic Industry Association (EIA).
Recognizing the importance of the computer industry, ANSI established its
X3 Standards Committee in 1960. That committee consists of 25 technical
committees, each assigned to develop standards for a specific technical area.
One of those technical committees is the X3S3 committee, more formally
known as the Data Communications Technical Committee. This committee
was responsible for the ANSI X3T9.5 standard that governs FDDI operations,
and that is now recognized as the ISO 9314 standard.
IEEE
The Institute of Electrical and Electronics Engineers (IEEE) is a U.S.-based
engineering society that is very active in the development of data communications
standards. In fact, the most prominent developer of local area networking
standards is the IEEE, whose subcommittee 802 began its work in 1980, before
a viable market for the technology had even been established.
The IEEE Project 802 efforts are concentrated on the physical interface
between network devices and the procedures and functions required to
establish, maintain, and release connections among them. These procedures
include defining data formats, error control procedures, and other control
activities governing the flow of information. This focus of the IEEE actually
represents the lowest two layers of the ISO model, physical and data link, which
are discussed later in this chapter.
International Standards Organizations
Two important international standards organizations are the International
Telecommunications Union (ITU), formerly known as the Consultative
Committee for International Telephone and Telegraph (CCITT), and the
International Standards Organization (ISO). The ITU can be considered a
governmental body, because it functions under the auspices of an agency of
the United Nations. Although the ISO is a nongovernmental agency, its work
in the field of data communications is well recognized.
ITU
The International Telecommunications Union (ITU) is a specialized agency of
the United Nations headquartered in Geneva, Switzerland. The ITU has direct
responsibility for developing data communications standards and consists
of 15 study groups, each with a specific area of responsibility. Although
the CCITT was renamed the ITU in 1994, it is still periodically referred to
by its former initials. The remainder of this book will therefore refer to
this standards organization by its newer, commonly recognized initials.
The work of the ITU is performed on a four-year cycle known as a study
period. At the conclusion of each study period, a plenary session occurs.
During the plenary session, the work of the ITU during the previous four
years is reviewed, proposed recommendations are considered for adoption,
and items to be investigated during the next four-year cycle are considered.
The ITU’s eleventh plenary session met in 1996 and its twelfth session
occurred during 2000. Although approval of recommended standards is not
intended to be mandatory, ITU recommendations have the effect of law in
some Western European countries, and many of its recommendations have
been adopted by communications carriers and vendors in the United States.
Perhaps the best-known set of ITU recommendations is its V-series, which
describes the operation of many different modem features—for example, data
compression and transmission error detection and correction.
ISO
The International Standards Organization (ISO) is a nongovernmental entity
that has consultative status within the UN Economic and Social Council. The
goal of the ISO is to "promote the development of standards in the world with
a view to facilitating international exchange of goods and services."
The membership of the ISO consists of the national standards organizations
of most countries. There are approximately 100 countries currently
participating in its work.
Perhaps the most notable achievement of the ISO in the field of communications
is its development of the seven-layer Open Systems Interconnection
(OSI) Reference Model.
The ISO Reference Model
The International Standards Organization (ISO) established a framework for
standardizing communications systems called the Open Systems Interconnection
(OSI) Reference Model. The OSI architecture defines the communications
process as a set of seven layers, with specific functions isolated and associated
with each layer. Each layer, as illustrated in Figure 2.1, covers lower layer
processes, effectively isolating them from higher layer functions. In this way,
each layer performs a set of functions necessary to provide a set of services to
the layer above it.
Layer isolation permits the characteristics of a given layer to change without
impacting the remainder of the model, provided that the supporting services
remain the same. One major advantage of this layered approach is that users
can mix and match OSI-conforming communications products, and thus tailor
their communications systems to satisfy particular networking requirements.
The OSI Reference Model, while not completely viable with many current
network architectures, offers the potential to connect networks and networking
devices together to form integrated networks, while using equipment from
different vendors. This interconnectivity potential will be of substantial benefit
to both users and vendors. For users, interconnectivity will remove the
shackles that in many instances tie them to a particular vendor. For vendors,
the ability to easily interconnect their products will provide them with access
to a larger market. The importance of the OSI model is such that it was adopted
by the ITU as Recommendation X.200.
Layered Architecture
As previously discussed, the OSI Reference Model is based on the establishment
of a layered, or partitioned, architecture. This partitioning effort is
Application----------------Layer 7
Presentation---------------Layer 6
Session--------------------Layer 5
Transport-----------------Layer 4
Network------------------Layer 3
Data Link-----------------Layer 2
Physical------------------Layer 1
Figure 2.1 ISO Reference Model.
derived from the scientific process, in which complex problems are subdivided
into several simpler tasks.
As a result of the application of a partitioning approach to communications
network architecture, the communications process was subdivided into seven
distinct partitions, called layers. Each layer consists of a set of functions
designed to provide a defined series of services. For example, the functions
associated with the physical connection of equipment to a network are referred
to as the physical layer.
With the exception of layers 1 and 7, each layer is bounded by the layers
above and below it. Layer 1, the physical layer, is bound below by the
interconnecting medium over which transmission flows, while layer 7 is the
upper layer and has no upper boundary. Within each layer is a group of
functions that provide a set of defined services to the layer above it, resulting
in layer n using the services of layer n − 1. Thus, the design of a layered
architecture enables the characteristics of a particular layer to change without
affecting the rest of the system, assuming that the services provided by the
layer do not change.
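To make that service relationship concrete, the following is a minimal Python sketch of two adjacent layers in which each layer relies only on the send and receive services of the layer beneath it. The class names, the toy frame header, and the in-memory "medium" are illustrative assumptions, not part of any standard.

```python
# A minimal sketch of layered service provision. Each layer exposes the same
# send/receive interface and relies only on the layer beneath it, so a
# layer's internals can change without affecting the layers above.

class PhysicalLayer:
    """Layer 1: moves raw bytes across a medium (here, an in-memory queue)."""
    def __init__(self):
        self.medium = []

    def send(self, data: bytes) -> None:
        self.medium.append(data)

    def receive(self) -> bytes:
        return self.medium.pop(0)

class DataLinkLayer:
    """Layer 2: frames the payload, using only layer 1's services."""
    HEADER = b"FRAME|"

    def __init__(self, lower: PhysicalLayer):
        self.lower = lower                    # layer n uses layer n - 1

    def send(self, data: bytes) -> None:
        self.lower.send(self.HEADER + data)   # prepend a toy frame header

    def receive(self) -> bytes:
        return self.lower.receive()[len(self.HEADER):]

# The data link layer neither knows nor cares how layer 1 moves the bytes.
link = DataLinkLayer(PhysicalLayer())
link.send(b"hello")
print(link.receive())                         # b'hello'
```

Because the physical layer could swap its list for a socket or a serial port without changing its send/receive interface, the data link layer above it would continue to work unmodified, which is precisely the benefit of layer isolation.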
OSI Layers
The best way to gain an understanding of the OSI layers is to examine
a network structure that illustrates the components of a typical wide area
network. Figure 2.2 illustrates a network structure that is typical only in the
sense that it will be used for a discussion of the components upon which
networks are constructed.
The circles in Figure 2.2 represent nodes, which are points where data
enters or exits a network or is switched between two networks connected by
one or more paths. Nodes are connected to other nodes via communications
cables or circuits and can be established on any type of communications
medium, such as cable, microwave, or radio.
From a physical perspective, a node can be based on any of several types of
computers, including a personal computer, minicomputer, mainframe computer,
or specialized computer, such as a front-end processor. Connections into a
wide area network's nodes can occur via terminal devices, such as PCs and
fixed logic devices, directly connected to computers; via terminals connected
to a node through one or more intermediate communications devices;
or via paths linking one network to another network. In fact, a workstation on
an Ethernet local area network that provides access into a wide area network
can be considered a network node. In this situation, the workstation can be a
bridge, router, or gateway, and provides a connectivity mechanism between
other stations on the Ethernet local area network and the wide area network.
The routes between two nodes—such as C-E-A, C-D-A, C-A, and C-B-A, all
of which can be used to route data between nodes A and C—are information
paths. Due to the variability in the flow of information through a wide area
network, the shortest path between nodes may not be available for use,
or may be inefficient in comparison to other possible paths. A temporary
connection between two nodes that is based on such parameters as current
network activity is known as a logical connection. This logical connection
represents the use of physical facilities, including paths and temporary
node-switching capability.
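The route selection just described can be sketched in a few lines. The following Python fragment picks the cheapest available path between nodes C and A from the candidate routes named above; the per-link costs and the unavailable C-B circuit are invented solely for illustration.

```python
# A minimal sketch of logical-connection selection between nodes C and A.
# Each candidate path is scored by per-link cost (standing in for current
# network activity), and an unavailable link rules a path out entirely.

paths = [["C", "E", "A"], ["C", "D", "A"], ["C", "A"], ["C", "B", "A"]]

link_cost = {("C", "E"): 2, ("E", "A"): 2, ("C", "D"): 1, ("D", "A"): 4,
             ("C", "A"): 5, ("C", "B"): 1, ("B", "A"): 1}
link_up = {link: True for link in link_cost}
link_up[("C", "B")] = False           # suppose the C-B circuit is unavailable

def path_cost(path):
    links = list(zip(path, path[1:]))
    if not all(link_up[link] for link in links):
        return float("inf")           # a down link makes the path unusable
    return sum(link_cost[link] for link in links)

best = min(paths, key=path_cost)
print(best, path_cost(best))          # ['C', 'E', 'A'] 4 -- beats direct C-A
```

Note that with these invented costs the direct C-A circuit loses to the two-hop C-E-A route, mirroring the point made above that the shortest path may be inefficient in comparison to other possible paths.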
The major functions of each of the seven OSI layers are described in the
following sections.
Layer 1—The Physical Layer
At the lowest or most basic level, the physical layer (level 1) is a set of rules
that specifies the electrical and physical connection between devices. This
level specifies the cable connections and the electrical rules necessary to
transfer data between devices. Typically, the physical link corresponds to
previously established interface standards, such as the RS-232/V.24 interface.
This interface governs the attachment of data terminal equipment, such as the
serial port of personal computers, to data communications equipment, such
as modems.
Layer 2—The Data Link Layer
The next layer, which is known as the data link layer (level 2), denotes
how a device gains access to the medium specified in the physical layer.
It also defines data formats, including the framing of data within transmitted
messages, error control procedures, and other link control activities. Because
it defines data formats, including procedures to correct transmission errors,
this layer becomes responsible for the reliable delivery of information. An
example of a data link control protocol that can reside at this layer is the ITU’s
High-Level Data Link Control (HDLC).
Because the development of the OSI layers was originally targeted toward wide
area networking, their application to local area networks required a degree of
modification. Under the IEEE 802 standards, the data link layer was initially
divided into two sublayers: logical link control (LLC) and media access control
(MAC). The LLC layer is responsible for generating and interpreting commands
that control the flow of data and perform recovery operations in the event of
errors. In comparison, the MAC layer is responsible for providing access to
the local area network, which enables a station on the network to transmit
information.
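As a rough illustration of this division of labor, the sketch below processes a received frame in two stages: a MAC stage that handles the addressing that carried the frame across the LAN, and an LLC stage that interprets the DSAP, SSAP, and control fields. The field offsets follow the IEEE 802.3 and 802.2 layouts, but the sample frame and the one-byte control field are simplifying assumptions.

```python
# A sketch of the MAC/LLC sublayer division on a received IEEE 802.3 frame:
# the MAC sublayer strips addressing, the LLC sublayer interprets its own
# control header before handing user data upward.

import struct

def mac_receive(frame: bytes):
    dest, src, length = struct.unpack_from("!6s6sH", frame, 0)
    return dest, src, frame[14:14 + length]   # MAC: addressing and access

def llc_receive(payload: bytes):
    dsap, ssap, control = struct.unpack_from("!BBB", payload, 0)
    return dsap, ssap, control, payload[3:]   # LLC: link control information

frame = bytes(6) + bytes(6) + (3).to_bytes(2, "big") + b"\xaa\xaa\x03"
dest, src, payload = mac_receive(frame)
print(llc_receive(payload))                   # (170, 170, 3, b'')
```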
With the development of high-speed local area networks designed to operate
on a variety of different types of media, an additional degree of OSI layer
subdivision was required. First, the data link layer required the addition
of a reconciliation layer (RL) to reconcile a medium-independent interface
(MII) signal added to a version of high-speed Ethernet, commonly referred
to as Fast Ethernet. Next, the physical layer used for Fast Ethernet required
a subdivision into three sublayers. One sublayer, known as the physical
coding sublayer (PCS), performs data encoding. A physical medium attachment
sublayer (PMA) maps messages from the physical coding sublayer to the
transmission media, while a medium-dependent interface (MDI) specifies the
connector for the media used. Similarly, Gigabit Ethernet implements a gigabit
media-independent interface (GMII), which enables different encoding and
decoding methods to be used with different types of media.
Later in this chapter, we will examine the IEEE 802 subdivision of the data
link and physical layers, as well as the operation of each resulting sublayer.
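As an example of the kind of work the physical coding sublayer performs, 100BASE-X employs 4B/5B block coding, in which each 4-bit data nibble is replaced by a 5-bit code group chosen to guarantee enough signal transitions on the medium. The sketch below shows only part of the standard code table, and the high-nibble-first ordering is a readability simplification rather than the wire order.

```python
# A sketch of one physical coding sublayer (PCS) task: 4B/5B encoding as
# used by 100BASE-X. Each data nibble becomes a 5-bit code group.

FOUR_B_FIVE_B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    # ... the remaining eight code groups are omitted for brevity
}

def pcs_encode(data: bytes) -> str:
    bits = []
    for byte in data:
        bits.append(FOUR_B_FIVE_B[byte >> 4])     # high nibble
        bits.append(FOUR_B_FIVE_B[byte & 0x0F])   # low nibble
    return "".join(bits)

print(pcs_encode(b"\x01"))    # '1111001001': each nibble became 5 code bits
```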
Layer 3—The Network Layer
The network layer (level 3) is responsible for arranging a logical connection
between the source and destination nodes on the network. This responsibility
includes the selection and management of a route for the flow of information
between source and destination, based on the available data paths in the
network. Services provided by this layer are associated with the movement
of data packets through a network, including addressing, routing, switching,
sequencing, and flow control procedures. In a complex network, the source
and destination may not be directly connected by a single path, but instead
require a path that consists of many subpaths. Thus, routing data through the
network onto the correct paths is an important feature of this layer.
Several protocols have been defined for layer 3, including the ITU X.25
packet switching protocol and the ITU X.75 gateway protocol. X.25 governs
the flow of information through a packet network, while X.75 governs the flow
of information between packet networks. Other popular examples of layer 3
protocols include the Internet Protocol (IP) and Novell’s Internet Packet
Exchange (IPX), both of which represent layers in their respective protocol
suites that were defined before the ISO Reference Model was developed. In
an Ethernet environment the transport unit is a frame. As we will note later
in this book when we examine Ethernet frame formats in Chapter 4, the frame
on a local area network is used as the transport facility to deliver such layer 3
protocols as IP and IPX, which in turn represent the vehicles for delivering
higher-layer protocols in the IP and IPX protocol suites.
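The frame-as-delivery-vehicle idea can be sketched as a simple demultiplexing table: the frame's type identifier selects the layer 3 module that should receive the payload. The type values 0x0800 (IP) and 0x8137 (Novell IPX) are their registered Ethernet assignments; the handler functions here are placeholders.

```python
# A minimal sketch of layer 3 demultiplexing: the frame's type value picks
# the network layer protocol module that receives the payload.

def handle_ip(packet: bytes) -> None:
    print("IP packet,", len(packet), "bytes")

def handle_ipx(packet: bytes) -> None:
    print("IPX packet,", len(packet), "bytes")

LAYER3_HANDLERS = {0x0800: handle_ip, 0x8137: handle_ipx}

def deliver(frame_type: int, payload: bytes) -> None:
    handler = LAYER3_HANDLERS.get(frame_type)
    if handler is None:
        return                     # no layer 3 module bound; discard frame
    handler(payload)

deliver(0x0800, bytes(20))         # a 20-byte IPv4-sized header
```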
Layer 4—The Transport Layer
The transport layer (level 4) is responsible for guaranteeing that the transfer
of information occurs correctly after a route has been established through the
network by the network level protocol. Thus, the primary function of this layer
is to control the communications session between network nodes once a path
has been established by the network control layer. Error control, sequence
checking, and other end-to-end data reliability factors are the primary concern
of this layer, and they enable the transport layer to provide a reliable
end-to-end data transfer capability. Examples of popular transport layer protocols
include the Transmission Control Protocol (TCP) and the User Datagram
Protocol (UDP), both of which are part of the TCP/IP protocol suite, and
Novell’s Sequenced Packet Exchange (SPX).
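A brief sketch contrasts the two TCP/IP transport protocols just named: TCP supplies the reliable, sequenced end-to-end service this layer is responsible for, while UDP omits those guarantees. The address used is a placeholder from the 192.0.2.0/24 documentation range, so the actual network calls are left commented out.

```python
# A minimal sketch contrasting the TCP and UDP transport services.

import socket

# TCP: connection-oriented; the stack handles sequencing and retransmission.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(("192.0.2.1", 7))
# tcp.sendall(b"delivered reliably and in order")

# UDP: connectionless; each datagram is delivered, or lost, independently.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# udp.sendto(b"best-effort datagram", ("192.0.2.1", 7))

tcp.close()
udp.close()
```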
Layer 5—The Session Layer
The session layer (level 5) provides a set of rules for establishing and terminating
data streams between nodes in a network. The services that this session
layer can provide include establishing and terminating node connections,
message flow control, dialogue control, and end-to-end data control.
Layer 6—The Presentation Layer
The presentation layer (level 6) services are concerned with data transformation,
formatting, and syntax. One of the primary functions performed by the
presentation layer is the conversion of transmitted data into a display format
appropriate for a receiving device. This can include any necessary conversion
between ASCII and EBCDIC codes. Data encryption/decryption and data compression/
decompression are additional examples of the data transformation
that can be handled by this layer.
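As a small worked example of such a transformation, the following sketch converts a message between ASCII and EBCDIC using one of the EBCDIC code pages (cp500) that ships with Python; a full presentation layer would of course also handle formatting, encryption, and compression.

```python
# A presentation layer transformation in miniature: ASCII <-> EBCDIC.

text = "HELLO"
ebcdic = text.encode("cp500")     # ASCII/Unicode -> EBCDIC bytes
print(ebcdic.hex())               # c8c5d3d3d6: the EBCDIC code points
print(ebcdic.decode("cp500"))     # back to 'HELLO'
```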
Layer 7—The Application Layer
Finally, the application layer (level 7) acts as a window through which the
application gains access to all of the services provided by the model. Examples
of functions performed at this level include file transfers, resource sharing,
and database access. While the first four layers are fairly well defined, the
top three layers may vary considerably, depending on the network protocol
used. For example, the TCP/IP protocol, which predates the OSI Reference
Model, groups layer 5 through layer 7 functions into a single application
layer. In Chapter 5 when we examine Internet connectivity, we will also
examine the relationship of the TCP/IP protocol stack to the seven-layer OSI
Reference Model.
Figure 2.3 illustrates the OSI model in schematic format, showing the
various levels of the model with respect to a terminal device, such as a personal
computer accessing an application on a host computer system. Although
Figure 2.3 shows communications occurring via a modem connection on
a wide area network, the OSI model schematic is also applicable to local
area networks. Thus, the terminal shown in the figure could be replaced
by a workstation on an Ethernet network while the front-end processor
(FEP) would, via a connection to that network, become a participant on
that network.
Data Flow
As data flows within an ISO network, each layer appends appropriate heading
information to frames of information flowing within the network, while
removing the heading information added by a lower layer. In this manner,
layer n interacts with layer n − 1 as data flows through an ISO network.
Figure 2.4 illustrates the appending and removal of frame header information
as data flows through a network constructed according to the ISO
Reference Model. Because each higher level removes the header appended by
a lower level, the frame traversing the network arrives in its original form at
its destination.
As you will surmise from the previous illustrations, the ISO Reference
Model is designed to simplify the construction of data networks. This simplification
is due to the potential standardization of methods and procedures
to append appropriate heading information to frames flowing through a
network, permitting data to be routed to its appropriate destination following
a uniform procedure.
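The appending and removal of headers can be summarized in a few lines of Python. In this minimal sketch, each layer prepends a toy header on the way down, and the peer layers strip the headers in reverse order on the way up, so the data arrives at its destination in its original form; the header contents are invented values.

```python
# A minimal sketch of the data flow just described: headers on, headers off.

LAYERS = ["transport", "network", "data link"]

def send_down(data: bytes) -> bytes:
    for layer in LAYERS:                       # layer n hands to layer n - 1
        data = f"[{layer}]".encode() + data
    return data

def receive_up(data: bytes) -> bytes:
    for layer in reversed(LAYERS):             # peers strip in reverse order
        header = f"[{layer}]".encode()
        assert data.startswith(header), "header mismatch"
        data = data[len(header):]
    return data

wire = send_down(b"user data")
print(wire)                # b'[data link][network][transport]user data'
print(receive_up(wire))    # b'user data'
```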
Networking Concepts
Wide Area Networks
The evolution of wide area networks can be considered to have originated
in the mid- to late 1950s, commensurate with the development of the first
generation of computers. Based on the use of vacuum tube technology,
first-generation computers were large, power-hungry devices whose placement
resulted in a focal point for data processing and the coinage of the term
data center.
Computer-Communications Evolution
Originally, access to the computational capability of first-generation computers
was through the use of punched cards. After an employee of the
organization used a keypunch to create a deck of cards, that card deck was
submitted to a window in the data center, typically labeled input/output (I/O)
control. An employee behind the window would accept the card deck and
complete a form that contained instructions for running the submitted job.
The card deck and instructions would then be sent to a person in production
control, who would schedule the job and turn it over to operations for
execution at a predefined time. Once the job was completed, the card deck
and any resulting output would be sent back to I/O control, enabling the job
originator to return to the window in the data center to retrieve his or her
card deck and the resulting output. With a little bit of luck, programmers
might see the results of their efforts on the same day that they submitted
their jobs.
Because the computer represented a considerable financial investment for
most organizations, it was understandable that these organizations would
be receptive to the possibility of extending their computers’ accessibility.
By the mid-1960s, several computer manufacturers had added remote access
capabilities to one or more of their computers.
Remote Batch Transmission
One method of providing remote access was the installation of a batch
terminal at a remote location. That terminal was connected via a telephone
company–supplied analog leased line and a pair of modems to the computer
in the corporate data center.
The first type of batch terminal developed to communicate with a data
center computer contained a card reader, a printer, a serial communications
adapter, and hard-wired logic in one common housing. The serial communications
adapter converted the parallel bits of each internal byte read from the
card reader into a serial data stream for transmission. Similarly, the adapter
performed a reverse conversion process, converting a sequence of received
serial bits into an appropriate number of parallel bits to represent a character
internally within the batch terminal. Because the batch terminal was located
remotely from the data center, it was often referred to as a remote batch
terminal, while the process of transmitting data was referred to as remote
batch transmission. In addition, the use of a remote terminal as a mechanism
for grouping card decks of individual jobs, all executed at the remote data
center, resulted in the term remote job entry terminal being used as a name
for this device.
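The adapter's conversion process can be sketched in a few lines. The fragment below serializes each byte into individual bits and reassembles received bits into bytes; sending the least significant bit first is an assumption made for illustration.

```python
# A sketch of the serial communications adapter's job: parallel bytes out to
# a bit stream, and a received bit stream back into parallel bytes.

def to_serial(data: bytes):
    for byte in data:
        for i in range(8):
            yield (byte >> i) & 1              # one bit at a time, LSB first

def from_serial(bits) -> bytes:
    out, byte, count = bytearray(), 0, 0
    for bit in bits:
        byte |= bit << count                   # rebuild the byte in parallel
        count += 1
        if count == 8:                         # eight bits form a character
            out.append(byte)
            byte, count = 0, 0
    return bytes(out)

print(from_serial(to_serial(b"card deck")))    # b'card deck'
```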
Figure 1.1 illustrates in schematic form the relationships between a batch
terminal, transmission line, modems, and the data center computer. Because
the transmission line connects a remote batch terminal in one geographic area
to a computer located in a different geographic area, Figure 1.1 represents one
of the earliest types of wide area data communications networks.
Paralleling the introduction of remote batch terminals was the development
of a series of terminal devices, control units, and specialized communications
equipment, which resulted in the rapid expansion of interactive
computer applications. One of the most prominent collections of products was
introduced by the IBM Corporation under the trade name 3270 Information
Display System.
Figure 1.1 Remote batch transmission. The transmission of data from a
remote batch terminal represents one of the first examples of wide area data
communications networks.
IBM 3270 Information Display System
The IBM 3270 Information Display System was a term originally used to
describe a collection of products ranging from interactive terminals that
communicate with a computer, referred to as display stations, through several
types of control units and communications controllers. Later, through
the introduction of additional communications products from IBM and
numerous third-party vendors and the replacement of previously introduced
products, the IBM 3270 Information Display System became more of a
networking architecture and strategy than a simple collection of products.
First introduced in 1971, the IBM 3270 Information Display System was
designed to extend the processing power of the data center computer to
remote locations. Because the data center computer typically represented the
organization’s main computer, the term mainframe was coined to refer to a
computer with a large processing capability. As the mainframe was primarily
designed for data processing, its utilization for supporting communications
degraded its performance.
Communications Controller
To offload communications functions from the mainframe, IBM and other
computer manufacturers developed hardware to sample communications
lines for incoming bits, group bits into bytes, and pass a group of bytes
to the mainframe for processing. This hardware also performed a reverse
function for data destined from the mainframe to remote devices. When
first introduced, such hardware was designed using fixed logic circuitry,
and the resulting device was referred to as a communications controller.
Later, minicomputers were developed to execute communications programs,
with the ability to change the functionality of communications support by
the modification of software—a considerable enhancement to the capabilities
of this series of products. Because both hard-wired communications
controllers and programmed minicomputers performing communications
offloaded communications processing from the mainframe, the term front-end
processor evolved to refer to this category of communications equipment.
Although most vendors refer to a minicomputer used to offload communications
processing from the mainframe as a front-end processor, IBM
has retained the term communications controller, even though their fixed
logic hardware products were replaced over 20 years ago by programmable
minicomputers.
Control Units
To reduce the number of controller ports required to support terminals, as
well as the amount of cabling between controller ports and terminals, IBM
developed poll and select software to support its 3270 Information Display
System. This software enabled the communications controller to transmit
messages from one port to one or more terminals in a predefined group
of devices. To share the communications controller port, IBM developed
a product called a control unit, which acts as an interface between the
communications controller and a group of terminals.
In general terms, the communications controller transmits a message to the
control unit. The control unit examines the terminal address and retransmits
the message to the appropriate terminal. Thus, control units are devices that
reduce the number of lines required to link display stations to mainframe computers.
Both local and remote control units are available; the key differences
between them are the method of attachment to the mainframe computer and
the use of intermediate devices between the control unit and the mainframe.
Local control units are usually attached to a channel on the mainframe,
whereas remote control units are connected to the mainframe’s front-end
processor, which is also known as a communications controller in the IBM
environment. Because a local control unit is within a limited distance of the
mainframe, no intermediate communications devices, such as modems or data
service units, are required to connect a local control unit to the mainframe.
In comparison, a remote control unit can be located in another building or
in a different city; it normally requires the utilization of intermediate communications
devices, such as a pair of modems or a pair of data service
units, for communications to occur between the control unit and the communications
controller. The relationship of local and remote control units to
display stations, mainframes, and a communications controller is illustrated
in Figure 1.2.
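A minimal sketch of this arrangement appears below: a single control unit object forwards each message arriving from the communications controller port to the display station whose address it carries. The addresses and the message format are invented for illustration.

```python
# A minimal sketch of a control unit sharing one communications controller
# port among several display stations, dispatching by terminal address.

class ControlUnit:
    def __init__(self):
        self.terminals = {}                    # address -> display station

    def attach(self, address: int, terminal) -> None:
        self.terminals[address] = terminal

    def receive(self, address: int, message: str) -> None:
        terminal = self.terminals.get(address)
        if terminal is not None:               # deliver only to the addressee
            terminal(message)

cu = ControlUnit()
cu.attach(1, lambda msg: print("station 1:", msg))
cu.attach(2, lambda msg: print("station 2:", msg))
cu.receive(2, "balance inquiry response")      # only station 2 prints
```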
Network Construction
To provide batch and interactive access to the corporate mainframe from
remote locations, organizations began to build sophisticated networks. At
first, communications equipment such as modems and transmission lines was
obtainable only from AT&T and other telephone companies. Beginning in
1968 in the United States with the well-known Carterfone decision, competitive
non–telephone company sources for the supply of communications
equipment became available. The divestiture of AT&T during the 1980s and
the emergence of many local and long-distance communications carriers
paved the way for networking personnel to be able to select from among
several or even hundreds of vendors for transmission lines and communications
equipment.
As organizations began to link additional remote locations to their mainframes,
the cost of providing communications began to escalate rapidly.
This, in turn, provided the rationale for the development of a series of
line-sharing products referred to as multiplexers and concentrators. Although
most organizations operated separate data and voice networks, in the mid-
1980s communications carriers began to make available for commercial use
high-capacity circuits known as T1 in North America and E1 in Europe.
Through the development of T1 and E1 multiplexers, voice, data, and video
transmission can share the use of common high-speed circuits. Because the
interconnection of corporate offices with communications equipment and
facilities normally covers a wide geographical area outside the boundary
of one metropolitan area, the resulting network is known as a wide area
network (WAN).
Figure 1.3 shows an example of a wide area network spanning the continental
United States. In this example, regional offices in San Francisco and New
York are connected with the corporate headquarters, located in Atlanta, via T1
multiplexers and T1 transmission lines operating at 1.544 Mbps. Assuming
that each T1 multiplexer is capable of supporting the direct attachment of
a private branch exchange (PBX), both voice and data are carried by the T1
circuits between the two regional offices and corporate headquarters. The
three T1 circuits can be considered the primary data highway, or backbone,
of the corporate network.
Figure 1.3 Wide area network example. A WAN uses telecommunications
lines obtained from one or more communications carriers to connect geographically
dispersed locations.
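The T1 operating rate quoted above can be verified with a short calculation: a T1 frame carries one 8-bit sample for each of 24 channels plus a single framing bit, transmitted 8,000 times per second. The same arithmetic yields the 2.048-Mbps E1 rate from 32 channels of 64 kbps each.

```python
# A worked check of the T1 and E1 operating rates cited in the text.

channels, bits_per_sample, framing_bits, frames_per_sec = 24, 8, 1, 8000

t1_bps = (channels * bits_per_sample + framing_bits) * frames_per_sec
print(t1_bps)              # 1544000, i.e., 1.544 Mbps
print(32 * 64000)          # 2048000, i.e., 2.048 Mbps for E1
```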
In addition to the three major corporate sites that require the ability to route
voice calls and data between locations, let us assume that the corporation
also has three smaller area offices located in Sacramento, California; Macon,
Georgia; and New Haven, Connecticut. If these locations only require data
terminals to access the corporate network for routing to the computers located
in San Francisco and New York, one possible mechanism to provide network
support is obtained through the use of tail circuits. These tail circuits could
be used to connect a statistical time division multiplexer (STDM) in each area
office, each serving a group of data terminals to the nearest T1 multiplexer,
using either analog or digital circuits. The T1 multiplexer would then be
configured to route data terminal traffic over the corporate backbone portion
of the network to its destination.
Network Characteristics
There are certain characteristics we can associate with wide area networks.
First, the WAN is typically designed to connect two or more geographical areas.
This connection is accomplished by the lease of transmission facilities from
one or more communications vendors. Secondly, most WAN transmission
occurs at or under a data rate of 1.544 Mbps or 2.048 Mbps, which are the
operating rates of T1 and E1 transmission facilities.
A third characteristic of WANs concerns the regulation of the transmission
facilities used for their construction. Most, if not all, transmission facilities
marketed by communications carriers are subject to a degree of regulation at
the federal, state, and possibly local government levels. Even though we now
live in an era of deregulation, carriers must seek approval for many offerings
before making new facilities available for use. In addition, although many
of the regulatory controls governing the pricing of services were removed,
the communications market is still not a truly free market. Thus, regulatory
agencies at the federal, state, and local levels still maintain a degree of
control over both the offering and pricing of new services and the pricing of
existing services.
Local Area Networks
The origin of local area networks can be traced, in part, to IBM terminal equipment
introduced in 1974. At that time, IBM introduced a series of terminal
devices designed for use in transaction-processing applications for banking
and retailing. What was unique about those terminals was their method of connection:
a common cable that formed a loop provided a communications path
within a localized geographical area. Unfortunately, limitations in the data
transfer rate, incompatibility between individual IBM loop systems, and other
problems precluded the widespread adoption of this method of networking.
The economics of media sharing and the ability to provide common access
to a centralized resource were, however, key advantages, and they resulted
in IBM and other vendors investigating the use of different techniques to
provide a localized communications capability between different devices. In
1977, Datapoint Corporation began selling its Attached Resource Computer
Network (ARCNet), considered by most people to be the first commercial local
area networking product. Since then, hundreds of companies have developed
local area networking products, and the installed base of terminal devices
connected to such networks has increased exponentially. They now number
in the hundreds of millions.
Comparison to WANs
Local area networks can be distinguished from wide area networks by geographic
area of coverage, data transmission and error rates, ownership,
government regulation, and data routing—and, in many instances, by the
type of information transmitted over the network.
The evolution of wide area networks can be considered to have originated
in the mid- to late 1950s, commensurate with the development of the first
generation of computers. Based on the use of vacuum tube technology, the first
generation of computers were large, power-hungry devices whose placement
resulted in a focal point for data processing and the coinage of the term
data center.
Computer-Communications Evolution
Originally, access to the computational capability of first-generation computers
was through the use of punched cards. After an employee of the
organization used a keypunch to create a deck of cards, that card deck was
submitted to a window in the data center, typically labeled input/output (I/O)
control. An employee behind the window would accept the card deck and
complete a form that contained instructions for running the submitted job.
The card deck and instructions would then be sent to a person in production
control, who would schedule the job and turn it over to operations for
execution at a predefined time. Once the job was completed, the card deck
and any resulting output would be sent back to I/O control, enabling the job
originator to return to the window in the data center to retrieve his or her
card deck and the resulting output. With a little bit of luck, programmers
might see the results of their efforts on the same day that they submitted
their jobs.
Because the computer represented a considerable financial investment for
most organizations, it was understandable that these organizations would
be receptive to the possibility of extending their computers’ accessibility.
By the mid-1960s, several computer manufacturers had added remote access
capabilities to one or more of their computers.
Remote Batch Transmission
One method of providing remote access was the installation of a batch
terminal at a remote location. That terminal was connected via a telephone
company–supplied analog leased line and a pair of modems to the computer
in the corporate data center.
The first type of batch terminal developed to communicate with a data
center computer contained a card reader, a printer, a serial communications
adapter, and hard-wired logic in one common housing. The serial communications
adapter converted the parallel bits of each internal byte read from the
card reader into a serial data stream for transmission. Similarly, the adapter
performed a reverse conversion process, converting a sequence of received
serial bits into an appropriate number of parallel bits to represent a character
internally within the batch terminal. Because the batch terminal was located
remotely from the data center, it was often referred to as a remote batch
terminal, while the process of transmitting data was referred to as remote
batch transmission. In addition, the use of a remote terminal as a mechanism
for grouping card decks of individual jobs, all executed at the remote data
center, resulted in the term remote job entry terminal being used as a name
for this device.
Figure 1.1 illustrates in schematic form the relationships between a batch
terminal, transmission line, modems, and the data center computer. Because
the transmission line connects a remote batch terminal in one geographic area
to a computer located in a different geographic area, Figure 1.1 represents one
of the earliest types of wide area data communications networks.
Paralleling the introduction of remote batch terminals was the development
of a series of terminal devices, control units, and specialized communications
equipment, which resulted in the rapid expansion of interactive
computer applications. One of themost prominent collections of products was
introduced by the IBM Corporation under the trade name 3270 Information
Display System.
Remote batch transmission. The transmission of data from a
remote batch terminal represents one of the first examples of wide area data
communications networks.
IBM 3270 Information Display System
The IBM 3270 Information Display System was a term originally used to
describe a collection of products ranging from interactive terminals that
communicate with a computer, referred to as display stations, through several
types of control units and communications controllers. Later, through
the introduction of additional communications products from IBM and
numerous third-party vendors and the replacement of previously introduced
products, the IBM 3270 Information Display System became more
of a networking architecture and strategy rather than a simple collection
of products.
First introduced in 1971, the IBM 3270 Information Display System was
designed to extend the processing power of the data center computer to
remote locations. Because the data center computer typically represented the
organization’s main computer, the term mainframe was coined to refer to a
computer with a large processing capability. As the mainframe was primarily
designed for data processing, its utilization for supporting communications
degraded its performance.
Communications Controller
To offload communications functions from the mainframe, IBM and other
computer manufacturers developed hardware to sample communications
lines for incoming bits, group bits into bytes, and pass a group of bytes
to the mainframe for processing. This hardware also performed a reverse
function for data destined from the mainframe to remote devices. When
first introduced, such hardware was designed using fixed logic circuitry,
and the resulting device was referred to as a communications controller.
Later, minicomputers were developed to execute communications programs,
with the ability to change the functionality of communications support by
the modification of software—a considerable enhancement to the capabilities
of this series of products. Because both hard-wired communications
controllers and programmed minicomputers performing communications
offloaded communications processing from the mainframe, the term front-end
processor evolved to refer to this category of communications equipment.
Although most vendors refer to a minicomputer used to offload communications
processing from the mainframe as a front-end processor, IBM
has retained the term communications controller, even though its fixed
logic hardware products were replaced over 20 years ago by programmable
minicomputers.
Control Units
To reduce the number of controller ports required to support terminals, as
well as the amount of cabling between controller ports and terminals, IBM
developed poll and select software to support its 3270 Information Display
System. This software enabled the communications controller to transmit
messages from one port to one or more terminals in a predefined group
of devices. To share the communications controller port, IBM developed
a product called a control unit, which acts as an interface between the
communications controller and a group of terminals.
In general terms, the communications controller transmits a message to the
control unit. The control unit examines the terminal address and retransmits
the message to the appropriate terminal. Thus, control units are devices that
reduce the number of lines required to link display stations to mainframe computers.
Both local and remote control units are available; the key differences
between them are the method of attachment to the mainframe computer and
the use of intermediate devices between the control unit and the mainframe.
Local control units are usually attached to a channel on the mainframe,
whereas remote control units are connected to the mainframe’s front-end
processor, which is also known as a communications controller in the IBM
environment. Because a local control unit is within a limited distance of the
mainframe, no intermediate communications devices, such as modems or data
service units, are required to connect a local control unit to the mainframe.
In comparison, a remote control unit can be located in another building or
in a different city; it normally requires the utilization of intermediate communications
devices, such as a pair of modems or a pair of data service
units, for communications to occur between the control unit and the communications
controller. The relationship of local and remote control units to
display stations, mainframes, and a communications controller is illustrated
in Figure 1.2.
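In programmatic terms, a control unit behaves much like a demultiplexer keyed on the terminal address. The Python sketch below illustrates only the essential dispatch step; the addresses and message format are invented for illustration, and actual 3270 data streams are considerably more involved.

class ControlUnit:
    # Forwards messages arriving on a single communications
    # controller port to the addressed display station.
    def __init__(self):
        self.terminals = {}  # terminal address -> display station handler

    def attach(self, address, terminal):
        self.terminals[address] = terminal

    def receive(self, address, message):
        # Examine the terminal address and retransmit the message
        # to the appropriate display station in the group.
        terminal = self.terminals.get(address)
        if terminal is not None:
            terminal(message)

unit = ControlUnit()
unit.attach(0x40, lambda msg: print("station 0x40:", msg))
unit.attach(0x41, lambda msg: print("station 0x41:", msg))
unit.receive(0x40, "display update")  # one port serves many terminals

Because every display station in the group shares one controller port, the number of mainframe-side lines grows with the number of control units rather than with the number of terminals.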
Network Construction
To provide batch and interactive access to the corporate mainframe from
remote locations, organizations began to build sophisticated networks. At
first, communications equipment such as modems and transmission lines was
obtainable only from AT&T and other telephone companies. Beginning in
1968 in the United States with the well-known Carterfone decision, competitive
non–telephone company sources for the supply of communications
equipment became available. The divestiture of AT&T during the 1980s and
the emergence of many local and long-distance communications carriers
paved the way for networking personnel to be able to select from among
several or even hundreds of vendors for transmission lines and communications
equipment.
As organizations began to link additional remote locations to their mainframes,
the cost of providing communications began to escalate rapidly.
This, in turn, provided the rationale for the development of a series of line-sharing
products referred to as multiplexers and concentrators. Although
most organizations operated separate data and voice networks, in the mid-
1980s communications carriers began to make available for commercial use
high-capacity circuits known as T1 in North America and E1 in Europe.
Through the development of T1 and E1 multiplexers, voice, data, and video
transmission can share the use of common high-speed circuits. Because the
interconnection of corporate offices with communications equipment and
facilities normally covers a wide geographical area outside the boundary
of one metropolitan area, the resulting network is known as a wide area
network (WAN).
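The T1 and E1 operating rates follow directly from their channel structures, as the short calculation below confirms. A T1 circuit carries 24 channels of 64 kbps plus 8 kbps of framing overhead, while an E1 circuit carries 32 channels of 64 kbps, two of which are devoted to framing and signaling.

# T1: 24 channels of 64 kbps each, plus one framing bit per
# 193-bit frame sent 8000 times per second (8 kbps of overhead).
t1_rate = 24 * 64_000 + 8_000    # 1,544,000 bps = 1.544 Mbps

# E1: 32 channels of 64 kbps; timeslot 0 carries framing and
# timeslot 16 carries signaling, leaving 30 channels for traffic.
e1_rate = 32 * 64_000            # 2,048,000 bps = 2.048 Mbps

print(t1_rate, e1_rate)          # 1544000 2048000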
Figure 1.3 shows an example of a wide area network spanning the continental
United States. In this example, regional offices in San Francisco and New
York are connected with the corporate headquarters, located in Atlanta, via T1
multiplexers and T1 transmission lines operating at 1.544 Mbps. Assuming
that each T1 multiplexer is capable of supporting the direct attachment of
a private branch exchange (PBX), both voice and data are carried by the T1
circuits between the two regional offices and corporate headquarters. The
three T1 circuits can be considered the primary data highway, or backbone,
of the corporate network.
Figure 1.3 Wide area network example. A WAN uses telecommunications
lines obtained from one or more communications carriers to connect geographically
dispersed locations.
In addition to the three major corporate sites that require the ability to route
voice calls and data between locations, let us assume that the corporation
also has three smaller area offices located in Sacramento, California; Macon,
Georgia; and New Haven, Connecticut. If these locations only require data
terminals to access the corporate network for routing to the computers located
in San Francisco and New York, one possible mechanism to provide network
support is obtained through the use of tail circuits. These tail circuits could
be used to connect a statistical time division multiplexer (STDM) in each area
office, each serving a group of data terminals, to the nearest T1 multiplexer,
using either analog or digital circuits. The T1 multiplexer would then be
configured to route data terminal traffic over the corporate backbone portion
of the network to its destination.
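The advantage of a statistical multiplexer over a conventional time division multiplexer is that it transmits data only for ports that actually have traffic, tagging each unit of data with a port identifier instead of dedicating a fixed timeslot to every port. The following Python sketch illustrates the principle only; the frame layouts are invented for this example.

def stdm_frame(ports):
    # Statistical TDM: only active ports appear in the frame,
    # each tagged with its port number.
    return [(port, data) for port, data in ports.items() if data]

def tdm_frame(ports, count):
    # Conventional TDM: every port occupies a slot, active or not.
    return [ports.get(port, b"") for port in range(count)]

terminals = {0: b"LOGON", 2: b"PF3"}   # ports 1 and 3 are idle
print(stdm_frame(terminals))           # [(0, b'LOGON'), (2, b'PF3')]
print(tdm_frame(terminals, 4))         # [b'LOGON', b'', b'PF3', b'']

Because idle terminals consume no bandwidth, an STDM can typically support more terminals over a given tail circuit than a conventional multiplexer operating at the same line speed.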
Network Characteristics
There are certain characteristics we can associate with wide area networks.
First, the WAN is typically designed to connect two or more geographical areas.
This connection is accomplished by the lease of transmission facilities from
one or more communications vendors. Second, most WAN transmission
occurs at or under a data rate of 1.544 Mbps or 2.048 Mbps, which are the
operating rates of T1 and E1 transmission facilities.
A third characteristic of WANs concerns the regulation of the transmission
facilities used for their construction. Most, if not all, transmission facilities
marketed by communications carriers are subject to a degree of regulation at
the federal, state, and possibly local government levels. Even though we now
live in an era of deregulation, carriers must seek approval for many offerings
before making new facilities available for use. In addition, although many
of the regulatory controls governing the pricing of services were removed,
the communications market is still not a truly free market. Thus, regulatory
agencies at the federal, state, and local levels still maintain a degree of
control over both the offering and pricing of new services and the pricing of
existing services.
1.2 Local Area Networks
The origin of local area networks can be traced, in part, to IBM terminal equipment
introduced in 1974. At that time, IBM introduced a series of terminal
devices designed for use in transaction-processing applications for banking
and retailing. What was unique about those terminals was their method of connection:
a common cable that formed a loop provided a communications path
within a localized geographical area. Unfortunately, limitations in the data
transfer rate, incompatibility between individual IBM loop systems, and other
problems precluded the widespread adoption of this method of networking.
The economics of media sharing and the ability to provide common access
to a centralized resource were, however, key advantages, and they resulted
in IBM and other vendors investigating the use of different techniques to
provide a localized communications capability between different devices. In
1977, Datapoint Corporation began selling its Attached Resource Computer
Network (ARCNet), considered by most people to be the first commercial local
area networking product. Since then, hundreds of companies have developed
local area networking products, and the installed base of terminal devices
connected to such networks has increased exponentially. They now number
in the hundreds of millions.
Comparison to WANs
Local area networks can be distinguished from wide area networks by geographic
area of coverage, data transmission and error rates, ownership,
government regulation, and data routing—and, in many instances, by the
type of information transmitted over the network.