Posts by Teeeeeeeeegle™

September 7, 2005

Virus

A virus is a software routine that is deliberately designed to attach itself to another piece of software on a computer and perform some preprogrammed activity. The worst types of viruses are engineered to irretrievably destroy all or part of the data stored on the computer by wiping out hard drives. However, many viruses have effects that are not so catastrophic. Some viruses cause intermittent problems on the computer, such as system lockups or specific feature failures, whereas others do nothing but display a message programmed by its author. Antivirus software products must be continually updated to cope with the constantly evolving techniques used by the creators of viruses.

Viruses are designed to replicate themselves by infecting other entities, in this case, other pieces of software. If you insert a virus-infected floppy disk into your computer, the virus can migrate from the floppy disk to the computer's hard drive, infecting the code that it finds there in one of several ways. In some cases, viruses are designed to remain dormant until the computer's clock registers a particular date and time. Files transferred from the infected computer to other systems on the network can spread the infection. Depending on the design of the virus, the effect can range from a nuisance to a catastrophe. Once the network is infected, it can be very difficult to completely remove the virus. If you miss one infected file on one computer, the virus can reassert itself and start spreading all over again.

Viruses can attach themselves to various parts of a computer's software, and they are often classified by the area of the disk in which they reside. The most common types of viruses are as follows:

Boot sector viruses
A boot sector virus can come from a floppy disk or an executable file. It infects your computer by inhabiting the master boot record (MBR) of your hard drive. Because the MBR executes whenever you start the computer, the virus is always loaded into memory, and is therefore very dangerous. Unlike a virus that infects files (which you can remove by deleting the file), to remove a boot sector virus, you must either delete and recreate the MBR (which causes the data on the disk to be lost) or use an antivirus program.

Executable file viruses
An executable file virus attaches itself to .exe or .com files or, less often, to other types of application modules, such as .dll and .bin files. The virus is loaded into memory when you run the infected program and can then spread to other software that you execute. You can receive executable file viruses in e-mail attachments and downloads, but they can only infect your computer if you run the infected program.

Polymorphic viruses
A polymorphic virus can reside in both the MBR and in executable files, and is designed to change its signature periodically to fool virus-scanning routines that search for the code associated with particular viruses. The virus modifies itself and uses encryption to hide the majority of its code. This type of virus is a direct result of the ongoing competition between the people who design viruses and those who design the tools to protect against them.

Stealth viruses
Many virus-scanning products function by detecting changes in the sizes of files stored on a computer's hard drive. Normal viruses add code to executable files, so the files grow in size by a small amount. This is why installing an updated version of an application can sometimes trigger false positive results from a virus scanner. Stealth viruses attach themselves to executable files in the normal way, but they disguise their appearance by subtracting the same number of bytes from the infected file's directory entry that their code added to the file. The end result is that the file appears not to have changed in size, even though virus code has been added to it.

Macro viruses
A more recent innovation in the world of technological delinquency is the macro virus, which can infect data files. It used to be that viruses were only able to infect executables, but data file viruses attach themselves to documents and spread themselves using the application's macro capability. Microsoft Word documents in particular were the original targets for this type of virus. When a user opens an infected document file, the macro code executes, enabling the virus to enter into memory and spread to the template file (NORMAL.DOT) that Word uses for all open documents. Once in the template file, the virus is read into memory whenever the application is launched and it spreads to all of the documents the user loads afterward. Macro viruses don't usually cause severe damage, but because many businesses frequently exchange document files using e-mail and other methods, they spread very rapidly and are difficult to eradicate. Applications with macro capabilities now usually have a switch that lets you disable any macro code found in a document. If you don't use macros, you can protect yourself from virus infections by using this feature.

Worms
A worm is not really a virus, because although it is a program that replicates itself, it does not infect other files. Worms are separate programs that can insinuate themselves into a computer in various ways, such as by inserting an entry in the Run Registry key that causes them to execute whenever the computer starts. Once in memory, worms can create copies of themselves on the same computer or replicate to other computers over a network connection.

Trojan horses
A Trojan horse is not a virus either, because it neither replicates nor infects other files. These are programs that masquerade as other programs, so that the user doesn't suspect that they are running. Once loaded into memory, Trojan horses can perform any number of tasks that can be dangerous to the computer or to the network. Some Trojan horses are essentially remote control server programs that open up a 'back door' into the computer where they are running. A user elsewhere on the network or on the internet can run the client half of the program and access the remote computer through the back door. Other types of Trojan horses can gather information on the remote system, such as passwords or data files, and transmit it to a host program running on another computer.

Preventing virus infections

To protect your network against virus infections, you should implement a series of policies that affect both the behavior of your users and the configuration of their computers. All users should be wary of floppy disks from outside sources and particularly of files attached to e-mail messages. One of the most common techniques for disseminating viruses these days is code that causes the victim's computer to send an e-mail message with an infected attachment to all of the people in the user's address book. Because the recipients recognize the name of the sender, they often open the e-mail and launch the attachment without thinking, thus infecting their own computers and beginning the same e-mail generation process.

Antivirus software products can protect individual computers from infection by viruses and other malicious programs arriving on floppy disks, through internet downloads, and in attachments. A typical antivirus program consists of a scanner that examines the computer's MBR when the computer starts and checks each file as the computer accesses it. A full-featured program also checks attachments and downloads by intercepting the files as they arrive and by scanning them for viruses before passing them to the client application. A virus scanner works by examining files and searching for specific code signatures that are peculiar to certain viruses. The scanner has a library of virus definitions that it uses to identify viruses. To keep your computers fully protected, you must update the virus signatures for your program on a regular basis. In many cases, antivirus programs have a feature that automatically connects to a server on the internet and downloads the latest signatures when they become available.

In a network environment, all of the computers, both servers and workstations, should run an antivirus program so that the entire network is protected.
Antivirus programs designed for use on networks do not provide greater protection against viruses, but they simplify the process of implementing the protection. The centralized management and monitoring capabilities in network-enabled antivirus products typically allow you to create policies for the computers on the network that force them to run the virus-scanning mechanisms you specify. They also simplify the process of deploying virus signature updates to all of the computers on the network.
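For illustration, the signature-matching approach described above can be sketched in a few lines of Python. Everything here is invented for the example: the signature names and byte patterns are hypothetical, and a real scanner uses a far larger definition library and more sophisticated matching.

```python
# Hypothetical virus definitions: name -> byte signature.
# These patterns are made up for illustration only.
KNOWN_SIGNATURES = {
    "Hypothetical.Virus.A": b"\xde\xad\xbe\xef",
    "Hypothetical.Macro.B": b"AutoOpen:Payload",
}

def scan_bytes(data):
    """Return the names of any known signatures found in the data."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in data]

def scan_file(path):
    """Scan a file on disk by reading its contents and matching signatures."""
    with open(path, "rb") as f:
        return scan_bytes(f.read())
```

This is exactly why the signature library must be kept current: a virus whose byte pattern is not in the dictionary is simply never reported.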

Backups

Backup hardware

You can perform backups using any type of storage device. One objective in developing an effective backup strategy, however, is to automate as much of the process as possible. Although you can back up 1 gigabyte (GB) of data onto 1.44-MB floppy disks, you probably don't want to be the person sitting around feeding 695 disks into a floppy drive. Therefore, you should select a device that is capable of storing all of your data without frequent media changes. This enables you to schedule backup jobs to run unattended. This doesn't mean, however, that you have to purchase a drive that can hold all of the data stored on all of your network's computers. You can be selective about which data you want to back up, so it's important to determine just how much of your data needs protecting before you decide on the capacity of your backup device.
Another important criterion to use when selecting a backup device is the speed at which the drive writes data to the medium. Backup drives are available in many different speeds, and, not surprisingly, the faster ones are generally more expensive. It is typical for backup jobs to run during periods when the network is not otherwise in use. This ensures that all of the data on the network is available for backup. The amount of time that you have to perform your backups is sometimes called the backup window. The backup device that you choose should depend in part on the amount of data you have to protect and the amount of time that you have to back it up. If, for example, you have 10 GB of data to back up and your company closes down from 5:00 P.M. until 9:00 A.M. the next morning, you have a 16-hour backup window - plenty of time to copy your data, using a medium-speed backup device. However, if your company operates three shifts and only leaves you one hour, from 7:00 A.M. to 8:00 A.M., to back up 100 GB of data, you will have to use a much faster device or, in this case, several devices.

High-end backup drives can command prices that run into five figures. When you evaluate backup devices, you must be aware of the product's extended costs as well. Backup devices nearly always use a removable medium, such as a tape or disk cartridge. This enables you to store copies of your data off site, such as in a bank's safe deposit vault. If the building where your network is located is destroyed by a fire or other disaster, you still have your data, which you can use to restart operations elsewhere. Therefore, in addition to purchasing the drive, you must purchase storage media as well. Some products might seem at first to be economical because the drive is inexpensive, but in the long run they might not be, because the media are so expensive. One of the most common methods of evaluating various backup devices is to determine the cost per megabyte of the storage it provides. Divide the price of the medium by the number of megabytes it can store, and use this figure to compare the relative cost of various devices. Of course, in some cases you might need to sacrifice economy for speed or capacity.
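The backup-window and cost-per-megabyte arithmetic above is simple enough to sketch. The data sizes and windows are the ones from the text; the media price and capacity in the usage comment are made up, and the function names are my own.

```python
def required_throughput_mb_per_sec(data_gb, window_hours):
    """Minimum sustained write speed needed to fit the job in the window."""
    return (data_gb * 1024) / (window_hours * 3600)

def cost_per_mb(media_price, capacity_mb):
    """Divide the price of the medium by the number of megabytes it stores."""
    return media_price / capacity_mb

# 10 GB in a 16-hour window needs only about 0.18 MB/s, so a
# medium-speed drive is fine; 100 GB in a 1-hour window needs
# roughly 28 MB/s, which calls for a fast drive or several drives.
# A hypothetical $40 tape holding 20,000 MB costs $0.002 per MB.
```

The same two figures, throughput required versus throughput available and dollars per megabyte, are usually enough to narrow the field of candidate devices quickly.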


Magnetic tape drives

The most common hardware device used to back up data is a magnetic tape drive. Unlike hard disks, floppy disks, and CD-ROMs, tape drives are not random access devices. This means that you can't simply move the drive heads to a particular file on a backup tape without spooling through all of the files before it. As with other types of tape drives, such as audio and video, the drive unwinds the tape from a spool and pulls it across the heads until it reaches the point in the tape where the data you want is located. As a result, you can't mount a tape drive in a computer's file system, assign it a drive letter, and copy files to it, as you can with a hard disk drive. A special software program is required to address the drive and send the data you select to it for storage. This also means that tape drives are useless for anything other than backups, whereas other media, such as writable CD-ROMs, can be used for other things.

Drive interface

Backup devices can use any of the standard computer interfaces, such as Integrated Drive Electronics (IDE), universal serial bus (USB), and Small Computer Systems Interface (SCSI). SCSI devices operate more independently than those using IDE, which means that the backup process, which often entails reading from one device while writing to another on the same interface, is more efficient. When multiple IDE devices share a channel, only one operates at a time. Each drive must receive, execute, and complete a command before the other drive can receive its next command. On the other hand, SCSI devices can maintain a queue of commands that they have received from the host adapter and execute them sequentially and independently. Magnetic tape drives, in particular, require a consistent stream of data to write to the tape with maximum effectiveness. If there are constant interruptions in the data stream, as can be the case with the IDE interface, the tape drive must repeatedly stop and start the tape, which reduces its speed and its overall storage capacity. A SCSI drive can often operate continuously without pausing to wait for the other devices on the channel. A SCSI backup device is always more expensive than a comparable IDE alternative, because the drive requires additional electronics and because you must have a SCSI host adapter installed in the computer. Most SCSI devices are available as internal or external units, the latter of which have their own power supplies, which also adds to the cost. However, the additional expense is worth it for a reliable network backup solution.

CD-ROM

The popularity of writable CD-ROM drives, such as compact disc-recordables (CD-Rs) and compact disc-rewritables (CD-RWs), has led to their increasing use as backup devices. Although the capacity of a CD is limited to approximately 650 MB, the low cost of the media makes CDs an economical solution, even if the disks can be used only once, as is the case with CD-Rs. The biggest factor in favor of CD-ROMs for backup is that many computers already have CD-ROM drives installed for other purposes, eliminating the need to purchase a dedicated backup drive.
For network backups, however, CD-ROMs are usually inadequate. Most networks have multiple gigabytes worth of data to back up, which would require many disk changes. In addition, CD-R and CD-RW drives are usually not recognized by network backup software products. Although these drives often come with software that provides its own backup capabilities (intended for relatively small, single-system backups), this software usually does not provide the features needed for backing up a network effectively.


Cartridges

Another storage device commonly found in computers these days that can easily be used for backups is the removable cartridge drive. Products like Iomega's Zip and Jaz drives provide performance that approaches that of a hard disk drive, but they use removable cartridges. These drives mount into a computer's file system, meaning that you can assign them a drive letter and copy files to them just as with a hard drive. Zip cartridges hold only 100 MB or 250 MB, which makes them less practical than CDs for backups. However, Jaz drives are available in 1-GB and 2-GB versions, which is sufficient for a backup device.

Autochangers

In some cases, even the highest capacity drive isn't sufficient to back up a large network with constantly changing data. To create an automated backup solution with a greater capacity than that provided by a single drive, you can purchase a device called an autochanger. An autochanger is a unit that contains one or more drives (usually tape drives, but optical disk and CD-ROM autochangers are also available) and a robotic mechanism that swaps the media in and out of the drives. Sometimes these devices are called jukeboxes or tape libraries. When a backup job fills one tape (or other storage medium), the mechanism extracts it from the drive and inserts another, after which the job continues. The autochanger also retains a memory of which tapes are available, commonly called an index, and can load the appropriate tape to perform a restore job.


Backup software

Apart from the hardware, the other primary component in a network backup solution is the software that you use to perform the backups. Storage devices designed for use as backup solutions are not treated like the other storage subsystems in a computer; a specialized software product is required to package the data that you want to back up and send it to the drive. Depending on the operating system you're using, you might already have a backup program that you can use with your drive, but in many cases an operating system's own backup program provides only basic functionality and lacks features that can be especially useful in a network environment.

Target selection and filtering

The most basic function of a backup software program is to let you select what you want to back up, which is sometimes called the target. A good backup program enables you to do this in many ways. You can select entire computers to back up, specific drives on those computers, specific directories on the drives, or specific files in specific directories. Most backup programs provide a directory tree display that you can use to select the targets for a backup job.

In most cases, it isn't necessary to back up all of the data on a computer's drives. If a hard drive is completely erased or destroyed, you are likely to have to reinstall the operating system before you can restore files from a backup tape, so it might not be worthwhile to back up all of the operating system files each time you run a backup job. The same is true for applications. You can reinstall an application from the original distribution media, so you might want to back up only your data files and configuration settings for that application. In addition, most operating systems today create temporary files as they run, which you do not need to back up. Windows, for example, creates a temporary file for memory paging that can be several hundred megabytes in size. Because this file is recreated each time you start the computer, you can save space on your backup tapes by omitting files like this from your backup jobs.

Individually selecting the files, directories, and drives that you want to back up can be quite tedious, though, so many backup programs provide other ways to specify targets. One common method is to use filters that enable the software to evaluate each file and directory on a drive and decide whether to back it up. A good backup program provides a variety of filters that allow you to select targets based on file and directory names, extensions, sizes, dates, and attributes. For example, you can configure the software to back up a computer running Windows 2000 and use filters to exclude PAGEFILE.SYS, which is the memory paging file; the \Temporary Internet Files directories, which contain Microsoft Internet Explorer's browser cache; and all files with a .tmp extension, which are temporary files created by various applications. None of these files are necessary when restoring the system from a backup tape, so saving them is pointless, and they can consume a significant amount of tape storage space.
You can also use filters to limit your backups to only files that have changed recently, using either date or attribute filters. The most common type of filter used by backup programs is the one for the archive attribute, which enables the software to back up only the files that have changed since the last backup. This filter is the basis for incremental and differential backups.
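As a rough illustration, name-based exclusion filtering of the kind described above might look like the following sketch. The patterns mirror the hypothetical Windows 2000 examples from the text; a real backup product's filter engine also handles size, date, and attribute filters.

```python
import fnmatch

# Hypothetical exclusion filters, modeled on the examples in the text.
# fnmatch's "*" matches any characters, including path separators.
EXCLUDE_PATTERNS = [
    "*/pagefile.sys",               # the memory paging file
    "*.tmp",                        # temporary files from applications
    "*temporary internet files*",   # the browser cache directories
]

def should_back_up(path):
    """Return False for any file matched by an exclusion filter."""
    normalized = path.lower().replace("\\", "/")
    return not any(fnmatch.fnmatch(normalized, pat) for pat in EXCLUDE_PATTERNS)
```

For example, `should_back_up("C:/Reports/q3.doc")` passes the file through to the backup job, while `should_back_up("C:/pagefile.sys")` filters it out.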

Incremental and differential backups

The most basic type of backup job is a full backup, which copies the entire contents of a computer's drives either to tape or to another medium. You can perform a full backup every day, if you want to, or each time that you back up that particular computer. However, this practice can be wasteful, both in terms of time and tape. When you perform a full backup every day, the majority of the files you are writing to the tape are exactly the same as they were yesterday. The program files that make up the operating system and your applications do not change. The only files that change on a regular basis are your data files and perhaps the files that store configuration data, along with special resources like the Windows Registry and directory service databases. To save on tape and shorten the backup time, many network administrators perform full backups only once a week, or even less frequently. In between the full backups, they perform special types of filtered jobs that back up only the files that have recently been modified. These types of jobs are called incremental backups and differential backups.

An incremental backup is a job that backs up only the files changed since the last backup job of any kind. A differential backup is a job that backs up only the files that have changed since the last full backup. The backup software filters the files for these jobs using a special file attribute called the archive bit, which every file on the computer possesses. File attributes are 1-bit flags, stored with each file on a drive, that perform various functions. For example, the read-only bit, when activated, prevents any application from modifying that particular file, and the hidden bit prevents most applications from displaying that file in a directory listing. The archive bit for a file is activated by any application that modifies that file. When the backup program scans the target drive during an incremental or differential job, it selects for backup only the files with active archive bits. During a full backup, the software backs up the entire contents of a computer's drives, and also resets (that is, removes) the archive bit on all of the files. Immediately after the job is completed, you have a complete copy of the drives on tape, and none of the files on the target drive has an active archive bit. As work on the computer proceeds after the backup job is completed, applications and operating system processes modify various files on the computer, and when they do, they activate the archive bits for those files. The next day, you can run an incremental or differential backup job, which is also configured to back up the entire computer, except that it filters out all files that do not have an active archive bit. This means that all of the program files that make up the operating system and the applications are skipped, along with all data files that have not changed. When compared to a full backup, an incremental or differential backup job is usually much smaller, so it takes less time and less tape.
The difference between an incremental and a differential job lies in whether the backup software resets the archive bits of the files it copies to tape: incremental jobs reset the archive bits, and differential jobs don't. This means that when you run an incremental job, you're backing up only the files that have changed since the last backup, whether it was a full backup or an incremental backup. This uses the least amount of tape, but it also lengthens the restore process. If you have to restore an entire computer, you must first perform a restore from the last full backup tape, and you must then restore each of the incremental jobs performed since that full backup. For example, suppose that you run a full backup job on a particular computer every Monday evening and incremental jobs every evening from Tuesday through Friday. If the computer's hard drive fails on a Friday morning, you must restore the previous Monday's full backup, and you must then restore the incremental jobs from Tuesday, Wednesday, and Thursday, in that order. The order of the restore jobs is essential if you want the computer to have the latest version of every file.


Differential jobs do not reset the archive bit on the files they back up. This means that every differential job backs up all of the files that have changed since the last full backup. If you perform a full backup on Monday evening, Tuesday evening's differential job will back up all files changed on Tuesday, Wednesday evening's differential job will back up all files changed on Tuesday and Wednesday, and Thursday evening's differential backup will back up all files changed on Tuesday, Wednesday, and Thursday. Differential backups use more tape, because some of the same files are backed up each day, but differential backups also simplify the restore process. To completely restore the computer that failed on a Friday morning, you only have to restore Monday's full backup tape and the most recent differential backup, which was performed Thursday evening. Because the Thursday tape includes all of the files modified on Tuesday, Wednesday, and Thursday, no other tapes are needed. The archive bits for these changed files are not reset until the next full backup job is performed.
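The archive-bit logic behind full, incremental, and differential jobs can be modeled in a few lines. This is a toy simulation, assuming each file is represented as a name mapped to its archive bit; real backup software reads and resets the attribute through the file system.

```python
def full_backup(files):
    """Back up everything and reset every archive bit."""
    backed_up = sorted(files)
    for name in files:
        files[name] = False          # full backups clear the archive bit
    return backed_up

def incremental_backup(files):
    """Back up files with an active archive bit, then reset those bits."""
    backed_up = sorted(name for name, bit in files.items() if bit)
    for name in backed_up:
        files[name] = False          # incrementals also clear the bit
    return backed_up

def differential_backup(files):
    """Back up files with an active archive bit, leaving the bits alone."""
    return sorted(name for name, bit in files.items() if bit)

def modify(files, name):
    files[name] = True               # any write activates the archive bit
```

Running a full backup, modifying a file, and then taking differentials on successive days shows the growth described above: each differential contains every file changed since the full backup, while an incremental empties the list by resetting the bits.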

Drive manipulation

When you have selected what you want to back up, the next step is to specify where to send the selected data. The backup software typically enables you to select a backup device (if you have more than one) and prepare to run the job by configuring the drive and the storage medium. For backup to a tape drive, this part of the process can include any of the following tasks:

Formatting a tape
Supplying a name for the tape you're creating
Specifying whether you want to append the backed up files to the tape or overwrite the tape
Turning on the drive's compression feature


Scheduling

All backup products enable you to create a backup job and execute it immediately, but the key to automating a backup routine is being able to schedule jobs to execute unattended. This way, you can configure your backup jobs to run when the office is closed and the network is idle, so that all resources are available for backup and user productivity is not compromised by a sudden surge of network traffic. Not all of the backup programs supplied with operating systems or designed for stand-alone computers support scheduling, but all network backup software products do. Backup programs use various methods to automatically execute backup jobs. The Windows 2000 Backup program uses the operating system's Task Scheduler application, and other programs supply their own program or service that runs continuously and triggers the jobs at the appropriate times. Some of the higher-end network backup products can use a directory service, such as Microsoft's Active Directory service or Novell Directory Services (NDS). These programs modify the directory schema (the code that specifies the types of objects that can exist in the directory) to create an object representing a queue of jobs waiting to be executed.

No matter which mechanism the backup software uses to launch jobs, the process of scheduling them is usually the same. You specify whether you want to execute the job once or repeatedly at a specified time each day, week, or month. The idea of the scheduling feature is for the network administrator to create a logical sequence of backup jobs that execute by themselves at regular intervals. After this is done, the only thing that remains to be done is changing the tape in the drive each day. If you have an autochanger, you can even eliminate this part of the job and create a backup job sequence that runs for weeks or months without any attention at all.
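At its core, a scheduler service of the kind described above just keeps computing the next trigger time for each job and fires when the clock reaches it. A minimal sketch of that calculation (the nightly 11:00 P.M. job time is only an example):

```python
from datetime import datetime, timedelta

def next_run(now, hour, minute=0):
    """Return the next daily occurrence of hour:minute after `now`."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:             # today's slot already passed
        candidate += timedelta(days=1)
    return candidate
```

A scheduler loop would sleep until `next_run(datetime.now(), 23)`, launch the backup job, and then compute the following night's trigger, repeating indefinitely.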

Fault tolerance

Many organizations must have their data available all the time to function. If a drive on a server fails, the data should be restorable from a backup, but the time lost replacing the drive and restoring the data can mean lost productivity that costs the company dearly. To provide a higher degree of data availability, there are a variety of hardware technologies that work in different ways to ensure that network data is continuously accessible. Some of these technologies are as follows:


Mirroring
Disk mirroring is an arrangement in which two identical hard disk drives connected to a single host adapter always contain identical data. The two drives appear to users as one logical drive, and whenever anyone saves data to the mirror set, the computer writes it to both drives simultaneously. If one hard drive unit should fail, the other can take over immediately until the malfunctioning drive is replaced. Many operating systems, including Microsoft Windows 2000, Microsoft Windows NT, and Novell NetWare, support disk mirroring. The two main drawbacks of this technique are that the server provides only half of its available disk space to users and that although mirroring protects against a drive failure, a failure of the host adapter or the computer can still render the data unavailable.

Duplexing
Disk duplexing provides a higher degree of data availability by using duplicate host adapters as well as disk drives. Identical disk drives on separate host adapters maintain exact copies of the same data, creating a single logical drive, just as in disk mirroring, but in this case, the server can survive either a disk failure or a host adapter failure and still make its data available to users.


Volumes

A volume is a fixed amount of data storage space on a hard disk or other storage device. On a typical computer, the hard disk drive may be broken up into multiple volumes to separate data into discrete storage units. For example, if you have a C and a D drive on your computer, these two letters can refer to two different hard drives or to two volumes on a single drive. Network servers function in the same way, but with greater flexibility. You can create multiple volumes on a single drive or create a single volume out of multiple drives. This latter technique is called drive spanning. You can use drive spanning to make all the storage space on multiple drives in a server appear to users as a single entity. The drawback of this technique is that if one of the hard drives containing part of the volume fails, the whole volume is lost.

Striping
Disk striping is a method by which you create a single volume by combining the storage on two or more drives and writing data alternately to each one. Normally, a spanned volume stores whole files on each disk. When you use disk striping, the computer splits each file into multiple segments and writes alternate segments to each disk. This speeds up data access by enabling one drive to read a segment while the other drive's heads are moving to the next segment. When you consider that network servers might need to process dozens of file access requests at once (from various users), the speed improvement provided by disk striping can be significant. However, striped volumes are subject to the same problem as volumes that are spanned. If one drive in the stripe set fails, the entire volume is lost.
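A toy model of striping: split the data into fixed-size segments and deal them out to the drives in rotation. The drives here are just in-memory byte buffers, and the four-byte segment size is arbitrary; real stripe sizes are much larger.

```python
def stripe(data, num_drives, segment_size=4):
    """Split data into segments and write them to the drives alternately."""
    drives = [bytearray() for _ in range(num_drives)]
    segments = [data[i:i + segment_size]
                for i in range(0, len(data), segment_size)]
    for i, segment in enumerate(segments):
        drives[i % num_drives].extend(segment)   # round-robin placement
    return [bytes(d) for d in drives]
```

Striping `b"ABCDEFGHIJKL"` across two drives puts segments `ABCD` and `IJKL` on the first drive and `EFGH` on the second, which is also why losing either drive destroys the whole volume: neither drive holds a complete copy of any file.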

Redundant array of independent disks (RAID)
This is a comprehensive data availability technology with various levels that provide all of the functions described in the technologies previously listed. Higher RAID levels store error correction information along with the data, so that even if a drive in a RAID array fails, its data still remains available from the other drives. Although RAID is available as a software product that works with standard disk drives, many high-end servers use dedicated RAID drive arrays, which consist of multiple hard drive units in a single housing, often with hot swap capability. Hot swapping is the ability to remove and replace a malfunctioning drive without shutting off the other drives in the array. This enables the data to remain continuously available to network users, even while the support staff deals with a drive failure.
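The simplest form of the error correction information mentioned above is XOR parity, which RAID 5 stripes across the drives along with the data: the parity block is the XOR of the data blocks, so any single lost block can be recomputed from the survivors. A minimal sketch, assuming equal-length blocks:

```python
def parity(blocks):
    """XOR the blocks together byte by byte to produce the parity block."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

def rebuild(surviving_blocks, parity_block):
    """Recover a single lost data block from the survivors and the parity."""
    # XORing the survivors with the parity cancels them out,
    # leaving exactly the missing block.
    return parity(surviving_blocks + [parity_block])
```

If blocks A, B, and C produce parity P, then losing B leaves `rebuild([A, C], P)` equal to B, which is how the array keeps serving data while a failed drive is replaced.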

Network attached storage (NAS)

This technology uses a dedicated storage appliance that connects directly to the network and contains its own embedded operating system. Essentially a multiplatform file server, the NAS appliance can be accessed by computers on the network in a variety of ways.

Storage Area Networks (SANs)
A SAN is a separate network installed at a local area network (LAN) site that connects servers to disk arrays and other network storage devices, making it possible to use dedicated storage hardware arrays without overloading the client network with storage-related traffic. SANs typically use the Fibre Channel protocol to communicate, but they can theoretically use any network medium and protocol.


RAID 0: Disk striping
Enhances performance by writing data to multiple disk drives, one block at a time; provides no fault tolerance.

RAID 1: Disk mirroring and duplexing
Provides fault tolerance by maintaining duplicate copies of all data on two drives. Disk mirroring uses two drives connected to the same host adapter; disk duplexing uses two drives connected to different host adapters.

RAID 2: Hamming error-correcting code (ECC)
Ensures data integrity by writing error-correcting code to a separate disk drive; rarely implemented.

RAID 3: Parallel transfer with shared parity
Provides fault tolerance by striping data at the byte level across a minimum of two drives and storing parity information on a third drive. If one of the data drives fails, its data can be restored using the parity information.

RAID 4: Independent data disks with shared parity
Identical to RAID 3, except that the data is striped across the drives at the block level.

RAID 5: Independent data disks with distributed parity
Provides fault tolerance by striping both data and parity across three or more drives, instead of using a dedicated parity drive, as in RAID 3 and RAID 4.

RAID 6: Independent disks with two-dimensional parity
Provides additional fault tolerance by striping data and two independent sets of parity information across the drives, so the array can survive the loss of two drives.

RAID 7: Asynchronous RAID
Proprietary hardware solution consisting of a striped data array and a separate parity drive, plus a dedicated operating system that coordinates the disk storage activities.

RAID 10: Striping of mirrored disks
Combines RAID 0 and RAID 1 by striping data across mirrored pairs of disks, providing both fault tolerance and enhanced performance.

RAID 53: Striped array of arrays
Stripes data across multiple RAID 5 arrays, providing the same fault tolerance as RAID 5 with additional performance enhancement.

RAID 0+1: Mirroring of striped disks
Combines RAID 0 and RAID 1 in a different manner, by mirroring the data stored on identical striped disk arrays.


Server availability

Data availability techniques are useful, but they do no good if the server running the disks malfunctions for some other reason. In addition to specialized data availability techniques, there are similar technologies designed to make servers more reliable. For example, some servers take the concept of hot swapping to the next level by providing redundant components, such as fan assemblies and various types of drives, that you can remove and replace without shutting down the entire computer. Of course, the ultimate solution for server fault tolerance is to have more than one server, and there are various solutions available that enable multiple computers to operate as one, so that if one server fails, another can immediately take its place. Novell NetWare SFT III was one of the first commercially successful server duplication technologies. NetWare SFT III is a version of NetWare that consists of two copies of the network operating system, plus a proprietary hardware connection that links the two separate server computers. The servers run an application that synchronizes their activities. When a user saves data to one server volume, for example, the data is written to both servers at the same time. If one of the servers malfunctions for any reason, the other server instantaneously takes its place.


SFT III is designed solely to provide fault tolerance, but the next generation of this technology does more. Clustering is a technique for interconnecting multiple computers to form a unified computing resource. In addition to providing fault tolerance, a cluster can also distribute the processing load for specific tasks among the various computers or balance the processing load by allocating client requests to different computers in turn. To increase the speed and efficiency of the cluster, administrators can simply connect another computer to the group, which adds its capabilities to those of the others. Both Microsoft and Novell support clustering: Microsoft with Windows 2000 Advanced Server or Microsoft Windows NT 4.0 Enterprise Edition, and Novell with NetWare Cluster Services for NetWare 5.1.

Network redundancy

Service interruptions on a network are not always the result of a computer or drive failure. Sometimes the network itself is to blame. For this reason, many larger internetworks are designed to include redundant components that enable traffic to reach a given destination in more than one way. If a network cable is cut or broken, or if a router or switch fails, redundant equipment enables data to take another path to its destination. There are several ways to provide redundant paths. Typically, you have at least two routers or switches connected to each network, so that the computers can use either one as a gateway to the other segments. For example, you can build an internetwork with two backbones. Each workstation can use either of the routers on its local segment as a gateway. You can also use this arrangement to balance the traffic on the two backbones by configuring half of the computers on each LAN to use one of the routers as their default gateway and the other half to use the other router.

Security protocols

IPSec

IPSec is the colloquial term used to describe a series of draft standards published by the Internet Engineering Task Force (IETF) that define a methodology for securing data as it is transmitted over a LAN, using authentication and encryption. Most of the security protocols that provide encryption of data transmitted over a network are designed for use on the Internet or for specialized traffic between specific types of clients and servers. Until IPSec, there was no standard way to protect data as it was transmitted over the LAN. You could control access to LAN resources using permissions and passwords, but the actual data, as it traveled over the network medium, was open to interception. IPSec actually consists of two separate protocols that provide different levels of security protection. The IP Authentication Header (AH) protocol provides authentication and guaranteed integrity of IP datagrams, and the IP Encapsulating Security Payload (ESP) protocol provides datagram encryption. Using the two protocols together provides the best possible security IPSec has to offer. The Authentication Header protocol adds an extra header to the datagrams generated by the transmitting computer, right after the IP header. When you use AH, the Protocol field in the IP header identifies the AH protocol, instead of the transport layer protocol contained in the datagram. The AH header also contains a sequence number that prevents unauthorized computers from replaying a message and an integrity check value (ICV) that the receiving computer uses to verify that incoming packets have not been altered. Encapsulating Security Payload works by encapsulating the transport layer data in each datagram using its own header and trailer, and by encrypting all of the data following the ESP header. The ESP frame also contains a sequence number and an ICV. To use IPSec on a LAN, both the transmitting and receiving systems must have support for the protocols.
However, because all of the information that IPSec adds to packets appears inside the datagram, intermediate systems such as routers do not have to support the protocols. Many of the major network operating systems are implementing IPSec in their latest versions, including Windows 2000 and various forms of UNIX. On a computer running Windows 2000, you configure the TCP/IP client to use IPSec in the Options tab of the Advanced TCP/IP properties dialog box. After selecting IP Security and clicking Properties, you see an IP Security dialog box. After selecting the Use This IP Security Policy option, you can choose from the following policies:

Client (Respond Only)
This policy configures the computer to use IPSec only when another computer requests it.

Secure Server (Require Security)
This policy configures the computer to require IPSec for all communications. Attempts to connect by computers that do not support IPSec are denied.

Server (Request Security)
This policy configures the computer to request the use of IPSec for all communications, but to allow connections without IPSec when the other computer doesn't support it.
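The Authentication Header described earlier has a fixed field layout defined in the IETF's AH specification: Next Header (1 byte), Payload Length (1 byte), a reserved field (2 bytes), the Security Parameters Index (4 bytes), the Sequence Number (4 bytes), and then the ICV. The following sketch parses that layout; the SPI, sequence number, and ICV contents are invented sample values.

```python
import struct

# Parse an IP Authentication Header. Field layout per the IPSec AH
# specification: Next Header (1), Payload Length (1), Reserved (2),
# SPI (4), Sequence Number (4), followed by the variable-length ICV.

def parse_ah(header: bytes):
    next_hdr, payload_len, _reserved, spi, seq = struct.unpack(
        "!BBHII", header[:12])
    return {"next_header": next_hdr, "spi": spi,
            "sequence": seq, "icv": header[12:]}

# Example AH protecting a TCP segment (IP protocol number 6), with a
# hypothetical SPI, sequence number 42, and a 12-byte all-zero ICV.
raw = struct.pack("!BBHII", 6, 4, 0, 0x1000, 42) + b"\x00" * 12
ah = parse_ah(raw)
print(ah["sequence"])  # 42
```

The receiver checks that the sequence number has not been seen before (the anti-replay protection) and recomputes the ICV over the packet to verify that nothing was altered in transit.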

The IPSec functionality described in the previous section refers to transport mode operation, in which the upper layer data carried inside a datagram is protected by authentication or encryption. IPSec is also capable of operating in tunnel mode, however, which is intended for gateway-to-gateway communications, such as those used in virtual private networks (VPNs). When two computers establish a VPN link across the Internet, the transmitting computer that originally generated the packet sends a normal datagram to a gateway (or router) that provides access to the Internet. The gateway, operating in tunnel mode, then encapsulates each entire datagram (including the IP header) within another datagram, and the entire construction is encrypted and authenticated using IPSec. This outer datagram functions as an encrypting 'tunnel' through which the upper layer data travels in complete safety. After passing through the Internet and on reaching the gateway leading to the destination computer, the outer datagram is stripped away, and the data inside is authenticated and decrypted. The gateway can then forward the original (unencrypted) datagram to the destination end system. Thus, for this type of communication, the end systems involved in the transaction do not even have to support IPSec.

L2TP

IPSec can operate in tunnel mode independently or in cooperation with L2TP. This protocol was derived from the Cisco Systems Layer 2 Forwarding protocol and the Microsoft Point-to-Point Tunneling Protocol (PPTP) and is now defined in an IETF standards document (RFC 2661). Layer 2 Tunneling Protocol creates a tunnel by encapsulating Point-to-Point Protocol (PPP) frames inside UDP packets. Even if the PPP frame contains connection-oriented TCP data, it can be carried inside a connectionless UDP datagram. In fact, the PPP frame can even contain Internetwork Packet Exchange (IPX) or NetBIOS Enhanced User Interface (NetBEUI) data. Layer 2 Tunneling Protocol has no encryption capabilities of its own. It's possible to create a tunnel without encrypting the data inside it, but this is hardly worth the trouble. Instead, L2TP uses the IPSec ESP protocol to encapsulate and encrypt the entire UDP datagram containing the PPP frame. Thus, by the time the data is transmitted over the network, each packet consists of the original upper layer application data encapsulated within a PPP frame, which is in turn encapsulated by an L2TP frame, a UDP datagram, an ESP frame, an IP datagram, and finally another PPP frame, at which point the packet is ready for transmission over the network.
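The layering order above is easy to lose track of, so here is a sketch that models each protocol as a labeled wrapper around the one inside it. Headers and trailers are reduced to bare name tags; this shows only the nesting order, nothing about the actual frame formats.

```python
# Model the L2TP-over-IPSec encapsulation order as nested labeled wrappers,
# applied from the innermost layer outward.

def encapsulate(payload, *layers):
    for layer in layers:
        payload = f"{layer}[{payload}]"
    return payload

packet = encapsulate("APP-DATA", "PPP", "L2TP", "UDP", "ESP", "IP", "PPP")
print(packet)
# PPP[IP[ESP[UDP[L2TP[PPP[APP-DATA]]]]]]
```

Reading the output from the inside out reproduces the sequence in the text: application data in a PPP frame, inside an L2TP frame, inside a UDP datagram, inside an ESP frame, inside an IP datagram, inside the outer PPP frame.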

SSL

Secure Sockets Layer is a special-purpose security protocol that is designed to protect the data transmitted between Web servers and their client browsers. Virtually all of the Web servers and browsers available today support SSL. When you access a secured site on the Internet, to purchase a product using a credit card, for example, your browser is probably using SSL to communicate with the server. If your browser displays the protocol heading https:// in its address field instead of http://, then you are connecting to a secured site. Like IPSec, SSL provides authentication and encryption services. Authentication is performed by the SSL Handshake Protocol (SSLHP), which also negotiates the method to be used to encrypt the data. The SSL Record Protocol (SSLRP) packages the data in preparation for its encryption. When a Web browser connects to a secured server, the server transmits a digital certificate to the client that it has obtained from a third-party certificate authority (CA). The client then uses the CA's public key, which is part of its SSL implementation, to extract the server's public key from the certificate. Once the browser is in possession of the server's public key, it can decipher the encrypted data sent to it by that server.

Kerberos

Kerberos is an authentication protocol that is typically used by directory services, such as Active Directory, to provide users with a single network logon capability. Once a Kerberos server (called an authentication server) authenticates a client, that client is granted the credentials needed to access resources anywhere on the network. Kerberos was developed at the Massachusetts Institute of Technology and is now standardized by the IETF. Windows 2000 and other operating systems rely heavily on Kerberos to secure their client/server network exchanges. When a client logs on to a network that uses Kerberos, it sends a request message to an authentication server, which is already in possession of the account name and password associated with that client. The authentication server responds by sending a ticket-granting ticket (TGT) to the client, which is encrypted using a key based on the client's password. Once the client receives the TGT, it prompts the user for the password and uses it to decrypt the TGT. Because only that user (presumably) has the password, this process serves as an authentication. Now that the client is in possession of the TGT, it can access network resources by sending a request containing an encrypted copy of the TGT to a ticket-granting server (TGS), which may or may not be the same computer as the authentication server. The TGS, on decrypting the TGT and verifying the user's status, creates a server ticket and transmits it to the client. The server ticket enables a specific client to access a specific server for a limited length of time. The ticket also includes a session key, which the client and the server can use to encrypt the data transmitted between them, if necessary. The client transmits the server ticket (which was encrypted by the TGS using a key that the server already possesses) to that server, which, on decrypting it, grants the client access to the desired resource.
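The key insight of the first exchange, that only a user who knows the password can decrypt the TGT, can be shown with a toy cipher. This is a deliberately simplified sketch: the "encryption" is a keystream XOR derived from a hash of the password, real Kerberos uses proper ciphers, and the user name and ticket contents are invented.

```python
import hashlib

# Toy model of the Kerberos TGT exchange. The keystream-XOR "cipher" here
# stands in for the real encryption; it is symmetric, so the same function
# both encrypts and decrypts.

def derive_key(password: str, length: int) -> bytes:
    stream, counter = b"", 0
    while len(stream) < length:
        stream += hashlib.sha256(password.encode() + bytes([counter])).digest()
        counter += 1
    return stream[:length]

def xor_crypt(data: bytes, password: str) -> bytes:
    key = derive_key(password, len(data))
    return bytes(a ^ b for a, b in zip(data, key))

# Authentication server: encrypts the TGT with a key based on the client's
# password, which it already has on file.
tgt = xor_crypt(b"TGT:alice:valid-8h", "s3cret")

# Client: only a user who supplies the right password recovers the TGT.
print(xor_crypt(tgt, "s3cret"))                            # b'TGT:alice:valid-8h'
print(xor_crypt(tgt, "wrong") == b"TGT:alice:valid-8h")    # False
```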

Firewalls

A firewall is essentially a barrier between two networks that evaluates all incoming or outgoing traffic to determine whether or not it should be permitted to pass to the other network. A firewall can take many different forms and use different criteria to evaluate the network traffic it receives. Some firewalls are dedicated hardware devices, essentially routers with additional software that monitors incoming and outgoing traffic. In other cases, firewalls are software products that run on a standard computer. At one time, all firewalls were complex, extremely expensive, and used only in professional network installations. These high-end products still exist, but today you can also purchase inexpensive firewall software products designed to protect a small network or even an individual computer from unauthorized access through an internet connection. There are several methods that firewalls can use to examine network traffic and detect potential threats. Most firewall products use more than one of these methods and often provide other services as well. For example, one firewall product - a proxy server - not only enables users to access Web pages with complete safety, but also can cache frequently used pages for quicker retrieval by other systems.

Packet filtering

A packet filter is the most basic type of firewall, one that examines packets arriving over its interfaces and decides whether to allow them access to the other network based on the information found in the various protocol headers used to construct the packets. Packet filtering can occur at any one of several layers of the Open Systems Interconnection (OSI) reference model. A firewall can filter packets based on any of the following characteristics:

Hardware addresses
Packet filtering based on hardware addresses enables only certain computers to transmit data to the other network. This type of filtering isn't usually used to protect networks from unauthorized Internet access, but you can use this technique in an internal firewall to permit only specific computers to access a particular network.

IP addresses
You can use IP address filtering to permit only traffic destined to or originating from specific addresses to pass through to the other network. If, for example, you have a public Web server on your network, you can configure a firewall to admit only the Internet traffic that is destined for that server's IP address. This can prevent Internet users from accessing any of the other computers on the network.

Protocol identifiers

Firewalls can filter packets based on the protocol that generated the information carried within an IP datagram, such as the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), or the Internet Control Message Protocol (ICMP).

Port numbers
Firewalls can filter packets based on the source or destination port number specified in a packet's transport layer protocol header. This is called service-dependent filtering. These port numbers identify the application or service that generated the packet or for which the packet is destined. For example, you can configure a firewall to permit network users to access the Internet using ports 110 and 25 (the well-known port numbers used for incoming and outgoing e-mail) but deny them Internet access using port 80 (the port number used to access Web servers).

The strength of the protection provided by packet filtering is its ability to combine the various types of filters. For example, you might want to permit Telnet traffic into your network from the Internet, so that network support personnel can remotely administer certain computers. However, leaving port 23 (the Telnet port) open to all Internet users is a potentially disastrous security breach. Therefore, you can combine the port number filter with an IP address filter to permit only certain computers (those of the network administrators) to access the network using the Telnet port. The main drawback of packet filtering is that it requires a detailed understanding of TCP/IP communications and the ways of the criminal mind. Using packet filters to protect your network means participating in an ongoing battle of wits with those who would infiltrate your network. Potential intruders are constantly inventing new techniques to defeat standard packet filter configurations, and you must be ready to modify your filters to counteract these techniques.
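The Telnet scenario above, combining a port number filter with an IP address filter, can be sketched as a simple rule function. The administrator addresses and the set of otherwise-permitted ports are hypothetical values chosen for the example.

```python
# Combined packet filter: admit Telnet (port 23) only from the network
# administrators' addresses; allow a few other well-known services to all.
# All addresses here are hypothetical.

ADMIN_HOSTS = {"192.168.1.10", "192.168.1.11"}
OPEN_PORTS = {25, 80, 110}   # SMTP, HTTP, POP3

def permit(packet: dict) -> bool:
    if packet["dst_port"] == 23:                    # Telnet traffic...
        return packet["src_ip"] in ADMIN_HOSTS      # ...admins only
    return packet["dst_port"] in OPEN_PORTS

print(permit({"src_ip": "192.168.1.10", "dst_port": 23}))  # True
print(permit({"src_ip": "203.0.113.9", "dst_port": 23}))   # False
print(permit({"src_ip": "203.0.113.9", "dst_port": 80}))   # True
```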

NAT

Network address translation is a network layer technique that protects the computers on your network from Internet intruders by masking their IP addresses. If you connect a network to the Internet without firewall protection of any kind, you must use registered IP addresses for your computers so that they can communicate with other computers on the Internet. However, registered IP addresses are, by definition, visible from the Internet. This means that any user on the Internet can conceivably access your network's computers and, with a little ingenuity, access any resource on them. The results can be disastrous. Network address translation prevents this from happening by enabling you to assign unregistered IP addresses to your computers. These addresses fall into a range of addresses specifically designated for use on private networks. These addresses are not registered to any Internet user, and are therefore not visible from the Internet, so you can safely deploy them on your network without limiting your users' access to Internet sites. After you assign these private IP addresses to the computers on your network, outside users can't see your computers from the Internet. This also means that an Internet server can't send packets to your network, so on their own, private addresses would let your users send traffic to the Internet but not receive it.
To make normal Internet communications possible, the router that provides Internet access can use NAT. For example, when one of the computers on your network attempts to access an Internet server using a Web browser, the Hypertext Transfer Protocol (HTTP) request packet it generates contains its own private IP address in the IP header's Source IP Address field. When this packet reaches the router, the NAT software substitutes its own registered IP address for the client computer's private address and sends the packet on to the designated server. When the server responds, it addresses its reply to the NAT router's IP address. The router then inserts the original client's private address into the Destination IP Address field and sends the packet on to the client system. All of the packets to and from the computers on the private network are processed in this manner, using the NAT router as an intermediary between the private network and the Internet. Because only the router's registered IP address is visible from the Internet, it is the only computer that is vulnerable to attack. A popular security solution, NAT is implemented in numerous firewall products, ranging from high-end routers used on large corporate networks to inexpensive Internet connection-sharing solutions designed for home and small business networks. In fact, the Windows Internet Connection Sharing (ICS) feature is based on the principle of NAT.
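The address rewriting described above amounts to a translation table on the router. The sketch below keeps one public port per outbound connection; all addresses are hypothetical (drawn from documentation ranges), and real NAT implementations track considerably more state.

```python
# Sketch of NAT: the router rewrites the private source address to its own
# registered address on the way out, and reverses the mapping on the way
# back, using a translation table keyed by public port.

PUBLIC_IP = "198.51.100.1"          # the router's registered address
nat_table = {}                       # public port -> (private ip, port)
next_port = 50000

def outbound(packet):
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (packet["src_ip"], packet["src_port"])
    return {**packet, "src_ip": PUBLIC_IP, "src_port": public_port}

def inbound(packet):
    private_ip, private_port = nat_table[packet["dst_port"]]
    return {**packet, "dst_ip": private_ip, "dst_port": private_port}

req = outbound({"src_ip": "10.0.0.5", "src_port": 1234,
                "dst_ip": "93.184.216.34", "dst_port": 80})
print(req["src_ip"])                 # 198.51.100.1 (private address hidden)

reply = inbound({"src_ip": "93.184.216.34", "src_port": 80,
                 "dst_ip": PUBLIC_IP, "dst_port": req["src_port"]})
print(reply["dst_ip"])               # 10.0.0.5 (delivered to the client)
```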


Proxy servers

Proxy servers are software products similar to NAT routers, except that they function at the application layer of the OSI reference model. Like a NAT router, a proxy server acts as an intermediary between the clients on a private network and the Internet resources they want to access. The clients send their requests to the proxy server, which sends a duplicate request to the desired Internet server. The Internet server replies to the proxy server, which relays the response to the client. This effectively renders the private network invisible to the Internet and also provides other features. Proxy servers can cache the information they receive from the Internet, so that if another client requests the same information, the proxy can supply it immediately from its cache instead of issuing another request to the Internet server. Administrators can also configure proxy servers to filter the traffic they receive, blocking users on the private network from accessing certain services. For example, you can configure most Web proxy servers to permit user access only to specific Web sites. The main problem with proxy servers is that you have to configure applications to use them. A NAT router provides protection to the network computers while remaining essentially invisible to them, but the process of configuring a client computer to use proxies for a variety of applications can be time-consuming. However, some proxy clients and servers now have automatic detection capabilities that enable a client application to discover the proxy servers on the network and use them.

Proxy servers are the preferred solution when you want to impose greater restrictions on your users' Internet access, such as limiting the applications they can use to access the Internet and the sites they are permitted to visit. Network address translation provides more general Internet access without any unusual client configuration, while still providing a similar degree of protection.

Security models

On a client/server network, the user accounts are stored in a central location. A user logs on to the network from a computer that transmits the user name and password to a server, which either grants or denies access to the network. Depending on the operating system, the account information can be stored in a centralized directory service or on individual servers.
A directory service, such as Active Directory or Novell Directory Services, provides authentication services for an entire network. A user logs on once, and the directory service grants access to shared resources anywhere on the network. On a peer-to-peer network, each computer maintains its own security information and performs its own authentications. Computers on this type of network can function as both clients and servers. When a computer functioning as a client attempts to use resources (called shares) on another computer that is functioning as a server, the server itself authenticates the client before granting it access.


User-level security

The user-level security model is based on individual accounts created for specific users. When you want to grant users permission to access resources on a specific computer, you select them from a list of user accounts and specify the permissions you want to grant them. Windows 2000 and Windows NT always use user-level security, whether they are operating in client/server or peer-to-peer mode. In peer-to-peer mode, each computer has its own user accounts. When users log on to their computers, they are authenticated against an account on that system. If several people use the same computer, they must each have their own user account (or share a single account). When users elsewhere on the network attempt to access server resources on that computer, they are also authenticated using the accounts on the computer that hosts the resources. The user-level, peer-to-peer security model is suitable only for relatively small networks, because users must have separate accounts on every computer they want to access. If users want to change their account passwords, they must change them on every computer on which they have an account. In many cases, users maintain the accounts on their computers themselves, because it would be impractical for an administrator to travel to each computer and create a new account whenever a new user is added. User-level security on a client/server network is easier to administer and can support networks of almost any size. In the user-level, client/server security model, administrators create user accounts in a directory service, such as Active Directory in Windows 2000 or an NT domain. When users log on to their computers, they are actually being authenticated by the directory service. The computer sends the account name and password supplied by the user to a domain controller, where the directory service information is stored.
The domain controller then checks the credentials and indicates to the computer whether the authentication has succeeded or failed. In the same way, when you want to grant other network users access to resources on your computer, you select their user accounts from a list provided by the domain controller. When they try to connect to your computer, the domain controller authenticates them and either grants or denies them access. With only a single set of user accounts stored in a centralized directory service, administrators and users can make changes more easily. Changing a password, for example, is simply a matter of making the change in one directory service record; the modification is then automatically replicated throughout the network.


Share-level security

Windows Me, 98, and 95 cannot maintain their own user accounts. These operating systems can employ user-level security only when they are participating in an Active Directory or NT domain, using a list of accounts supplied by a domain controller. In peer-to-peer mode, they operate using share-level security. In share-level security, users assign passwords to the individual shares they create on their computers. When network users want to access a share on another computer, they must supply the appropriate password. The share passwords are stored on the individual computers, and in the case of shared drives, users can specify two different passwords to provide both read-only access and full control of the share.

Share-level security is not as flexible as user-level security and it does not provide as much protection. Because everyone uses the same password to access a shared resource, it is difficult to keep the passwords secure. Changing a password means informing everyone who might have to use that resource. In addition, the access control provided by this security model is not as granular as that of user-level control, which you can use to grant users highly specific sets of access permissions to network resources. The advantage of share-level security is that even unsophisticated users can learn to set up and maintain their own share passwords, eliminating the need for constant attention from a network administrator.
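The two-password arrangement for shared drives described above reduces to a small lookup. The share name and passwords here are invented for the example; real implementations store password hashes, not cleartext.

```python
# Sketch of share-level security: each share carries its own passwords,
# one granting read-only access and one granting full control.

SHARES = {
    "DOCS": {"read": "look", "full": "change"},   # hypothetical passwords
}

def access_level(share: str, password: str) -> str:
    share_info = SHARES[share]
    if password == share_info["full"]:
        return "full control"
    if password == share_info["read"]:
        return "read-only"
    return "denied"

print(access_level("DOCS", "look"))     # read-only
print(access_level("DOCS", "change"))   # full control
print(access_level("DOCS", "guess"))    # denied
```

Note that nothing here identifies *who* is connecting, which is exactly why share-level security cannot grant per-user permissions the way user-level security can.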

WAN technologies

Wide area networking is a variation on the remote network access concepts introduced below. Technically speaking, a computer accessing the Internet with a modem and a PSTN line is using a WAN connection, but the term WAN is more commonly employed when referring to connections between two networks at different locations. For example, a company with branch offices located in several cities might maintain individual LANs in each branch, all of which are connected by WAN links.

Leased lines

A leased line is a permanent telephone connection between two locations that provides a predetermined amount of bandwidth at all times. Leased lines can be analog or digital, although most of the lines used today are digital. The most common leased line configuration in the United States is called a T1, which runs at 1.544 Mbps. The European equivalent of a T1 is called an E1, which runs at 2.048 Mbps. Many organizations use T1s to connect their networks to the Internet or to connect remote networks. For applications requiring more bandwidth, a T3 connection runs at 44.736 Mbps and an E3 runs at 34.368 Mbps. Leased line services are split into 64-Kbps channels. A T1, for example, consists of 24 channels that can be used as a single data pipe or as individual 64-Kbps links. It's also possible to install a leased line that uses part of a T1. This fractional T1 service enables you to specify exactly the amount of bandwidth you need. For data transmission purposes, a leased line is typically left as a single channel utilizing all of the available bandwidth. However, T1s and other leased line services are used for standard telephone communications as well. When a large organization installs its own telephone system, the PBX or switchboard is connected to one or more T1 lines, split into the 64-Kbps channels, each of which is capable of functioning as one voice telephone line. The PBX allocates the channels to the various users of the telephone system as needed. A T3 connection is the equivalent of 672 channels of 64 Kbps each, or 28 T1s. This much bandwidth is usually required only by ISPs and other service providers with a need for huge amounts of bandwidth. To install a leased line, you contract with a telephone provider to furnish a link between two specific sites, running at a particular bandwidth.
At each end of the connection, you must have a device called a channel service unit/data service unit (CSU/DSU), which functions as the terminus for the link and provides testing and diagnostic capabilities. To use the line, you connect the CSU/DSU to your network using a router, in the case of a data network, or a PBX, in the case of a telephone network. Leased lines are a popular WAN solution, but they do have some significant drawbacks. Because the link is permanently connected, you are paying for a specific amount of bandwidth 24 hours a day. If your applications are not running around the clock, you might end up paying premium prices for bandwidth you're not using. Also, the bandwidth of a leased line is capped at a particular rate. If your bandwidth needs ever exceed the capacity of the line, the only way to augment your connection is to install another line. As a result, leased lines are excellent solutions for some applications but can be less cost-effective for others.
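The channel arithmetic behind the figures above is worth making explicit. Everything here follows from the 64-Kbps channel unit given in the text; the six-channel fractional T1 is just a sample configuration.

```python
# Leased-line channel arithmetic: services are built from 64-Kbps channels.

CHANNEL_KBPS = 64

t1_channel_capacity = 24 * CHANNEL_KBPS   # a T1's 24 channels
print(t1_channel_capacity)                # 1536 Kbps (the 1.544-Mbps T1
                                          # rate includes 8 Kbps of framing
                                          # overhead on top of this)

t3_channels = 28 * 24                     # a T3 is the equivalent of 28 T1s
print(t3_channels)                        # 672 channels

fractional_t1 = 6 * CHANNEL_KBPS          # e.g., leasing only 6 channels
print(fractional_t1)                      # 384 Kbps
```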

Frame relay

Frame relay is a WAN solution that provides bandwidth similar to that of a leased line, but with greater flexibility. Frame relay services range from 56 Kbps all the way up to T3 speeds, but you're not permanently locked into a specific transmission rate, as you are with a leased line. When you enter into a contract with a frame relay provider, you agree on a specific amount of bandwidth called the committed information rate (CIR), which is the base speed of your link. However, the frame relay service can also furnish you with additional bandwidth (called bursts) during your high-traffic periods by borrowing it from other circuits that are not operating at full capacity. In addition to the CIR, you also negotiate a committed burst information rate (CBIR), which is the maximum amount of bandwidth that the provider agrees to furnish during burst periods. Your contract also specifies the duration of the bursts you are permitted. If you exceed the bandwidth agreed on, extra charges are levied.

A frame relay connection is not a permanent link between two points, like a leased line. Instead, each of the two sites is connected to the service provider's network, usually using a standard leased line. The provider's network takes the form of a frame relay cloud, which enables the leased line at one site to be dynamically connected to that at the other site. Because each of the sites uses a local telephone provider for its leased line to the cloud, the cost is generally less than it would be to have a single leased line connecting the two different sites. The hardware device that provides the interface between the LAN at each site and the connection to the cloud is called a frame relay assembler/disassembler (FRAD). A FRAD is a network layer device that strips off the LAN's data-link layer protocol header from each packet and repackages it for transmission through the cloud.
One of the main advantages of frame relay is that you can use a single connection to a frame relay provider to replace several dedicated leased lines. For example, if a corporation has 5 offices located in different cities, it would take 10 leased lines to connect each office to every other office. With frame relay, you only need a single leased line running from each office to the cloud, and the service can provide separate virtual circuits through the cloud, interconnecting all of the offices.
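The arithmetic generalizes: fully meshing n offices with dedicated lines requires n(n-1)/2 leased lines, while frame relay needs only one access line per office. A quick sketch of the comparison:

```python
def full_mesh_lines(sites):
    """Dedicated leased lines needed to link every office to every other."""
    return sites * (sites - 1) // 2

def frame_relay_lines(sites):
    """With frame relay, each office needs one line into the provider's cloud."""
    return sites

print(full_mesh_lines(5), frame_relay_lines(5))   # 10 versus 5
print(full_mesh_lines(8), frame_relay_lines(8))   # 28 versus 8
```

The gap widens quadratically as offices are added, which is why frame relay becomes more attractive as an organization grows.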

SONET/Synchronous Digital Hierarchy

The Synchronous Optical Network (SONET) is a physical layer standard that defines a method for building a synchronous telecommunications network based on fiber optic cables. First ratified by the American National Standards Institute (ANSI), SONET was then adapted by the International Telecommunications Union (ITU), which called it the Synchronous Digital Hierarchy (SDH). Intended as a replacement for the T-carrier and E-carrier services used in the United States and Europe, respectively, SONET provides connections at various optical carrier (OC) levels running at different speeds. The idea behind SONET is to create a standardized series of transmission rates and formats, eliminating the problems that currently affect connections between different types of carrier networks.

Asynchronous Transfer Mode

ATM is a protocol that was originally designed to carry voice, data, and video traffic both on LANs and WANs. Today, ATM is sometimes used for network backbones, but it is more commonly found in WAN connections. Unlike most data-link layer protocols, ATM uses fixed-length, 53-byte frames (called cells) and provides a connection-oriented, full-duplex, point-to-point service between devices. There are no broadcast transmissions, and data is relayed between networks by switches, not routers. ATM speeds range from a 25.6-Mbps service, intended for desktop LAN connections, to 2.46 Gbps. Physical media include standard multimode fiber optic and unshielded twisted pair (UTP) cables on LANs, and SONET or T-carrier services for WAN connections. On an internetwork where ATM is implemented on both the LANs and the WAN connections, cells originating at a workstation can travel all the way to a destination at another site through switches without having to be reencapsulated in a different data-link layer protocol. ATM never gained popularity on the desktop, however, because at the time of its introduction, Fast Ethernet provided better transmission rates and a simpler upgrade procedure. In the same way, Gigabit Ethernet is becoming the predominant high-speed backbone protocol. Today, therefore, ATM has largely been relegated to use on WANs.

Fiber Distributed Data Interface

FDDI is unusual in that it is essentially a LAN protocol, but it is also sometimes grouped with WAN technologies. FDDI runs at 100 Mbps and uses token passing on a shared network medium, which puts it into the LAN protocol category. However, because FDDI uses fiber optic cable, it can span much longer distances than traditional copper-based networks. While FDDI cannot provide truly long distance links, as leased lines and other WAN technologies can, you can use it to connect LANs located in nearby buildings, forming a campus internetwork.

SLIP and PPP

The Serial Line Internet Protocol (SLIP) and the Point-to-Point Protocol (PPP) are also data-link layer protocols, but they are very different from Ethernet, Token Ring and Fiber Distributed Data Interface (FDDI). SLIP and PPP, which are part of the TCP/IP protocol suite, are not designed to connect systems to a LAN that uses a shared network medium. Instead they connect one system to another using a dedicated connection, such as a telephone line. For this reason, SLIP and PPP are called end-to-end protocols. Because the medium isn't shared, there is no contention and no need for a Media Access Control (MAC) mechanism, and because there are only two systems involved, there is no need to address the packets. As a result, these protocols are far simpler than Ethernet and Token Ring protocols. SLIP and PPP also do not include physical layer specifications; they operate strictly at the data-link layer. Another standard, such as the RS-232 specification, which defines the nature of the serial port that you use to connect a modem to your computer, provides the physical layer.

SLIP is so simple it hardly deserves to be called a protocol. It is designed to transmit signals over a serial connection (which in most cases means a modem and a telephone line) and has very low control overhead, meaning that it doesn't add much information to the network layer data that it is transmitting. Compared to the 18 bytes that Ethernet adds to every packet, for example, SLIP adds only 1 byte. Of course, with only 1 byte of overhead, SLIP can't provide functions like error detection, network layer protocol identification, security, or anything else.
SLIP works by transmitting an IP datagram received from the network layer and following it with a single framing byte called an End Delimiter. This byte informs the receiving system when it has finished receiving the data portion of the packet. In some cases, the system surrounds the datagram with two End Delimiter fields, making it possible for the receiving system to easily ignore any line noise that occurs outside of the frame. Because of its limited capabilities, SLIP is rarely used today, having been replaced, for the most part, by PPP.
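As defined in RFC 1055, the End Delimiter is the byte value 0xC0, and occurrences of that value within the datagram are replaced with a two-byte escape sequence so that the receiver never mistakes data for the end of the frame. A minimal framing sketch:

```python
# SLIP special byte values, from RFC 1055.
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_frame(datagram: bytes) -> bytes:
    """Wrap an IP datagram for transmission over a serial line."""
    out = bytearray([END])          # optional leading END flushes line noise
    for b in datagram:
        if b == END:
            out += bytes([ESC, ESC_END])   # escape an END byte in the data
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])   # escape an ESC byte in the data
        else:
            out.append(b)
    out.append(END)                 # the End Delimiter closes the frame
    return bytes(out)

print(slip_frame(b"\x45\x00\xc0").hex())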


PPP is, in most cases, the protocol you use when you access the Internet by establishing a dial-up connection to an ISP. Many other WAN technologies use it as well. PPP is more complex than SLIP and is designed to provide a number of services that SLIP lacks. These include the ability of the systems to exchange IP addresses, carry data generated by multiple network layer protocols (which is called multiplexing), and support different authentication protocols. Still, PPP does all this using only a 5-byte header, which is larger than the SLIP header, but still less than half the size of the Ethernet frame.

The functions of the fields in the PPP frame are as follows:

Flag (1 byte)
This field indicates the transmission of a packet is about to begin.

Address (1 byte)
This field contains a value indicating that the packet is addressed to all recipients.

Control (1 byte)
This field contains a code indicating that the frame contains an unnumbered information packet.

Protocol (2 bytes)
This field identifies the protocol that generated the information found in the Data field.

Data and Pad (up to 1500 bytes)
This field contains information generated by the protocol identified in the Protocol field, plus padding if necessary.

Frame Check Sequence (2 or 4 bytes)
This field contains a checksum value that the receiving system will use for error detection.

Flag (1 byte)
This field indicates that the transmission of the packet has been completed.
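The field list maps directly onto bytes on the wire. Below is a rough sketch of assembling such a frame (the Flag, Address, and Control values 0x7E, 0xFF, and 0x03, the 0x0021 Protocol code for IP, and the FCS algorithm follow RFC 1662; byte-stuffing of special characters inside the Data field is omitted for brevity, and the payload is just a placeholder):

```python
FLAG, ADDRESS, CONTROL = 0x7E, 0xFF, 0x03
PROTO_IP = 0x0021                  # Protocol field value identifying IP

def ppp_fcs16(payload: bytes) -> int:
    """16-bit Frame Check Sequence (the CRC defined in RFC 1662)."""
    fcs = 0xFFFF
    for byte in payload:
        fcs ^= byte
        for _ in range(8):
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs ^ 0xFFFF

def build_frame(protocol: int, data: bytes) -> bytes:
    """Assemble Flag, Address, Control, Protocol, Data, FCS, Flag."""
    body = bytes([ADDRESS, CONTROL]) + protocol.to_bytes(2, "big") + data
    fcs = ppp_fcs16(body)          # computed over Address through Data
    return bytes([FLAG]) + body + fcs.to_bytes(2, "little") + bytes([FLAG])

frame = build_frame(PROTO_IP, b"hello")
print(frame.hex())
```

The fixed Address and Control values explain why PPP needs only 5 bytes of header before the data: with only two systems on the link, there is nothing to address.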

Establishing a PPP connection

As small as it is, the PPP frame can't possibly provide all of the functions listed earlier. Instead, the protocol performs many of these functions by performing an elaborate connection establishment procedure when the two systems involved first communicate. This method is more efficient than increasing the size of the PPP header, because there's no need to include this additional information in every packet. For example, it's beneficial for the two communicating systems to know each other's IP addresses, but there's no need to include address fields in every packet header, as in Ethernet, because there are only two computers involved and they only have to identify themselves once. The same is true for functions like user authentication. The PPP connection establishment procedure consists of the following phases that occur before the systems exchange any application data.

Link dead
The two computers begin with no communication, until one of the two initiates a physical layer connection, such as running a program that causes the modem to dial.

Link establishment
Once the physical layer connection is established, one computer generates a PPP frame containing a Link Control Protocol (LCP) request message. The computers use the LCP to negotiate the parameters they will employ during the rest of the PPP session. The message contains a list of options, such as the use of a specific authentication protocol, link quality protocol, header compression, network layer protocols, and so on. The receiving system can then acknowledge the use of these options or deny them and propose a list of its own. Eventually, the two systems agree on a list of options they have in common.

Authentication
If the two systems have agreed to the use of a particular authentication protocol during the link establishment phase, they then exchange PPP frames containing messages specific to that protocol in the Data field. PPP computers commonly use the Password Authentication Protocol (PAP) or the Challenge Handshake Authentication Protocol (CHAP), but there are also other authentication protocols.

Link quality monitoring
If the two computers have negotiated the use of a link quality monitoring protocol during the link establishment phase, the exchange of messages for that protocol occurs here.

Network layer protocol configuration
For each of the network layer protocols that the computers have agreed to use, a separate exchange of Network Control Protocol (NCP) messages occurs at this point.


Link open
Once the NCP negotiations are complete, the PPP connection is fully established and the exchange of packets containing network layer application data can commence.

Link termination
When the two computers have finished communicating, they sever the PPP connection by exchanging LCP termination messages, after which the systems return to the link dead phase.
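The sequence of phases can be modeled as a toy state machine (purely illustrative; a real PPP implementation skips the authentication and link quality monitoring phases when they are not negotiated):

```python
# The PPP phases, in the order a connection moves through them.
PHASES = [
    "link dead",
    "link establishment",                    # LCP option negotiation
    "authentication",                        # PAP or CHAP, if negotiated
    "link quality monitoring",               # only if negotiated
    "network layer protocol configuration",  # NCP exchanges
    "link open",                             # application data flows
    "link termination",                      # LCP termination messages
]

class PPPConnection:
    """Walks through the PPP phases in order; a toy model, not a real stack."""
    def __init__(self):
        self.phase = "link dead"

    def advance(self):
        i = PHASES.index(self.phase)
        self.phase = PHASES[(i + 1) % len(PHASES)]  # termination wraps to dead
        return self.phase

conn = PPPConnection()
for _ in range(7):
    print(conn.phase, "->", conn.advance())
```

The wrap-around at the end mirrors the text: after link termination, the systems return to the link dead phase.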

Point-to-Point Protocol over Ethernet

PPPoE is a TCP/IP standard that defines a methodology for creating individual PPP connections between computers on an Ethernet LAN and external services connected to the LAN using a broadband device such as a cable or DSL modem. Broadband remote network access devices can easily support multiple computers, and Ethernet is the most common protocol used to network the computers together and connect them to the broadband device. However, a shared Ethernet LAN does not enable each computer to access remote services using individual parameters for functions such as access control and billing. The object of PPPoE is to combine the simplicity of connecting multiple computers to a remote network over an Ethernet LAN and a broadband link with the ability to establish a separate PPP connection between each computer and a given remote service, complete with all of the PPP components, such as LCP negotiation, authentication, and NCP configuration.

Remote connections

Public Switched Telephone Network

PSTN is just a technical name for Plain Old Telephone Service (POTS). This is the standard voice telephone system, found all over the world, which you can use with asynchronous modems to transmit data between computers at virtually any location. The PSTN service in your home or office probably uses copper-based twisted pair cable, as do most LANs, and RJ-11 jacks, which are similar to the RJ-45 jacks used on twisted pair LANs, except that RJ-11 jacks have four (or sometimes six) electrical contacts instead of eight. The PSTN connection leads to a central office belonging to the telephone company, which can route calls from there to any other telephone in the world. Unlike a LAN, which is digital and uses packet switching, the PSTN is an analog, circuit-switched network.

To transmit computer data over the PSTN, the digital signals generated by your computer must be converted to analog signals that the telephone network can carry. A modem takes the digital signals fed to it through a serial port or the system bus, converts them to analog signals and transmits them over the PSTN. At the other end of the PSTN connection, another modem performs the same process in reverse, converting the analog data back into its digital form and sending it to another computer. The combination of the interfaces to the two computers, the two modems, and the PSTN connection form the physical layer of the networking stack.

The first modems used proprietary protocols for the digital/analog conversions, but this meant that users had to use the same manufacturer's modems at each end of the PSTN connection. To standardize modem communications, organizations like the Comité Consultatif International Télégraphique et Téléphonique (CCITT), now known as the International Telecommunication Union (ITU), began developing specifications for the communication, compression, and error-detection protocols that modems use when generating and interpreting their analog signals. Today, virtually all available modems support a long list of protocols that can serve as a history of modem communications. The current industry standard modem communication protocol is V.90, which defines the 56 kilobits per second (Kbps) data transfer mode that most modem connections use today.

The PSTN was designed for voice transmissions, not data transmissions. As a result, connections are relatively slow, with a maximum speed of only 33.6 Kbps when both communicating devices use analog PSTN connections. A 56-Kbps connection requires that one of the connected devices have a digital connection to the PSTN. The quality of PSTN connections can also vary widely, depending on the location of the modems and the state of the cables connecting the modems to their respective central offices. In some areas, the PSTN cabling can be many decades old and connections suffer as a result. When modems detect errors while transmitting data, they revert to a slower transmission speed. This is one reason that the quality of modem connections can vary from minute to minute. Dedicated, permanent PSTN connections between two locations, called leased lines, are also available (in both analog and digital forms) and provide a more consistent quality of service, but they lack the flexibility of dial-up connections and they are quite expensive.
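To put these rates in perspective, here is a best-case transfer time calculation (a sketch that ignores compression and protocol overhead, both of which change the numbers considerably in practice):

```python
def transfer_seconds(size_bytes, rate_kbps):
    """Best-case transfer time, ignoring compression and protocol overhead."""
    bits = size_bytes * 8
    return bits / (rate_kbps * 1000)

one_mb = 1_000_000
print(round(transfer_seconds(one_mb, 33.6)))  # ~238 seconds at 33.6 Kbps
print(round(transfer_seconds(one_mb, 56)))    # ~143 seconds at 56 Kbps
```

Even the jump from 33.6 to 56 Kbps saves well over a minute per megabyte, which explains the rapid adoption of V.90 despite its requirement for a digital connection at one end.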

Configuring a modem

As with most computer peripherals these days, the majority of available modems support the Plug and Play standard, which enables operating systems to detect the modem's presence, identify its manufacturer and model and install and configure the appropriate driver for it. As with most hardware peripherals, modems use an interrupt request (IRQ) line and an input/output (I/O) port address to send signals to the computer. With external modems, the IRQ and I/O address are assigned to the serial port that you use to connect the modem to the computer. Most computers are equipped with two serial ports, which are assigned to two of the computer's four default communications (COM) ports, COM1 and COM2. Each COM port has its own I/O port address, but COM1 and COM3 share IRQ4, and COM2 and COM4 share IRQ3.
Internal modems plug into a bus slot instead of a serial port, so you must configure the modem itself to use a particular COM port, which specifies the IRQ and I/O address assignments. If you have other devices plugged into any of the computer's serial ports, you must be sure that the modem is not configured to use the same IRQ as the ports in use. The other configuration parameter you should be familiar with is the maximum port speed. Serial ports use a chip called a universal asynchronous receiver-transmitter (UART) to manage the communications of the device connected to the port. Most computers today have 16550 UART chips for both of their serial ports, which can run as fast as 256 Kbps. Older computers might have slower UART chips, such as the 16450, which runs at a maximum of 115.2 Kbps. Some computers even have a 16550 UART on one port and a slower chip on the other. For today's high-speed modems, you should always use a 16550 UART. Internal modems have their own UART chips built onto the card, which are nearly always 16550 UART chips.
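The default COM port assignments can be summarized in a small table. The I/O base addresses shown are the conventional PC defaults (they are not given in the text above), and the helper function illustrates why devices on COM1/COM3 or COM2/COM4 can conflict:

```python
# Default PC COM port resources: I/O port base addresses and shared IRQs.
# The I/O addresses are the conventional PC values, shown for illustration.
COM_PORTS = {
    "COM1": {"io": 0x3F8, "irq": 4},
    "COM2": {"io": 0x2F8, "irq": 3},
    "COM3": {"io": 0x3E8, "irq": 4},   # shares IRQ4 with COM1
    "COM4": {"io": 0x2E8, "irq": 3},   # shares IRQ3 with COM2
}

def may_conflict(port_a, port_b):
    """Two ports sharing an IRQ can conflict if both devices are active."""
    return COM_PORTS[port_a]["irq"] == COM_PORTS[port_b]["irq"]

print(may_conflict("COM1", "COM3"))  # True: both use IRQ4
print(may_conflict("COM1", "COM2"))  # False: IRQ4 versus IRQ3
```

This is why the text warns against configuring an internal modem to a COM port whose IRQ is already claimed by a serial device in use.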


Virtual Private Networks

One of the advantages of using the PSTN to connect a computer to a distant network is that no special service installation is required and the only hardware you need is a modem and a telephone jack. This means that users with portable computers can dial into their office networks wherever they happen to be. However, dialing into a distant network using the PSTN can be an expensive proposition, especially when a company has a large number of network users traveling to distant places. One way to minimize these long-distance telephone charges is to use what is known as a virtual private network (VPN) connection.

A VPN is a connection between a remote computer and a server on a private network that uses the Internet as its network medium. The network is permanently connected to the Internet and has a server that is configured to receive incoming VPN connections through the Internet. The remote user connects to the Internet by using a modem to dial in to a nearby ISP. There are many ISPs that offer national and even international service, so the user can connect to the Internet with a local telephone call. The remote computer and the network server then establish a secured connection that protects the data exchanged between them, using the Internet as the network medium. This technique is called tunneling, because the connection runs across the Internet inside a secure conduit.

The primary protocol that makes this tunneling possible is the Point-to-Point Tunneling Protocol (PPTP). PPTP works with PPP to establish a connection between the client computer and a server on the target network, both of which are connected to the Internet. The connection process begins with the client computer dialing up and connecting to a local ISP using the standard PPP connection establishment process. When the computer is connected to the Internet, it establishes a control connection to the server using the Transmission Control Protocol (TCP).
This control connection is the PPTP tunnel through which the computers transmit and receive all subsequent data.
When the tunnel is in place, the computers send their data through it by encapsulating the PPP data that they would normally transmit over a dial-up connection within Internet Protocol (IP) datagrams. The computer then sends the datagrams through the tunnel to the other computer. Although it violates the rules of the Open Systems Interconnection (OSI) model, you actually have a data-link layer frame being carried within a network layer datagram. The PPP frames are encapsulated by IP, but at the same time, they can also contain other IP datagrams that contain the actual user data that one computer is sending to the other. Thus, the messages transmitted through the TCP connection that forms the tunnel are IP datagrams that contain PPP frames, with the PPP frames containing messages generated by IP or any network layer protocol. In other words, because the PPP user data is secured within the IP datagrams, that data can be another IP datagram or an Internetwork Packet Exchange (IPX) or NetBIOS Enhanced User Interface (NetBEUI) message. Because the tunnel is encrypted and secured using an authentication protocol, the data is protected from interception. After the IP datagrams pass through the tunnel to the other computer, the PPP frames are extracted and processed by the receiver in the normal manner.


Integrated Services Digital Network

Although it has only recently achieved modest popularity in the United States, the Integrated Services Digital Network (ISDN) has been around for several decades, and is especially popular in Europe, where leased telephone lines are prohibitively expensive. ISDN is a digital communications service that uses the same network infrastructure as the PSTN. It was designed as a complete digital replacement for the analog telephone system, but it had few supporters in the United States until relatively recently, when the need for faster Internet connections led people to explore its capabilities. However, other high-speed Internet access solutions, such as Digital Subscriber Line (DSL) and cable television (CATV) networks, have also become available in recent years. These other solutions are generally faster and cheaper than ISDN and have largely eclipsed it in popularity.

ISDN is a dial-up service, like the PSTN, but its connections are digital, so no modems are required. Although ISDN can support specially made telephones, fax machines and other devices, most ISDN installations in the United States are used only for computer data transmissions. Because it's a dial-up service, you can use ISDN to connect to different networks. For example, if you have an ISDN connection to the Internet, you can change ISPs simply by dialing a different number. No intervention from the telephone company is required. However, because ISDN needs special equipment, it cannot be used in mobile devices, such as laptop computers. ISDN also delivers greater transmission speeds than PSTN connections. The ISDN Basic Rate Interface (BRI) service consists of two 64-Kbps channels (called B channels) that carry the actual user data, plus one 16-Kbps channel (called a D channel) that carries only control traffic. Because of these channel names, the BRI service is sometimes called 2B+D. The B channels can function separately or be combined into a single 128-Kbps connection.
A higher grade of service, called Primary Rate Interface (PRI), consists of 23 B channels and one 64-Kbps D channel. The total bandwidth is the same as that of a T1 leased line. PRI is not often used in the United States. ISDN uses the same wiring as the PSTN, but additional equipment is required at the terminal locations. The telephone company provides what is called a U interface, which connects to a device called a Network Terminator 1 (NT-1). The NT-1 can provide a four-wire connection, called an S/T interface, for up to seven devices, called terminal equipment (TE). Digital devices designed for use with ISDN, such as ISDN telephones and fax machines, connect directly to the S/T interface and are called TE1 devices. A device that can't connect directly to the S/T interface is called a TE2 device, and requires a terminal adapter, which connects to the S/T interface and provides a jack for the TE2 device.
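The channel arithmetic for the two grades of service, as a quick check:

```python
B_CHANNEL, BRI_D, PRI_D = 64, 16, 64   # channel sizes in Kbps

def bri_kbps(combine=True):
    """Basic Rate Interface: 2B+D. User data flows on the B channels only."""
    return 2 * B_CHANNEL if combine else B_CHANNEL

def pri_kbps():
    """Primary Rate Interface: 23B+D."""
    return 23 * B_CHANNEL

print(bri_kbps())            # 128 Kbps with both B channels combined
print(pri_kbps())            # 1472 Kbps of user data; adding the 64-Kbps
                             # D channel gives 24 x 64 = 1536 Kbps, the
                             # same channel payload as a T1 leased line
```

The D channel carries only signaling, which is why BRI is marketed as 128 Kbps rather than 144 Kbps.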


When you plan to connect multiple devices to the ISDN service, you purchase an NT-1 as a separate unit. However, most U.S. ISDN installations use the service solely for Internet access, so there are many products on the market that combine an NT-1 and a terminal adapter into a single unit. These combined ISDN solutions can take the form of expansion cards that plug into a bus slot or separate units that connect to the computer's serial port. ISDN has never become hugely popular in the United States, partly because of its reputation for being expensive and for installation and reliability problems.

Digital Subscriber Line

DSL is a blanket term for a variety of digital communication services that use standard telephone lines and provide data transfer speeds much greater than the PSTN or even ISDN. The various DSL service types each have a different descriptive word added to the name, which is why some sources use the generic abbreviation xDSL.

Many DSL services run at different upstream and downstream speeds. These are called asymmetrical services. This happens because the nature of some DSL signals causes greater levels of crosstalk in the data traveling from the customer site to the central office than in the other direction. For end-user Internet access, this is usually not a problem, because Web surfing and other common activities generate far more downstream than upstream traffic. DSL services are also subject to distance restrictions, just like ISDN. DSL provides higher transmission rates by utilizing high frequencies that standard telephone services don't use and by employing special signaling schemes. For this reason, in many cases, you can use your existing telephone lines for a DSL connection and for voice traffic at the same time.

The most common DSL services are HDSL, used by phone companies and large corporations for wide area network (WAN) links, and ADSL, which is the service that ISPs use to provide Internet access to end users. DSL is an excellent Internet access solution and it can be suitable for connecting a home user to an office LAN, as long as the upstream bandwidth is suitable for your needs. The additional hardware needed for an ADSL connection is an ADSL Termination Unit-Remote (ATU-R), sometimes called a DSL transceiver or a DSL modem, plus a line splitter if you will also be using the line for voice traffic. A DSL modem is not really a modem, as it does not convert signals between digital and analog formats (all DSL communications are digital). The ATU-R connects to your computer using either a standard Ethernet network interface adapter or a universal serial bus (USB) port. At the other end of the link at the ISP's site is a more complicated device called a Digital Subscriber Line Access Multiplexer (DSLAM). Unlike ISDN connections, DSL connections are direct, permanent links between two sites that remain connected at all times.
This means that if you use DSL to connect to the Internet, the telephone company installs the DSL connection between your home or office and the ISP's site. If you want to change your ISP, the phone company must install a new link. In many cases, however, telephone companies are themselves offering DSL Internet access, which eliminates one party from the chain.

CATV

All of the remote connection technologies described up to this point rely on cables installed and maintained by telephone companies. However, the CATV industry has also been installing a vast network infrastructure throughout most of the United States over the past few decades. In recent years, many CATV systems have started taking advantage of their networks to provide Internet access to their customers through the same cable used for the TV service. CATV Internet access is very fast - sometimes as fast as 512 Kbps or more - and usually quite inexpensive. CATV networks use broadband transmissions, meaning that the one network medium carries many discrete signals at the same time. Each of the TV channels you receive over cable is a separate signal, and all of the signals arrive over the cable simultaneously. By devoting some of this bandwidth to data transmissions, CATV providers can deliver Internet data at the same time as the television signals. If you already have CATV, installing the Internet service is simply a matter of connecting a splitter to the cable and running it to a device called (again, erroneously) a cable modem, which is connected to an Ethernet card in your computer.

CATV data connections are different from both ISDN and DSL connections because they are not dedicated links. In effect, you are connecting to a metropolitan area network (MAN) run by your cable company. If you run Microsoft Windows on your computer and attempt to browse the network, you will see your neighbors' computers on the same network as yours. This arrangement has the potential to cause two major problems. First, you are sharing your Internet bandwidth with all of the other users in your area. During peak usage periods, you might notice a significant slowdown in your Internet downloads. ISDN and DSL, by contrast, are not shared connections, so you have the full bandwidth you're paying for available at all times. The second potential problem is one of security. If you share a drive on your computer without protecting it with passwords, anyone else on the network can access your files, modify them, or even delete them. Computers connected to the Internet with cable modems are also prone to attack from outside. Many users are duped into downloading programs that enable malicious outside users to take over their computers and use them for nefarious purposes. The installers from the cable company are usually careful to disable file sharing on your computer, however, and there are personal firewall products that you can use to provide yourself with additional protection.

Like most DSL services, CATV data connections are asymmetrical. CATV networks are designed to carry signals primarily in one direction, from the provider to the customer. There is a small amount of upstream bandwidth, which some systems use for purposes such as ordering pay-per-view movies from your remote control, and part of this upstream bandwidth is allocated for Internet traffic. In most cases, the upstream speed of a CATV connection is far less than the downstream speed, making the service unsuitable for hosting your own Internet servers, but still faster than a PSTN connection.
CATV connections are an inexpensive and fast Internet access solution, but you can't use them to connect your home computer to your office LAN, unless you use a VPN connection through the Internet. If you plan to implement VPNs, be sure that the cable modem you are using supports them.

Satellite Connections

Geosynchronous communications satellites are another means for connecting stand-alone computers to the Internet. With a satellite dish like those used for TV reception, a computer can receive downstream traffic from an ISP's network at speeds comparable to those of DSL and CATV networks. However, satellite connections are one-way only; there is no upstream traffic from the subscriber's computer to the satellite. Therefore, you must maintain a standard dial-up connection to the ISP's network to transmit signals to the Internet. As with CATV network connections, a satellite link is not suitable for remote connections to a private network, and the use of a PSTN line for upstream traffic makes even VPN connections unlikely to be practical.

Terminal Connections

There is another type of remote connection that some networks use within a single site, instead of between sites. Thin client computing involves the use of a terminal client program running on a low-end computer or a dedicated network client device that communicates with a terminal server elsewhere on the network. The role of the client is to provide the interface to the operating system and nothing more; the actual operating system and all applications run on the terminal server. The client and the server communicate using a specialized protocol, such as Independent Computing Architecture (ICA), developed by Citrix Systems, Inc. This protocol carries keystrokes, mouse actions, and screen updates between the client and the server, enabling a user at the client side to function as though the applications are running locally, when they are actually running at the server. Thin client computing enables a network to use inexpensive machines for its clients, leaving most of the computing environment on the server, where administrators can easily monitor and maintain it.

Remote Connection Requirements

In addition to a physical layer connection, there are other elements you need to establish a remote network connection, including the following:

Common protocols
The two computers to be connected must share common protocols at the data-link layer and above. This means that you must configure both computers to use a data-link layer protocol suitable for point-to-point connections, such as the Point-to-Point Protocol (PPP) or the Serial Line Internet Protocol (SLIP), and that there must also be network and transport layer protocols in common, such as Transmission Control Protocol/Internet Protocol (TCP/IP), IPX, or NetBEUI.

TCP/IP configuration
If your remote computer will be using the TCP/IP protocol suite to communicate with the host network, the computer must be assigned an IP address and other configuration parameters appropriate for that network. You can configure the TCP/IP settings manually if someone familiar with the host network supplies them to you, but most remote networking solutions enable the network server to assign configuration parameters automatically using Dynamic Host Configuration Protocol (DHCP) or some other mechanism.

Host and remote software
Each of the computers to be connected must be running an application appropriate to its role. The remote (or client) computer needs a client program that can use the physical layer medium to establish a connection, by instructing the modem to dial a number, for example. The host (or server) computer must have a program that can respond to a connection request from the remote computer and provide access to the network. In Microsoft Windows 2000, the client is found in the Network And Dial-Up Connections control panel, and the server is called the Remote Access Service (RAS), which is incorporated into the Routing and Remote Access service.

Security
The host computer and the other systems on the network to which it is attached must have security mechanisms in place that control access to network resources. These mechanisms must ensure that only authorized users are permitted access and restrict the access of authorized users to the resources they need.

Advanced TCP/IP properties

The IP Settings Tab

The IP Settings tab of the Advanced TCP/IP Settings dialog box enables you to specify multiple IP addresses and subnet masks for the network interface adapter in your computer, as well as multiple default gateway addresses. Most computers with multiple IP addresses have multiple network interface adapters as well, using one address per network interface adapter. However, there are situations in which a computer can use more than one IP address for a single network interface adapter, such as when a single physical network hosts multiple TCP/IP subnets. In such cases, a computer needs an IP address on each of the two subnets to participate on both.
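The subnet-membership test that makes a multihomed configuration like this work can be sketched with Python's ipaddress module. The addresses and subnets below are hypothetical; the point is that a host with an address on each subnet can deliver directly to destinations on either one.

```python
import ipaddress

# Hypothetical setup: one network interface adapter configured with an
# address on each of two TCP/IP subnets that share a physical network.
interface_addresses = [
    ipaddress.ip_interface("192.168.2.5/24"),
    ipaddress.ip_interface("192.168.3.5/24"),
]

def is_local(destination):
    """True if the destination lies on one of this adapter's subnets,
    meaning the packet can be delivered directly rather than being
    sent to a default gateway."""
    dest = ipaddress.ip_address(destination)
    return any(dest in iface.network for iface in interface_addresses)

print(is_local("192.168.3.77"))   # on the second subnet
print(is_local("10.0.0.1"))       # off-link; needs the default gateway
```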

When you open the Advanced TCP/IP Settings dialog box, the parameters you have already configured elsewhere in the Internet Protocol (TCP/IP) Properties dialog box appear in the listings. You can add to the existing settings, modify them, or delete them altogether. To add a new IP address and subnet mask, click Add, enter the desired address and mask values in the TCP/IP Address dialog box, and then click Add to add your entries to the IP Addresses list. Windows 2000 supports an unlimited number of IP address/subnet mask combinations for each network interface adapter in the computer. The procedure for creating additional default gateways is the same as that for adding IP addresses. A computer can use only one default gateway at a time, however, so the ability to specify multiple default gateways in the Advanced TCP/IP Settings dialog box is simply a fault-tolerance mechanism. If the first default gateway in the list is unavailable for any reason, Windows 2000 sends packets to the second address listed. This practice assumes that the computer is connected to a LAN that has multiple routers on it, each of which provides access to the rest of the internetwork.


The DNS Tab

The DNS tab of the Advanced TCP/IP Settings dialog box also provides a fault-tolerance mechanism for Windows 2000's DNS client. You can specify more than the two DNS server addresses provided in the main Internet Protocol (TCP/IP) Properties dialog box, and you can modify the order in which the computer uses them if one or more of the servers should be unavailable. The other options in the DNS tab determine how the TCP/IP client resolves unqualified names. An unqualified name is an incomplete DNS name that does not specify the domain in which the host resides. The Windows 2000 TCP/IP client can still resolve these names by appending a suffix to the unqualified name before sending it to the DNS server for resolution. For example, with a properly configured TCP/IP client, you can supply only the name www as a URL in your Web browser and the client appends your company's domain name to the URL as a suffix. The DNS controls enable you to configure the client to append the primary and connection-specific DNS suffixes to unqualified names, or you can create a list of suffixes that the client will append to unqualified names, one after the other, until the name resolution process succeeds. The primary DNS suffix is the domain name you specify for the computer in the Network Identification tab of the System dialog box, accessed from the Control Panel. This suffix applies to all of the computer's network interface adapters. You can create a connection-specific suffix by entering a domain name in the DNS Suffix For This Connection text box in the DNS tab. To create a list of suffixes, select the Append These DNS Suffixes (In Order) option, click Add, enter the suffix you want to add to the list, and click Add. The two check boxes at the bottom of the DNS tab enable you to specify whether the computer should register its DNS name with its designated DNS server.
This option requires a DNS server that supports dynamic updates, such as the DNS Server service supplied with Windows 2000 Server. The Register This Connection's Addresses In DNS check box causes Windows 2000 to use the system's primary DNS suffix to register the addresses and the Use This Connection's DNS Suffix In DNS Registration check box causes the computer to use the connection-specific suffix you've entered in the DNS Suffix For This Connection text box.
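The suffix-appending behavior described above can be sketched as a simple loop. The resolver here is a stand-in dictionary lookup rather than a real DNS query, and the domain names are hypothetical examples.

```python
def resolve_unqualified(name, suffixes, resolver):
    """Try each configured DNS suffix against an unqualified name until
    one resolves. `resolver` stands in for a real DNS query function and
    returns None on failure."""
    if "." in name:                      # already a qualified name
        return resolver(name)
    for suffix in suffixes:
        address = resolver(f"{name}.{suffix}")
        if address is not None:
            return address               # resolution succeeded
    return None

# Hypothetical zone data standing in for the DNS server's answers.
zone = {"www.example.com": "192.168.2.10"}

# "www" fails against the first suffix, then succeeds on the second.
print(resolve_unqualified("www", ["sales.example.com", "example.com"], zone.get))
```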

The WINS Tab

Windows 2000 includes a WINS client for NetBIOS name resolution, but on a Windows 2000 network that uses Active Directory, WINS is not needed because Active Directory uses DNS names for the computers on the network and relies on DNS for its name resolution services. However, if you run Windows 2000 systems that use Microsoft Windows NT domains or no directory service at all, you can use the Advanced TCP/IP Settings dialog box's WINS tab to configure the Microsoft TCP/IP client to use WINS. Click Add in the WINS tab to open the TCP/IP WINS Server dialog box, in which you can specify the address of a WINS server on your network. You can create a list of WINS servers and specify the order in which Windows 2000 should use them. As with the default gateway and DNS server settings, supplying multiple WINS server addresses is a fault-tolerance feature. The Enable LMHOSTS Lookup check box forces the computer to use a file called LMHOSTS to resolve NetBIOS names before contacting the designated WINS server. LMHOSTS is a text file found, by default, in the \Winnt\System32\Drivers\Etc folder on the computer's local drive, which contains a list of NetBIOS names and their equivalent IP addresses. LMHOSTS functions in much the same way as the HOSTS file, which was used for host name resolution before the advent of DNS. Because each computer must have its own LMHOSTS file, Windows 2000 enables you to import a file from a network drive to the local computer. To do this, click Import LMHOSTS and browse for the desired file. Using the options at the bottom of the WINS tab, you can specify whether the computer should or should not use NetBIOS over TCP/IP, or whether the computer should rely on a DHCP server to specify the NetBIOS setting. Once again, on a Windows 2000 network that uses Active Directory, you can disable NetBIOS over TCP/IP because the computers use DNS names instead of NetBIOS names.
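A minimal sketch of how a name table like LMHOSTS might be parsed follows. This is deliberately simplified: real LMHOSTS keywords such as #PRE begin with # and are treated here as comments, and the names and addresses are made up.

```python
def parse_lmhosts(text):
    """Parse a minimal LMHOSTS-style file: one 'ip-address name' pair
    per line, with '#' starting a comment."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments/keywords
        if not line:
            continue
        address, name = line.split()[:2]
        table[name.upper()] = address          # NetBIOS names are case-insensitive
    return table

sample = """\
192.168.2.10  CZ1      # file server
192.168.2.20  CZ2
"""
print(parse_lmhosts(sample)["CZ1"])
```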

Using the IPSec Protocol

The IP Security option controls whether the Microsoft TCP/IP client uses the IPSec protocol when communicating with other computers on the network. IPSec is a security protocol that provides end-to-end encryption of data transmitted over a network. By default, IPSec is disabled in Windows 2000, but you can activate it. To open the IP Security dialog box, select IP Security and click Properties. When IPSec is enabled, computers perform an IPSec negotiation before they begin transmitting data to each other. This negotiation enables each computer to determine if the other computer supports IPSec and what policies are in place to govern its use.

When you select the Use This IP Security Policy option in the IP Security dialog box, you can select one of the following policies, which govern when the computer should use the IPSec protocol:

Client (Respond Only). This option causes the computer to use the IPSec protocol only when another computer requests it.
Secure Server (Require Security). This option causes the computer to require IPSec for all communications. Connections requested by other computers that are not configured to use IPSec are refused.
Server (Request Security). This option causes the computer to request the use of IPSec for all communications, but not to require it. If the other computer does not support IPSec, communications proceed without it.


Using TCP/IP Filtering

The TCP/IP Filtering option is essentially a rudimentary form of firewall that you can use to control what kinds of network and transport layer traffic can pass over the computer's network interface adapters. By selecting the TCP/IP Filtering option in the Options tab and clicking Properties, you open the TCP/IP Filtering dialog box. In this dialog box, you can specify which protocols and which ports the computer can use. Selecting the Enable TCP/IP Filtering (All Adapters) check box activates three separate selectors, one for TCP ports, one for UDP ports, and one for IP protocols. By default, all three selectors permit all traffic to pass through the filters, but selecting the Permit Only option on any selector enables you to build a list of permitted ports or protocols. The filters prevent traffic generated by all unlisted ports and protocols from passing through any of the computer's network interface adapters.
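The permit-only logic of such a filter can be sketched as follows. The port lists are hypothetical examples; as in the dialog box, a selector left at "permit all" passes everything, while a permit-only list blocks any port not on it.

```python
def make_filter(permitted_tcp=None, permitted_udp=None):
    """Sketch of a permit-only packet filter in the style of the TCP/IP
    Filtering dialog box: None means 'permit all' for that selector,
    while a list means 'permit only these ports'."""
    def allow(protocol, port):
        # Pick the selector for this protocol (sketch covers TCP/UDP only).
        permitted = permitted_tcp if protocol == "tcp" else permitted_udp
        return permitted is None or port in permitted
    return allow

# Hypothetical policy: permit only web traffic over TCP, all UDP.
allow = make_filter(permitted_tcp=[80, 443])
print(allow("tcp", 80))    # permitted port
print(allow("tcp", 23))    # Telnet is filtered out
print(allow("udp", 53))    # UDP selector is set to permit all
```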

TCP/IP utilities

Ping

Ping is the most basic of the TCP/IP utilities. Virtually every TCP/IP implementation includes a version of it. Ping can tell you if the TCP/IP stack of another system on the network is functioning normally. The ping program generates a series of Echo Request messages using the Internet Control Message Protocol (ICMP) and transmits them to the computer whose name or IP address you specify on the command line. The basic syntax of the ping program is as follows:

ping target

The target variable contains the IP address or name of a computer on the network. You can use either DNS names or NetBIOS names in ping commands. The program resolves the name into an IP address before sending the Echo Request messages, and it then displays the address in its readout. Most Ping implementations also have command-line switches that enable you to modify the operational parameters of the program, such as the number of Echo Request messages it generates and the amount of data in each message. All TCP/IP computers must respond to any Echo Request messages they receive that are addressed to them by generating Echo Reply messages and transmitting them back to the sender. When the pinging computer receives the Echo Reply messages, it produces a display like the following:

Pinging cz1 [192.168.2.10] with 32 bytes of data:

Reply from 192.168.2.10: bytes=32 time<10ms TTL=128
Reply from 192.168.2.10: bytes=32 time<10ms TTL=128
Reply from 192.168.2.10: bytes=32 time<10ms TTL=128
Reply from 192.168.2.10: bytes=32 time<10ms TTL=128

Ping statistics for 192.168.2.10:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms
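The Echo Request messages that ping transmits can be sketched in Python using the standard ICMP header layout (type 8, code 0) and the Internet checksum. This builds the packet only and doesn't send it, since transmitting raw ICMP normally requires administrative privileges and a raw socket.

```python
import struct

def checksum(data: bytes) -> int:
    # Internet checksum (RFC 1071): one's-complement sum of 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)   # fold carries back in
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    # ICMP Echo Request: type 8, code 0. The checksum is computed over
    # the whole message with the checksum field set to zero first.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

# Windows ping sends 32 bytes of payload data by default.
packet = build_echo_request(1, 1, b"abcdefghijklmnopqrstuvwabcdefghi")
print(len(packet))   # 8-byte ICMP header + 32-byte payload
```

A receiver verifies the message by summing the entire packet, checksum included; a valid packet checksums to zero.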


Traceroute

Traceroute is a variant of the Ping program that displays the path that packets take to their destination. Because of the nature of IP routing, paths through an internetwork can change from minute to minute, and Traceroute displays a list of the routers that are currently forwarding packets to a particular destination. Traceroute uses ICMP Echo Request and Echo Reply messages just like ping, but it modifies the messages by changing the value of the TTL field in the IP header. The TTL field is designed to prevent packets from getting caught in router loops that keep them circulating endlessly around the network. The computer generating the packet normally sets a relatively high value for the TTL field; on Windows systems, the default value is 128. Each router that processes the packet reduces the TTL value by one. If the value reaches zero, the last router discards the packet and transmits an ICMP error message back to the original sender. When you start the traceroute program with the name or IP address of a target computer, the program generates its first set of Echo Request messages with TTL values of 1. When the messages arrive at the first router on their path, the router decrements their TTL values to 0, discards the packets, and reports the errors to the sender. The error messages contain the router's address, which the traceroute program displays as the first hop in the path to the destination. Traceroute's second set of Echo Request messages use a TTL value of 2, causing the second router on the path to discard the packets and generate error messages. The Echo Request messages in the third set have a TTL value of 3 and so on. Each set of packets travels one hop farther than the previous set before causing a router to return error messages to the source. The list of routers displayed by traceroute as the path to the destination is the result of these error messages. 
Traceroute can be a handy tool for isolating the location of a network communications problem. Ping simply tells you whether or not a problem exists; it can't tell you where. A failure to contact a remote computer could be due to a problem in your workstation, in the remote computer or in any of the routers in between. Traceroute can tell you how far your packets are going before they run into the problem.
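The TTL mechanics described above can be illustrated with a small simulation. The router addresses below are hypothetical, and a real traceroute discovers each hop from the ICMP error messages rather than knowing the path in advance.

```python
def trace_path(routers, max_hops=30):
    """Simulate traceroute's TTL trick against a known router path.

    `routers` is a hypothetical ordered list of hop addresses between
    the source and the destination (the last entry is the destination).
    """
    path = []
    for ttl in range(1, max_hops + 1):
        # A packet sent with this TTL is decremented once per router;
        # the hop that drops it to zero reports back its own address.
        hop = routers[ttl - 1]
        path.append((ttl, hop))
        if hop == routers[-1]:   # the target itself answered: done
            break
    return path

for ttl, hop in trace_path(["10.0.0.1", "172.16.4.1", "192.168.2.10"]):
    print(ttl, hop)
```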

IPconfig

UNIX systems have a program called ifconfig (the name is derived from interface configuration) that you use to assign TCP/IP configuration parameters to a particular network interface. Running ifconfig with just the name of an interface displays the current configuration of that interface. Windows 2000 and NT have a version of this program, IPCONFIG.EXE, which omits the configuration capabilities and retains the configuration display. Windows Me, 95 and 98 include a graphical version of the utility called WINIPCFG.EXE.

Running the program with no parameters displays a limited list of configuration data.
Both IPCONFIG.EXE and WINIPCFG.EXE also have another function. These utilities are often associated with DHCP, because there is no easier way on a Windows system to see what IP address and other parameters the DHCP server has assigned to your computer. However, these programs also enable you to manually release IP addresses obtained through DHCP and renew existing leases. By running IPCONFIG.EXE with the /release and /renew command-line parameters or by using the Release, Renew, Release All, or Renew All buttons in WINIPCFG.EXE, you can release or renew the IP address assignment of one of the network interfaces in the computer or for all of the interfaces at once.

ARP

The Address Resolution Protocol (ARP) enables a TCP/IP computer to convert IP addresses to the hardware addresses that data-link layer protocols need to transmit frames. IP uses ARP to discover the hardware address to which each of its datagrams will be transmitted. To minimize the amount of network traffic ARP generates, the computer stores the resolved hardware addresses in a cache in system memory. The information remains in the cache for a short period of time (usually between 2 and 10 minutes), in case the computer has additional packets to send to the same address.

Windows systems include a command-line utility called ARP.EXE that you can use to manipulate the contents of the ARP cache. For example, you can use ARP.EXE to add the hardware addresses of computers you contact frequently to the cache, thus saving time and network traffic during the connection process. Addresses that you add to the cache manually are static, meaning that they are not deleted after the usual expiration period. The cache is stored in memory only, however, so it is erased when you reboot the computer. If you want to preload the cache whenever you boot your system, you can create a batch file containing ARP.EXE commands and execute it from the Windows Startup group.

ARP.EXE uses the following syntax:
ARP [-a {ipaddress}] [-n ipaddress] [-s ipaddress hwaddress {interface}] [-d ipaddress {interface}]

-a {ipaddress} This parameter displays the contents of the ARP cache. The optional ipaddress variable specifies the address of a particular cache entry to be displayed.
-n ipaddress This parameter displays the contents of the ARP cache, where ipaddress identifies the network interface for which you want to display the cache.
-s ipaddress hwaddress {interface} This parameter adds a new entry to the ARP cache, where the ipaddress variable contains the IP address of the computer, the hwaddress variable contains the hardware address of the same computer, and the interface variable contains the IP address of the network interface in the local system for which you want to modify the cache.
-d ipaddress {interface} This parameter deletes the entry in the ARP cache that is associated with the computer represented by the ipaddress variable. The optional interface variable specifies the cache from which the entry should be deleted.
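The caching behavior described above, including the difference between expiring dynamic entries and non-expiring static ones, can be modeled roughly as follows. The timeout and the addresses are illustrative only.

```python
import time

class ArpCache:
    """Rough model of an ARP cache: dynamic entries expire after a
    timeout, while static entries persist until the cache is cleared."""

    def __init__(self, timeout=120.0):
        self.timeout = timeout
        self.entries = {}   # ip -> (hw_address, added_at, is_static)

    def add(self, ip, hw, static=False):
        self.entries[ip] = (hw, time.monotonic(), static)

    def lookup(self, ip):
        entry = self.entries.get(ip)
        if entry is None:
            return None          # cache miss: an ARP request must go out
        hw, added, static = entry
        if not static and time.monotonic() - added > self.timeout:
            del self.entries[ip] # dynamic entry has expired
            return None
        return hw

cache = ArpCache()
cache.add("192.168.2.10", "00-50-56-c0-00-08")               # dynamic
cache.add("192.168.2.1", "00-a0-c9-14-c8-29", static=True)   # static
print(cache.lookup("192.168.2.10"))
```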

Netstat

Netstat is a command-line program that displays information about the current network connections of a computer running TCP/IP and about the traffic generated by the various TCP/IP protocols. On UNIX computers, the program is simply called netstat, and on Windows computers, it's called NETSTAT.EXE. The command-line parameters differ for the various implementations of Netstat, but the information they display is roughly the same. The syntax for the Windows version of NETSTAT.EXE is as follows:

NETSTAT [interval] [-a] [-p protocol] [-n] [-e] [-r] [-s]

interval Refreshes the display every interval seconds until the user aborts the command.
-a Displays the current network connections and the ports that are currently listening for incoming network connections.
-p protocol Displays the currently active connections for the protocol specified by the protocol variable.
-n When combined with other parameters, causes the program to identify computers using IP addresses instead of names.
-e Displays incoming and outgoing traffic statistics for the network interface, broken down into bytes, unicast packets, nonunicast packets, discards, errors and unknown protocols.
-r Displays the routing table plus the current active connections.
-s Displays detailed network traffic statistics for the IP, ICMP, TCP and UDP protocols.

NBTSTAT.EXE

NBTSTAT.EXE is a Windows command-line program that displays information about the NetBIOS over TCP/IP connections that Windows uses when communicating with other Windows computers on the TCP/IP LAN. The syntax for NBTSTAT.EXE is as follows:

NBTSTAT [-a name] [-A ipaddress] [-c] [-n] [-r] [-R] [-s] [-S] [-RR]

-a name Displays the NetBIOS names registered on the computer identified by the name variable.
-A ipaddress Displays the NetBIOS names registered on the computer identified by the ipaddress variable.
-c Displays the contents of the local computer's NetBIOS name cache.
-n Displays the NetBIOS names registered on the local computer.
-r Displays the number of NetBIOS names registered and resolved by the local computer, using both broadcasts and WINS.
-R Purges the local computer's NetBIOS name cache of all entries and reloads the LMHOSTS file.
-s Displays a list of the computer's currently active NetBIOS settings (identifying remote computers by name), their current status, and the amount of data transmitted to and received from each system.
-S Displays a list of the computer's currently active NetBIOS settings (identifying remote computers by IP address), their current status, and the amount of data transmitted to and received from each system.
-RR Sends name release requests to the WINS server and then starts a refresh, reregistering the local computer's NetBIOS names.

Nslookup

Nslookup (on UNIX systems) and NSLOOKUP.EXE (in Windows 2000 and NT) are command-line utilities that enable you to generate DNS request messages and transmit them to specific DNS servers on the network. The basic syntax of NSLOOKUP.EXE is as follows:

NSLOOKUP DNSname DNSserver

DNSname Specifies the DNS name that you want to resolve.
DNSserver Specifies the DNS name or IP address of the DNS server that you want to query for the name specified in the DNSname variable.

The advantage of Nslookup is that you can test the functionality and the quality of the information on a specific DNS server by specifying it on the command line. By running Nslookup with no command-line parameters, you can use the program in interactive mode, which lets you employ some of its many options.

Telnet

The Telecommunications Network Protocol (Telnet) is a command-line client/server program that essentially provides remote control capabilities for computers on a network. A user on one computer can run a Telnet client program and connect to the Telnet server on another computer. Once connected, that user can execute commands on the other system and view the results. It's important to distinguish this type of remote control access from simple access to the remote file system. When you use a Telnet connection to execute a program on a remote computer, the program actually runs on the remote computer. By contrast, if you use Windows to connect to a shared drive on another computer and execute a program, the program runs on your computer. Telnet was originally designed for use on UNIX systems and it is still an extremely important tool for UNIX network administrators. The various Windows operating systems all include a Telnet client. Windows 2000 has a strictly command-line client, but Windows NT, Me, 95 and 98 have a semigraphical client that still provides command-line access to servers. Only Windows 2000 and later versions have a Telnet server because Windows is primarily a graphical operating system and there isn't as much that you can do on a Windows server when you are connected to it with a character-based client like Telnet.

FTP

The File Transfer Protocol (FTP) is similar to Telnet, but it is designed for performing file transfers instead of executing remote commands. FTP includes basic file management commands that can create and remove directories, rename and delete files and manage access permissions. FTP has become a mainstay of Internet communications in recent years, but it also performs a vital role in communications between UNIX computers, all of which have both FTP client and server capabilities. All Windows computers have a character-based FTP client, but FTP server capabilities are built into the Internet Information Service (IIS) application that is included with Windows 2000 Server and Windows NT Server products. Generally speaking, Windows computers don't need FTP for communications on a LAN because they can access the shared files on other computers directly. On many UNIX networks, however, FTP is an important tool for transferring files to and from remote computers.


DHCP

The core protocols that TCP/IP uses to provide communication between computers (IP, TCP, and UDP) rely on several other services to perform their functions. Some of these services take the form of independent protocols, such as the Address Resolution Protocol (ARP), which runs on every TCP/IP computer and enables IP to discover the hardware address of a computer using a particular IP address. Other services, such as the Dynamic Host Configuration Protocol (DHCP) and the Domain Name System (DNS), are both protocols and applications that run on their own servers.


DHCP origins

Over the years, the developers of the TCP/IP protocols have worked out several solutions that address the problem of configuring the TCP/IP settings for large numbers of workstations. The first of these was the Reverse Address Resolution Protocol (RARP), which was designed for diskless workstations that had no means of permanently storing their TCP/IP settings. RARP is essentially the opposite of ARP. Whereas ARP broadcasts an IP address in an effort to discover its equivalent hardware address, RARP broadcasts the hardware address. An RARP server then responds by transmitting the IP address assigned to that client computer. RARP was suitable for use with diskless workstations on early TCP/IP networks, but it isn't sufficient for today's needs because it supplies the computer with only an IP address. It provides none of the other settings needed by a typical workstation today, such as a subnet mask and a default gateway.

The next attempt at an automatic TCP/IP configuration mechanism was called the Bootstrap Protocol (BOOTP). BOOTP does more than RARP, which is why it is still used today, whereas RARP is not. BOOTP enables a TCP/IP workstation to retrieve settings for all of the configuration parameters it needs to run, including an IP address, subnet mask, default gateway and DNS server addresses. A workstation can also download an executable boot file from a BOOTP server, using the Trivial File Transfer Protocol (TFTP), which makes it clear that BOOTP, like RARP, was designed for diskless workstations. The drawback of BOOTP is that although it is capable of performing all the TCP/IP client communication tasks required by today's computers, an administrator must still specify the settings for each workstation on the BOOTP server. There is no mechanism for automatically assigning a unique IP address to each computer, nor is there any means of preventing two computers from receiving the same IP address due to administrator error.

The Internet Engineering Task Force (IETF) developed DHCP for the express purpose of addressing the shortcomings of RARP and BOOTP. DHCP is based on BOOTP to a great extent, but instead of simply feeding predetermined configuration parameters to TCP/IP clients, DHCP can dynamically allocate IP addresses from a pool and reclaim them when they are no longer in use. This prevents workstations from being assigned duplicate IP addresses and enables administrators to move computers around between subnets without manually reconfiguring them. In addition, DHCP can deliver a wide range of configuration parameters to TCP/IP clients, including platform-specific parameters added by third-party developers.

DHCP architecture

DHCP consists of three components: a client, a server, and the protocol that they use to communicate with each other. Most TCP/IP implementations these days have DHCP integrated into the networking client, even if the operating system doesn't specifically refer to it as such. On a Microsoft Windows 2000 system, for example, in the Internet Protocol (TCP/IP) Properties dialog box, when you select Obtain An IP Address Automatically, you are actually activating the DHCP client. The DHCP server is an application that runs on a computer and exists to service requests from DHCP clients. The Windows 2000 Server and Microsoft Windows NT Server operating systems both include the DHCP server application, but there are many other implementations available for other platforms as well. DHCP is widely used on UNIX, Novell NetWare and Microsoft networks. Any DHCP client can retrieve configuration settings from a DHCP server running on any platform, because DHCP is based on the public BOOTP standards and is published as an open TCP/IP standard.

The core function of DHCP is to assign IP addresses. This is the most complicated part of the service, because the IP address must be unique for each client computer. The DHCP standard defines three types of IP address allocation, as follows:

Manual allocation.
An administrator assigns a specific IP address to a computer in the DHCP server and the server provides that address to the computer when it is requested.

Automatic allocation.
The DHCP server supplies clients with IP addresses taken from a common pool of addresses, and the clients retain the assigned addresses permanently.

Dynamic allocation.
The DHCP server supplies IP addresses to clients from a pool on a leased basis. The client must periodically renew the lease or the address returns to the pool for reallocation.

Manual allocation is the functional equivalent of BOOTP address assignment. This option saves the least administrative labor, but it is necessary for systems that require permanently assigned IP addresses, such as Internet servers that have DNS names associated with specific addresses. Administrators could conceivably configure the TCP/IP clients of these computers directly, but using the DHCP server for the assignment prevents IP addresses from being accidentally duplicated.


Automatic allocation is a fitting solution for networks on which administrators rarely move workstations between subnets. Assigning IP addresses from a pool (called a scope) eliminates the need to furnish a specific address for each computer and prevents address duplication. Permanently assigning those addresses minimizes the network traffic generated by DHCP client/server communications.

Once the server is configured, dynamic allocation completely automates the TCP/IP client configuration process, enabling administrators to add, remove and relocate computers as needed. When a computer boots, the server leases an address to the computer for a given period of time, renews the lease if the computer remains active, reclaims the address when it is no longer in use and returns the address to the pool.
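Dynamic allocation can be sketched as a simple lease pool. The scope and the client identifiers below are hypothetical, and a real DHCP server also tracks lease durations and renewal timers rather than expiring leases on demand.

```python
class LeasePool:
    """Sketch of dynamic allocation: addresses are leased out of a
    scope and returned to it when a lease lapses without renewal."""

    def __init__(self, scope):
        self.free = list(scope)   # addresses available for lease
        self.leases = {}          # client id -> leased address

    def allocate(self, client):
        if client in self.leases:         # renewing an existing lease
            return self.leases[client]
        address = self.free.pop(0)        # lease the next free address
        self.leases[client] = address
        return address

    def expire(self, client):
        # Lease not renewed: reclaim the address for reallocation.
        self.free.append(self.leases.pop(client))

pool = LeasePool(["192.168.2.%d" % n for n in range(10, 13)])
print(pool.allocate("aa-bb-cc-dd-ee-01"))   # 192.168.2.10
pool.expire("aa-bb-cc-dd-ee-01")
print(pool.allocate("aa-bb-cc-dd-ee-02"))   # 192.168.2.11
```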

DHCP message format

Communications between DHCP clients and servers use a single message format. All DHCP messages are carried within UDP datagrams, using the well-known port numbers 67 at the server and 68 at the client, as established by the Internet Assigned Numbers Authority (IANA).
The functions of the fields in the DHCP message are as follows:

op (1 byte). Specifies whether the message originated at a client or a server.
htype (1 byte). Specifies the type of hardware address in the chaddr field.
hlen (1 byte). Specifies the length of the hardware address in the chaddr field, in bytes.
hops (1 byte). Specifies the number of routers in the path between the client and the server.
xid (4 bytes). Contains a transaction identifier used to associate requests and replies.
secs (2 bytes). Specifies the elapsed time (in seconds) since the beginning of an address allocation or lease renewal process.
flags (2 bytes). Indicates whether or not DHCP servers and relay agents should use broadcast transmissions to communicate with a client instead of unicast transmissions.
ciaddr (4 bytes). Contains the client computer's IP address when it is in the bound, renewal, or rebinding state.
yiaddr (4 bytes). Contains the IP address being offered to a client by a server.
siaddr (4 bytes). Specifies the IP address of the next server in a bootstrap sequence; used only when the DHCP server supplies an executable boot file to a diskless workstation.
giaddr (4 bytes). Contains the IP address of a DHCP relay agent located on a different network, when necessary.
chaddr (16 bytes). Contains the hardware address of the client system, using the type and length specified in the htype and hlen fields.
sname (64 bytes). Contains either the host name of the DHCP server or overflow data from the options field.
file (128 bytes). Contains the name and path of an executable boot file for diskless workstations.
options (variable). Contains a series of DHCP options, which specify the configuration parameters for the client computer.
The options field is where the DHCP message carries all of the TCP/IP parameters assigned to a client, except for the IP address. Each option consists of three subfields.
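The fixed-format fields listed above can be packed and unpacked with Python's struct module. The sketch below is illustrative only; the transaction ID and hardware address are made-up sample values.

```python
import struct

# Fixed-format portion of a DHCP message, per the field list above:
# op, htype, hlen, hops (1 byte each), xid (4 bytes), secs, flags
# (2 bytes each), ciaddr, yiaddr, siaddr, giaddr (4 bytes each),
# chaddr (16 bytes), sname (64 bytes), file (128 bytes) = 236 bytes.
DHCP_HEADER = struct.Struct('!BBBB I HH 4s 4s 4s 4s 16s 64s 128s')

def parse_dhcp_header(data):
    """Unpack the 236-byte fixed header into a dictionary of fields."""
    fields = DHCP_HEADER.unpack(data[:DHCP_HEADER.size])
    names = ('op', 'htype', 'hlen', 'hops', 'xid', 'secs', 'flags',
             'ciaddr', 'yiaddr', 'siaddr', 'giaddr', 'chaddr',
             'sname', 'file')
    return dict(zip(names, fields))

# Build a minimal client message: op=1 (client), htype=1 (Ethernet),
# hlen=6, with a sample transaction ID and a sample hardware address.
msg = DHCP_HEADER.pack(1, 1, 6, 0, 0x3903f326, 0, 0x8000,
                       b'\x00' * 4, b'\x00' * 4, b'\x00' * 4, b'\x00' * 4,
                       b'\x52\x54\x00\x12\x34\x56'.ljust(16, b'\x00'),
                       b'\x00' * 64, b'\x00' * 128)
header = parse_dhcp_header(msg)
print(header['op'], header['hlen'], hex(header['xid']))  # 1 6 0x3903f326
```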


The functions of the option subfields are as follows:

Code (1 byte). Specifies the function of the option.
Length (1 byte). Specifies the length of the data field.
Data (variable). Contains information specific to the option type.
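The Code/Length/Data layout of the subfields above can be walked as a simple loop. This is a sketch of the parsing idea only (real DHCP messages also begin the options field with a 4-byte "magic cookie," which is omitted here for brevity), and the sample option bytes are hypothetical.

```python
def parse_dhcp_options(data):
    """Walk the options field as Code/Length/Data triplets.
    Code 0 is a pad byte and code 255 ends the list; neither
    carries a Length subfield."""
    options = {}
    i = 0
    while i < len(data):
        code = data[i]
        if code == 0:          # pad byte
            i += 1
            continue
        if code == 255:        # end-of-options marker
            break
        length = data[i + 1]
        options[code] = data[i + 2:i + 2 + length]
        i += 2 + length
    return options

# Option 53 (DHCP Message Type) with value 1 (DHCPDISCOVER),
# followed by the end-of-options marker.
opts = parse_dhcp_options(bytes([53, 1, 1, 255]))
print(opts)  # {53: b'\x01'}
```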
Although it sounds like a contradiction in terms, there is one DHCP option that is required. This is the DHCP Message Type option, which contains a code that specifies the function of each message. There are eight possible values for this option, as follows:


1—DHCPDISCOVER. Used by clients to request configuration parameters from a DHCP server.
2—DHCPOFFER. Used by servers to offer IP addresses to requesting clients.
3—DHCPREQUEST. Used by clients to accept or renew an IP address assignment.
4—DHCPDECLINE. Used by clients to reject an offered IP address.
5—DHCPACK. Used by servers to acknowledge a client's acceptance of an offered IP address.
6—DHCPNAK. Used by servers to reject a client's acceptance of an offered IP address.
7—DHCPRELEASE. Used by clients to terminate an IP address lease.
8—DHCPINFORM. Used by clients to obtain additional TCP/IP configuration parameters from a server.


DHCP communications

DHCP clients initiate communication with servers when they boot for the first time. The client generates a series of DHCPDISCOVER messages, which it transmits as broadcasts. At this point, the client has no IP address and is said to be in the init state. Like all broadcasts, these transmissions are limited to the client's local network, but administrators can install a DHCP Relay Agent service on a computer on the local area network (LAN), which relays the messages to DHCP servers on other networks. This enables a single DHCP server to service clients on multiple LANs. When a DHCP server receives a DHCPDISCOVER message from a client, it generates a DHCPOFFER message containing an IP address and whatever other optional parameters the server is configured to supply. In most cases, the server transmits this as a unicast message directly to the client. Because the client broadcasts its DHCPDISCOVER messages, it may receive DHCPOFFER responses from multiple servers. After a specified period of time, the client stops broadcasting and accepts one of the offered IP addresses. To signal its acceptance, the client generates a DHCPREQUEST message containing the address of the server from which it is accepting the offer along with the offered IP address. Because the client has not yet configured itself with the offered parameters, it transmits the DHCPREQUEST message as a broadcast. This broadcast notifies the server that the client is accepting the offered address and also notifies the other servers on the network that the client is rejecting their offers.


On receipt of the DHCPREQUEST message, the server commits the offered IP address and other settings to its database using a combination of the client's hardware address and the offered IP address as a unique identifier for the assignment. This is known as the lease identification cookie. To conclude its part of the transaction, the server sends a DHCPACK message to the client, acknowledging the completion of the process. If the server cannot complete the assignment (because it has already assigned the offered IP address to another system, for example), it transmits a DHCPNAK message to the client and the whole process begins again. As a final test, the client performs an ARP test to ensure that no other system on the network is using the assigned IP address. If no response is received, the DHCP transaction is completed and the client enters what is known as the bound state. If another system does respond, the client can't use the IP address and transmits a DHCPDECLINE message to the server, nullifying the transaction. The client can then reissue a series of DHCPDISCOVER messages, restarting the whole process.
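The discover/offer/request/acknowledge exchange described above can be sketched as a toy in-memory simulation. This is purely illustrative (no sockets or real networks); the DhcpServer class, the address pool, and the hardware address are all hypothetical.

```python
# Toy simulation of the DHCPDISCOVER/DHCPOFFER/DHCPREQUEST/DHCPACK
# exchange described above; no real network traffic is involved.

class DhcpServer:
    def __init__(self, pool):
        self.pool = list(pool)   # addresses still available for offer
        self.leases = {}         # committed assignments

    def on_discover(self, client_hw):
        # DHCPOFFER: offer the next free address from the pool.
        return self.pool[0] if self.pool else None

    def on_request(self, client_hw, offered_ip):
        if offered_ip not in self.pool:
            return 'DHCPNAK'     # address no longer available
        self.pool.remove(offered_ip)
        # The client's hardware address plus the offered IP address
        # together form the lease identification cookie.
        self.leases[(client_hw, offered_ip)] = 'bound'
        return 'DHCPACK'

server = DhcpServer(pool=['192.168.1.50', '192.168.1.51'])
hw = '52:54:00:12:34:56'

offer = server.on_discover(hw)        # DHCPDISCOVER -> DHCPOFFER
reply = server.on_request(hw, offer)  # DHCPREQUEST -> DHCPACK
print(offer, reply)                   # 192.168.1.50 DHCPACK
```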

DHCP leasing

The process by which a DHCP server assigns configuration parameters to a client is the same whether the server uses manual, automatic, or dynamic allocation. With manual and automatic allocation, this process is the end of the DHCP client/server communications. The client retains the settings assigned to it by the server until someone explicitly changes them or forces a reassignment. However, when the server dynamically allocates settings, the client leases its IP address for a certain period of time (configured at the server) and must renew the lease to continue using it. The length of an IP address lease is typically measured in days and is generally based on whether computers are frequently moved around the network or whether IP addresses are in short supply. Shorter leases generate more network traffic but enable servers to reclaim unused addresses faster. For a relatively stable network, longer leases reduce the amount of traffic that DHCP generates. The lease renewal process begins when a bound client reaches what is known as the renewal time value, or T1 value, of its lease. By default, the renewal time value is 50 percent of the lease period. When a client reaches this point, it enters the renewing state and begins generating DHCPREQUEST messages. The client transmits the messages to the server that holds the lease as unicasts, unlike the broadcast DHCPREQUEST messages the client generates while in the init state. If the server is available to receive the message, it responds with either a DHCPACK message, which renews the lease and restarts the lease time clock, or a DHCPNAK message, which terminates the lease and forces the client to begin the address assignment process again from the beginning.

If the server does not respond to the DHCPREQUEST unicast message, the client continues to send them until it reaches the rebinding time value or T2 value, which defaults to 87.5 percent of the lease period. At this point, the client enters the rebinding state and begins transmitting DHCPREQUEST messages as broadcasts, soliciting an address assignment from any DHCP server on the network. Again, a server can respond with either a DHCPACK or DHCPNAK message. If the lease time expires with no response from any DHCP server, the client's IP address is released and all of its TCP/IP communication ceases, except for the transmission of DHCPDISCOVER broadcasts.
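The default T1 and T2 thresholds described above are simple fractions of the lease period, which makes the timing easy to work out. The 8-day lease length below is just a sample value for illustration.

```python
def lease_timers(lease_seconds):
    """Default renewal (T1) and rebinding (T2) times, as described
    above: 50 percent and 87.5 percent of the lease period."""
    t1 = lease_seconds * 0.5
    t2 = lease_seconds * 0.875
    return t1, t2

# A sample 8-day lease, expressed in seconds.
lease = 8 * 24 * 60 * 60
t1, t2 = lease_timers(lease)
print(t1 / 3600, t2 / 3600)  # 96.0 168.0 (hours until T1 and T2)
```

With these defaults, a client on an 8-day lease starts unicasting renewal requests after 4 days and falls back to broadcasting after 7 days.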

Releasing an IP address

It is also possible for a client to terminate an IP address lease at any time by transmitting a DHCPRELEASE message containing the lease identification cookie to the server. On a Microsoft Windows system, for example, you can do this manually, using the IPCONFIG.EXE utility in Windows 2000 and NT or the WINIPCFG.EXE utility in Windows Me, 98 or 95.

September 6, 2005

Routing

Routers operate at the network layer of the OSI reference model, so they can connect networks running different data-link layer protocols and different network media. On a small internetwork, a router's job can be quite simple. When you have LANs connected by one router, for example, the router simply receives packets from one network and forwards only those destined for the other network. On a large internetwork, however, routers connect several different networks together, and in many cases, networks have more than one router connected to them. This enables packets to take different paths to a destination. If one router on the network fails, packets can bypass it and still reach their destinations. In a complex internetwork, an important part of a router's job is to select the most efficient route to a packet's destination. Usually, this is the path that enables a packet to reach the destination with the fewest number of hops (by passing through the smallest number of routers). Routers share information about the networks to which they are attached with other routers. As a result, a composite picture of the internetwork eventually develops, but on a large internetwork such as the Internet, no single router possesses the entire image. Instead, the routers work together by passing each packet from router to router, one hop at a time.

A router can be a stand-alone hardware device or a regular computer. Operating systems like Microsoft Windows 2000, NT and Novell NetWare have the ability to route IP traffic, so creating a router out of a computer running one of these operating systems is simply a matter of installing two network interface adapters, connecting the computer to two different networks, and configuring it to route traffic between those networks. In TCP/IP parlance, a computer with two or more network interfaces is called a multihomed system. Microsoft Windows 95, 98 and Me on their own can't route IP traffic between two network interface adapters, but you can use systems running these operating systems as dial-in servers that enable you to access a network from a remote location using the NetBIOS Enhanced User Interface (NetBEUI) or Internetwork Packet Exchange (IPX) protocols. Windows 98 Second Edition and Windows Me also include an Internet Connection Sharing (ICS) feature, which enables other computers on the LAN to access the Internet through one computer's dial-up connection to an Internet Service Provider (ISP). There are also third-party software products that provide internet connection sharing. In essence, these products are software routers that enable your computer to forward packets between the local network and the network run by your ISP. Using these products, all of the computers on a LAN, such as one installed in a home or a small business, can share a single computer's connection to the Internet, whether it uses a dial-up modem, cable modem or other type of connection. When you use a computer as an IP router, each of the network interface adapters must have its own IP address appropriate for the network to which it is attached. When one of the two networks is an ISP connection, the ISP's server typically supplies the address for that interface. The other IP address is the one that you assign to your network interface adapter when you install it. 
A stand-alone router is a hardware device that is essentially a special-purpose computer. The unit has multiple built-in network interface adapters, a processor and memory in which it stores its routing information and temporary packet buffers.

Routing tables

The routing table is the heart of any router; without it, all that's left is mechanics. The routing table holds the information that the router uses to forward packets to the proper destinations. However, routers are not the only systems with routing tables; every TCP/IP system has one, which it uses to determine where to send its packets. Routing is essentially the process of determining what data-link layer protocol address the system should use to reach a particular IP address. If a system wants to transmit a packet to a computer on the local network, for example, the routing table instructs it to address the packet directly to that system. This is called a direct route. In this case, the Destination IP Address field in the IP header and the Destination Address field in the data-link layer protocol header refer to the same computer. If a packet's destination is on another network, the routing table contains the address of the router that the system should use to reach that destination. In this case, the Destination IP Address and Destination Address fields specify different systems because the data-link layer address has to refer to a system on the local network and for the packet to reach a computer on a different network, that local system must be a router. Because the two addresses refer to different systems, this is called an indirect route.
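The direct-versus-indirect decision comes down to whether the destination shares the sender's network identifier, which Python's ipaddress module can check directly. The addresses below are arbitrary sample values.

```python
import ipaddress

def route_type(source_ip, netmask, destination_ip):
    """Decide between a direct and an indirect route: if the
    destination falls within the source's network, the packet can be
    addressed directly to it; otherwise the frame must be addressed
    to a router on the local network."""
    network = ipaddress.ip_network(f'{source_ip}/{netmask}', strict=False)
    if ipaddress.ip_address(destination_ip) in network:
        return 'direct'
    return 'indirect'

print(route_type('192.168.1.44', '255.255.255.0', '192.168.1.80'))  # direct
print(route_type('192.168.1.44', '255.255.255.0', '10.0.0.5'))      # indirect
```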

Routing table format

A routing table is essentially a list of networks (and possibly hosts) and addresses of routers that the system can use to reach them. The arrangement of the information in the routing table can differ depending on the operating system. The functions of the various columns in the table for a Windows 2000 system are as follows:

Network Address
This column specifies the address of the network or host for which routing information is provided in the other columns.

Netmask
This column specifies the subnet mask for the value in the Network Address column. As with any subnet mask, the system uses the Netmask value to determine which parts of the Network Address value are the network identifier, the subnet identifier (if any) and the host identifier.

Gateway Address
This column specifies the address of the router that the system should use to send datagrams to the network or host identified in the Network Address column. On a LAN, the hardware address for the system identified by the Gateway Address value will become the Destination Address value in the packet's data-link layer protocol header.

Interface
This column specifies the address of the network interface adapter that the computer should use to transmit packets to the system identified in the Gateway Address column.

Metric
This column contains a value that enables the system to compare the relative efficiency of routes to the same destination.

The IP protocol selects a route using the following procedure:

1. After packaging the transport layer information into a datagram, IP compares the Destination IP Address for the packet with the routing table, looking for a host address with the same value. A host address entry in the table has a full IP address in the Network Address column and the value 255.255.255.255 in the Netmask column.

2. If there is no host address entry that exactly matches the Destination IP Address value, the system then scans the routing table's Network Address and Netmask columns for an entry that matches the address's network and subnet identifiers. If there is more than one entry in the routing table that contains the desired network and subnet identifiers, IP uses the entry with the lower value in the Metric column.

3. If there are no table entries that match the network and subnet identifiers of the Destination IP Address value, the system searches for a default gateway entry that has a value of 0.0.0.0 in the Network Address and Netmask columns.

4. If there is no default gateway entry, the system generates an error message. If the system transmitting the datagram is a router, it transmits an Internet Control Message Protocol (ICMP) Destination Unreachable message back to the end system that originated the datagram. If the system transmitting the datagram is itself an end system, the error message gets passed back up to the application that generated the data.


5. When the system locates a viable routing table entry, IP prepares to transmit the datagram to the router identified in the Gateway Address column. The system consults the Address Resolution Protocol (ARP) cache or performs an ARP procedure to obtain the hardware address of the router.

6. Once it has the router's hardware address, IP passes it and the datagram down to the data-link layer protocol associated with the address specified in the Interface column. The data-link layer protocol constructs a frame using the router's hardware address in its Destination Address field and transmits it out over the designated interface.
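The selection procedure above can be sketched in a few lines: host entries match first, then network entries (with the metric as a tiebreak between equally specific routes), and finally the 0.0.0.0 default gateway entry. The miniature routing table below uses made-up addresses purely for illustration.

```python
import ipaddress

# A miniature routing table in the column format described above:
# (Network Address, Netmask, Gateway Address, Interface, Metric).
table = [
    ('192.168.1.99', '255.255.255.255', '192.168.1.99', '192.168.1.44', 1),
    ('192.168.1.0',  '255.255.255.0',   '192.168.1.44', '192.168.1.44', 1),
    ('0.0.0.0',      '0.0.0.0',         '192.168.1.1',  '192.168.1.44', 1),
]

def select_route(destination, table):
    """Follow the procedure above: a host entry (255.255.255.255 mask)
    wins over a network entry, the lowest metric breaks ties, and the
    0.0.0.0 default gateway entry is used only as a last resort."""
    dest = ipaddress.ip_address(destination)
    matches = []
    for network, netmask, gateway, interface, metric in table:
        net = ipaddress.ip_network(f'{network}/{netmask}', strict=False)
        if dest in net:
            # Sorting by negative prefix length puts host routes ahead
            # of network routes, and the default route last.
            matches.append((-net.prefixlen, metric, gateway, interface))
    if not matches:
        return None   # no route: triggers an error / ICMP message
    _, _, gateway, interface = min(matches)
    return gateway, interface

print(select_route('192.168.1.80', table))  # uses the network entry
print(select_route('10.1.2.3', table))      # falls back to the default
```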

Static and dynamic routing

There are two techniques for updating the routing table. Static routing is the process by which a network administrator manually creates routing table entries using a program designed for this purpose. Dynamic routing is the process by which routing table entries are automatically created by specialized routing protocols that run on the router systems. Two examples of these protocols are the Routing Information Protocol (RIP) and the Open Shortest Path First (OSPF) protocol. Routers use these protocols to exchange messages containing routing information with other nearby routers. Each router is, in essence, sharing its routing table with other routers. It should be obvious that, although static routing can be an effective routing solution on a small internetwork, it isn't a suitable solution for a large installation. If you have a network with a configuration that never changes, or one in which there is only one possible route to each destination, running a routing protocol is a waste of energy and bandwidth. The advantage of dynamic routing, in addition to reducing the network administrator's workload, is that it automatically compensates for changes in the network infrastructure. If a particular router goes down, for example, its failure to communicate with the other routers nearby means that it will eventually be deleted from their routing tables and packets will take different routes to their destinations. If and when that router comes back online, it resumes communications with the other routers and is again added to their tables.

Creating a Static Route

Creating static routes is a matter of using a utility supplied with the TCP/IP protocol to create (or delete) entries in the routing table. In most cases, the utility runs from the command line. UNIX systems use a program called route, and the Windows operating systems use a similar program called ROUTE.EXE. Both of these utilities use roughly the same syntax. The samples that follow are for the ROUTE.EXE program of Windows 2000. Stand-alone routers run their own proprietary software that uses a command set created by the manufacturer.
The syntax for ROUTE.EXE is as follows:

ROUTE [-f] [-p] [command [destination] [MASK netmask] [gateway] [METRIC metric] [IF interface]]

-f
This parameter deletes all of the entries from the routing table. When used with the ADD command, it deletes the entire table before adding the new entry.

-p
When used with the ADD command, this parameter creates a persistent route entry in the table. A persistent route is one that remains in the table permanently, even after the system is restarted. When -p is used with the PRINT command, the system displays only persistent routes.

command
This variable contains a keyword that specifies the function of the command.

destination
This variable specifies the network or host address of the table entry being managed.

MASK netmask
The variable netmask specifies the subnet mask to be applied to the address specified by the destination variable.

gateway
This variable specifies the address of the router that the system should use to reach the host or network specified by the destination variable.

METRIC metric
The variable metric specifies a value that indicates the relative efficiency of the route in the table entry.

IF interface
The variable interface specifies the address of the network interface adapter that the system should use to reach the router specified by the gateway variable.

ROUTE.EXE's command variable takes one of four values, which are as follows:

PRINT. This value displays the contents of the routing table. When used with the -p parameter, it displays only the persistent routes in the routing table.
ADD. This value creates a new entry in the routing table.
DELETE. This value deletes an existing entry from the routing table.
CHANGE. This value modifies the parameters of an entry in the routing table.

The ROUTE PRINT command displays the current contents of the routing table. To delete an entry, you use the ROUTE DELETE command with a destination parameter to identify the entry you want to remove. To create a new entry in the table, you use the ROUTE ADD command with parameters that specify the values for the entry. The ROUTE CHANGE command works in the same way, except that it modifies the table entry specified by the destination variable. The destination variable is the address of the network or host for which you are providing routing information. The other parameters contain the subnet mask, gateway, interface, and metric information.


Routing and remote access

In addition to their normal routing capabilities, Windows 2000 Server and Windows NT Server 4.0 can use an additional service called the Routing and Remote Access Service (RRAS), which expands their routing capabilities. RRAS is provided with the Windows 2000 Server operating system and is available as free add-on for Windows NT Server 4.0. Among other things, Routing and Remote Access provides support for the RIP version 2 and OSPF routing protocols, ICMP router discovery, demand dialing, and the Point-to-Point Tunneling Protocol (PPTP) for virtual private network (VPN) connections. With Routing and Remote Access, you can view the server's routing table as well as those of other systems running the service, and you can create static routes using a standard Windows dialog box rather than the command line.

Dynamic routing

A router only has direct knowledge of the networks to which it is connected. When a network has two or more routers connected to it, dynamic routing enables each of the routers to know about the others and creates routing table entries that specify the networks to which the other routers are connected. For example, Router A can have direct knowledge of Router B from routing protocol broadcasts, because both are connected to the same network. Router B has knowledge of Router A for the same reason, but it also has knowledge of Router C, because Router C is on another network to which Router B is connected. Router A has no direct knowledge of Router C, because they are in different broadcast domains, but by using a dynamic routing protocol, Router B can share its knowledge of Router C with Router A, enabling A to add C to its routing table. By sharing the information in their routing tables using a routing protocol, routers obtain information about distant networks and can route packets more efficiently as a result.

There are many different routing protocols in the TCP/IP suite. On a private internetwork, a single routing protocol like RIP is usually sufficient to keep all of the routers updated with the latest network information. On the Internet, however, routers use various protocols, depending on their place in the network hierarchy. Routing protocols are generally divided into two categories: interior gateway protocols (IGPs) and exterior gateway protocols (EGPs). On the Internet, a collection of networks that fall within the same administrative domain is called an autonomous system (AS). The routers within an autonomous system all communicate using an IGP selected by the administrators, and EGPs are used for communications between autonomous systems.

RIP

The Routing Information Protocol (RIP) is the most commonly used IGP in the TCP/IP suite and on networks around the world. Originally designed for UNIX systems in the form of a daemon called routed (pronounced route-dee), RIP was eventually ported to many other platforms and standardized by the IETF in Request for Comments (RFC) 1058. Some years later, RIP was updated to version 2, which was published as RFC 2453. Most RIP exchanges are based on two message types, requests and replies, both of which are packaged in User Datagram Protocol (UDP) packets addressed to the IANA-assigned well-known port number 520. When a RIP router starts, it generates a RIP request and transmits it as a broadcast over all of its network interfaces. On receiving the broadcast, every other router on either network that supports RIP generates a reply message that contains its routing table information. A reply message can contain up to 25 routes, each of which is 20 bytes long. If the routing table contains more than 25 entries, the router generates multiple reply messages until it has transmitted the entire table. When it receives the replies, the router integrates the information in them into its own routing table.

The metric value included with each table entry determines the efficiency of the route based on the number of hops required to reach the destination. When routers receive routing table entries from other routers using RIP, they increment the value of the metric for each route to reflect the additional hop required to reach the destination. The maximum value for a metric in a RIP message is 15. Routing that uses metrics based on the number of hops to the destination is called distance vector routing. After their initial exchange of RIP messages, routers transmit updates every 30 seconds to ensure that all of the other routers on the networks to which they are connected have current information. If a RIP-supplied routing table entry is not refreshed every 3 minutes, the router assumes that the entry is no longer viable, increases its metric to 16 (an illegal value), and eventually removes it from the table completely. The frequent retransmission of routing data is the main reason that RIP is criticized. The protocol generates a large amount of redundant broadcast traffic. In addition, the message format does not support the inclusion of a subnet mask for each route. Instead, RIP applies the subnet mask of the interface over which it receives each route, which may not always be accurate. RIP version 2 is designed to address these problems. The primary difference between RIP 1 and RIP 2 is the format of the routes included in the reply messages. The RIP 2 message is no larger than that of RIP 1, but it utilizes the unused fields from RIP 1 to include additional information about each route. The functions of the RIP version 2 route fields are as follows:


Address Family Identifier (2 bytes). This field contains a code that identifies the protocol for which routing information is being provided. The code for IP is 2. (RIP supports other protocols besides IP).

Route Tag (2 bytes). This field contains an autonomous system number that enables RIP to communicate with exterior gateway protocols.

IP Address (4 bytes). This field specifies the address of the network or host for which routing information is being provided.

Subnet Mask (4 bytes). This field contains the subnet mask that the router should apply to the IP Address value.

Next Hop IP Address (4 bytes). This field specifies the address of the gateway that the router should use to forward traffic to the network or host specified in the IP Address field.

Metric (4 bytes). This field contains a value that specifies the relative efficiency of the route.
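The 20-byte route format listed above maps neatly onto a fixed struct layout. The sketch below packs and parses one such entry using sample address values chosen purely for illustration.

```python
import struct
import socket

# One RIP version 2 route entry, per the field list above: Address
# Family Identifier (2), Route Tag (2), IP Address (4), Subnet Mask (4),
# Next Hop IP Address (4), Metric (4) = 20 bytes.
RIP2_ENTRY = struct.Struct('!HH4s4s4sI')

def parse_rip2_entry(data):
    afi, tag, addr, mask, next_hop, metric = RIP2_ENTRY.unpack(data)
    return {
        'family': afi,                        # 2 = IP
        'route_tag': tag,
        'address': socket.inet_ntoa(addr),
        'netmask': socket.inet_ntoa(mask),
        'next_hop': socket.inet_ntoa(next_hop),
        'metric': metric,                     # 1-15 valid, 16 = unreachable
    }

# A sample route: network 192.168.5.0/24 reachable via 192.168.1.1,
# four hops away.
entry = RIP2_ENTRY.pack(2, 0, socket.inet_aton('192.168.5.0'),
                        socket.inet_aton('255.255.255.0'),
                        socket.inet_aton('192.168.1.1'), 4)
route = parse_rip2_entry(entry)
print(route['address'], route['netmask'], route['metric'])
```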

The other main difference between RIP version 1 and RIP version 2 is that the latter supports the use of multicast transmissions. A multicast address is a single address that represents a group of computers. By using a multicast address that represents all of the routers on the network instead of broadcasts, the amount of extraneous traffic processed by the other computers is greatly reduced.


OSPF

Judging routes by the number of hops required to reach a destination is not always very efficient. A hop can refer to anything from a Gigabit Ethernet connection to a dial-up line, so it is entirely possible for traffic moving over a route with a smaller number of hops to take longer than one with more hops. There is another type of routing called link-state routing that measures the actual properties of each connection and stores the information in a database that is shared among the routers on the network. The most common IGP that uses this method is the Open Shortest Path First (OSPF) protocol, as defined in RFC 2328. OSPF has many other advantages over RIP as well, including the ability to update routing tables more quickly when changes occur on the network (called convergence), the ability to balance the network load by splitting traffic between routes with equal metrics, and authentication of routing protocol messages.

September 5, 2005

IP addressing

An IP address is a 32-bit value that contains both a network identifier and a host identifier. The address is notated using four decimal numbers ranging from 0 to 255, separated by periods, as in 192.168.1.44. This is known as dotted decimal notation. Each of the four values is the decimal equivalent of an 8-bit binary value.

IP addresses represent network interface adapters, of which there can be more than one in a computer. A router, for example, has interfaces to at least two networks and must therefore have an IP address for each of those network interface adapters. Workstations typically have only a single LAN interface, but in some cases, they use a modem to connect to another network, such as the Internet. When this is the case, the modem interface has its own separate IP address (usually assigned by the server at the other end of the modem connection) in addition to that of the LAN connection. If other systems on the LAN access the Internet through that computer's modem, that system is actually functioning as a router.

IP Address Assignments

Unlike hardware addresses, which are hard-coded into network interface adapters at the factory, network administrators must assign IP addresses to the systems on their networks. It is essential for each network interface adapter to have its own unique IP address; when two systems have the same IP address, they cannot communicate with the network properly. IP addresses consist of a network identifier and a host identifier. All of the network interface adapters on a particular subnet have the same network identifier but different host identifiers. For systems that are on the Internet, the Internet Assigned Numbers Authority (IANA) assigns network identifiers to ensure that there is no address duplication on the Internet. When an organization registers its network, it is assigned a network identifier. It is then up to the network administrators to assign unique host identifiers to each of the systems on that network. This two-tiered system of administration is one of the basic organizational principles of the Internet. Domain names are assigned in the same way.

IP Address Classes

The most complicated aspect of an IP address is that the division between the network identifier and the host identifier is not always in the same place. A hardware address, for example, consists of 3 bytes assigned to the manufacturer of the network adapter and 3 bytes that the manufacturer itself assigns to each card. IP addresses can have various numbers of bits assigned to the network identifier, depending on the size of the network.

IP Address Classes and Parameters

Class  First Bits  First Byte Values  Network ID Bits  Host ID Bits  # Networks  # Hosts
A      0           1-127              8                24            126         16,777,214
B      10          128-191            16               16            16,384      65,534
C      110         192-223            24               8             2,097,152   254


The numbers for supported networks and hosts might appear low. An 8-bit binary number can have 256 possible values, for example, not 254. However, there are a few IP addressing rules that exclude some possible values:

All the bits in the network identifier cannot be set to zeros.
All the bits in the network identifier cannot be set to ones.
All the bits in the host identifier cannot be set to zeros.
All the bits in the host identifier cannot be set to ones.


The binary values of the first bits of each address class determine the possible decimal values for the first byte of the address. For example, because the first bit of Class A addresses must be 0, the binary values of the first byte range from 00000001 to 01111111, which in decimal form is 1 to 127. Thus, when you see an IP address in which the first byte is a number from 1 to 127, you know that this is a Class A address. In a Class A address, the network identifier is the first 8 bits and the host identifier is the remaining 24 bits. This means that there are only 126 possible Class A networks (network identifier 127 is reserved for diagnostic purposes), but each network can have up to 16,777,214 network interface adapters on it. Class B and Class C addresses devote more bits to the network identifier, which means that they support a greater number of networks, but at the cost of having fewer host identifier bits. This reduces the number of hosts on each network.
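Because the class is fully determined by the first byte, classifying an address is a matter of a few range checks, as sketched below with sample addresses.

```python
def address_class(address):
    """Classify an IP address by its first byte, per the ranges above."""
    first = int(address.split('.')[0])
    if 1 <= first <= 127:
        return 'A'
    if 128 <= first <= 191:
        return 'B'
    if 192 <= first <= 223:
        return 'C'
    return 'other'   # Class D (multicast) and Class E (experimental)

print(address_class('10.1.2.3'),
      address_class('131.24.67.98'),
      address_class('192.168.1.44'))  # A B C
```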

Subnet masking

It may at first seem odd that IP address classes are assigned in this way. After all, there aren't any private networks that have 16 million hosts on them, so it makes little sense even to have Class A addresses. However, it's possible to subdivide IP addresses even further by creating subnets on them. A subnet is simply a subdivision of a network address that can be used to represent one LAN on an internetwork or the network of one of an ISP's clients. Thus, a large ISP might have a Class A address registered to it, and it might farm out pieces of the address to its clients in the form of subnets. In many cases, a large ISP's clients are smaller ISPs, which in turn supply addresses to their own clients.

To understand the process of creating subnets, you must understand the function of the subnet mask. When you configure a TCP/IP system, you assign it an IP address and a subnet mask. Simply put, the subnet mask specifies which bits of the IP address are the network identifier and which bits are the host identifier. For a Class A address, for example, the correct subnet mask value is 255.0.0.0. When expressed as a binary number, a subnet mask's 1 bits indicate the network identifier, and its 0 bits indicate the host identifier. A mask of 255.0.0.0 in binary form is as follows: 11111111 00000000 00000000 00000000. Thus, this mask indicates that the first 8 bits of a Class A IP address are the network identifier bits and the remaining 24 bits are the host identifier.
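The masking operation is just a bitwise AND. A short Python sketch using the standard ipaddress module (the helper name is illustrative):

```python
import ipaddress

def split_address(ip, mask):
    # ANDing the address with the mask keeps the network identifier bits;
    # ANDing with the inverted mask keeps the host identifier bits.
    ip_int = int(ipaddress.IPv4Address(ip))
    mask_int = int(ipaddress.IPv4Address(mask))
    network = ipaddress.IPv4Address(ip_int & mask_int)
    host = ip_int & ~mask_int & 0xFFFFFFFF
    return str(network), host

print(split_address('10.25.3.7', '255.0.0.0'))  # ('10.0.0.0', 1639175)
```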

Subnet masks for IP Address Classes

Class Subnet mask
A 255.0.0.0
B 255.255.0.0
C 255.255.255.0


If all addresses of a particular class used the same number of bits for the network and host identifiers, there would be no need for a subnet mask. The value of the first byte of the address would indicate its class. However, you can create multiple subnets within a given address class by using a different mask. If, for example, you have a Class B address, using a subnet mask of 255.255.0.0 would allocate the first 16 bits for the network identifier and the last 16 bits for the host identifier. If you use a mask of 255.255.255.0, you allocate an additional 8 bits to the network identifier, which you are borrowing from the host identifier. The third byte of the address thus becomes a subnet identifier, as shown in Figure 8.6. You can create up to 254 subnets using that one Class B address, with up to 254 network interface adapters on each subnet. An IP address of 131.24.67.98 would therefore indicate that the network is using the Class B address 131.24.0.0, and that the interface is host number 98 on subnet 67. A large corporate network might use this scheme to create a separate subnet for each of its LANs.
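The 131.24.67.98 example above can be decomposed directly from the byte values, since the mask boundaries fall between bytes:

```python
octets = [int(o) for o in '131.24.67.98'.split('.')]

# With a 255.255.255.0 mask on a Class B address, the first two bytes are
# the network identifier, the third byte is the subnet identifier, and the
# fourth byte is the host identifier.
network = f'{octets[0]}.{octets[1]}.0.0'
subnet_id = octets[2]
host_id = octets[3]

print(network, subnet_id, host_id)  # 131.24.0.0 67 98
```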

However, the boundary between the network identifier and the host identifier does not have to fall in between two bytes. An IP address can use any number of bits for its network address, and more complex subnet masks are required in this type of environment. Suppose, for example, you have a Class C network address of 199.24.65.0 that you want to subnet. There are already 24 bits devoted to the network address, and you obviously can't allocate the entire fourth byte as a subnet identifier, or there would be no bits left for the host identifier. You can, however, allocate part of the fourth byte. If you use 4 bits of the last byte for the subnet identifier, you have 4 bits left for your host identifier. To do this, the binary form of your subnet mask must appear as follows: 11111111 11111111 11111111 11110000.

The decimal equivalent of this binary value is 255.255.255.240 because 240 is the decimal equivalent of 11110000. This leaves you with a 4-bit subnet identifier and a 4-bit host identifier, which means that you can create up to 14 subnets with 14 hosts on each one. (Subnet identifiers have the same rules about not using all ones or all zeros as do network identifiers and host identifiers.) Figuring out the correct subnet mask for this type of configuration is relatively easy. Figuring out the IP addresses you must assign to your workstations is harder. To do this, you have to increment the 4 subnet bits separately from the 4 host bits. Once again, this is easier to understand when you look at the binary values. The 4-bit subnet identifier can have any one of the following 14 values: 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110.

Each one of these subnets can have up to 14 workstations, with each host identifier having one of the values from that same set of 14 values. Thus, to calculate the value of the IP address's fourth byte, you must combine the binary values of the subnet and host identifiers and convert them to decimal form. For example, the first host (0001) on the first subnet (0001) would have a fourth byte binary value of 00010001, which in decimal form is 17. Thus, the IP address for this system would be 199.24.65.17 and its subnet mask would be 255.255.255.240. The last host on the first subnet would use 1110 as its host identifier, making the value of the fourth byte 00011110 in binary form, or 30 in decimal form, for an IP address of 199.24.65.30. Then, to proceed to the second subnet, you increment the subnet identifier to 0010 and the host identifier back to 0001, for a binary value of 00100001, or 33 in decimal form. Therefore, the IP addresses you use on a network like this do not increment normally. You must compute them carefully to create the correct values.
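The fourth-byte arithmetic described above amounts to shifting the subnet identifier into the high 4 bits and OR-ing in the host identifier. A small sketch (the function name is illustrative):

```python
def fourth_byte(subnet_id, host_id):
    # With a 255.255.255.240 mask, the high 4 bits of the last byte hold
    # the subnet identifier and the low 4 bits hold the host identifier.
    return (subnet_id << 4) | host_id

print(fourth_byte(1, 1))    # 17 -> 199.24.65.17 (first host, first subnet)
print(fourth_byte(1, 14))   # 30 -> 199.24.65.30 (last host, first subnet)
print(fourth_byte(2, 1))    # 33 -> 199.24.65.33 (first host, second subnet)
```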

Registered and Unregistered addresses

Registered IP addresses are required for computers that are accessible from the Internet, but not for every computer that is connected to the Internet. For security reasons, networks typically use a firewall or some other technology to protect their systems from intrusion by outside computers. These firewalls use various techniques that provide workstations with access to Internet resources without making them accessible to other systems on the Internet.
These workstations typically use unregistered private IP addresses, which the network administrator can freely assign without obtaining them from an ISP or the IANA. There are special network addresses in each class that are intended for use on private networks and are not registered to anyone.


IP Addresses for Private Networks

Class Network address
A 10.0.0.0 through 10.255.255.255
B 172.16.0.0 through 172.31.255.255
C 192.168.0.0 through 192.168.255.255
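Python's standard ipaddress module can check these ranges directly. Note that is_private covers the three private blocks in the table above along with a few other reserved ranges (such as loopback):

```python
import ipaddress

# True for the private ranges shown in the table; False for a
# registered public address.
for addr in ('10.0.0.1', '172.16.5.5', '192.168.1.10', '8.8.8.8'):
    print(addr, ipaddress.ip_address(addr).is_private)
```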

IPv6 addressing

When IP was originally designed, no one could have predicted the growth that the Internet has experienced in recent years. The 32-bit address space allotted to IP, which once seemed so enormous, is now in danger of being depleted. To address this problem, work is proceeding on an upgrade to IP version 4 (the current version), known as IP version 6, or IPv6. In IPv6, the address space is increased from 32 to 128 bits, which is large enough to provide a minimum of 1,564 addresses for each square meter of the Earth's surface.
IPv6 addresses are notated as follows: XX:XX:XX:XX:XX:XX:XX:XX.
Each X is a hexadecimal representation of a 2-byte (16-bit) value, so some examples of IPv6 addresses would be as follows: FEDC:BA98:7654:3210:FEDC:BA98:7654:3210 and 1080:0:0:0:8:800:200C:417A.
Leading zeros can be omitted from individual values, and a single run of consecutive zero values can be replaced with the '::' symbol (but only once in an address). This means that the second address listed here can also be expressed as follows: 1080::8:800:200C:417A.
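The ipaddress module applies this zero-compression automatically, which makes it easy to check the equivalence of the two forms:

```python
import ipaddress

addr = ipaddress.IPv6Address('1080:0:0:0:8:800:200C:417A')
print(addr)           # 1080::8:800:200c:417a  (zero run collapsed to '::')
print(addr.exploded)  # 1080:0000:0000:0000:0008:0800:200c:417a
```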
The IPv6 unicast addresses assigned to registered computers are split into six variable-length sections, instead of the two or three sections used in IPv4 addresses.


These sections are as follows:

Format prefix. This section specifies the type of address, such as provider-based unicast or multicast. (There is also a new type of address called an anycast that causes a message to be sent to only one of a specified group of interfaces.)
Registry ID. This section identifies the Internet address registry that assigned the Provider ID.
Provider ID. This section identifies the ISP that assigned this portion of the address space to a particular subscriber.
Subscriber ID. This section identifies a particular subscriber to the service provided by the ISP specified in the Provider ID field.
Subnet ID. This section identifies all or part of a specific physical link on the subscriber's network. Subscribers can create as many subnets as needed.
Interface ID. This section identifies a particular network interface on the subnet specified in the Subnet ID field.

TCP/IP protocols

The TCP/IP protocols were developed in the 1970s specifically for use on a packet-switching network built for the United States Department of Defense. Their network was then known as the ARPANET, which is now the Internet. The TCP/IP protocols have also been associated with the UNIX operating systems since early in their inception. Thus, these protocols predate the personal computer, the OSI reference model, the Ethernet protocol and most other elements that are today considered the foundations of computer networking. Unlike the other protocols that perform some of the same functions, such as Novell's Internetwork Packet Exchange (IPX), TCP/IP was never the product of a single company, but rather a collaborative effort.

In addition to not being restrained in any way by copyrights, trademarks, or other publishing restrictions, the nonproprietary nature of the TCP/IP standards also means that the protocols are not limited to any particular computing platform, operating system, or hardware implementation. This platform independence was the chief guiding principle of the TCP/IP development effort, and many of the protocol suite's features are designed to make it possible for any computer with networking capabilities to communicate with any other networked computer using TCP/IP.

TCP/IP Layers
The TCP/IP protocols were developed long before the OSI reference model was, but they operate using layers in much the same way. Splitting the networking functionality of a computer into a stack of separate protocols rather than creating a single monolithic protocol provides several advantages, including the following:

Platform independence. Separate protocols make it easier to support a variety of computing platforms. Creating or modifying protocols to support new physical layer standards or networking application programming interfaces (APIs) doesn't require modification of the entire protocol stack.

Quality of service. Having multiple protocols operating at the same layer makes it possible for applications to select the protocol that provides only the level of service required.

Simultaneous development. Because the stack is split into layers, the development of the various protocols can proceed simultaneously, using personnel who are uniquely qualified in the operations of the particular layers.

The four TCP/IP layers are as follows:

Link.
The TCP/IP protocol suite includes rudimentary link layer protocols, such as the Serial Line Internet Protocol (SLIP) and the Point-to-Point Protocol (PPP). However, TCP/IP does not include physical layer specifications or complex local area network (LAN) protocols such as Ethernet and Token Ring. Therefore, although TCP/IP does maintain a layer that is comparable to the OSI data-link layer, in most cases the protocol operating at that layer is not part of the TCP/IP suite. TCP/IP does, however, include ARP, which can be said to function at least partially at the link layer, because it provides services to the internet layer above it.

Internet.
The internet layer is exactly equivalent to the network layer of the OSI model. IP is the primary protocol operating at this layer, and it provides data encapsulation, routing, addressing, and fragmentation services to the protocols at the transport layer above it. Two additional protocols, called the Internet Control Message Protocol (ICMP) and the Internet Group Management Protocol (IGMP), also operate at this layer, as do some of the specialized dynamic routing protocols.


Transport.
The transport layer is equivalent to the layer of the same name in the OSI model. The TCP/IP suite includes two protocols at this layer, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), which provide connection-oriented and connectionless data transfer services, respectively.

Application.
The TCP/IP protocols at the application layer can take several different forms. Some protocols, such as the File Transfer Protocol (FTP), can be applications in themselves, whereas others, such as Hypertext Transfer Protocol (HTTP), provide services to applications.

SLIP and PPP

SLIP and PPP are link layer protocols that systems use for wide area connections using telephone lines and other types of physical layer connections. SLIP is defined in RFC 1055, 'A Nonstandard for Transmission of IP Datagrams over Serial Lines'. PPP is more complex than SLIP and uses additional protocols to establish a connection between two systems. These protocols are defined in separate documents, including RFC 1661, 'The Point-to-Point Protocol', and RFC 1662, 'PPP in HDLC-Like Framing'.

ARP

ARP, as defined in RFC 826, 'Ethernet Address Resolution Protocol', occupies an unusual place in the TCP/IP suite. ARP provides a service to IP, which seems to place it in the link layer (or the data-link layer of the OSI model). However, its messages are carried directly by data-link layer protocols and are not encapsulated within IP datagrams, which is a good reason for calling it an internet (or network) layer protocol. Whichever layer you assign it to, ARP provides an essential service when TCP/IP is running on a LAN. The TCP/IP protocols rely on IP addresses to identify networks and hosts, but when the computers are connected to an Ethernet or Token Ring LAN, they must eventually transmit the IP datagrams using the destination system's data-link layer hardware address. ARP provides the interface between the IP addressing system used by IP and the hardware addresses used by the data-link layer protocols. When IP constructs a datagram, it knows the IP address of the system that is the packet's ultimate destination. That address may identify a computer connected to the local network or a system on another network. In either case, IP must determine the hardware address of the system on the local network that will receive the datagram next. To do this, IP generates an ARP message and broadcasts it over the LAN.


The functions of the ARP message fields are as follows:

Hardware Type (2 bytes). This field identifies the type of hardware addresses in the Sender Hardware Address and Target Hardware Address fields. For Ethernet and Token Ring networks, the value is 1.
Protocol Type (2 bytes). This field identifies the type of addresses in the Sender Protocol Address and Target Protocol Address fields. The hexadecimal value for IP addresses is 0800 (the same as the Ethertype code for IP).
Hardware Size (1 byte). This field specifies the size of the addresses in the Sender Hardware Address and Target Hardware Address fields, in bytes. For Ethernet and Token Ring networks, the value is 6.
Protocol Size (1 byte). This field specifies the size of the addresses in the Sender Protocol Address and Target Protocol Address fields, in bytes. For IP addresses, the value is 4.
Opcode (2 bytes). This field specifies the function of the packet: ARP Request, ARP Reply, RARP Request, or RARP Reply.
Sender Hardware Address (6 bytes). This field contains the hardware address of the system generating the message.
Sender Protocol Address (4 bytes). This field contains the IP address of the system generating the message.
Target Hardware Address (6 bytes). This field contains the hardware address of the system for which the message is destined. In ARP Request messages, this field is left blank.
Target Protocol Address (4 bytes). This field contains the IP address of the system for which the message is intended.
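The field layout above can be packed byte for byte with Python's struct module. This is a sketch, not a complete implementation; the function name and sample addresses are illustrative:

```python
import struct

def build_arp_request(sender_hw, sender_ip, target_ip):
    # Pack the fields in the order listed above: hardware type 1 (Ethernet),
    # protocol type 0x0800 (IP), 6-byte hardware and 4-byte protocol address
    # sizes, opcode 1 (ARP Request). The Target Hardware Address is zeroed,
    # since discovering it is the whole point of the request.
    return struct.pack('!HHBBH6s4s6s4s',
                       1, 0x0800, 6, 4, 1,
                       sender_hw, sender_ip,
                       b'\x00' * 6, target_ip)

packet = build_arp_request(bytes.fromhex('00a0c9123456'),
                           bytes([192, 168, 1, 10]),
                           bytes([192, 168, 1, 1]))
print(len(packet))  # 28 (the field sizes above sum to 28 bytes)
```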


The process by which IP uses ARP to discover the hardware address of the destination system is as follows:

IP packages transport layer information into a datagram, inserting the IP address of the destination system into the Destination IP Address field of the IP header.
IP compares the network identifier in the destination IP address to its own network identifier and determines whether to send the datagram directly to the destination host or to a router on the local network.
IP generates an ARP Request packet containing its own hardware address and IP address in the Sender Hardware Address and Sender Protocol Address fields. The Target Protocol Address field contains the IP address of the datagram's next destination (host or router), as determined in step 2. The Target Hardware Address Field is left blank.
The system passes the ARP Request message down to the data-link layer protocol, which encapsulates it in a frame and transmits it as a broadcast to the entire local network.
The systems on the LAN receive the ARP Request message and read the contents of the Target Protocol Address field. If the Target Protocol Address value does not match the system's own IP address, it silently discards the message and takes no further action.
If the system receiving the ARP Request message recognizes its own IP address in the Target Protocol Address field, it generates an ARP Reply message. The system copies the two sender address values from the ARP Request message into the respective target address values in the ARP Reply and copies the Target Protocol Address value from the request into the Sender Protocol Address field in the reply. The system then inserts its own hardware address into the Sender Hardware Address field.
The system transmits the ARP Reply message as a unicast message back to the computer that generated the request, using the hardware address in the Target Hardware Address field.
The system that originally generated the ARP Request message receives the ARP Reply and uses the newly supplied value in the Sender Hardware Address field to encapsulate the datagram in a data-link layer frame and transmit it to the desired destination as a unicast message.
The ARP specification requires TCP/IP systems to maintain a cache of hardware addresses that the system has recently discovered using the protocol. This prevents systems from flooding the network with separate ARP Request broadcasts for each datagram transmitted. When a system transmits a file in multiple TCP segments, for example, only one ARP transaction is usually required, because IP checks the ARP cache for a hardware address before generating a new ARP request. The interval during which unused ARP information remains in the cache is left up to the individual implementation, but it is usually relatively short to prevent the system from using outdated address information.
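The caching behavior described above can be sketched as a small dictionary with per-entry timestamps. The class name and timeout value are illustrative; real implementations choose their own expiration policy:

```python
import time

class ArpCache:
    # Maps IP addresses to recently discovered hardware addresses so that
    # a new ARP broadcast isn't needed for every outgoing datagram.
    def __init__(self, timeout=60.0):
        self.timeout = timeout
        self._entries = {}   # ip -> (hardware_address, time_added)

    def add(self, ip, hw_addr):
        self._entries[ip] = (hw_addr, time.monotonic())

    def lookup(self, ip):
        entry = self._entries.get(ip)
        if entry is None:
            return None
        hw_addr, added = entry
        if time.monotonic() - added > self.timeout:
            del self._entries[ip]   # expire stale address information
            return None
        return hw_addr

cache = ArpCache()
cache.add('192.168.1.1', '00:a0:c9:12:34:56')
print(cache.lookup('192.168.1.1'))  # 00:a0:c9:12:34:56
print(cache.lookup('192.168.1.2'))  # None (no entry; an ARP request is needed)
```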


IP

IP is the protocol responsible for carrying the data generated by nearly all of the other TCP/IP protocols from the source system to its ultimate destination.

ICMP

The Internet Control Message Protocol (ICMP), as defined in RFC 792, is a protocol that TCP/IP systems use to perform network administration and error-reporting tasks. ICMP is considered to be an internet (or network) layer protocol, despite the fact that it carries no application data and its messages are carried within IP datagrams. Although it uses only one message format, ICMP performs many different functions, which are generally divided into errors and queries.

The functions of the ICMP message fields are as follows:

Type (1 byte). This field contains a code that specifies the basic function of the message.
Code (1 byte). This field contains a code that indicates the specific function of the message.
Checksum (2 bytes). This field contains a checksum computed on the entire ICMP message that is used for error detection.
Data (variable). This field may contain information related to the specific function of the message.


ICMP Error Message Types

Reporting errors of various types is the primary function of ICMP. IP is a connectionless protocol, so there are no internet/network layer acknowledgments returned to the sending system, and even the transport layer acknowledgments returned by TCP are generated only by the destination end system. ICMP functions as a monitor of internet layer communications, enabling intermediate or end systems to return error messages to the sender. For example, when a router has a problem processing a datagram during the journey to its destination, it generates an ICMP message and transmits it back to the source system. The source system may then take action to alleviate the problem in response to the ICMP message. The Data field in an ICMP error message contains the entire 20-byte IP header of the datagram that caused the problem, plus the first 8 bytes of the datagram's own Data field.

Destination Unreachable Messages

When an intermediate or end system attempts to forward a datagram to a resource that is inaccessible, it can generate an ICMP Destination Unreachable message and transmit it back to the source system. Destination Unreachable messages all have a Type value of 3; the Code value specifies exactly what resource is unavailable, using the values shown in Table 8.1. For example, when a router fails to transmit a datagram to the destination system on a local network, it returns a Host Unreachable message to the sender. If the router can't transmit the datagram to another router, it generates a Net Unreachable message. If the datagram reaches the destination system but the designated transport layer or application layer protocol is unavailable, the system returns a Protocol Unreachable or Port Unreachable message.


Code  Description
0     Net Unreachable
1     Host Unreachable
2     Protocol Unreachable
3     Port Unreachable
4     Fragmentation Needed And Don't Fragment Was Set
5     Source Route Failed
6     Destination Network Unknown
7     Destination Host Unknown
8     Source Host Isolated
9     Communication With Destination Network Is Administratively Prohibited
10    Communication With Destination Host Is Administratively Prohibited
11    Destination Network Unreachable For Type Of Service
12    Destination Host Unreachable For Type Of Service
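Decoding these messages amounts to a table lookup on the Type and Code values. A minimal sketch covering the first few codes (the names are illustrative):

```python
# Partial table of Destination Unreachable codes (Type 3), from the list above.
DEST_UNREACHABLE = {
    0: 'Net Unreachable',
    1: 'Host Unreachable',
    2: 'Protocol Unreachable',
    3: 'Port Unreachable',
    4: "Fragmentation Needed And Don't Fragment Was Set",
    5: 'Source Route Failed',
}

def describe(icmp_type, code):
    # Type 3 identifies a Destination Unreachable message; the Code
    # field then selects the specific failure.
    if icmp_type == 3:
        return DEST_UNREACHABLE.get(code, f'Unknown code {code}')
    return f'Not a Destination Unreachable message (type {icmp_type})'

print(describe(3, 1))  # Host Unreachable
```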


Source Quench Messages

Source Quench messages have a Type value of 4 and function as a rudimentary flow control mechanism for the internet layer. When a router's memory buffers are nearly full, it can send a Source Quench message to the source system, which instructs it to slow down its transmission rate. When the Source Quench messages cease, the sending system can gradually increase the rate again.

Redirect Messages

Routers generate ICMP Redirect messages to inform a host or another router that there is a more efficient route to a particular destination. Many internetworks have a matrix of routers that enables packets to take different paths to a single destination. If System 1 sends a packet to Router A in an attempt to get it to System 2, Router A forwards the packet to Router B, but it also transmits an ICMP Redirect message back to System 1, informing it that it can send packets destined for System 2 directly to Router B.

The ICMP Redirect message's Data field contains the usual 28 bytes from the datagram in question (the 20-byte IP header plus eight bytes of IP data) plus an additional 4-byte Gateway Internet Address field, which contains the IP address of the router that the system should use from now on when transmitting datagrams to that particular destination. By altering its practices, the source system saves a hop on the packet's path through the internetwork and lessens the processing burden on Router A.

Time Exceeded Messages

When a TCP/IP system creates an IP datagram, it inserts a value in the IP header's Time To Live (TTL) field that each router processing the datagram reduces by one during the packet's journey through the internetwork. If the TTL value reaches zero during the journey, the last router to receive the packet discards it and transmits an ICMP Time Exceeded (Type 11, Code 0) message back to the sender, informing it that the packet has not reached its destination and telling it why. This is called a Time To Live Exceeded In Transit message.

Another type of Time Exceeded message is used when a destination system is attempting to reassemble datagram fragments and one or more fragments fail to arrive in a timely manner. The system then generates a Fragment Reassembly Time Exceeded (Type 11, Code 1) message and sends it back to the source system.

ICMP Query Message Types

The other function of ICMP messages is to carry requests to another system for some type of information and also to return the replies containing that information. These ICMP query messages are not reactions to an outside process, as error messages are. However, external programs, such as the TCP/IP Ping utility, can generate query messages as part of their functionality. Because query messages aren't generated in response to an external problem, their Data fields do not contain the IP header information from another datagram. Instead, the various types of query messages include more divergent information in the Data field, according to their functions. The following sections examine the most important query message types.

Echo Request and Echo Reply Messages

The Echo Request (Type 8, Code 0) and Echo Reply (Type 0, Code 0) messages form the basis for the TCP/IP Ping utility and are essentially a means to test whether another system on the network is up and running. Both messages contain 2-byte Identifier and 2-byte Sequence Number subfields in the Data field, which are used to associate requests and replies, plus a certain amount of padding, as dictated by the Ping program. Ping functions by generating a series of Echo Request messages and transmitting them to a destination system specified by the user. The destination system, on receiving the messages, reverses the values of the Source IP Address and Destination IP Address fields, changes the Type value from 8 to 0, recalculates the checksum, and transmits the messages back to the sender. When Ping receives the Echo Reply messages, it assumes that the destination system is functioning properly.
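Building an Echo Request is a matter of packing the header fields and filling in the RFC 792 ones'-complement checksum. A sketch, with illustrative identifier and payload values:

```python
import struct

def icmp_checksum(data):
    # Ones'-complement sum of 16-bit words, as specified in RFC 792;
    # odd-length data is padded with a zero byte for the computation.
    if len(data) % 2:
        data += b'\x00'
    total = sum(struct.unpack('!%dH' % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(identifier, sequence, payload=b''):
    # Type 8, Code 0; the checksum is computed over the whole message
    # with the Checksum field initially set to zero.
    header = struct.pack('!BBHHH', 8, 0, 0, identifier, sequence)
    checksum = icmp_checksum(header + payload)
    return struct.pack('!BBHHH', 8, 0, checksum, identifier, sequence) + payload

packet = echo_request(0x1234, 1, b'ping data')
# A valid ICMP message checksums to zero when verified over the whole packet.
print(icmp_checksum(packet))  # 0
```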

Router Solicitation and Router Advertisement Messages

Router Solicitation (Type 10, Code 0) and Router Advertisement (Type 9, Code 0) messages do not truly constitute a routing protocol, because they don't provide information about the efficiency of particular routes, but they do enable a TCP/IP system to discover the address of a default gateway on the local network. The process begins with a workstation broadcasting a Router Solicitation message to the local network. The routers on the network respond with unicast Router Advertisement messages containing the router's IP address and other information. The workstation can then use the information in these replies to configure the default gateway entry in its routing table.

TCP and UDP

TCP and UDP are the TCP/IP transport layer protocols. All application layer protocols use either TCP or UDP to transmit data across the network, depending on the services they require.

Application Layer Protocols

The protocols that operate at the application layer are no longer concerned with the network communication issues addressed by the link, internet, and transport layer protocols. These protocols are designed to provide communications between client and server services on different computers and are not concerned with how the messages get to the other system.
Application layer protocols use different combinations of protocols at the lower layers to achieve the level of service they require. For example, servers use HTTP and FTP to transmit entire files to client systems, and it is essential that those files be received without error. These protocols, therefore, use a combination of TCP and IP to achieve connection-oriented, reliable communications. DHCP and Domain Name System (DNS), on the other hand, exchange small messages between clients and servers that can easily be retransmitted if necessary, so they use the connectionless service provided by UDP and IP.


Some of the most commonly used TCP/IP application layer protocols are as follows:

Hypertext Transfer Protocol (HTTP). HTTP is the protocol used by Web clients and servers to exchange file requests and files. A client browser opens a TCP connection to a server and requests a particular file. The server replies by sending that file, which the browser displays as a Web page. HTTP messages also contain a variety of fields containing information about the communicating systems.
Hypertext Transfer Protocol Secure (HTTPS). HTTPS is a security protocol that works with HTTP to provide user authentication and data encryption services to Web client/server transactions.
File Transfer Protocol (FTP). FTP is a protocol used to transfer files between TCP/IP systems. An FTP client can browse through the directory structure of a connected server and select files to download or upload. FTP is unique in that it uses two separate ports for its communications. When an FTP client connects to a server, it uses TCP port 21 to establish a control connection. When the user initiates a file download, the program opens a second connection using port 20 for the data transfer. This data connection is closed when the file transfer is complete, but the control connection remains open until the client terminates it. FTP is also unusual in that on most TCP/IP systems, it is a self-contained application rather than a protocol used by other applications.
Trivial File Transfer Protocol (TFTP). TFTP is a minimalized, low-overhead version of FTP that can transfer files across a network. However, it uses the UDP protocol instead of TCP and does not include FTP's authentication and user interface features. TFTP was originally designed for use on diskless workstations that had to download an executable system file from a network server to boot.
Simple Mail Transfer Protocol (SMTP). SMTP is the protocol that e-mail servers use to transmit messages to each other across a network.
Post Office Protocol 3 (POP3). POP3 is one of the protocols that e-mail clients use to retrieve their messages from an e-mail server.
Internet Message Access Protocol 4 (IMAP4). IMAP4 is an e-mail protocol that clients use to access mail messages on a server. IMAP4 expands on the capabilities of POP3 by adding services such as the ability to store mail in individual folders created by the user on the server, rather than downloading it to an e-mail client.
Network Time Protocol (NTP). NTP is a protocol that enables computers to synchronize their clocks with other computers on the network by exchanging time signals.
Domain Name System (DNS). TCP/IP systems use DNS to resolve Internet host names to the IP addresses they need to communicate.
Dynamic Host Configuration Protocol (DHCP). DHCP is a protocol that workstations use to request TCP/IP configuration parameter settings from a server.
Simple Network Management Protocol (SNMP). SNMP is a network management protocol used by network administrators to gather information about various network components. Remote programs—called agents—gather information and transmit it to a central network management console using SNMP messages.
Telnet. Telnet is a command-line terminal emulation program that enables a user to log in to a remote computer on the network and execute commands there.

SPX and NCP

SPX

SPX is NetWare's connection-oriented protocol. It provides many of the same services as TCP, including packet acknowledgment and flow control. Compared to TCP, however, SPX is rarely used. NetWare servers use SPX for communication between print queues, print servers, and printers, and for specialized applications that require its services, such as RCONSOLE.

The functions of the SPX message fields are as follows:
Connection Control (1 byte). This field contains a code that identifies the message as performing a certain control function, such as End Of Message or Acknowledgment Required.
Datastream Type (1 byte). This field identifies the type of information found in the Data field or contains a code used during the connection termination sequence.
Source Connection ID (2 bytes). This field contains the number used by the transmitting system to identify the current connection.
Destination Connection ID (2 bytes). This field contains the number used by the receiving system to identify the current connection.
Sequence Number (2 bytes). This field specifies the location of this message in the sequence.
Acknowledgment Number (2 bytes). This field contains the Sequence Number value that the system expects to find in the next packet it receives, thus acknowledging the successful receipt of all of the previous packets.
Allocation Number (2 bytes). This field, used for flow control (that is, the interactive regulation of the data transmission speed), specifies the number of packet receive buffers that are available on the transmitting system.
Data (variable). This field contains the information generated by an application or upper
layer protocol.
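The field list above maps directly onto a 12-byte fixed header, so it can be sketched as a struct layout. This is an illustrative parser, not a full SPX implementation; the 0x10 control code used in the sample is an assumed End Of Message value.

```python
import struct

# SPX header (12 bytes, network byte order): Connection Control (1),
# Datastream Type (1), Source Connection ID (2), Destination Connection ID (2),
# Sequence Number (2), Acknowledgment Number (2), Allocation Number (2)
SPX_HEADER = struct.Struct("!BBHHHHH")

def parse_spx_header(packet: bytes) -> dict:
    """Split an SPX message into its header fields and payload."""
    (conn_control, datastream_type, src_conn, dst_conn,
     seq_num, ack_num, alloc_num) = SPX_HEADER.unpack_from(packet)
    return {
        "connection_control": conn_control,
        "datastream_type": datastream_type,
        "source_connection_id": src_conn,
        "destination_connection_id": dst_conn,
        "sequence_number": seq_num,
        "acknowledgment_number": ack_num,
        "allocation_number": alloc_num,
        "data": packet[SPX_HEADER.size:],
    }

# Build a sample message (0x10 is an assumed End Of Message control code)
sample = SPX_HEADER.pack(0x10, 0, 0x1234, 0x5678, 3, 4, 2) + b"payload"
fields = parse_spx_header(sample)
```

Note how the Allocation Number travels in every message, so flow-control information piggybacks on ordinary data traffic rather than requiring separate control packets.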


NCP

NCP is responsible for all of the file-sharing traffic generated by Novell NetWare clients and servers, and it also has a number of other functions. As a result, NCP is far more commonly used than SPX. The wide variety of network functions that use NCP makes it difficult to pinpoint the protocol's place in the OSI reference model. File transfers between clients and servers place the protocol firmly in the transport layer, but NetWare clients also use NCP messages to log in to the Novell Directory Services (NDS) tree, which is a session layer function. In addition, NCP provides other presentation and application layer services. For all of these services, however, NCP messages are carried within IPX datagrams, which affirms the protocol's dominant presence at the transport layer. Unlike SPX and the TCP/IP transport layer protocols, NCP uses different formats for client request and server reply messages. There is also another form of NCP message, the NetWare Core Packet Burst (NCPB) protocol, which enables systems to transmit multiple messages with only a single acknowledgment. NCPB was developed relatively recently to address a shortcoming of NCP, which requires an individual acknowledgment message for each data packet.

The NCP Request message fields perform the following functions:


Request Type (2 bytes). This field specifies the basic type of request performed by the message, using codes that represent the following functions: Create a Service Connection, File Server Request, Connection Destroy, and Burst Mode Protocol Packet.
Sequence Number (1 byte). This field contains a value that indicates this message's place in the current NCP sequence.
Connection Number Low (1 byte). This field contains the number of the client's connection to the NetWare server.
Task Number (1 byte). This field contains a unique value that the connected systems use to associate requests with replies.
Connection Number High (1 byte). This field is unused.
Function (1 byte). This field specifies the exact function of the message.
Subfunction (1 byte). This field further describes the function of the message.
Subfunction Length (2 bytes). This field specifies the length of the Data field.
Data (variable). This field contains information that the server will need to process the request, such as a file location.


The functions of the NCP Reply message fields are as follows:

Reply/Response Type (2 bytes). This field specifies the type of reply in the message, using codes that represent the following functions: File Server Reply, Burst Mode Protocol, and Positive Acknowledgment.
Sequence Number (1 byte). This field contains a value that indicates this message's place in the current NCP sequence.
Connection Number Low (1 byte). This field contains the number of the client's connection to the NetWare server.
Task Number (1 byte). This field contains a unique value that the connected systems use to associate requests with replies.
Connection Number High (1 byte). This field is unused.
Completion Code (1 byte). This field indicates whether or not the request associated with this reply has been successfully completed.
Connection Status (1 byte). This field indicates whether the connection between the client and the server is still active.
Data (variable). This field contains information sent by the server in response to the request.
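The reply fields above also form a fixed-size header (8 bytes) that can be decoded mechanically. The sketch below is illustrative only; the 0x3333 reply-type value in the sample is an assumed File Server Reply code, not taken from the text above.

```python
import struct

# NCP Reply header (8 bytes): Reply/Response Type (2), Sequence Number (1),
# Connection Number Low (1), Task Number (1), Connection Number High (1),
# Completion Code (1), Connection Status (1)
NCP_REPLY = struct.Struct("!HBBBBBB")

def parse_ncp_reply(message: bytes) -> dict:
    """Split an NCP Reply message into its header fields and payload."""
    (reply_type, seq, conn_low, task, conn_high,
     completion, status) = NCP_REPLY.unpack_from(message)
    return {
        "reply_type": reply_type,
        "sequence_number": seq,
        "connection_number_low": conn_low,
        "task_number": task,
        "connection_number_high": conn_high,   # unused in practice
        "completion_code": completion,          # 0 typically means success
        "connection_status": status,
        "data": message[NCP_REPLY.size:],
    }

# Sample reply (0x3333 is an assumed File Server Reply type code)
reply = parse_ncp_reply(NCP_REPLY.pack(0x3333, 5, 7, 1, 0, 0, 0) + b"ok")
```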

TCP and UDP

The TCP/IP protocol suite gets its name from the combination of the TCP and IP protocols, which together provide the service that accounts for the majority of traffic on a TCP/IP network. Internet applications such as Web browsers, File Transfer Protocol (FTP) clients and e-mail readers all depend on the TCP protocol to retrieve, without error, large amounts of data from servers. TCP is defined in Request For Comments (RFC) 793, published in 1981 by the Internet Engineering Task Force (IETF).

TCP Header
Transport layer protocols encapsulate data that they receive from the application layer protocols operating above them by applying a header, just as the protocols at the lower layers do. In many cases, the application layer protocol passes more data to TCP than can fit into a single packet, so TCP splits the data into smaller pieces. Each piece is called a segment, and the segments that comprise a single transaction are known collectively as a sequence. Each segment receives its own TCP header and is passed down to the network layer for transmission in a separate datagram. When all of the segments arrive at the destination, the receiving computer reassembles them into the original sequence.
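The segmentation step described above is easy to picture in code: the sequence is just the application data cut into MSS-sized pieces, each of which will get its own TCP header. A minimal sketch:

```python
def segment_sequence(data: bytes, mss: int) -> list[bytes]:
    """Split application layer data into TCP-style segments, each no
    larger than the maximum segment size (mss). The pieces together
    form one sequence; the receiver reassembles them in order."""
    return [data[i:i + mss] for i in range(0, len(data), mss)]

# 2500 bytes of data with an MSS of 1000 yields three segments:
# 1000, 1000, and 500 bytes
segments = segment_sequence(b"A" * 2500, 1000)
```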

Functions of the TCP message fields
Source Port (2 bytes): identifies the process on the transmitting system that generated the information carried in the Data field;
Destination Port (2 bytes): identifies the process on the receiving system for which the information in the Data field is intended;
Sequence Number (4 bytes): identifies the location of the data in this segment in relation to the entire sequence;
Acknowledgment Number (4 bytes): in acknowledgment (ACK) messages, this field specifies the sequence number of the next segment expected by the receiving system;
Data Offset (4 bits): specifies the number of 4-byte words in the TCP header;
Reserved (6 bits): unused;
Control Bits (6 bits): contains 6 flag bits that identify the functions of the message;
Window (2 bytes): specifies how many bytes the computer is capable of accepting from the connected system;
Checksum (2 bytes): contains the results of a checksum computation performed by the transmitting system and is used by the receiving system to detect errors in the TCP header, the data, and parts of the IP header;
Urgent Pointer (2 bytes): when the urgent (URG) control bit is present, this field indicates which part of the data in the segment the receiver should treat as urgent;
Options (variable): this field may contain information related to optional TCP connection configuration features;
Data (variable): this field may contain one segment of an information sequence generated by an application layer protocol.
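The field list above corresponds to a 20-byte fixed header (when the Options field is empty), which can be unpacked directly. This is a sketch for reading the fields, not a complete TCP implementation; it ignores any options that a larger Data Offset would imply.

```python
import struct

# 20-byte TCP header: Source Port (2), Destination Port (2),
# Sequence Number (4), Acknowledgment Number (4), Data Offset/Reserved (1),
# Control Bits (1), Window (2), Checksum (2), Urgent Pointer (2)
TCP_HEADER = struct.Struct("!HHIIBBHHH")

def parse_tcp_header(segment: bytes) -> dict:
    """Decode the fixed portion of a TCP header."""
    (src, dst, seq, ack, offset_byte, flags, window,
     checksum, urgent) = TCP_HEADER.unpack_from(segment)
    header_length = (offset_byte >> 4) * 4  # Data Offset counts 4-byte words
    return {
        "source_port": src,
        "destination_port": dst,
        "sequence_number": seq,
        "acknowledgment_number": ack,
        "header_length": header_length,
        "control_bits": flags & 0x3F,  # low 6 bits carry the flags
        "window": window,
        "checksum": checksum,
        "urgent_pointer": urgent,
        "data": segment[header_length:],
    }

# A sample segment: ephemeral port 1025 to port 80, Data Offset = 5 words
sample = TCP_HEADER.pack(1025, 80, 1000001, 20001,
                         0x50, 0x18, 8192, 0, 0) + b"GET /"
header = parse_tcp_header(sample)
```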


Ports and Sockets
As with data-link and network layer protocols, one of the important functions of a transport layer protocol is to identify the protocol or process that generated the data it carries. Both TCP and UDP do this by specifying the number of a port that has been assigned to a particular process by the Internet Assigned Numbers Authority (IANA). These port numbers are published in RFC 1700, the 'Assigned Numbers' RFC, and a list of the most common ports is included with every TCP/IP client in a text file called SERVICES. When a TCP/IP packet arrives at its destination, the transport layer protocol receiving the IP datagram reads the value in the Destination Port field and delivers the information in the Data field to the program or protocol associated with that port. All of the common Internet applications have particular port numbers associated with them, called well-known ports. For example, Web servers use port 80, and Domain Name System (DNS) servers use port 53. TCP and UDP maintain their own separate lists of well-known port numbers. FTP, for example, uses TCP ports 20 and 21. Because FTP uses only TCP (and not UDP) at the transport layer, it would be possible for a different application layer protocol to use the same port numbers (20 and 21) with UDP. In some cases, however, a single protocol uses both transport layer protocols: DNS is associated with both TCP port 53 and UDP port 53.


Service Name / Port Number / Protocol / Function:
ftp-data / 20 / TCP / FTP data channel, used to transmit files between systems;
ftp / 21 / TCP / FTP control channel, used to exchange commands and responses between connected systems;
telnet / 23 / TCP / Telnet, used to execute commands on network-connected systems;
smtp / 25 / TCP / Simple Mail Transfer Protocol, used to send e-mail messages;
domain / 53 / TCP and UDP / DNS, used to receive host name resolution requests from clients;
bootps / 67 / TCP and UDP / Bootstrap Protocol (BOOTP) and DHCP servers, used to receive TCP/IP configuration requests from clients;
bootpc / 68 / TCP and UDP / BOOTP and DHCP clients, used to send TCP/IP configuration requests to servers;
http / 80 / TCP / Hypertext Transfer Protocol (HTTP), used by Web servers to receive requests from client browsers;
pop3 / 110 / TCP / Post Office Protocol 3 (POP3), used to receive e-mail retrieval requests from clients;
snmp / 161 / TCP and UDP / Simple Network Management Protocol (SNMP), used by SNMP agents to receive queries from a network management console.


When one TCP/IP system addresses traffic to another, it uses a combination of an IP address and a port number. The combination of an IP address and a port is called a socket. To specify a socket in a Uniform Resource Locator (URL), you enter the IP address first, followed by a colon and then the port number. The socket 192.168.2.10:21, for example, addresses port 21 on the system with the address 192.168.2.10. Because port 21 is the well-known port for the FTP control channel, this socket addresses the FTP server running on that computer. You usually don't have to specify the port number when you're typing a URL, because the program you use assumes that you want to connect to the well-known port. Your browser, for example, addresses all the URLs you enter to port 80, the HTTP Web server port, unless you specify otherwise. The IANA port numbers are recommendations, not ironclad rules, however. You can configure a Web server to use a port number other than 80, and in fact, many Web servers assign alternate ports to their administrative controls, so that only users who know the correct port number can access them. The well-known ports published in the 'Assigned Numbers' RFC mostly refer to servers. Because it is usually the client that initiates communication with the server, and not the other way around, clients don't need permanently assigned port numbers. Instead, a client program typically selects a port number at random, called an ephemeral port number, to use while communicating with a particular server. The IANA only controls port numbers from 1 to 1023, so ephemeral port numbers always have values of 1024 or higher. A server receiving a packet from a client uses the value in the TCP header's Source Port field to address its reply to the correct ephemeral port on the client system.
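The address:port notation is simple enough to parse mechanically, which is a useful exercise for fixing the convention in mind. A small illustrative sketch (the WELL_KNOWN dictionary just restates a few entries from the table above):

```python
# A few well-known ports, restated from the services table above
WELL_KNOWN = {"ftp-data": 20, "ftp": 21, "telnet": 23, "smtp": 25,
              "domain": 53, "http": 80, "pop3": 110}

def parse_socket(spec: str) -> tuple[str, int]:
    """Split an 'address:port' socket specification into its two parts,
    e.g. '192.168.2.10:21' -> ('192.168.2.10', 21)."""
    host, _, port = spec.rpartition(":")
    return host, int(port)

host, port = parse_socket("192.168.2.10:21")
is_ephemeral = port >= 1024  # IANA controls only ports 1-1023
```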

Control Bits
The Control Bits field of the TCP header contains six flags that signify particular message functions. In most cases, systems activate the various flags to make a TCP message perform a control function; for example, to participate in the connection establishment process or to acknowledge the proper receipt of a data segment. The functions of the six flags are as follows:
URG. This flag indicates that the segment contains urgent data. When this flag is present, the receiving system reads the contents of the Urgent Pointer field to determine which part of the Data field contains the urgent information.
ACK. This flag indicates that the message is an acknowledgment of a previously transmitted segment. When this flag is present, the system receiving the message reads the contents of the Acknowledgment Number field to determine what part of the sequence it should transmit next.
PSH. This flag indicates that the receiving system should forward the data it has received in the current sequence to the process identified in the Destination Port field immediately, rather than wait for the rest of the sequence to arrive.
RST. This flag causes the receiving system to reset the TCP connection and discard all of the segments of the sequence it has received thus far.
SYN. This flag is used to synchronize the systems' respective Sequence Number values during the establishment of a TCP connection.
FIN. This flag is used to terminate a TCP connection.
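Since each flag occupies one bit of the 6-bit Control Bits field, decoding a message's function is a matter of testing bit masks. A minimal sketch (the bit positions follow the standard TCP header layout, with FIN in the least significant bit):

```python
# Control bit masks, FIN in the least significant bit of the 6-bit field
URG, ACK, PSH, RST, SYN, FIN = 0x20, 0x10, 0x08, 0x04, 0x02, 0x01

def decode_flags(control_bits: int) -> list[str]:
    """Return the names of the flags set in a Control Bits value."""
    names = [("URG", URG), ("ACK", ACK), ("PSH", PSH),
             ("RST", RST), ("SYN", SYN), ("FIN", FIN)]
    return [name for name, mask in names if control_bits & mask]

# The second message of the three-way handshake sets both SYN and ACK
handshake_reply = decode_flags(SYN | ACK)  # -> ['ACK', 'SYN']
```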


Establishing a connection
TCP is a connection-oriented protocol, which means that before two systems can exchange application layer data, they must first establish a connection. This connection ensures that both computers are present, operating properly, and ready to receive data. The TCP connection remains alive during the entire exchange of data, after which the systems close it in an orderly manner. In most cases, a TCP connection exists for the duration of a single file transmission. For example, when a Web browser connects to a server on the Internet, it first establishes a connection with the server, then transmits an HTTP request message containing a URL and finally receives the file specified in the request from the server. Once the file is transferred, the systems terminate the connection. As the browser processes the downloaded file, it may detect links to graphic images, audio clips, or other files needed to display the Web page. The browser then establishes a separate connection to the server for each of the linked files, retrieves them and displays them as part of the downloaded page. Thus, a single Web page may require the browser to create dozens of separate TCP connections to the server to download the individual files. The TCP connection establishment process is known as a three-way handshake. The process consists of an exchange of three messages, none of which contain any application layer data. The purpose of these messages, apart from ascertaining that the other computer actually exists and is ready to receive data, is to exchange the sequence numbers that the computers will use to number the messages they will transmit. At the start of the connection establishment process, each computer selects an initial sequence number (ISN) for the first TCP message it transmits. The systems then increment the sequence numbers for each subsequent message. 
The computers select an ISN using an incrementing algorithm that makes it highly unlikely for connections between the same two sockets to use identical sequence numbers at the same time. Each system maintains its own sequence numbers and, during the handshake, each informs the other of the numbers it will be using.

The messages that contain the ISN for each system have the SYN flag set in the Control Bits field. In a typical TCP transaction, a client system generates its SYN message, with its ISN in the Sequence Number field. The server, on receiving this message, generates a response that performs two functions. First, the ACK flag is set, so that the message functions as an acknowledgment of the client's SYN message. Second, the server's response also has the SYN flag set and includes its own ISN in the Sequence Number field. When the client system receives the server's SYN message, it generates a response of its own, which contains the ACK flag. Once the server receives the client's acknowledgment, the connection is established and the systems are ready to exchange messages containing application data. Thus, a TCP connection is actually two separate connections running in opposite directions. TCP is therefore known as a full-duplex protocol, because the systems establish each connection separately and later terminate each one separately.
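The sequence and acknowledgment numbers exchanged in the three messages can be laid out as a simple simulation. This sketch models only the numbering convention described above (each SYN consumes one sequence number), not any real network activity:

```python
def three_way_handshake(client_isn: int, server_isn: int) -> list[dict]:
    """Model the three handshake messages: SYN, SYN/ACK, ACK.
    Each acknowledgment number is the other side's ISN plus one,
    because a SYN consumes one sequence number."""
    syn = {"flags": ["SYN"], "seq": client_isn}
    syn_ack = {"flags": ["SYN", "ACK"], "seq": server_isn,
               "ack": client_isn + 1}
    ack = {"flags": ["ACK"], "seq": client_isn + 1,
           "ack": server_isn + 1}
    return [syn, syn_ack, ack]

messages = three_way_handshake(1000000, 20000)
```

After the third message, both sides know the other's starting sequence number, and the client's first data message will carry sequence number 1000001.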


Another function of the SYN messages generated by the two computers during the three-way handshake is for each system to inform the other of its maximum segment size (MSS). Each system uses the other system's MSS to determine how much data it should include in its upcoming messages. The MSS value for each system depends on which data-link layer protocol is used by the network on which each system resides. The MSS is included as an option in the two SYN packets. This option takes the form of 4 additional bytes in the TCP header's Options field, using the following subfields:

Kind (1 byte). This subfield specifies the option type. The MSS option uses a value of 2.
Length (1 byte). This subfield specifies the length of the option in bytes. For MSS, the value is 4.
Maximum Segment Size (2 bytes). This subfield specifies the MSS for the system in bytes.
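The three subfields above describe exactly 4 bytes, so building the option is a one-line pack. A minimal sketch; 1460 is the typical MSS for a system on an Ethernet network (1500-byte MTU minus 20 bytes each of IP and TCP header):

```python
import struct

def mss_option(mss: int) -> bytes:
    """Build the 4-byte MSS option carried in a SYN packet's Options
    field: Kind = 2, Length = 4, then the 2-byte MSS value."""
    return struct.pack("!BBH", 2, 4, mss)

# 1460 is typical for Ethernet: 1500-byte MTU - 20 (IP) - 20 (TCP)
option = mss_option(1460)  # -> b'\x02\x04\x05\xb4'
```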


Transmitting Data
After the connection has been established, each computer has all of the information it needs for TCP to begin transmitting application data, as follows:

Port number. The client is already aware of the well-known port number for the server, which it needed to initiate the connection. The messages from the client to the server contain the ephemeral port number (in the Source Port field) that the server must use in its replies.
Sequence number. Each system uses the other system's sequence numbers in the Acknowledgment Number field of its own messages.
MSS. Using the information in the MSS option, the systems know how large to make the segments of each sequence.


Whether the client or the server transmits its data first depends on the nature of the application. A transaction between a Web browser client and a Web server begins with the client sending a particular URL to a server, typically requesting a site's home page. Other client/server transactions may begin with the server sending data to the client.

Acknowledging Packets
The Sequence Number and Acknowledgment Number fields are the key to TCP's packet acknowledgment and error correction systems. During the handshake, when the server replies to the client's SYN message, the SYN/ACK message that the server generates contains its own ISN in the Sequence Number field, and it also contains a value in its Acknowledgment Number field. This Acknowledgment Number value is the equivalent of the client's ISN plus one. The function of this field is to inform the other system what value is expected in the next message's Sequence Number field, so if the client's ISN is 1000000, the server's SYN/ACK message contains the value 1000001 in its Acknowledgment Number field. When the client sends its first data message to the server, that message will have the value 1000001 in its Sequence Number field, which is what the server expects.

When the systems begin to send data, they increment their Sequence Number values for each byte of data they transmit. When a Web browser sends its URL request to a Web server, for example, its Sequence Number value is its ISN plus one (1000001), as expected by the server. Assuming that the request message contains 500 bytes of data (not including the IP or TCP headers), the server will respond to the request message with an ACK message that contains the value 1000501 in its Acknowledgment Number field. This indicates that the server has received 500 bytes of data successfully and is expecting the client's next data packet to have the Sequence Number 1000501. Because the client has transmitted 500 bytes to the server, it increments its Sequence Number value by that amount, and the next data message it sends will use the value that the server expects (assuming there are no transmission errors). The same message numbering process also occurs simultaneously in the other direction. The server has transmitted no data yet, except for its SYN/ACK message, so the ACK generated by the client during the handshake contains the server's ISN plus one. The server's acknowledgment of the client's request contained no data, so the Sequence Number field was not incremented. Thus, when the server responds to the client's URL request, its first data message will use the same ISN-plus-one value in its Sequence Number field, which is what the client expects.

In the case described here, the client's URL request is small and requires only one TCP message, but in most cases, the Web server responds by transmitting a Web page, which is likely to require a sequence of TCP messages consisting of multiple segments. The server divides the Web page (which becomes the sequence it is transmitting) into segments no larger than the client's MSS value. As the server begins to transmit the segments, it increments its Sequence Number value according to the amount of data in each message. If the server's ISN is 20000, the Sequence Number of its first data message will be 20001. Assuming that the client's MSS is 1000, the server's second data message will have a Sequence Number of 21001, the third will be 22001, and so on. Once the client begins receiving data from the server, it is responsible for acknowledging the data. TCP uses a system called delayed acknowledgments, which means that the systems do not have to generate a separate acknowledgment message for every data message they receive. The intervals at which the systems generate their acknowledgments are left up to the individual TCP implementation. Each acknowledgment message that the client sends in response to the server's data messages has the ACK flag set, of course, and the value of its Acknowledgment Number field reflects the number of bytes in the sequence that the client has successfully received. If the client receives messages that fail the checksum check, or fails to receive messages containing some of the segments in the sequence, it signals these failures to the server using the Acknowledgment Number field in its ACK messages. The Acknowledgment Number value always reflects the number of bytes from the beginning of the sequence that the destination system has received correctly.
If, for example, a sequence consists of 10 segments, and all are received correctly except the seventh segment, the recipient's acknowledgment message will contain an Acknowledgment Number value that reflects the number of bytes in the first six segments only. Segments 8 through 10, even though they were received correctly, are discarded and must be retransmitted along with segment 7. This system is called positive acknowledgment with retransmission, because the destination system only acknowledges the messages that were sent correctly. A protocol that uses negative acknowledgment would instead assume that all messages have been received correctly except for those that the destination system explicitly lists as having errors. The source system maintains a queue of the messages that it has transmitted and deletes those messages for which acknowledgments have arrived. Messages that remain in the source system's queue for a predetermined period of time are assumed to be lost or discarded, and the system automatically retransmits them. Once the server has transmitted all of the segments in the sequence that contains the requested Web page, and the client acknowledges that it has received all of the segments correctly, the systems terminate the connection. If the segments have arrived at their destination out of sequence, the receiving system uses the Sequence Number values to reassemble them into the proper order. The client system then processes the data it has received to display the Web page. In all likelihood, the page will contain links to images or other elements, and the client will have to initiate additional connections to the server to download more data. This is the nature of the Web client/server process. However, other types of applications might maintain a single TCP connection for a much longer period of time and perform repeated exchanges of data in both directions. In a case like this, both systems can exchange data messages and acknowledgments, with the error detection and correction processes occurring on both sides.
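The cumulative nature of the Acknowledgment Number can be captured in a few lines: only the bytes before the first gap count, no matter how many later segments arrived intact. A minimal sketch of the 10-segment example above:

```python
def cumulative_ack(isn: int, segment_sizes: list[int],
                   received_ok: list[bool]) -> int:
    """Return the Acknowledgment Number for a partially received
    sequence. Only bytes up to the first missing or corrupted segment
    are acknowledged; later segments, even if received, do not count."""
    ack = isn + 1  # first data byte follows the ISN
    for size, ok in zip(segment_sizes, received_ok):
        if not ok:
            break
        ack += size
    return ack

# 10 segments of 1000 bytes each; segment 7 is lost, so only the
# first six segments (6000 bytes) are acknowledged
ack = cumulative_ack(20000, [1000] * 10, [True] * 6 + [False] + [True] * 3)
# -> 26001
```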

Detecting Errors
There are basically two things that can go wrong during a TCP transaction: Messages can arrive in a corrupted state, or they can fail to arrive at all. When messages fail to arrive, the lack of acknowledgments from the destination system causes the sender to retransmit the missing messages. If a serious network problem arises that prevents the two systems from exchanging any messages, the TCP connection eventually times out and the entire process must start again.
When messages do arrive at their destination, the receiving system checks them for accuracy by performing the same checksum computation that the sender performed before transmitting the data and comparing the results to the value in the Checksum field. If the values don't match, the system discards the message. This is a crucial element of the TCP protocol, because it is the only end-to-end checksum performed on the actual application layer data. IP includes an end-to-end checksum, but only on its header data; data-link layer protocols such as Ethernet and Token Ring include a checksum, but only for one hop at a time. If the packets pass through a network that doesn't provide a checksum, such as a Point-to-Point Protocol (PPP) link, there is a potential for errors to be introduced that can't be detected at the data-link or network layers.
The checksum performed by TCP is unusual because it is calculated not only on the entire TCP header and the application data, but also on a pseudo-header. The pseudo-header consists of the IP header's Source IP Address, Destination IP Address, Protocol, and Length fields, plus 1 byte of padding, to bring the total number of bytes to an even 12 (three 4-byte words).
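The computation itself is a 16-bit ones' complement sum over the pseudo-header followed by the TCP header and data. The sketch below follows the standard pseudo-header layout (source address, destination address, zero pad byte, protocol number 6 for TCP, segment length); it is an illustration of the arithmetic, not production code:

```python
import socket
import struct

def ones_complement_sum(data: bytes) -> int:
    """16-bit ones' complement sum with end-around carry."""
    if len(data) % 2:
        data += b"\x00"  # pad to an even number of bytes
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return total

def tcp_checksum(src_ip: str, dst_ip: str, segment: bytes) -> int:
    """Compute the TCP checksum over the 12-byte pseudo-header plus
    the entire segment (header and data), with the segment's own
    Checksum field set to zero beforehand."""
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 6, len(segment)))  # pad, proto, length
    return ~ones_complement_sum(pseudo + segment) & 0xFFFF
```

A receiver performing the same sum over a segment whose Checksum field has been filled in gets a result of zero, which is how the verification works in practice.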


Flow Control
Flow control is the process by which the destination system in a TCP connection provides information to the source system that enables that source system to regulate the speed at which it transmits data. Each system has a limited amount of buffer space in which to store incoming data. The data remains in the buffer until the system generates messages acknowledging that data. If the system transmitting the data sends too much information too quickly, the receiver's buffers could fill up, forcing it to discard data messages. The system receiving the data uses the Window field in its acknowledgment messages to inform the sender of how much buffer space it has available at that time. The transmitting system uses the Window value along with the Acknowledgment Number value to determine what data in the sequence the system is permitted to transmit. For example, if an acknowledgment message contains an Acknowledgment Number value of 150000 and a Window value of 500, the sending system knows that all of the data in the sequence through byte 150000 has been received correctly at the destination, and that it can now transmit bytes 150001 through 150500. If, by the time the sender transmits those 500 bytes, it has received no additional acknowledgments, it must stop transmitting until the next acknowledgment arrives. This type of flow control is called a sliding window technique.
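The arithmetic in the example above is simple enough to encode directly: the acknowledgment fixes the left edge of the window and the Window value fixes its width. A minimal sketch, following the convention used in the example (bytes through the Acknowledgment Number have been received):

```python
def send_range(ack_number: int, window: int) -> tuple[int, int]:
    """Return the first and last byte numbers the sender may transmit,
    given the latest acknowledgment. The window 'slides' forward as
    new acknowledgments arrive."""
    return ack_number + 1, ack_number + window

# Acknowledgment 150000 with a 500-byte window permits bytes
# 150001 through 150500
first, last = send_range(150000, 500)
```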

Terminating the connection
Once the systems involved in a TCP connection have finished their exchange of data, they terminate the connection using control messages, much like those used in the three-way handshake that established the connection. As with the establishment of the connection, which system initiates the termination sequence depends on the application generating the data. In the case of the web client/server transaction used as an example in this lesson, the server begins the termination process by setting the FIN flag in the Control Bits field of its last data message. In other cases, the system initiating the termination process might use a separate message containing the FIN flag and no data. The system receiving the FIN flag transmits an acknowledgment message and then generates its own message containing a FIN flag, to which the other system must respond with an ACK message. This is necessary because, as shown in the establishment process, the connection runs in both directions, and it is necessary for both systems to terminate their respective connections using a total of four messages. Unlike the connection establishment procedure, the computers can't combine the FIN and ACK flags in the same message, which is why four messages are needed instead of three. There are some occasions when only one of the two connections is terminated and the other is left open. This is called a half close.

UDP
UDP is defined in RFC 768, 'User Datagram Protocol'. Unlike TCP, UDP is a connectionless protocol that provides no packet acknowledgment, flow control, segmentation or guaranteed delivery. As a result, UDP is far simpler than TCP and generates far less overhead. Not only is the UDP header much smaller than that of TCP - 8 bytes as opposed to 20 bytes or more - there are no separate control messages, such as those used to establish and terminate connections. UDP transactions typically consist of only two messages - a request and a reply - with the reply functioning as a tacit acknowledgment. For this reason, most of the applications that use UDP must transport only amounts of data small enough to fit into a single message. DNS and DHCP are two of the most common application layer protocols that use UDP. There are some applications that use UDP to transmit large amounts of data, such as streaming audio and video, but UDP is appropriate for these purposes because this type of data can survive the loss of an occasional packet, whereas a program or data file cannot.

The functions of the UDP message fields are as follows:

Source Port (2 bytes). This field identifies the process on the transmitting system that generated the information carried in the Data field.
Destination Port (2 bytes). This field identifies the process on the receiving system for which the information in the Data field is intended.
Length (2 bytes). This field specifies the length of the UDP header and data in bytes.
Checksum (2 bytes). This field contains the results of a checksum computation performed by the transmitting system and is used by the receiving system to detect errors in the UDP header, the data, and parts of the IP header.
Data (variable). This field contains the information generated by the application layer process specified in the Source Port field.


The Source Port and Destination Port fields in a UDP header perform the same function as they do in the TCP header. The Length field specifies how much data is included in the UDP message, and the Checksum value is computed using the message header, data, and the IP pseudo-header, just as in TCP. The UDP standard specifies that the use of the checksum is optional. The transmitting system fills the Checksum field with zeroes if it is unused. There has been a great deal of debate about whether UDP messages should include checksums. RFC 768 requires all UDP systems to be capable of checking for errors using checksums and most current implementations include the checksum computations.
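With only four 2-byte fields, the entire UDP header fits in a single pack call, which makes the contrast with TCP's 20-byte header easy to see. A minimal sketch:

```python
import struct

# 8-byte UDP header: Source Port, Destination Port, Length, Checksum
UDP_HEADER = struct.Struct("!HHHH")

def build_udp(src_port: int, dst_port: int, data: bytes,
              checksum: int = 0) -> bytes:
    """Build a UDP message. The Length field covers the header plus the
    data; a Checksum value of zero marks the optional field as unused."""
    length = UDP_HEADER.size + len(data)
    return UDP_HEADER.pack(src_port, dst_port, length, checksum) + data

# A 5-byte reply from a DNS server: Length = 8-byte header + 5 = 13
msg = build_udp(53, 1025, b"reply")
```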

August 31, 2005

AppleTalk

AppleTalk originally used its own data-link layer protocol, called Apple LocalTalk, the adapter for which was built into the Macintosh computer. LocalTalk runs at only 230 Kbps, however, and it has largely been replaced by Apple EtherTalk at 10 Mbps (or Fast EtherTalk at 100 Mbps) and, to a lesser extent, TokenTalk at 4 or 16 Mbps and FDDITalk at 100 Mbps, which are adaptations of the Ethernet, Token Ring, and Fiber Distributed Data Interface (FDDI) protocols.
Like IP and IPX, AppleTalk uses a hierarchical addressing system to identify the computers on a network. Every AppleTalk computer has a unique 8-bit node ID that it randomly selects and assigns to itself as it connects to the network. After transmitting a broadcast message to make sure that no other computer is using the same ID, the system stores the address for future use each time it reconnects. Because the number is only 8 bits long, a single AppleTalk network can have no more than 254 nodes (2^8 - 2, because 0 and 255 are not used for node IDs). AppleTalk also uses 16-bit network numbers to identify the LANs in an internetwork for routing purposes. A computer connecting to the network obtains the network number value for the LAN using the Zone Information Protocol (ZIP). As with IP, AppleTalk networks can be connected together with routers that read the destination network numbers and node IDs in each packet and forward them to the appropriate LAN.
To identify specific processes running on a computer, AppleTalk uses an 8-bit socket number, which performs the same function as the Protocol field in the IP header. The combination of network number, node ID, and socket is expressed as three decimal numbers separated by periods, as in 2.12.50, meaning network 2, node 12, and socket 50. AppleTalk reconciles the data-link hardware addresses coded into network interface adapters with the node IDs and network numbers using the AppleTalk Address Resolution Protocol (AARP), which functions remarkably like the TCP/IP Address Resolution Protocol (ARP).
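A quick Python sketch of how an address like 2.12.50 might be parsed and validated, using the field widths given above (the function name and error messages are mine, not part of any AppleTalk specification):

```python
def parse_appletalk_address(addr: str):
    """Parse a network.node.socket address such as '2.12.50'.

    Field widths follow the text: 16-bit network number, 8-bit node ID
    (0 and 255 are reserved), and 8-bit socket number.
    """
    network, node, socket = (int(part) for part in addr.split("."))
    if not 0 <= network <= 0xFFFF:
        raise ValueError("network number must fit in 16 bits")
    if not 1 <= node <= 254:
        raise ValueError("node ID must be 1-254 (0 and 255 are reserved)")
    if not 0 <= socket <= 0xFF:
        raise ValueError("socket number must fit in 8 bits")
    return network, node, socket
```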
In addition to the node IDs and network numbers, AppleTalk computers have friendly names that make it easier to locate specific resources on the network. Computers have their own names, and groups of computers are gathered into logical units called zones.
At the network layer, AppleTalk uses the Datagram Delivery Protocol (DDP). Like IP and IPX, DDP is a connectionless protocol that encapsulates data generated by an upper layer protocol and provides many of the same services as IP and IPX, including packet addressing, routing, and protocol identification. A simple AppleTalk network that consists of only one network number and one zone is called a nonextended network. A network that consists of multiple network numbers and zones is called an extended network, and uses the long-format DDP header.


Functions of the DDP header fields

Hop Count (1 byte): specifies the number of routers that have processed the packet on the way to its destination;
Datagram Length (2 bytes): specifies the length of the DDP datagram, used for basic error detection;
Checksum (2 bytes): optional field containing a checksum computed on the entire datagram, used for more extensive error detection;
Source Socket Number (1 byte): specifies the socket number of the application or process that generated the information in the data field;
Destination Socket Number (1 byte): specifies the socket number of the application or process to which the information in the data field is to be delivered;
Source Address (3 bytes): specifies the network number and node ID of the computer generating the packet;
Destination Address (3 bytes): specifies the network number and node ID of the computer that is to receive the packet;
DDP Type (1 byte): identifies the upper layer protocol that generated the information carried in the data field;
Data (variable, up to 586 bytes): contains information generated by an upper layer protocol.


On a nonextended network, DDP uses the short-format header, which omits the network addresses, checksum, and hop count, and includes only the source and destination socket numbers, the datagram length, and the DDP type.
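As an illustration of the long-format field list above, here is a hedged Python sketch that packs the fields using the byte widths given. Note that the real wire format shares a 16-bit field between the hop count and the 10-bit datagram length, so treat this as a picture of the field contents rather than the exact bit layout.

```python
import struct

def pack_ddp_long_header(hop_count, length, checksum,
                         src, dst, src_socket, dst_socket, ddp_type):
    """Pack the long-format DDP header fields listed above.

    src and dst are (network number, node ID) tuples. The widths follow
    the field list in the text; the real encoding packs the hop count
    into the same 16 bits as the length, so this is illustrative only.
    """
    return (struct.pack(">BHHBB", hop_count, length, checksum,
                        src_socket, dst_socket)
            + struct.pack(">HB", *src)       # 3-byte source address
            + struct.pack(">HB", *dst)       # 3-byte destination address
            + struct.pack(">B", ddp_type))
```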

NetBEUI

NetBEUI differs substantially from IP and IPX. The primary difference is that NetBEUI does not route packets between networks, so the protocol is not suitable for use on large internetworks. Microsoft adopted NetBEUI for use with Windows at a time when the company was first adding networking capabilities to its operating systems. As with NetWare, the initial market was small LANs, and it is in this environment that NetBEUI excels. For a small stand-alone network, NetBEUI provides excellent performance, and it is self-adjusting and self-configuring. There's no need to supply the client with an address and other configuration parameters, as with TCP/IP. NetBEUI, however, does not support Internet communications; this requires TCP/IP.

NetBIOS naming

NetBIOS is a programming interface that applications use to communicate with the networking hardware in the computer and, through that, with the network. NetBIOS includes its own namespace, which NetBEUI uses to identify computers on the network, just as IP uses its own IP addresses and IPX uses hardware addresses. The computer name that you assign to a system during Windows installation is, in reality, a NetBIOS name, which must be unique on the network. A NetBIOS name is 16 characters long. Windows reserves the last character for a code that identifies the type of resource using the name, leaving 15 user-assigned alphanumeric characters. Different codes can identify NetBIOS names as representing computers, domain controllers, users, groups, and other resources. If you assign a name of fewer than 15 characters to a computer, the system pads it out to 15, so that the identification code always falls on the final character.
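As a sketch of that padding rule, here is how a 16-byte NetBIOS name might be built in Python. The function name is mine, and the 0x00 suffix shown is the code for the workstation service; names are conventionally uppercased and space-padded.

```python
def netbios_name(name: str, suffix: int = 0x00) -> bytes:
    """Build a 16-byte NetBIOS name: up to 15 user characters padded
    with spaces, plus a final resource-type byte (0x00 = workstation)."""
    if len(name) > 15:
        raise ValueError("NetBIOS names allow at most 15 user characters")
    return name.upper().ljust(15).encode("ascii") + bytes([suffix])
```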

NetBIOS names are stored in a flat-file database; there is no hierarchy among the names. IP and IPX both use a hierarchical system of addressing in which one value identifies the computer and another value identifies the network on which the computer is located. NetBIOS names have no network identifier, which is why NetBEUI is not routable; it has no means of addressing packets to specific networks or maintaining routing tables containing information about networks. NetBEUI deals solely with computer identifiers, which means that all of the computers must be accessible from the one network.

NetBEUI frame

The NetBEUI Frame (NBF) protocol is a multipurpose protocol that Windows-based computers use for a variety of purposes, including the registration and resolution of NetBIOS names, the establishment of sessions between computers on the network, and the transport of file and print data using Windows' Server Message Block (SMB) protocol. All of these functions use a single frame format.

Functions of the NBF fields

Length (2 bytes): specifies the length of the NBF header (in bytes);
Delimiter (2 bytes): signals the receiving system that the message should be delivered to the NetBIOS interface;

Command (1 byte): identifies the function of the NBF message;
Data1 (1 byte): used to carry optional data specific to the message type specified by the Command field;
Data2 (2 bytes): used to carry optional data specific to the message type specified by the Command field;
Transmit Correlator (2 bytes): contains a value that the receiving system will duplicate in the same field of its reply messages, enabling the sending system to associate the requests and replies;
Response Correlator (2 bytes): contains the value that the sending system expects to receive in the Transmit Correlator field of the reply to this message;
Destination Name (16 bytes): contains the NetBIOS name of the system that will receive the packet;
Source Name (16 bytes): contains the NetBIOS name of the system sending the packet;
Destination Number (1 byte): contains the number assigned to the session by the destination system;
Source Number (1 byte): contains the number assigned to the session by the source system;
Optional (variable): contains the actual data payload of the packet.


The value in the Command field dictates the type of message contained in the packet, using the following values:

00 Add Group Name Query
01 Add Name Query
02 Name In Conflict
03 Status Query
07 Terminate Trace (remote)
08 Datagram
09 Datagram Broadcast
0A Name Query
0D Add Name Response
0E Name Recognized
0F Status Response
13 Terminate Trace (local and remote)
14 Data Ack
15 Data First Middle
16 Data Only Last
17 Session Confirm
18 Session End
19 Session Initialize
1A No Receive
1B Receive Outstanding
1C Receive Continue
1F Session Alive
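For reference, the table above translates directly into a lookup table. This Python sketch is mine, built only from the values listed; it simply maps a Command byte to a message name.

```python
# Command-field values from the table above, keyed by their hex codes
NBF_COMMANDS = {
    0x00: "Add Group Name Query",  0x01: "Add Name Query",
    0x02: "Name In Conflict",      0x03: "Status Query",
    0x07: "Terminate Trace (remote)", 0x08: "Datagram",
    0x09: "Datagram Broadcast",    0x0A: "Name Query",
    0x0D: "Add Name Response",     0x0E: "Name Recognized",
    0x0F: "Status Response",       0x13: "Terminate Trace (local and remote)",
    0x14: "Data Ack",              0x15: "Data First Middle",
    0x16: "Data Only Last",        0x17: "Session Confirm",
    0x18: "Session End",           0x19: "Session Initialize",
    0x1A: "No Receive",            0x1B: "Receive Outstanding",
    0x1C: "Receive Continue",      0x1F: "Session Alive",
}

def nbf_command_name(code: int) -> str:
    """Return the message type for an NBF Command byte."""
    return NBF_COMMANDS.get(code, f"Unknown (0x{code:02X})")
```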


There are four separate protocols that make use of the NetBEUI Frame: the Name Management Protocol (NMP), the Session Management Protocol (SMP), the User Datagram Protocol (UDP), and the Diagnostic and Monitoring Protocol (DMP).

Name Management Protocol (NMP) is the protocol that systems use to register and resolve NetBIOS names on the network. When a system first starts up, it generates an Add Name Query message containing its NetBIOS name and transmits it to the other NetBIOS systems on the network. The function of this message is to ensure that no other system is using that same name. If there is a duplication, the system already using the name must reply with an Add Name Response message, and the querying system displays an error message. If the system receives no response, the name is registered to that system.
Name resolution is the process of converting a NetBIOS name into the hardware address needed for a system to transmit data-link layer frames to it. When a NetBEUI system has data to transmit to a particular system or wants to establish a session with another system, it begins by generating a Name Query message containing the name of the target system in the Destination Name field and sending it to all of the NetBIOS systems on the network. All of the systems on the network with registered NetBIOS names are required to respond to Name Query messages containing their name. The system with the requested name responds by transmitting a Name Recognized message back to the sender as a unicast message. The sender, on receiving this message, extracts the hardware address of the system holding the requested name and can then transmit subsequent packets to it as unicasts.
One of the drawbacks of NetBEUI, and one of the reasons it is only suitable for relatively small networks, is the large number of broadcast packets it generates. These Name Query requests are actually transmitted to a special NetBIOS address, but on a Windows-based network, this is the functional equivalent of a broadcast. On a large network or a network with high traffic levels, systems must therefore process a large number of name resolution broadcasts that are not actually intended for them.

The NBF messages used by NMP use NetBEUI's connectionless service. These messages are part of brief request and response transactions that don't require additional services like packet acknowledgment. For more extensive data transfers, however, a connection-oriented, reliable service is required, and to do this, the two communicating systems must first create a session between them. The systems use NBF's Session Management Protocol (SMP) messages to establish a session, transmit data, and then break down the session afterward.
The session establishment begins with a standard name resolution exchange, followed by the establishment of a session at the Logical Link Control (LLC) layer. Then the client system initiating the session transmits a Session Initialize message to the server system, which responds with a Session Confirm message. At this point, the session is established, and the systems can begin to transmit application data using Data First Middle and Data Only Last messages, which may contain data generated by other protocols, such as SMB. The system receiving the data replies with Receive Continue or Data Ack messages that serve as acknowledgments of successful transmissions.
During the session, when no activity is taking place, the systems transmit periodic Session Alive messages, which prevent the session from timing out. When the exchange of data packets is completed, the client generates a Session End message, which terminates the session.

To exchange small amounts of data, systems can also use the same connectionless service as NMP. This is sometimes referred to as the User Datagram Protocol (UDP), but it is important not to confuse this protocol with the TCP/IP transport layer protocol of the same name. The UDP is the simplest of the NBF protocols, consisting of only two message types, the Datagram message and the Datagram Broadcast message. Systems can transmit various kinds of information using these messages, including SMB data.

NetBEUI systems use the Diagnostic and Monitoring Protocol (DMP) to gather status information about systems on the network. A NetBEUI system generates a Status Query message and transmits it to all of the NetBIOS systems on the network. The systems reply with Status Response messages containing the requested information.

IPX

IPX is based on a protocol called Internetwork Datagram Packet (IDP), which was designed for an early networking system called Xerox Network System (XNS). IPX is a connectionless protocol that is similar to IP in that it functions at the network layer of the OSI reference model and carries the data generated by several other protocols across the network. However, IPX and the other protocols in the IPX suite are designed for use on LANs only, whereas the TCP/IP protocols were designed for what is now the Internet. This means that IPX does not have its own self-contained addressing system like IP, but it does perform some of the same functions as IP, such as routing traffic between different types of networks and identifying the protocol that generated the data it is carrying.

IPX Header

Like IP, IPX creates datagrams by adding a header to the data it receives from transport layer protocols. The IPX header is longer than that of IP: 30 bytes as opposed to 20.

Fields functions


Checksum (2 bytes): originally, this field was unused and always contained the hexadecimal value FFFF, because IPX relied on the transport layer protocol for error detection; today, this field contains a cyclical redundancy check (CRC) value used for error detection;
Length (2 bytes): specifies the length (in bytes) of the entire datagram, including all of the header fields and the data;
Transport Control (1 byte): specifies the number of routers that the datagram has passed through on the way to its destination;
Packet Type (1 byte): specifies which protocol generated the information found in the data field;
Destination Network Address (4 bytes): identifies the network on which the destination system is located;
Destination Node Address (6 bytes): specifies the hardware address of the destination system;
Destination Socket (2 bytes): specifies the process or application on the destination system for which the datagram is intended;
Source Network Address (4 bytes): identifies the network on which the source system is located;
Source Node Address (6 bytes): specifies the hardware address of the source system;
Source Socket (2 bytes): specifies the process or application on the source system that generated the datagram;
Data (variable): contains the information generated by the protocol specified in the Packet Type field.
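The 30-byte layout above maps naturally onto a simple parser. This Python sketch follows the field widths listed; the function and dictionary key names are mine.

```python
import struct

IPX_HEADER = struct.Struct(">HHBB4s6sH4s6sH")   # 30 bytes in total

def parse_ipx_header(packet: bytes) -> dict:
    """Unpack the 30-byte IPX header fields described above."""
    (checksum, length, transport_control, packet_type,
     dst_net, dst_node, dst_socket,
     src_net, src_node, src_socket) = IPX_HEADER.unpack(packet[:30])
    return {
        "checksum": checksum,                    # 0xFFFF when unused
        "length": length,                        # header plus data, in bytes
        "transport_control": transport_control,  # hop count, starts at 0
        "packet_type": packet_type,
        "destination": (dst_net.hex(), dst_node.hex(), dst_socket),
        "source": (src_net.hex(), src_node.hex(), src_socket),
        "data": packet[30:length],
    }
```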


The IPX header's Transport Control field is similar to the TTL field in the IP header, except that the Transport Control field starts at a value of 0 and is incremented by each router that forwards the datagram. If the value of the field reaches 16, the packet is discarded, except when using the Network Link Services Protocol (NLSP) for dynamic routing, in which case the value is configurable to use up to 127 hops. The IP TTL field, by contrast, starts at a value specified by the system generating the datagram and is decremented by each router. The difference in the functionality of these two fields is indicative of the differences between IPX and IP in general. IP has almost unlimited scalability, as demonstrated by the fact that a system can be configured with a relatively large TTL value. Windows-based systems, for example, use a default value of 128 for this field. IPX, which is designed for use on private networks, is limited to 16 hops, more than enough for most corporate networks, but not sufficient for Internet communications.

The Packet Type field uses codes to specify the protocol that generated the information stored in the datagram. There are codes for NetWare's upper layer protocols, such as the NetWare Core Protocol (NCP), as well as codes for NetWare's Routing Information Protocol (RIP) and Service Advertising Protocol (SAP). NetWare servers use RIP to exchange routing data and SAP to advertise their existence on the network.

IPX addressing

As mentioned earlier, IPX, unlike IP, does not have its own addressing system. Instead, IPX uses the same hardware addresses that data-link layer protocols use to identify the computers on the network. This is possible with NetWare because the operating system is intended for use with LAN-based computers, whereas IP has to accommodate all of the different types of computers found on the Internet. The Destination Node Address and Source Node Address fields are each 6 bytes long to hold the hardware addresses coded into the network interface adapters installed in the computers.

Another important difference between the hardware address and an IP address is that IP addresses identify both a network and a host on that network, whereas hardware addresses identify a network interface adapter only. For a router on a NetWare network to forward packets properly, it must know which network the destination system is on and this requires some means to identify particular networks. NetWare uses separate network addresses that an administrator or the installation program assigns to the networks when they install the NetWare servers. Because NetWare is designed for private LANs, there's no reason network addresses must be registered, as they are with IP. The network administrators only need to be sure to assign a unique address to each network. The network addresses are 4 bytes long, and the IPX header provides them in the Destination Network Address and Source Network Address fields. The combination of the network address and the node (or hardware) address provides a specific location for a computer on an internetwork. In addition to getting the data to the correct computer, IPX must also deliver the data to the correct process on that computer. To do this, it also includes 2-byte codes in the Destination Socket and Source Socket fields to identify the function of the datagram.
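A small sketch of how such a full address is conventionally written, as a colon-separated hexadecimal string of network, node, and socket. The colon notation is common NetWare practice, but the function itself is purely illustrative.

```python
def format_ipx_address(network: int, node: bytes, socket: int) -> str:
    """Render an IPX address in the conventional hexadecimal notation:
    8-digit network number, 12-digit node (hardware) address, and
    4-digit socket number, separated by colons."""
    return f"{network:08X}:{node.hex().upper()}:{socket:04X}"
```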

IP

The IP specification was published as RFC 791 in September 1981 and was later ratified as Internet Standard 5.

IP functions

Encapsulation: the packaging of the transport layer data into a datagram;
Addressing: the identification of systems in the network using IP addresses;
Routing: the identification of the most efficient path to the destination system;
Fragmentation: the division of data into fragments of an appropriate size for transmission;
Protocol identification: the specification of the transport layer protocol that generated the data.

IP datagram fields functions


Version (4 bits): specifies the version (4 or 6) of the IP protocol used to create the datagram;
Internet Header Length (IHL, 4 bits): specifies the length of the datagram's header, in 32-bit (4-byte) words, the typical length of a datagram header is five words (20 bytes), but if the datagram includes additional options, it can be longer, which is the reason for having this field;
Type Of Service (1 byte): contains a code that specifies the service priority for the datagram, a rarely used feature that enables a system to assign a priority to a datagram that routers observe while forwarding it through an internetwork, the values provide a trade-off among delay, throughput and reliability;
Total Length (2 bytes): specifies the length of the datagram, including that of the data field and all of the header fields in bytes;
Identification (2 bytes): contains a value that uniquely identifies the datagram, the destination system uses this value to reassemble datagrams that have been fragmented during transmission;
Flags (3 bits): contains bits used to regulate the datagram fragmentation process;
Fragment Offset (13 bits): when a datagram is fragmented, the system inserts a value in this field that identifies this fragment's place in the datagram;
Time To Live (TTL, 1 byte): specifies the number of networks that the datagram should be permitted to travel through on the way to its destination; each router that forwards the datagram reduces the value of this field by one, and if the value reaches zero, the datagram is discarded;
Protocol (1 byte): contains a code that identifies the protocol that generated the information found in the Data field;
Header Checksum (2 bytes): contains a checksum value computed on the IP header fields only (and not the contents of the Data field) for the purpose of error detection;
Source IP Address (4 bytes): specifies the IP address of the system that generated the datagram;
Destination IP Address (4 bytes): specifies the IP address of the system for which the datagram is destined;
Options (variable): this field is present only when the datagram contains one or more of the 16 available IP options, the size and content of the field depends on the number and the nature of the options;
Data (variable): contains the information generated by the protocol specified in the protocol field, the size of the field depends on the data-link layer protocol used by the network over which the system will transmit the datagram.
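The fixed 20-byte portion of the header described above can be expressed as a parser. This Python sketch also verifies the Header Checksum over the header bytes only, as the field list specifies; the function and key names are mine.

```python
import struct

def verify_checksum(header: bytes) -> bool:
    """A header containing a valid Header Checksum field sums to 0xFFFF
    under the one's-complement 16-bit word sum."""
    total = sum(struct.unpack(f">{len(header)//2}H", header))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return total == 0xFFFF

def parse_ipv4_header(packet: bytes) -> dict:
    """Unpack the fixed 20-byte portion of an IPv4 header."""
    (ver_ihl, tos, total_length, identification, flags_frag,
     ttl, protocol, checksum, src, dst) = struct.unpack(
        ">BBHHHBBH4s4s", packet[:20])
    ihl = (ver_ihl & 0x0F) * 4                  # header length in bytes
    return {
        "version": ver_ihl >> 4,
        "header_length": ihl,
        "type_of_service": tos,
        "total_length": total_length,
        "identification": identification,
        "flags": flags_frag >> 13,              # the 3 flag bits
        "fragment_offset": (flags_frag & 0x1FFF) * 8,  # stored in 8-byte units
        "ttl": ttl,
        "protocol": protocol,
        "source": ".".join(str(b) for b in src),
        "destination": ".".join(str(b) for b in dst),
        "options": packet[20:ihl],
        "checksum_ok": verify_checksum(packet[:ihl]),
    }
```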


IP addressing

The IP protocol is unique among network layer protocols because it has its own self-contained addressing system that it uses to identify computers on an internetwork of almost any size. Other network layer protocols (such as IPX) use the hardware addresses coded into network interface adapters to identify computers on a LAN, with a separate address for the network, whereas NetBEUI assigns a name to each computer on the LAN and has no network address. IP addresses are 32 bits long and contain both a network identifier and a host identifier. In TCP/IP parlance, the term host refers to a network interface adapter found in a computer or other device. In most cases, each computer on a network has one IP address, but it is actually the network interface adapter (generally a NIC) that the address represents. A computer with two adapters (such as a router) or one adapter and a modem connection to a network will actually have two IP addresses, one for each interface.

The IP addresses that a computer inserts into the Source IP Address and Destination IP Address fields of the IP header identify, respectively, the computer that created the packet and the one that will eventually receive it. If the packet is intended for a computer on the local network, the Destination IP Address refers to the same computer as the Destination Address in the data-link protocol header. However, if the packet's destination is a computer on another network, the Destination IP Address refers to a different computer because IP is an end-to-end protocol that deals with the entire journey of the data to its ultimate destination, not just a single network hop, as is the case with the data-link layer protocol. Data-link layer protocols cannot work with IP addresses, however, so to actually transmit the datagram, IP has to supply the data-link layer protocol with the hardware address of a system on the local network. To do this, IP uses another TCP/IP protocol, called Address Resolution Protocol (ARP). ARP works by generating broadcast messages that contain an IP address on the local network. The system using that IP address must respond to the broadcast, and the reply message contains the system's hardware address. If the datagram's destination system is on the local network, the IP protocol generates an ARP message containing the IP address of that system. If the destination system is located on another network, IP generates an ARP message containing the address of a router on the local network. Once it has received the ARP reply, the IP protocol on the original system can pass the datagram down to the data-link layer protocol and provide it with the hardware address it needs to build the frame.


IP routing

Routing is the most important and the most complex function of the IP protocol. When a TCP/IP system has to transmit data to a computer on another network, the packets must travel through the routers that connect the networks together. The source and final destination computers in a case like this are called end systems and the routers are called intermediate systems. When the packets pass through an intermediate system, they only travel up through the protocol stack as high as the network layer, where IP is responsible for deciding where to send the packet next. If the router is connected to the network where the destination system is located, it can transmit the packet there, and the packet's journey is over. If the destination system is located on another network, the router sends the packet to another router, which brings the packet one hop closer to its destination. Depending on the complexity of the internetwork, a packet might pass through dozens of routers on the way to its destination.

Because packets only reach as high as the network layer in an intermediate system, the datagrams are not opened and used. The router strips off the data-link layer frame and later builds a new one, but the datagram 'envelope' remains sealed until it reaches its destination. However, each intermediate system does make some changes to the IP header. The most important of these is the TTL field, which is set with a predetermined value by the computer that generates the packet. Each router, as it processes the packet, reduces this value by one. If the TTL value reaches zero, the router discards the packet. This mechanism prevents packets from circulating endlessly around an internetwork in the event of a routing problem.
When a router discards a packet with a TTL value of zero, it generates an error message called a Time To Live Exceeded In Transit message using the Internet Control Message Protocol (ICMP) and sends it to the system where the packet originated. This informs the system that the packet has not reached its destination. There is a utility program called traceroute included with most TCP/IP implementations that uses the TTL field to display a list of the routers that packets are using to reach a particular destination. Traceroute generates a series of packets with successively larger TTL values, so that each router along the path in turn discards a packet and returns an ICMP error message identifying itself. Traceroute assembles the router addresses from the error messages and displays the entire route to the destination.
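The traceroute technique can be sketched without touching real sockets. The simulation below models a path of routers, each decrementing the TTL and "reporting back" via an ICMP-style error when it reaches zero; the router addresses are purely illustrative.

```python
def trace_route(routers, ttl_limit=30):
    """Simulate traceroute: send probes with TTL 1, 2, 3, ... and record
    which hop discards each one.

    routers is the ordered list of router addresses along the path to
    the destination network.
    """
    discovered = []
    for ttl in range(1, ttl_limit + 1):
        hops_remaining = ttl
        for router in routers:
            hops_remaining -= 1          # each router decrements the TTL
            if hops_remaining == 0:
                # Time To Live Exceeded In Transit: this hop identifies itself
                discovered.append(router)
                break
        else:
            break                        # probe made it past every router
    return discovered
```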


IP fragmentation

Routers can connect networks that use different media types and different data-link layer protocols, but to forward packets from one network to another, routers must often repackage the datagrams into different data-link layer frames. In some cases, this is simply a matter of stripping off the old frame and adding a new one, but at other times the data-link layer protocols are different enough to require more extensive repackaging. For example, when a router connects a Token Ring network to an Ethernet network, datagrams arriving from the Token Ring network can be up to 4500 bytes long, whereas the datagrams in Ethernet packets can only be as large as 1500 bytes. To overcome this problem, the router splits the datagram arriving from the Token Ring network into multiple fragments. Each fragment has its own IP header and is transmitted in a separate data-link layer frame. The size of each fragment is based on the Maximum Transmission Unit (MTU) size for the outgoing network. If they encounter a network with an even smaller MTU, fragments can themselves be split into smaller fragments. Once fragmented, the individual parts of a datagram are not reassembled until they reach the end system, which is their final destination.

When it fragments a datagram, IP attaches an IP header to each fragment. The Identification field in each fragment's header contains the same value as the datagram's original header, which enables the destination system to associate the fragments of a particular datagram. The router modifies the value of the Total Length fields to reflect the length of each fragment, and it also changes the value of the More Fragments bit in the Flags field from 0 to 1 in all of the fragments except the last one. The value of 1 in this bit indicates that there are more fragments coming for that datagram. The destination system uses this bit to determine when it has received all of the fragments and can begin to assemble them back into the whole datagram. The Fragment Offset field contains a value that specifies each fragment's place in the datagram, expressed in units of 8 bytes (which is how a 13-bit field can describe offsets within a datagram of up to 65,535 bytes). The first fragment has a value of 0 in this field, and the value in the second fragment is the size of the first fragment's data divided by 8. The third fragment's offset value is the combined size of the first two fragments divided by 8, and so forth. The destination system uses these values to reassemble the fragments in the proper order. Another bit in the Flags field, called the Don't Fragment bit, instructs routers to discard a datagram rather than fragment it. The router returns an ICMP error message to the source system when it discards a packet for this reason.
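The fragmentation rules can be sketched as follows. This illustrative Python function splits a payload for a given MTU, keeping every fragment's data (except the last fragment's) a multiple of 8 bytes, because the 13-bit Fragment Offset field counts 8-byte units; the function name and return shape are mine.

```python
def fragment(payload: bytes, mtu: int, header_len: int = 20):
    """Split a datagram payload into fragments for a link with the given MTU.

    Returns a list of (offset_in_bytes, more_fragments_flag, data) tuples.
    Each fragment's data size, except the last, must be a multiple of
    8 bytes so that its offset can be stored in 8-byte units.
    """
    max_data = (mtu - header_len) // 8 * 8   # round down to an 8-byte multiple
    fragments = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = 1 if offset + len(chunk) < len(payload) else 0
        fragments.append((offset, more, chunk))
        offset += len(chunk)
    return fragments
```

For a 4000-byte payload crossing onto a 1500-byte-MTU Ethernet link, this yields fragments at byte offsets 0, 1480, and 2960, with the More Fragments flag clear only on the last.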


Protocol identification

For the destination system to process the incoming datagram properly, it must know which protocol generated the information carried in the Data field. The Protocol field in the IP header provides this information, using codes that are defined in RFC 1700, 'Assigned Numbers', which contains lists of the many codes used by the TCP/IP protocols. Assigned Numbers contains dozens of protocol codes, most of which are for obsolete or seldom-used protocols.

Common values for the Protocol field

0 IP
1 ICMP
3 Gateway-to-Gateway Protocol (GGP)
6 TCP
8 Exterior Gateway Protocol (EGP)
17 UDP

The protocols that you would most expect to see in the list are TCP and UDP, which are the transport layer protocols that account for much of the IP traffic on a TCP/IP network. However, IP also carries other types of information in its datagrams, including ICMP messages, which notify systems of errors and other network conditions, and messages generated by routing protocols like GGP and EGP, which TCP/IP systems use to automatically update their routing tables.


IP options

IP options are additional header fields that enable datagrams to carry extra information and, in some cases, accumulate information as they travel through an internetwork on the way to their destinations.


Options defined in the IP standard

Loose Source Route: contains a list of router addresses that the datagram must use as it travels through the internetwork; the datagram can use other routers in addition to those listed;
Strict Source Route: contains a complete list of the router addresses that the datagram must use as it travels through the internetwork; the datagram cannot use any routers other than those listed;
Record Route: provides an area in which routers can add their IP addresses when they process the datagram;
Timestamp: provides an area in which routers can add timestamps indicating when they processed the datagram. The source system can supply a list of router addresses that are to add timestamps, or the routers can be allowed to add their own IP addresses along with the timestamps.

August 30, 2005

Wireless networking

IEEE 802.11 physical layer

Wireless LANs support two topologies: an ad hoc topology and an infrastructure topology. The ad hoc or independent topology is one in which computers equipped with wireless network interface adapters communicate directly with each other on a peer-to-peer basis; there is no cabled network involved. This type of network is designed to support only a limited number of computers, such as those in a home or small business. The infrastructure topology is designed to extend the range and flexibility of a normal cabled network by enabling wireless-equipped computers to connect to it using a specialized module called an access point. In some cases, an access point is a computer with a wireless network interface adapter as well as a standard adapter connecting it to a standard cabled LAN, or it can be a dedicated device. The wireless clients communicate with the cabled network using the access point as an intermediary. The access point is essentially a translation bridge because it converts between the wireless network signals and those of the cabled network, preserving the single broadcast domain. As with all wireless communication technologies, distance and environmental conditions can have great effects on the performance realized by the mobile workstations. A single access point can typically support 10 to 20 clients, depending on how heavily they use the LAN, as long as they remain within an approximately 100- to 200-foot radius of the access point. Intervening walls and interference can diminish this performance substantially. To extend the range of the wireless part of the network and provide support for more clients, you can use multiple access points in different locations, or you can use an extension point. An extension point is essentially a wireless signal repeater that functions as a way station between wireless clients and an access point. An IEEE 802.11 LAN is divided into cells, each of which is controlled by a base station. 
The 802.11 standard refers to each cell as a basic service set (BSS) and to each base station as an access point. If the network uses multiple access points, they are connected by a backbone, which the standard calls a distribution system (DS). The DS is usually a cabled network, but it can conceivably be wireless as well.

The IEEE 802.11 standard supports three different types of signals at the physical layer.
Direct Sequence Spread Spectrum (DSSS) is a radio transmission method in which the outgoing signals are modulated using a digital code (called a chipping code) that uses a redundant bit pattern. The end result is that each bit of data is converted into multiple bits, enabling the signal to be spread out over a wider frequency band. The use of DSSS in combination with a technique called complementary code keying (CCK) enables IEEE 802.11b systems to achieve their 11 Mbps transmission rates.
Frequency Hopping Spread Spectrum (FHSS) is a radio transmission method in which the transmitter continuously performs rapid frequency shifts according to a preset algorithm. The receiver performs the exact same shifts to read the incoming signals. The original IEEE 802.11 standard supports FHSS, but IEEE 802.11b does not.
Infrared communications use high frequencies, just below the visible light spectrum. Infrared is a 'line of sight' technology, meaning that the signals cannot penetrate opaque walls and objects. This severely limits the utility of infrared technology, which is why it is rarely used for LAN communications outside of simple links between computers and peripherals, such as printers and handheld devices.
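The spreading idea behind DSSS can be sketched in a few lines of Python. This is an illustrative model only, assuming the 11-chip Barker code that 802.11 DSSS uses at its lower data rates; real radios operate on modulated waveforms, not lists of bits.

```python
# Sketch of DSSS spreading: each data bit is replaced by an 11-chip
# sequence (here, the 11-chip Barker code), so the signal occupies a
# wider frequency band and tolerates narrowband interference.

BARKER_11 = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]  # 11-chip Barker code, as bits

def spread(bits):
    """Spread each data bit: bit 0 sends the code, bit 1 sends its inverse."""
    chips = []
    for bit in bits:
        chips.extend(chip ^ bit for chip in BARKER_11)
    return chips

def despread(chips):
    """Recover data bits by comparing each 11-chip group with the code."""
    bits = []
    for i in range(0, len(chips), 11):
        group = chips[i:i + 11]
        # Count chips that match the code; a strong mismatch means bit 1.
        matches = sum(c == b for c, b in zip(group, BARKER_11))
        bits.append(0 if matches > 5 else 1)
    return bits

data = [1, 0, 1, 1]
assert despread(spread(data)) == data
print(len(spread(data)))  # 44: each of the 4 data bits became 11 chips
```

The redundancy is the point: even if several chips in a group are corrupted, the majority comparison in despread() still recovers the original bit.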


IEEE 802.11 MAC Layer

Like all of the protocols developed by the IEEE 802 working groups, IEEE 802.11 splits the data-link layer into two sublayers, LLC and MAC. The LLC sublayer used to package the network layer data to be transmitted is the same for all of the IEEE 802 protocols. The IEEE 802.11 protocol's MAC sublayer defines the data, control, and management frames used by the protocol, as well as its MAC mechanism. IEEE 802.11 uses a variation on the CSMA/CD MAC mechanism used by Ethernet, called Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). CSMA/CA is similar to CSMA/CD in that computers listen to the network to see if it is in use before they send their data, and if the network is free, the transmission proceeds. Also like CSMA/CD, two computers can transmit at the same time on a CSMA/CA network, causing a collision. The difference between the two MAC mechanisms is that in a wireless environment, the CSMA/CD collision detection mechanism would be impractical, because it would require full-duplex communications. A computer on a twisted-pair Ethernet network assumes that a collision has occurred when an incoming signal arrives over its receive wire pair while it's sending data over the transmit wire pair. Making wireless LAN devices that can transmit and receive signals simultaneously is far more difficult. Instead of detecting collisions as they occur, the receiving computer on a CSMA/CA network performs a CRC check on the incoming packets and, if no errors are detected, transmits an acknowledgment message to the sender. This acknowledgment serves as an indication that no collision has occurred. If the sender does not receive an acknowledgment for a particular packet, it automatically retransmits it until it either receives an acknowledgment or times out.
If the sender still doesn't receive an acknowledgment after a specific number of retransmissions, it abandons the effort and leaves the error correction process to the protocols at the upper layers of the networking stack.
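The sender's side of this acknowledgment scheme can be sketched as a simple retry loop. This is a toy model, not 802.11 code: the ack_received() stand-in and the MAX_RETRIES value are invented for illustration, and the carrier sense and backoff steps are elided.

```python
import random

# Sketch of the CSMA/CA sender loop: transmit, wait for an
# acknowledgment, retransmit on timeout, and give up after a retry
# limit, leaving error recovery to the upper-layer protocols.

MAX_RETRIES = 7   # illustrative limit, not a value from the standard

def ack_received():
    # Stand-in for waiting on the receiver's acknowledgment frame;
    # here it randomly "loses" some acknowledgments to force retries.
    return random.random() > 0.3

def send_with_ack(frame):
    for attempt in range(1, MAX_RETRIES + 1):
        # (carrier sense and the actual transmission would happen here)
        if ack_received():
            return attempt      # delivered; report how many tries it took
    return None                 # abandoned; upper layers must recover

result = send_with_ack(b"data")
print(result)
```

The key contrast with CSMA/CD is visible in the structure: nothing here detects a collision directly; a missing acknowledgment is the only evidence that anything went wrong.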

FDDI

Physical layer

Apart from its speed, which was unprecedented at the time of its introduction, the use of fiber optic cable was the primary reason for FDDI's commercial success. Like other fiber optic protocols, FDDI networks can span much longer distances than copper-based networks and are completely resistant to electromagnetic interference. FDDI supports several different types of fiber optic cable, including the 62.5/125 micron multimode cable that is the industry standard for fiber optic LANs, which provides for network segments up to 100 kilometers long with up to 500 workstations placed as far as 2 kilometers apart. Singlemode fiber optic cables provide even longer segments, with up to 60 kilometers between workstations. The original FDDI standard calls for a ring topology, but unlike token ring networks, this ring is not strictly a logical one implemented in the hub. The computers are actually cabled together in a ring. To provide fault tolerance in the event of a cable break, the network is a double ring that consists of two independent rings, a primary and a secondary, with traffic flowing in opposite directions. A computer that is connected to both rings is called a dual attachment station (DAS), and when one of the rings is broken by a cable fault, the computer switches to the other ring, providing continued full access to the entire network. A double ring FDDI network in this condition is called a wrapped ring. It's also possible to cable an FDDI network in a star topology using a hub called a dual attachment concentrator (DAC). The DAC creates a single logical ring, like a token ring MAU. A computer connected to the DAC is called a single attachment station (SAS). An FDDI network can be deployed using the double ring, the star topology, or both. The double ring is better suited to use as a backbone network and the star to a segment network connecting desktop computers.
To construct an entire enterprise network using FDDI, you create a double ring backbone, to which you connect your servers and other vital computers as DASes. You then connect one or more DACs to the double ring, which you use to attach your workstations. This is sometimes called a dual ring of trees. The DAS servers take full advantage of the double ring's fault tolerance, as do the DACs, whereas the SAS computers attached to the DACs are connected to the primary ring only. If a cable connecting a workstation to a DAC fails, the DAC can remove it from the ring without disturbing communications to the other computers, as on a token ring network. To expand the network further, you can connect additional DACs to ports in existing DACs without limit, as long as you remain within the maximum number of computers permitted
on the network.


Frames

Preamble (PA, 8 bytes): contains a series of alternating 0s and 1s, used for clock synchronization.
Starting Delimiter (SD, 1 byte): indicates the beginning of the frame.
Frame Control (FC, 1 byte): indicates the type of data found in the data field; some of the most common values are as follows:


41, 4F—Station Management (SMT) Frame: indicates that the data field contains an SMT Protocol Data Unit.
C2, C3—MAC frame: indicates that the frame is either a MAC Claim frame (C2) or a MAC Beacon frame (C3), which are used to recover from token passing errors.
50, 51—LLC frame: indicates that the data field contains application data in a standard IEEE 802.2 LLC frame.
Destination Address (DA, 6 bytes): specifies the hardware address of the computer that will receive the frame.
Source Address (SA, 6 bytes): specifies the hardware address of the system sending the frame.
Data (variable): contains network layer protocol data or an SMT header and data or MAC data, depending on the function of the frame.
Frame Check Sequence (FCS, 4 bytes): contains a cyclical redundancy check (CRC) value, used for error detection.
Ending Delimiter (ED, 4 bits): indicates the end of the frame.
End of Frame Sequence (FS, 12 bits): contains three indicators that may be modified by intermediate systems when they retransmit the packet, the functions of which are as follows:


E (Error): indicates that an error has been detected, either in the FCS or in the frame format.
A (Acknowledge): indicates that the intermediate system has determined that the frame's destination address applies to itself.
C (Copy): indicates that the intermediate system has successfully copied the contents of the frame into its buffers.


Because it is a token passing protocol, FDDI also must have a token frame, which contains only the Preamble, plus the Starting Delimiter, Frame Control, and Ending Delimiter fields, for a total of 3 bytes. The token passing mechanism used by FDDI is virtually identical to that of Token Ring, except that the early token release feature that is optional in Token Ring is standard equipment for the FDDI protocol. The third type of frame used on FDDI networks is the station management frame, which is responsible for ring maintenance and network diagnostics.
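The Frame Control values listed above are enough to write a toy frame classifier. This sketch assumes the Preamble has already been stripped by the receiving hardware, so the FC byte follows the 1-byte Starting Delimiter; the addresses and the function name are illustrative, not part of any FDDI API.

```python
# A small lookup based on the Frame Control values listed above; a
# receiver can use the FC byte to decide how to interpret the data field.

FC_TYPES = {
    0x41: "SMT frame", 0x4F: "SMT frame",
    0xC2: "MAC Claim frame", 0xC3: "MAC Beacon frame",
    0x50: "LLC frame", 0x51: "LLC frame",
}

def classify_fddi_frame(frame: bytes) -> str:
    fc = frame[1]                      # byte 0 is the Starting Delimiter
    return FC_TYPES.get(fc, "unknown")

# SD + FC + 6-byte DA + 6-byte SA (all values illustrative)
header = bytes([0x00, 0x50]) + bytes(6) + bytes(6)
print(classify_fddi_frame(header))    # LLC frame
```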

Token ring

Token ring networks use a ring topology, which is implemented logically inside the MAU. The network cables take the form of a star topology, but the MAU forwards incoming data to the next port only. This topology enables data packets to travel around the network from one workstation to the next until they arrive back at the system that originally generated them.
Token ring networks still use a shared medium, however, meaning that every packet is circulated to every computer on the network. When a system receives a packet from the MAU, it reads the destination address from the token ring header to determine if it should pass the packet up through that computer's networking stack. However, no matter what the address, the system returns the packet to the MAU so that it can be forwarded to the next computer on the ring. The physical layer specifications for token ring networks are not as numerous as those for Ethernet, and they are not as precisely standardized. The IEEE 802.5 document contains no physical layer specifications at all. Cabling guidelines are derived from practices established by IBM. Originally, the medium for token ring networks was a cable known as IBM type 1, also called the IBM Cabling System. Type 1 is a heavy, shielded twisted pair (STP) cable that is sold in various lengths, generally with connectors attached. The connector at the MAU end of the cable is a large, proprietary jack called an IBM data connector (IDC) or a universal data connector (UDC). The NICs in the computers use standard DB-9 connectors. Cables with one IDC and one DB-9 connector, which are used to connect a computer to a MAU, are called lobe cables. Cables with IDC connectors at both ends, used for connecting MAUs together, are called patch cables. Type 1 cable is thick, relatively inflexible, and difficult to install in walls and ceilings because of its large, preattached connectors. Type 1 MAUs also require a special IDC key, which is a separate device that you plug into each MAU port and remove to initialize the port before connecting a lobe cable to it. Today, most token ring networks use category 5 UTP cable with standard RJ-45 connectors at both ends, known in the token ring world as type 3 cabling. Type 3 networks use the same connectors for both computers and MAUs, so only one type of cable is needed.
In addition, it's possible to install the network inside walls and ceilings using bulk cable and attach the connectors afterward. Type 3 MAUs also don't require a separate key, as the ports are self-initializing. The only advantages type 1 networks have over type 3 networks are that they can span longer distances and connect more workstations. A type 1 lobe cable can be up to 300 meters long, whereas type 3 cables are limited to 150 meters. Type 1 networks can have up to 260 connected workstations, whereas type 3 networks can have only 72.


Token passing

The MAC mechanism of a token ring LAN, called token passing, is the single most defining element of the network, just as CSMA/CD is for Ethernet. Token passing is an inherently more efficient MAC mechanism than CSMA/CD because it provides each system on the network with an equal opportunity to transmit its data without generating any collisions and without diminished performance at high traffic levels. Other data-link layer protocols, like FDDI, also use token passing as their MAC mechanism. Token passing works by circulating a special packet called a token around the network. The token is only 3 bytes long and contains no useful data. Its only purpose is to designate which system on the network is allowed to transmit its data. In their idle state, computers on a token ring network are in what is known as repeat mode. While in this state, the computer systems receive packets from the network and immediately forward them back to the MAU for transmission to the next port. If a system doesn't return the packet, the ring is effectively broken and network communication ceases. After a designated system (called the active monitor) generates it, the token circulates around the ring from system to system. When a computer has data to transmit, it must wait for a free token to arrive before it can send its data. No system can transmit without being in possession of the token, and because there is only one token, only one system on the network can transmit at any one time. This means that there can be no collisions on a token ring network unless something is seriously wrong. When a computer takes possession of the token, it changes the value of one bit (called the monitor setting bit) and forwards the packet back to the MAU for transmission to the next computer on the ring. At this point, the computer enters transmit mode.
The new value of the monitor setting bit informs the other computers that the network is in use and that they can't take possession of the token themselves. Immediately after the computer transmits the 'network busy' token, it transmits its data packet. As with the token frame transmitted immediately before it, the MAU forwards the data packet to each computer on the ring in turn. Eventually, the packet arrives back at the computer that generated it. At the same time that the sending computer goes into transmit mode, its receive wire pair goes into stripping mode. When the data packet traverses the entire ring and returns to its source, it is the responsibility of the sending computer to strip it from the network. This prevents the packet from circulating endlessly around the ring. The original token ring network design calls for the system transmitting its data packet to wait for the last bit of data to arrive back at its source before it generates a new token by modifying the monitor setting bit in the token frame back to its original value and transmitting it. Today, most 16 Mbps token ring networks have a feature called early token release, which enables workstations to transmit a free token immediately after their data packets. This way, another system on the network can receive a data packet, take possession of the token, and begin transmitting its own data frame before all of the data from the first packet has returned to its source. There are parts of two data frames on the network at the same time, but there is never more than one free token.
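The token passing sequence above can be reduced to a minimal simulation. This is a toy model under obvious simplifications: the token is an index rather than a 3-byte frame, stripping happens instantly, and early token release is not modeled.

```python
# Minimal simulation of token passing: the token circulates from
# station to station; a station with queued data seizes it,
# "transmits" one frame, and releases the token. Only the token
# holder ever transmits, so there are no collisions by construction.

def run_ring(stations, rounds):
    """stations: dict of station name -> list of queued frames."""
    transmissions = []
    names = list(stations)
    holder = 0                              # index of the token holder
    for _ in range(rounds):
        name = names[holder]
        if stations[name]:                  # station has data: seize the token
            frame = stations[name].pop(0)
            transmissions.append((name, frame))
            # The frame circulates the ring and is stripped by its
            # sender, after which a free token is released.
        holder = (holder + 1) % len(names)  # token passes to the next station
    return transmissions

queues = {"A": ["a1", "a2"], "B": [], "C": ["c1"]}
log = run_ring(queues, 6)
print(log)  # [('A', 'a1'), ('C', 'c1'), ('A', 'a2')]
```

Note how station A cannot send a2 immediately after a1: it must wait for the token to come around again, which is exactly the fairness property the text describes.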

Frames

Unlike Ethernet, which uses one frame format for all communications, token ring uses four different frames: data, token, command, and the abort delimiter frame. The largest and most complex of the token ring frames is the data frame. This is the frame that is most comparable to the Ethernet frame, because it encapsulates the data received from the network layer protocol using a header and a footer. The other three frames are strictly for control functions, such as ring maintenance and error notification.

Functions of the fields in the data frame

Start Delimiter (1 byte): contains a bit pattern that signals the beginning of the frame to the receiving system.
Access Control (1 byte): contains bits that can be used to prioritize token ring transmissions, enabling certain systems to have priority access to the token frame and the network.
Frame Control (1 byte): contains bits that specify whether the frame is a data or a
command frame.
Destination Address (6 bytes): contains the 6 byte hexadecimal address of the network interface adapter on the local network to which the packet will be transmitted.
Source Address (6 bytes): contains the 6 byte hexadecimal address of the network interface adapter in the system generating the packet.
Information (up to 4500 bytes): contains the data generated by the network layer protocol, including a standard LLC header, as defined in IEEE 802.2.
Frame Check Sequence (4 bytes): contains a 4-byte checksum value for the packet (excluding the Start Delimiter, End Delimiter, and Frame Status fields) that the receiving system uses to verify that the packet was transmitted without error.
End Delimiter (1 byte): contains a bit pattern that signals the end of the frame, including a bit that specifies if there are further packets in the sequence yet to be transmitted and a bit that indicates that the packet has failed the error check.
Frame Status (1 byte): contains bits that indicate whether the destination system has received the frame and copied it into its buffers.


The token frame is 3 bytes long and contains only the Start Delimiter, Access Control and End Delimiter fields. The Start Delimiter and End Delimiter fields use the same format as in the data frame and the token bit in the Access Control field is set to a value of 1. The command frame (aka MAC frame because it operates at the MAC sublayer, whereas the data frame operates at the LLC sublayer) uses the same basic format as the data frame, differing only in the value of the Frame Control field and the contents of the Information field. The Information field, instead of containing network layer protocol data, contains a 2 byte major vector ID, which specifies the control function the packet is performing, followed by the actual control data itself, which can vary in length. The following major vector IDs indicate some of the most common control functions performed by these packets:

0010—Beacon: beaconing is a process by which systems on a token ring network indicate that they are not receiving data from their nearest active upstream neighbor, presumably because a network error has occurred; it enables a network administrator to more easily locate the malfunctioning computer on the network.
0011—Claim Token: this vector ID is used by the active monitor system to generate a new token frame on the ring.
0100—Ring Purge: this vector ID is used by the active monitor system in the event of an error to clear the ring of unstripped data and to return all of the systems to repeat mode.
The abort delimiter frame consists of only 2 bytes, the same Start Delimiter and End Delimiter fields and uses the same values for those fields as the data and command frames. When a problem occurs, such as an incomplete packet transmission, the active monitor system generates an abort delimiter frame to flush all existing data from the ring.
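The major vector IDs above lend themselves to a simple lookup. The IDs are kept as strings exactly as printed in the list, since the text gives them without a numeric base; the function name is invented for illustration and is not part of any Token Ring API.

```python
# Lookup for the command-frame major vector IDs listed above, mapping
# each ID to the control function it identifies.

MAJOR_VECTORS = {
    "0010": "Beacon",
    "0011": "Claim Token",
    "0100": "Ring Purge",
}

def command_function(vector_id: str) -> str:
    """Return the control function for a major vector ID, as listed above."""
    return MAJOR_VECTORS.get(vector_id, "other")

print(command_function("0010"))  # Beacon
print(command_function("0100"))  # Ring Purge
```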

August 29, 2005

Ethernet

Standards

DIX Ethernet aka thick Ethernet, ThickNet or 10Base5
10 Mbps
RG-8 coaxial
Bus topology

DIX Ethernet II aka thin Ethernet, ThinNet, Cheapernet or 10Base2
Added physical layer: RG-58 coaxial cable

IEEE 802.3 aka 10Base-T
Added UTP

IEEE 802.3u aka Fast Ethernet
100 Mbps specifications

IEEE 802.3z and IEEE 802.3ab aka Gigabit Ethernet
1000 Mbps standards

Both the IEEE 802.3 and DIX Ethernet standards consist of the following three basic components:
physical layer specifications
frame format
CSMA/CD MAC mechanism



Designation   Cable                                      Topology  Speed      Max Length

10Base5       RG-8 coaxial                               Bus       10 Mbps    500 meters
10Base2       RG-58 coaxial                              Bus       10 Mbps    185 meters
10Base-T      Category 3 UTP                             Star      10 Mbps    100 meters
FOIRL         62.5/125 multimode fiber optic             Star      10 Mbps    1000 meters
10Base-FL     62.5/125 multimode fiber optic             Star      10 Mbps    2000 meters
10Base-FB     62.5/125 multimode fiber optic             Star      10 Mbps    2000 meters
10Base-FP     62.5/125 multimode fiber optic             Star      10 Mbps    500 meters
100Base-TX    Category 5 UTP                             Star      100 Mbps   100 meters
100Base-T4    Category 3 UTP                             Star      100 Mbps   100 meters
100Base-FX    62.5/125 multimode fiber optic             Star      100 Mbps   412 meters
1000Base-LX   9/125 singlemode fiber optic               Star      1000 Mbps  5000 meters
1000Base-LX   50/125 or 62.5/125 multimode fiber optic   Star      1000 Mbps  550 meters
1000Base-SX   50/125 multimode fiber optic (400 MHz)     Star      1000 Mbps  500 meters
1000Base-SX   50/125 multimode fiber optic (500 MHz)     Star      1000 Mbps  550 meters
1000Base-SX   62.5/125 multimode fiber optic (160 MHz)   Star      1000 Mbps  220 meters
1000Base-SX   62.5/125 multimode fiber optic (200 MHz)   Star      1000 Mbps  275 meters
1000Base-LH   9/125 singlemode fiber optic               Star      1000 Mbps  10 km
1000Base-ZX   9/125 singlemode fiber optic               Star      1000 Mbps  100 km
1000Base-CX   150-ohm shielded copper cable              Star      1000 Mbps  25 meters
1000Base-T    Category 5 (or 5E) UTP                     Star      1000 Mbps  100 meters

FOIRL = Fiber Optic Inter-Repeater Link


The Fast Ethernet standard defines two types of hubs, Class I and Class II, which must be marked with the appropriate Roman numeral in a circle. Class I hubs connect Fast Ethernet cable segments of different types, such as 100Base-TX to 100Base-T4 or UTP to fiber optic, whereas Class II hubs connect segments of the same type. You can have as many as two Class II hubs on a network, with a total cable length (for all three segments) of 205 meters when using UTP cable and 228 meters using fiber optic cable. Because Class I hubs must perform an additional signal translation, which slows down the transmission process, you can have only one Class I hub on the network, with maximum cable lengths of 200 and 272 meters for UTP and
fiber optic, respectively.


Frame

Preamble (7 bytes) contains 7 bytes of alternating 0s and 1s, which the communicating systems use to synchronize their clock signals.
Start Of Frame Delimiter (1 byte) contains 6 bits of alternating 0s and 1s, followed by two consecutive 1s, which is a signal to the receiver that the transmission of the actual frame is about to begin.
Destination Address (6 bytes) contains the 6-byte hexadecimal address of the network interface adapter on the local network to which the packet will be transmitted.
Source Address (6 bytes) contains the 6-byte hexadecimal address of the network interface adapter in the system generating the packet.
Ethertype/Length (2 bytes) in the DIX Ethernet frame, this field contains a code identifying the network layer protocol for which the data in the packet is intended; in the IEEE 802.3 frame, it specifies the length of the data field (excluding the pad).
Data And Pad (46 to 1500 bytes) contains the data received from the network layer protocol on the transmitting system, which is sent to the same protocol on the destination system; Ethernet frames (including the header and footer, except for the Preamble and Start Of Frame Delimiter) must be at least 64 bytes long, so if the data received from the network layer protocol is less than 46 bytes, the system adds padding bytes to bring it up to the minimum length.
Frame Check Sequence (4 bytes) is a single field that comes after the network layer protocol data and contains a 4-byte checksum value for the entire packet; the sending computer computes this value and places it into the field; the receiving system performs the same computation and compares it to the field to verify that the packet was transmitted without error.
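The field layout above can be exercised by building a frame in Python. This is a hedged sketch, not wire-accurate driver code: the Preamble and Start Of Frame Delimiter are generated by the hardware and so are omitted, the addresses are illustrative, and the exact bit-ordering conventions of the real FCS on the wire are glossed over (zlib's crc32 does use the same CRC-32 polynomial).

```python
import struct
import zlib

# Build a DIX Ethernet II frame from the fields described above:
# destination, source, Ethertype, payload padded to the 46-byte
# minimum, and a 4-byte CRC-32 frame check sequence.

ETHERTYPE_IP = 0x0800
MIN_PAYLOAD = 46

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    if len(payload) < MIN_PAYLOAD:
        payload = payload + bytes(MIN_PAYLOAD - len(payload))  # pad with zeros
    header = dst + src + struct.pack("!H", ethertype)          # 14-byte header
    fcs = struct.pack("<I", zlib.crc32(header + payload))      # 4-byte checksum
    return header + payload + fcs

dst = bytes.fromhex("ffffffffffff")   # broadcast address
src = bytes.fromhex("001122334455")   # illustrative source address
frame = build_frame(dst, src, ETHERTYPE_IP, b"hello")
print(len(frame))  # 64: header (14) + padded payload (46) + FCS (4)
```

The arithmetic in the final comment is the reason for the padding rule: a 5-byte payload alone would produce a 23-byte frame, far below the 64-byte minimum the collision detection mechanism depends on.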


Addressing

The Destination Address and Source Address fields use the 6-byte hardware addresses coded into network interface adapters to identify systems on the network. Every network interface adapter has a unique hardware address (also called a MAC address), which consists of a 3-byte value called an organizationally unique identifier (OUI), which is assigned to the adapter's manufacturer by the IEEE, plus another 3-byte value assigned by the manufacturer itself.
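The OUI split described above is easy to demonstrate. The address used here is illustrative, not a registered assignment.

```python
# Split a MAC address into its two halves: the first 3 bytes are the
# organizationally unique identifier (OUI) assigned to the manufacturer
# by the IEEE; the last 3 bytes are assigned by the manufacturer itself.

def split_mac(mac: str):
    octets = mac.split(":")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, nic = split_mac("00:11:22:33:44:55")   # illustrative address
print(oui)   # 00:11:22  (manufacturer's IEEE-assigned OUI)
print(nic)   # 33:44:55  (manufacturer-assigned adapter ID)
```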
Ethernet, like all data-link layer protocols, is concerned only with transmitting packets to another system on the local network. If the packet's final destination is another system on the LAN, the Destination Address field contains the address of that system's network adapter. If the packet is destined for a system on another network, the Destination Address field contains the address of a router on the local network that provides access to the destination network. It is then up to the network layer protocol to supply a different kind of address (such as an Internet Protocol [IP] address) for the system that is the packet's ultimate destination.


Ethertypes

The 2-byte field after the Source Address field is the primary difference between the DIX Ethernet and IEEE 802.3 standards. For any network that uses multiple protocols at the network layer, it is essential for the Ethernet frame to somehow identify which network layer protocol has generated the data in a particular packet. The DIX Ethernet frame does this simply by specifying an Ethertype in this field.

Common Ethertype values

Protocol Ethertype
IP 0800
X.25 0805
Address Resolution Protocol (ARP) 0806
Reverse ARP 8035
AppleTalk on Ethernet 809B
NetWare Internetwork Packet Exchange (IPX) 8137


IEEE 802.3 takes a different approach. In this frame, the field after the Source Address specifies the length of the data in the packet. The frame uses an additional component, the Logical Link Control (LLC), to identify the network layer protocol. The IEEE's 802 working group is not devoted solely to the development of Ethernet-like protocols. In fact, there are other protocols that fit into the IEEE 802 architecture, the most prominent of which (aside from IEEE 802.3) is IEEE 802.5, which is a Token Ring–like protocol. To make the IEEE 802 architecture adaptable to these various protocols, the data-link layer is split into two sublayers: LLC and MAC.
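A receiver can tell the two frame formats apart from this one field: because the maximum data field length is 1500 bytes and registered Ethertype values start at 0x0600 (1536), the value itself reveals which format is in use. The sketch below assumes raw frame bytes with the hardware-level Preamble already stripped; the function name is invented for illustration.

```python
import struct

# Distinguish a DIX Ethernet frame from an IEEE 802.3 frame by
# inspecting the 2-byte field after the source address: values of
# 0x0600 (1536) or greater are Ethertypes, values of 1500 or less
# are data-field lengths.

def interpret_type_length(frame: bytes) -> str:
    value = struct.unpack("!H", frame[12:14])[0]  # field after the addresses
    if value >= 0x0600:
        return f"DIX Ethernet, Ethertype 0x{value:04X}"
    return f"IEEE 802.3, data length {value}"

header = bytes(12) + struct.pack("!H", 0x0800)    # illustrative IP frame
print(interpret_type_length(header))  # DIX Ethernet, Ethertype 0x0800
```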

The MAC sublayer is the part that contains the elements particular to the IEEE 802.3 specification, such as the Ethernet physical layer options, the frame, and the CSMA/CD MAC mechanism. The functions of the LLC sublayer are defined in a separate document, published as IEEE 802.2. This same LLC sublayer is also used with the MAC sublayers of other IEEE 802 protocols, such as 802.5.

The LLC standard defines an additional 3-byte or 4-byte subheader that is carried within the Data field, which contains service access points (SAPs) for the source and destination systems. These SAPs identify locations in memory where the source and destination systems store the packet data. To provide the same function as the Ethertype field, the LLC subheader can use a SAP value of 170, which indicates that the Data field also contains a second subheader called the Subnetwork Access Protocol (SNAP). The SNAP subheader is 5 bytes long and contains a 2-byte Local Code that performs the same function as the Ethertype field in the Ethernet II header. It is typical for computers on a Transmission Control Protocol/Internet Protocol (TCP/IP) network to use the Ethernet II frame because the Ethertype field performs the same function as the LLC and SNAP subheaders and saves 8 to 9 bytes per packet. Microsoft Windows servers and clients automatically negotiate a common frame type when communicating, and when you install a Novell NetWare server, you can select the frame type you want to use. There are two crucial factors to be aware of when it comes to Ethernet frame types. First, computers must use the same frame type to communicate. Second, if you are using multiple network layer protocols on your network, such as TCP/IP for Windows networking and IPX for NetWare, you must use a frame type that contains an Ethertype or its functional equivalent, such as Ethernet II or Ethernet SNAP.

CSMA/CD

The MAC mechanism is the single most defining element of the Ethernet standard. A protocol that is very similar to Ethernet in other ways, such as the short-lived 100VG-AnyLAN, is placed in a separate category because it uses a different MAC mechanism. CSMA/CD may be a confusing name, but the basic concept is simple. Only when you get into the details do things become complicated. When an Ethernet system has data to transmit, it first listens to the network to see if it is in use by another system. This is the carrier sense phase. If the network is busy, the system does nothing for a given period and then checks again. If the network is free, the system transmits the data packet. This is called the multiple access phase because all of the systems on the network are contending for access to the same network medium.

Even though an initial check is performed during the carrier sense phase, it is still possible for two systems on the network to transmit at the same time, causing a collision. For example, when a system performs its carrier sense check, another computer may have already begun transmitting, but its signal has not yet reached the sensing system. The sensing system then transmits its own packet and the two packets collide somewhere on the cable. When a collision occurs, both packets are discarded and the systems must retransmit them. These collisions are a normal and expected part of Ethernet networking, and they are not a problem unless there are too many of them or the computers are unable to detect them.

The collision detection phase of the transmission process is the most important part of the operation. If the systems can't tell when their packets collide, corrupted data may reach the destination system and be treated as valid. Ethernet networks are designed so that packets are large enough to fill the entire network cable with signals before the last bit leaves the transmitting computer. This is why Ethernet packets must be at least 64 bytes long, systems pad out short packets to 64 bytes before transmission, and the Ethernet physical layer guidelines impose strict limitations on the lengths of cable segments. As long as a computer is still in the process of transmitting, it is capable of detecting a collision on the network. On a UTP or fiber optic network, a computer assumes that a collision has occurred if it detects signals on both its transmit and receive wires at the same time. On a coaxial network, a voltage spike indicates the occurrence of a collision. If the network cable is too long or if the packet is too short, a system might finish transmitting before the collision occurs.

When a system detects a collision, it immediately stops transmitting data and starts sending a jam pattern instead. The jam pattern serves as a signal to each system on the network that a collision has taken place, that it should discard any partial packets it may have received, and that it should not attempt to transmit any data until the network has cleared. After transmitting the jam pattern, the system waits a specified period of time before attempting to transmit again. This is called the backoff period, and both of the systems involved in a collision compute the length of their own backoff periods using a randomized algorithm called truncated binary exponential backoff. They do this to try to avoid causing another collision by backing off for the same period of time. Because of the way CSMA/CD works, the more systems you have on a network or the more data the systems transmit over the network, the more collisions there are. Collisions are a normal part of Ethernet operation, but they cause delays, because systems have to retransmit packets. When the number of collisions is minimal, the delays aren't noticeable, but when network traffic increases, the number of collisions increases, and the accumulated delays can begin to have a palpable effect on network performance. For this reason, it is not a good idea to run an Ethernet network at high traffic levels. You can reduce the traffic on the network by installing a bridge or switch or by splitting it into two LANs and connecting them with a router.
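The truncated binary exponential backoff algorithm mentioned above can be written down directly. The limits used here (exponent capped at 10, attempt abandoned after 16 collisions) are the ones defined by IEEE 802.3; the function name is illustrative.

```python
import random

# Truncated binary exponential backoff: after the nth consecutive
# collision, a station waits a random number of slot times between
# 0 and 2^n - 1. The exponent is capped at 10 ("truncated"), and the
# transmission is abandoned after 16 collisions.

def backoff_slots(collision_count: int) -> int:
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    exponent = min(collision_count, 10)
    return random.randint(0, 2 ** exponent - 1)

random.seed(1)
for n in (1, 2, 3, 10):
    print(n, backoff_slots(n))  # the possible range widens with each collision
```

Because each station draws its delay at random from a range that doubles after every collision, the odds of the same two stations colliding again shrink rapidly, which is the whole point of the randomization the text describes.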

August 21, 2005

Directory services

A directory service is a database of user accounts and other information that network administrators use to control access to shared network resources. When users connect to a network, they have to be authenticated before they can access network resources. On a peer-to-peer network, each computer maintains its own user accounts and security settings, whereas client/server networks rely on a centralized security database or directory service.
Flat file directory services are suitable for relatively small installations, but for large enterprise networks, they are difficult to maintain. For this reason, both Novell and Microsoft have developed hierarchical directory services that can support networks of virtually any size and have the fault tolerance and security capabilities needed for large installations.


The NetWare Bindery
The bindery - included in all versions of NetWare up to and including version 3.2 - is a simple database that contains a list of user and group accounts, information about those accounts, and little else. Every NetWare bindery server maintains its own list of accounts, which it uses to authenticate users trying to access its resources. If network users need to access files or printers on more than one NetWare server, they must have an account on each server and each server performs its own user authentication.

Novell Directory Services
NetWare 4.0, released in 1993, was the first version to include NDS, which at that time stood for NetWare Directory Services, but is now Novell Directory Services. NDS was the first hierarchical directory service to be a commercial success. In the years since its initial release, it has matured into a robust enterprise network solution.

A hierarchical directory service is composed of objects, which are arranged in a treelike structure. There are two basic kinds of objects: containers and leaves. Containers are the equivalent of directories in a file system: they hold other objects. Leaves represent network resources, such as users, groups, computers, and applications. All objects are composed of attributes (which NDS calls properties), the nature of which depends on the object's type. For example, the properties of a user object can specify the user's name, password, telephone number, e-mail address, and so on.

The types of objects that you can create in the NDS tree and the properties of those object types are determined by the directory schema. Network applications can modify the schema to create their own specialized object types or add new properties to existing object types, which makes the directory service a flexible tool for application developers. For example, a network backup program can create an object type representing a job queue, one of whose properties contains the list of backup jobs waiting to be executed.

Deploying the directory service is a matter of designing and building an NDS tree, which involves creating a hierarchy of containers into which administrators put the various leaf objects. The tree design can be based on the geographical layout of the network, with containers representing buildings, floors, and rooms, or on the structure of the organization using the network, with containers representing divisions, departments, and workgroups. An NDS tree can also use a combination of the two or any other organizational paradigm the administrator chooses. The important part of the design process is grouping together users with similar network access requirements, to simplify the process of assigning them permissions.

As in a file system, permissions flow down through the NDS tree and are inherited by the objects beneath. Granting a container object permission to access a particular resource means that all of the objects in that container receive the same permission.

Unlike the NetWare bindery, which is server-specific, there is usually only one NDS database for the entire network. When users log on, they log on to NDS, not to a specific server, and one authentication can grant them access to resources located anywhere on the network. This means that administrators need only create and maintain one account for each user, instead of one for each server the user accesses, as in bindery-based NetWare.
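The inheritance rule can be sketched with a toy tree of container and leaf objects. This is a hypothetical illustration of the concept, not the actual NDS API; all names are invented.

```python
class DirectoryObject:
    """Toy model of a directory object: containers hold other objects,
    leaves represent resources such as users."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.permissions = set()   # permissions granted directly on this object
        self.children = []
        if parent:
            parent.children.append(self)

    def effective_permissions(self):
        # Permissions flow down the tree: an object's effective rights
        # are its own plus everything inherited from its containers.
        inherited = self.parent.effective_permissions() if self.parent else set()
        return self.permissions | inherited

root = DirectoryObject("Organization")            # container
sales = DirectoryObject("Sales", parent=root)     # container
alice = DirectoryObject("Alice", parent=sales)    # leaf (user object)

# Grant the Sales container access to a resource; every object in
# Sales, including Alice, inherits it.
sales.permissions.add("read:SalesVolume")
print(alice.effective_permissions())   # {'read:SalesVolume'}
```

Grouping users with similar access needs into one container, as the design advice above suggests, means a single grant on the container replaces many individual grants.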
Because the entire NetWare network relies on NDS, the directory is designed with features that ensure its availability at all times. One can split the NDS database into partitions, which are stored on different servers, to make it easy for a user to log on using a nearby server. In addition, one can create replicas of the partitions and store those on different servers as well. In this way, if a server containing all or part of the NDS tree fails, users can still access the directory from another server.

Windows NT Domains
Windows NT uses a directory service that is more capable and more complex than the NetWare bindery, but it is still not suitable for a large enterprise network. Windows NT networks are organized into domains, which contain accounts that represent the users, groups and computers on the network. A domain is a flat-file database like a bindery, but it is not server-specific. The domain directory is stored on Windows NT servers that have been designated as domain controllers during the operating system installation.

A server can be a Primary Domain Controller (PDC) or a Backup Domain Controller (BDC). Most domains have at least two domain controllers for fault-tolerance purposes. Each domain has one PDC, which holds the master copy of the domain directory, and any number of BDCs, each of which contains a replica of it. Whenever network administrators modify the directory, they are making changes to the files on the PDC. At periodic intervals, the PDC replicates the directory database to the BDCs, keeping them updated with the latest information. This process is called single master replication.
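The data flow of single master replication can be modeled as a one-way push from the PDC's master copy to each BDC. This is a toy model of the concept only, not the actual Windows NT replication mechanism; the class and account names are invented.

```python
class DomainController:
    def __init__(self, name):
        self.name = name
        self.directory = {}  # account name -> account data

pdc = DomainController("PDC1")
bdcs = [DomainController("BDC1"), DomainController("BDC2")]

# Administrators write only to the PDC's master copy...
pdc.directory["jsmith"] = {"full_name": "John Smith", "groups": ["Users"]}

# ...and at periodic intervals the PDC pushes the database out to
# every BDC, keeping the replicas current.
def replicate(master, replicas):
    for bdc in replicas:
        bdc.directory = dict(master.directory)

replicate(pdc, bdcs)
```

The key property of the model is that updates travel in one direction only; contrast this with the multiple master replication of Active Directory described below, where any controller accepts changes.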


It's common for larger Windows NT networks to have multiple domains that can communicate with each other. For this to occur, administrators must create trust relationships between the domains, using a utility called the User Manager for Domains. Trust relationships operate in one direction only. Because you have to create trust relationships manually, managing a large enterprise Windows NT network with many domains can be labor intensive. Users who have to access resources in multiple domains must have a separate account in each domain, just as users of bindery-based NetWare need a separate account on each server.

Active Directory
Microsoft introduced an enterprise directory service in the Windows 2000 Server product line called the Active Directory service. This directory service is similar in structure to NDS in that it uses a hierarchical tree design composed of container and leaf objects. The fundamental unit of organization in Active Directory is still the domain, but now you can group domains together into a tree and even group multiple trees together into a forest. Domains that are in the same tree automatically have bidirectional trust relationships established between them, eliminating the need for administrators to create them manually. The trust relationships are also transitive.

In Windows NT, the domain structure is completely separate from the concept of DNS domains, but in the Active Directory architecture, the two are more similar. Domains in the same tree are named using multiword domain names (just as in DNS) that reflect the tree structure of the directory. The Active Directory architecture still uses domain controllers like Windows NT, but one has a great deal more flexibility in their configuration. In Windows 2000, you can promote any server to a domain controller at any time or demote it back to a standard server. In addition, there are no more PDCs and BDCs. All domain controllers on an Active Directory network function as peers. Administrators can make changes to the Active Directory data on any domain controller and the servers propagate those changes to the other domain controllers throughout the network. This is called multiple master replication.

What is the name of the utility that enables administrators to promote Windows 2000 servers to domain controllers?

Network clients

Windows 95, 98, Me, NT and 2000 include everything needed to connect to a Windows network, including a complete client networking stack that consists of the following major components.


Clients
What these operating systems often call a 'client' is actually a component called a redirector. A redirector is a module that receives requests for file system resources from an application and determines whether the requested resource is located on a local or network drive. It's the redirector that enables opening a network file as easily as opening a local file.
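The redirector's dispatch decision can be sketched as a simple check on the path format: UNC paths (\\server\share\...) go to the network redirector, while drive-letter paths go to the local file system. This is a rough illustration of the concept, not actual Windows code; the function name is invented.

```python
def open_file(path):
    """Toy redirector dispatch: decide whether a file request refers
    to a local drive or a shared network resource."""
    if path.startswith("\\\\"):
        # UNC path: hand the request to the network redirector.
        return ("network", path)
    # Drive-letter or relative path: use the local file system.
    return ("local", path)

print(open_file(r"\\SERVER1\docs\report.txt"))  # routed to the network
print(open_file(r"C:\docs\report.txt"))         # handled locally
```

Because this check happens below the application, a program can open a network file with exactly the same call it uses for a local one, which is the whole point of the redirector.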

Protocol drivers
The Windows protocol drivers implement the protocol suites required for network communications, such as TCP/IP, IPX, or NetBIOS Enhanced User Interface (NetBEUI). In Windows terminology, the singular word protocol is used to refer to components such as TCP/IP and IPX, both of which are actually suites consisting of several different protocols. There are also other software components running on the system (for example, Ethernet) that Windows doesn't refer to as protocols, but that actually are.

Network interface adapter drivers
The network interface adapter driver is a Windows device driver that provides the connection between the network interface adapter and the rest of the networking stack. The combination of the network interface adapter and its driver implement the data-link layer protocol used by the system, such as Ethernet or Token Ring. Windows supports network interface adapters that conform to the Network Driver Interface Specification (NDIS). The various operating systems use different NDIS driver versions.

Services
Although they are not essential to client functionality, Windows includes services that provide additional networking capabilities. For example, to share resources on a Windows system, the File and Printer Sharing for Microsoft Networks service must be installed.

Together with the network interface adapter, these software components provide the functions of all seven layers of the OSI model. A system can have more than one of each component installed, providing alternative paths through the networking stack for different applications. Most of the Windows operating systems include two redirectors: one for Windows networking and one for connecting to NetWare servers. The operating systems include multiple protocol drivers for the same purpose: NetWare connectivity traditionally requires the IPX protocol (although the latest versions of NetWare do support TCP/IP), whereas a Windows network can use TCP/IP or NetBEUI. Windows and NetWare systems usually share the same network medium.

The protocols at the various layers specify the path up or down through the OSI model. When a packet arrives at a workstation from the network, the Ethernet frame contains a code that identifies the network layer protocol that should receive it. The network layer protocol header then specifies a transport layer protocol, and the transport layer header contains a port number that identifies the application that should receive the data.
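This chain of lookups can be sketched with a few dispatch tables. The field values below are real (EtherType 0x0800 for IPv4 and 0x8137 for IPX, IP protocol numbers 6 and 17 for TCP and UDP, TCP port 80 for HTTP), but the tables and function are a simplified sketch of what the networking stack does, not real stack code.

```python
# Each header field selects the protocol at the next layer up.
ETHERTYPES = {0x0800: "IPv4", 0x8137: "IPX"}        # data-link -> network
IP_PROTOCOLS = {6: "TCP", 17: "UDP"}                # network -> transport
TCP_PORTS = {80: "HTTP server", 21: "FTP server"}   # transport -> application

def demultiplex(ethertype, ip_protocol, dest_port):
    """Follow the protocol codes up the stack to find the application
    that should receive the incoming data."""
    return (ETHERTYPES[ethertype], IP_PROTOCOLS[ip_protocol], TCP_PORTS[dest_port])

# An incoming web request: the Ethernet frame says IPv4, the IP header
# says TCP, and the TCP destination port identifies the application.
print(demultiplex(0x0800, 6, 80))  # ('IPv4', 'TCP', 'HTTP server')
```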

To compete successfully with Novell back in the nineties, Windows had to be able to access NetWare resources, so Microsoft developed its own NetWare clients for Windows. Novell subsequently released clients of its own, which shipped with NetWare. Even today, one can choose between the Microsoft client for NetWare that ships with Windows and Novell's client, which can be downloaded from Novell's website.

Microsoft Clients for NetWare
The NetWare clients from Microsoft provided in the Windows operating systems fit into the same networking architecture as the client for Microsoft networking. To access NetWare resources in Windows 2000 Professional, the Client Service for NetWare (CSNW) and the NWLink IPX/SPX/NetBIOS Compatible Transport Protocol must be installed. In Windows 95, 98, or Me, the names of the modules are slightly different: Client for NetWare Networks and IPX/SPX-compatible Protocol.

The CSNW module is a second redirector that can be used along with - or instead of - the Microsoft networking client. When an application requests access to a network resource, the system determines whether the request is for a Windows or NetWare file and sends it to the appropriate redirector. The NWLink protocol module is a reverse-engineered version of Novell's IPX protocols. In most cases, Windows systems use the IPX protocols only to access NetWare servers: the NetWare redirector is connected to the NWLink protocol module, and the Microsoft redirector uses TCP/IP or NetBEUI. Both protocol modules are then connected to the same network interface adapter driver.

Using the Gateway Service for NetWare
The CSNW included with Windows 2000 Professional and Windows NT Workstation provides basic NetWare connectivity, but Windows 2000 Server and Windows NT Server include the Gateway Service for NetWare (GSNW), which expands this functionality. In addition to providing client access to NetWare servers, GSNW also enables Windows systems without an installed NetWare client to access NetWare resources. Once GSNW is installed, the service's client capabilities enable it to connect to NetWare servers. GSNW can then be configured to share those NetWare resources using the system's Microsoft networking capabilities. When a Windows client accesses the share on the Windows NT or Windows 2000 server, the server accesses the files on the NetWare server and relays them to the client.

Novell Clients for NetWare
The Microsoft and Novell clients both provide the same basic functionality, such as access to NetWare volumes and printers and to NDS, but Novell's clients also provide additional capabilities that are helpful to administrators and power users. The primary difference between the Microsoft and Novell clients is that the Novell clients include the NetWare Administrator application, which is the tool that administrators use to create and maintain objects in the NDS database. They also provide additional file management functions and utilities.

Novell Clients for Windows 95, 98, NT, and 2000 consist of modules that fit into the existing Windows networking architecture. Each client includes its own redirector, a genuine Novell IPX protocol module - rather than Microsoft's compatible version - and network interface adapter drivers that conform to the Open Data-link Interface (ODI) standard used by Novell. However, the client can use the NDIS drivers supplied with Windows if one is already installed.

Macintosh
Macs can access network resources hosted by virtually any server operating system, but this is generally not due to the capabilities of the Macintosh or the MacOS operating system itself. To support Macintosh clients, software modules must be installed on the client or on a server. The Windows and NetWare server operating systems have long been able to support the AppleTalk protocols, either built in to the server operating system or through an add-on product. To support Macintosh clients on Windows NT or 2000 servers, Microsoft Services for Macintosh must be installed. This product installs support for the AppleTalk protocols on the server and makes it possible for Macintosh systems to store their files on Windows servers in their native file format. However, Microsoft Services for Macintosh does not permit Macintosh computers to share their own resources with Windows clients; the relationship between the Macintoshes and Windows machines is strictly client/server.

Macs can access NetWare servers in three ways. NetWare ships with support for AppleTalk protocols. When one installs AppleTalk on a NetWare server, Macintosh clients can access the server using their built-in networking capabilities. One can also install the Novell Client for MacOS on a Macintosh computer, which provides it with support for the IPX protocols. A newer product, called Novell Native File Access for Macintosh, enables Macintosh computers to access NetWare drives using the AppleTalk Filing Protocol (AFP) over TCP/IP. No additional client is required on the Macintosh computer.

Macs can access UNIX systems using the standard TCP/IP communication tools that UNIX workstations use among themselves. Virtually all TCP/IP implementations include FTP and Telnet clients and Macintosh systems can use these to access a UNIX computer just like another UNIX computer would.

UNIX
UNIX and Linux systems are capable of functioning as clients of virtually any other operating system. The Windows, NetWare, and Macintosh operating systems do not include native UNIX clients per se, but there are server capabilities built into all of these products that UNIX computers can access, and there are add-ons that provide more comprehensive client access. Because all of the UNIX and Linux variants are based on the TCP/IP protocols, they all include the standard TCP/IP client programs, such as FTP and Telnet. This means that a UNIX client computer can connect to any system running the server versions of these applications. Some of the server operating systems also include other UNIX-compatible services. For example, Windows 2000 includes native support for the line printer remote (LPR) and line printer daemon (LPD) services, which enables Windows and UNIX computers to share printers with each other. However, to provide more complete client connectivity for UNIX computers, most of the server operating systems require the installation of an add-on product. Microsoft Windows Services for UNIX, for example, provides a Windows computer with NFS client and server capabilities, which makes it possible for Windows and UNIX computers to mount each other's file systems. The product also includes a Telnet client and server, Authentication Tools for NFS, a Remote Shell Service, and other UNIX-style utilities. Novell has a similar product that provides NFS client and server capabilities, called NetWare NFS Services.


How can Windows Remote Shell Service be installed?

August 16, 2005

Network operating systems

Windows NT and Windows 2000

File Systems
All network operating systems include a service that makes file sharing possible. One of the most important elements of file sharing is the ability to restrict access to the server files. Windows NT and Windows 2000 both include the NT file system (NTFS) that is specifically designed for this purpose. The MS-DOS–based versions of Windows use the file allocation table (FAT) file system, and Windows NT and Windows 2000 support FAT, too. You can share FAT drives with other users on the network, but the FAT file system's security capabilities are extremely limited. When you create NTFS drives during a Windows NT or Windows 2000 installation, you can grant access permissions for specific files and folders to the users and groups on your network with great precision. NTFS also supports larger amounts of storage than FAT drives.

Services
A service is a program that runs continuously in the background while other operations are running at the same time. Most of the networking capabilities in Windows NT and Windows 2000, particularly the server functions, are provided by services. In most cases, services are configured to load when the system boots, and they remain loaded and running even as users log on and off.

The following services are the core of the operating system's networking capabilities:

Server: enables the system to share its resources, such as files and printers.
Workstation: enables the system to access the shared resources on another computer.
Computer Browser: maintains a list of the shared resources on a network.
Messenger: enables the system to display pop-up messages about activities on other systems.
Alerter: notifies selected users of administrative alerts that occur on the system.
Netlogon: provides secure channels between Windows computers for communications related to the authentication process.

The following services are optional:

Internet Information Service (IIS): provides internet services, such as web and FTP servers.
Windows Internet Naming Service (WINS): resolves NetBIOS names into IP addresses.
Domain Name System (DNS) server: resolves DNS host names into IP addresses.
Dynamic Host Configuration Protocol (DHCP): configures TCP/IP settings on client systems.
Routing and Remote Access Service (RRAS): enables a server to route traffic between two LANs or a WAN and a LAN and provides support for various routing protocols.
Distributed file system (Dfs): enables shared drives on servers all over the network to appear to clients as a single combined share.
Microsoft Cluster Server: enables systems running Windows NT 4 Enterprise Server or Windows 2000 Advanced Server to operate as part of a cluster, a group of servers that work together to provide increased performance and fault tolerance.

Novell NetWare
NetWare is strictly a client/server operating system; it is not DOS-based. NetWare was originally designed primarily to provide clients with access to file and print services, and these remain its primary strengths. Novell Directory Services (NDS) is a full-featured directory service, released in 1993, seven years before Microsoft released Active Directory. Like Windows NT and Windows 2000, NetWare has its own file system that enables you to control access to the server resources with great precision. You can assign access permissions based on either bindery accounts or NDS objects, depending on which version of NetWare you are using. The NetWare file system consists of volumes that you create on server drives. By adding specialized components called name space modules, you can create NetWare volumes that support various client file systems, such as Windows Virtual File Allocation Table (VFAT), Macintosh, and Network File System (NFS). This enables clients to store their files on NetWare servers using their own native formats.

NetWare Protocols
Unlike Windows NT, Windows 2000, and UNIX, which have long since adopted the TCP/IP suite as their native protocol, NetWare still relies heavily on IPX. Fortunately, Microsoft has developed its own protocol, called NWLink, to be compatible with IPX. All of the Windows operating systems can use NWLink to access shared NetWare resources.

NetWare Services
In addition to its file and print services, the latest versions of the software include many other services, such as the following:
Novell Storage Services (NSS): a 64-bit, indexed storage service that can create an unlimited number of logical volumes, each up to 8 terabytes in size.
Novell Distributed Print Services (NDPS): network printing architecture that replaces NetWare's traditional queue-based printing with a single printer object in NDS that provides simplified, centralized administration.
NetWare Internet servers: Web, FTP, News, and Multimedia Servers and Web Search Server that indexes Web sites for easier client access.
DNS and DHCP servers: NetWare now supports TCP/IP in addition to IPX, and it includes DNS and DHCP servers that can resolve host names into IP addresses and configure TCP/IP clients, all from the NetWare platform.
Multiprotocol WAN router: a service that enables a NetWare server to route multiple network layer protocols between two LANs or between a LAN and a WAN.

UNIX
UNIX is a network operating system originally developed in the 1970s, now available in dozens of different versions and variants.
UNIX System V: the descendant of the original UNIX development program started by AT&T in the 1970s. The ownership of UNIX has changed hands over the years, and UNIX System V is now owned by The Santa Cruz Operation, Inc. (SCO).
Berkeley Software Distribution (BSD) UNIX: one of the first variants to splinter off from the original AT&T development effort and it has become one of the most consistently popular UNIX products. The most popular BSD UNIX versions today are FreeBSD, OpenBSD, and NetBSD, all of which are open source products.
Sun Solaris: Sun Microsystems markets Solaris, one of the most popular and user-friendly commercial UNIX operating systems available. Solaris is essentially a modified version of BSD UNIX with elements of SVR4, one of the progenitors of UNIX System V. Solaris also includes Open Windows, one of the better graphical interfaces for UNIX.
Linux: a UNIX-based subculture unto itself, in that there are many different versions, both free and commercial. Originally developed as a school project by a student named Linus Torvalds, Linux is the quintessential open source operating system, because its development and maintenance was almost totally a noncommercial collaboration until quite recently. There are now some Linux versions sold as commercial products with documentation and technical support, but others are still available free of charge.
Hardware-specific UNIX variants: several manufacturers of computer hardware have developed their own UNIX variants, designed specifically to run on their computers. These include Hewlett Packard's HP-UX and IBM's Advanced Interactive Executive (AIX).

Whereas NetWare runs solely on computers with Intel processors and Windows NT and Windows 2000 run on the Intel and Alpha platforms, the various UNIX operating systems run on computers with a wide variety of processors, including Intel, Alpha, Sun Microsystems' proprietary SPARC processor and others.

UNIX is primarily an application server platform, typically associated with Internet services, such as Web, FTP, and e-mail servers. As with Windows NT and Windows 2000 systems, UNIX systems can function as both servers and clients simultaneously. You can use UNIX as a general-purpose LAN server, but it is much more difficult to install and administer than either Windows or NetWare. There are UNIX programs that provide the file and print services needed by LAN users, such as NFS and the line printer daemon (LPD), but they are far from being as easy to use as their Windows NT, Windows 2000, and NetWare equivalents. NetWare's strength is in file and print services, and the strength of UNIX is in its network application capabilities. Windows NT and Windows 2000 fall somewhere between the two.

UNIX operating systems use the peer-to-peer networking model and are based on a small kernel, similar in most of the variants, which is enhanced by the addition of processes such as applications and services. Some of the services that provide UNIX with its networking capabilities are common to nearly all of the UNIX versions, such as NFS, which enables systems to share and access shared files, and familiar networking tools like FTP and Telnet. Because these services are based on TCP/IP protocol standards, other operating systems can use them to interact with UNIX computers.

Macintosh
Apple Macintosh computers have included networking capabilities virtually since their inception. Macintosh computers have long included a network interface called a LocalTalk adapter as part of their standard equipment and the MacOS operating system includes a proprietary protocol suite called AppleTalk. AppleShare is a file and printer sharing solution that enables a Macintosh computer to function as a server and provides the security features needed to password-protect data resources and monitor network activity. The computers on a Macintosh network are divided into zones, which are essentially organizational units that make it easier to locate network resources.

Apple has since moved away from these proprietary solutions toward recognized standards. One can now run network interface adapters that use Ethernet and Token Ring on Macintosh systems, using data-link layer protocols called EtherTalk and TokenTalk, respectively. In addition, Apple has ceased development of the AppleTalk protocols and is concentrating on TCP/IP for network transport services, using products such as Apple Open Transport and AppleShare IP. Because of the universal desire to connect to the Internet, MacOS now uses TCP/IP as its default network protocol suite.

What is a daemon (as in LPD)?

August 15, 2005

Routing

Packet Routing
Routers, also known as gateways, are more selective than hubs, bridges, and switches about the packets they forward to other ports. They don't forward broadcast messages, except in certain specific cases. A router forwards a packet based on the destination address in the network layer protocol header, which specifies the packet's ultimate destination, not the hardware address used at the data-link layer. A router has a routing table that contains information about the networks around it, and it uses this table to determine where to send each packet. If the packet is destined for a system on one of the networks to which the router is connected, the router transmits the packet directly to that system. If the packet is destined for a system on a distant network, the router transmits the packet across one of the adjacent networks to another router. One of the primary functions of a router is to select the most efficient path to a destination, based on the data in its routing tables, such as the number of hops to each network.

In addition to connecting networks at a single location, such as a corporate internetwork, routers can also connect distant networks using WAN links. Organizations with multiple branch offices often connect the networks in those offices by installing a router at each location and connecting the routers together using leased telephone lines or some other WAN technology, such as frame relay. Because each location has a separate broadcast domain, the only packets that pass over the WAN links are those destined for systems on the other networks. This minimizes the amount of traffic passing over those links, and thus their cost. The most common use for a WAN router is connecting a network to an Internet service provider (ISP), providing the computers on the network with access to the Internet. The router is configured to forward all traffic not destined for the local network to the ISP, which relays it to the Internet.


Routing Tables
Unlike bridges and switches, routers cannot compile routing tables from the information in the data packets they process. This is because the routing table contains more detailed information than is found in a data packet and also because the router needs the information in the table to process the first packets it receives after being activated. A router can't forward a packet to all possible destinations the way a bridge can.

Static routing is the process of creating routing table entries manually. A network administrator decides what the router should do when it receives packets addressed to systems on a particular network and adds entries to the routing table that reflect these decisions.

The alternative to static routing is dynamic routing, in which routers use specialized protocols to exchange information about themselves and the networks around them. Routers have direct information about the LANs to which they are connected and they use routing protocols to send that information to other routers. When the routers on an internetwork share the contents of their tables using these protocols, all of the routers can have information about more distant networks as well.

Part of a router's function is to select the most efficient route to each packet's destination. On a relatively small internetwork, there may be only one possible route to any particular destination. However, on a more complex network, administrators often install more than one router on each network to provide alternate routes in case of a malfunction. When multiple routes to a particular destination exist, routers include all of them in their routing tables, along with a value called a metric that specifies the relative efficiency of each route. The nature of the metric depends on the routing protocol used to generate it. In some cases, the metric is simply the number of hops between the router and the destination network. Other protocols use more complex computations to determine the metric.
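Metric-based route selection can be sketched as follows. The table layout, addresses, and hop counts are hypothetical; real routing tables also carry network masks, interfaces, and other fields.

```python
# A routing table can hold several routes to the same destination
# network, each with a metric; the router forwards via the lowest.
routing_table = [
    # (destination network, next hop, metric in hops)
    ("192.168.5.0", "192.168.1.1", 2),
    ("192.168.5.0", "192.168.2.1", 4),  # backup path through another router
    ("192.168.7.0", "192.168.1.1", 1),
]

def best_route(destination):
    """Return the lowest-metric route to the given destination network."""
    candidates = [r for r in routing_table if r[0] == destination]
    if not candidates:
        raise LookupError("no route to " + destination)
    return min(candidates, key=lambda r: r[2])

print(best_route("192.168.5.0"))  # ('192.168.5.0', '192.168.1.1', 2)
```

If the two-hop route's router fails and a routing protocol withdraws that entry, the four-hop backup becomes the best remaining route, which is exactly why administrators install the alternate paths described above.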

Many routers are dedicated, expensive devices, but the Internet Connection Sharing (ICS) feature in Windows enables an ordinary computer to function as a router as well. Every computer with a TCP/IP client has a routing table, even those that are not strictly functioning as routers. For example, when one uses a computer on a LAN to connect to the Internet with a dial-up connection, the system uses its routing table to determine whether requests for network resources should go to the NIC providing the LAN connection or to the modem providing the Internet connection. Even though the system is not providing Internet access to the LAN, it still uses the routing table.

What's a well known alternative for measuring routing efficiency in number of hops?

Switching

Because switches forward data to a single port only, they have two advantages. No collisions occur during unicast transmissions because every pair of computers on the network has what amounts to a dedicated cable segment connecting them. Thus, a switch practically eliminates unnecessary traffic congestion on the network. Another advantage of switching is that each pair of computers has the full bandwidth of the network dedicated to it. This improves the performance of the network without the need for any workstation modifications. In addition, some switches operate in full-duplex mode, which means that two computers can send traffic in both directions at the same time using separate wire pairs within the network cable. This doubles the throughput of a 10 Mbps network to 20 Mbps.

Installing switches
Switches are often found on large networks, where they're used instead of bridges or routers. On a routed network, the backbone must carry the internetwork traffic generated by all the segments. This can lead to heavy traffic on the backbone, even if it uses a faster medium than the segments. On a switched network, computers can be connected to individual workgroup switches, which are in turn connected to a high-performance backbone switch. Any computer on the network can open a dedicated channel to any other computer, even when the data path runs through several switches. Switching enables computers to communicate directly with other computers, without the need for a shared backbone network.

The problem with replacing all of the routers on a large internetwork with switches is that it creates one huge broadcast domain instead of several small ones. The issue of collision domains is no longer a problem because there are far fewer collisions. However, switches relay every broadcast generated by a computer anywhere on the network to every other computer, which increases the number of unnecessary packets processed by each system.


With a virtual LAN you can create subnets on a switched network that exist only in the switches themselves. The physical network is still switched, but administrators can specify the addresses of the systems that are to belong to a specific subnet. These systems can be located anywhere because the subnet is virtual and not constrained by the physical layout of the network. When a computer on a particular subnet transmits a broadcast message, the packet goes only to the computers in that subnet, rather than being propagated throughout the entire network. Communication between subnets can be either routed or switched, but all traffic within a
VLAN is switched.
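The broadcast-containment behavior described above can be sketched as follows. This is an illustrative model, not any vendor's actual implementation; the port numbers and VLAN assignments are made up.

```python
# Port number -> VLAN id (assignments are invented for illustration)
vlan_membership = {
    1: 10, 2: 10, 3: 20, 4: 20, 5: 10,
}

def broadcast_ports(source_port):
    """Ports that should receive a broadcast sent from source_port:
    only ports in the sender's VLAN, never the whole physical network."""
    vlan = vlan_membership[source_port]
    return [port for port, v in vlan_membership.items()
            if v == vlan and port != source_port]

print(broadcast_ports(1))  # [2, 5] -- only the other VLAN 10 ports
```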


Layer 3 switching is a variation on the VLAN concept that minimizes the amount of routing needed between the VLANs. When communication between systems on different VLANs is required, a router establishes a connection between the systems and then the switches take over. Routing occurs only when absolutely necessary.

A cut-through switch forwards packets immediately by reading the destination address from their data-link layer protocol headers as soon as they're received and relaying the packets out through the appropriate port with no additional processing. The switch doesn't even wait for the entire packet to arrive before it begins forwarding it. In most cases, cut-through switches use a hardware-based mechanism that consists of a grid of input/output (I/O) circuits that enable data to enter and leave the switch through any port. This is called matrix switching
or crossbar switching.


A store-and-forward switch waits until an entire packet arrives before forwarding it to its destination. This type of unit can be a shared-memory switch, which has a common memory buffer that stores the incoming data from all of the ports, or a bus architecture switch, with individual buffers for each port, connected by a bus. While the packet is stored in the switch's memory buffers, the switch takes the opportunity to verify the data by performing a cyclical redundancy check (CRC). The switch also checks for other problems peculiar to the data-link layer protocol involved, which may result in malformed frames. This checking naturally introduces additional latency into the packet forwarding process.
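The store-and-forward idea can be sketched in a few lines: buffer the whole frame, verify its checksum, and forward only if it is intact. Real Ethernet hardware computes a CRC-32 frame check sequence; Python's `zlib.crc32` is used here as a stand-in for that calculation.

```python
import zlib

def make_frame(payload):
    """Append a 4-byte CRC to the payload, as a frame's check sequence would."""
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def store_and_forward(frame):
    """Return the payload if the CRC checks out, else drop the frame."""
    payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != received_crc:
        return None  # corrupted frame is discarded, never forwarded
    return payload

frame = make_frame(b"hello")
print(store_and_forward(frame))                  # b'hello' -- intact, forwarded
print(store_and_forward(b"jello" + frame[5:]))   # None -- corrupted, dropped
```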

What are malformed frames called?

August 14, 2005

Bridging

Bridging is a technique used to connect networks at the data-link layer. A bridge provides packet filtering at the data-link layer. It only propagates the packets that are destined for the other side of the network. If you have a large LAN that is experiencing excessive collisions or delays due to high traffic levels, you can reduce the traffic by splitting the network
in half with a bridge.

A collision domain is a network (or part of a network) that is constructed so that when two computers transmit packets at precisely the same time, a collision occurs. When you add a new hub to an existing network, the computers connected to that hub become part of the same collision domain as the original network because hubs relay the signals that they receive immediately upon receiving them, without filtering packets.
Bridges do not relay signals to the other network until they have received the entire packet. For this reason, two computers on different sides of a bridge that transmit at the same time do not cause a conflict. The two network segments connected by the bridge are in different collision domains. On an Ethernet network, collisions are a normal and expected part of network operations, but when the number of collisions grows too large, the efficiency of the network decreases because more packets must be retransmitted. When the network is split into two collision domains with a bridge, the reduction in traffic on the two network segments results in fewer collisions, fewer retransmissions, and an improved efficiency.

The broadcast domain is another important concept in bridging technology. A broadcast message is a packet with a special destination address that causes it to be read and processed by every computer that receives it. By contrast, a unicast message is a packet addressed to a single computer on the network and a multicast message is addressed to a group of computers on the network. A broadcast domain is a group of computers that all receive a broadcast message transmitted by any one of the computers in the group.
Broadcasts are a crucial part of the networking process. The most common method computers use to locate a particular system on a LAN is to transmit a broadcast that checks if any computer on the LAN has a specified IP address or NetBIOS name. From the reply message, the broadcaster can determine the desired destination computer's hardware address and send subsequent packets to it as unicasts.
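The broadcast-then-unicast pattern can be sketched as below, in the spirit of ARP-style address resolution: the sender broadcasts "who has this address?", every host on the LAN processes the query, and only the owner replies with its hardware address. The hosts and addresses are invented for the example.

```python
# Hardware (MAC) address -> IP address, for the hosts on this imaginary LAN
hosts = {
    "00:11:22:33:44:55": "192.168.1.10",
    "66:77:88:99:aa:bb": "192.168.1.20",
}

def resolve(target_ip):
    """Simulate a broadcast query: every host 'receives' it, but only
    the host that owns the address answers with its hardware address."""
    for hw_addr, ip in hosts.items():
        if ip == target_ip:
            return hw_addr  # the unicast reply
    return None

print(resolve("192.168.1.20"))  # 66:77:88:99:aa:bb
```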

Adding a bridge separates a network into two different collision domains, but the segments on both sides of the bridge remain part of the same broadcast domain because the bridge always relays all broadcast messages from both sides. The retention of a single broadcast domain is what enables the two network segments to remain part of the same LAN. Using a bridge is not like using a router, which separates the segments into two independent LANs with separate collision and broadcast domains.

Transparent Bridging
Bridges maintain an internal address table that lists the hardware addresses of the computers on both segments. When the bridge receives a packet and reads the destination address in the data-link layer protocol header, it checks that address against its lists. If the address is associated with a segment other than that from which the packet arrived, the bridge relays it to that segment. Originally, network administrators had to manually create the lists of hardware addresses for each segment connected to the bridge. Today bridges use a technique called transparent bridging to automatically compile their own address lists. When you activate a transparent bridge for the first time, it begins processing packets. For each incoming packet, the bridge reads the source address in the data-link layer protocol header and adds it to the address list for the network segment over which the packet arrived. It is common for network administrators to install multiple bridges between network segments to provide redundancy in case of an equipment failure. However, this practice can cause data loss when multiple bridges process the same packets and determine that the source computer is on two different network segments. In addition, it's possible for multiple bridges to forward broadcast packets around the network endlessly, in what is called a bridge loop. To prevent these problems, bridges communicate among themselves using a protocol known as the spanning tree algorithm (STA), which selects one bridge to process the packets. All other bridges on that network segment remain idle until the first one fails.
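The learning and filtering logic of a transparent bridge can be sketched as follows. The bridge records which segment each source address was seen on, filters frames whose destination is on the same segment, forwards frames whose destination is known to be elsewhere, and floods frames with unknown destinations. Addresses and segment names here are illustrative.

```python
class TransparentBridge:
    def __init__(self):
        self.table = {}  # hardware address -> segment it was learned on

    def handle(self, src, dst, arrived_on):
        self.table[src] = arrived_on  # learn the source's segment
        known = self.table.get(dst)
        if known == arrived_on:
            return "filter"            # same segment: don't relay
        if known is None:
            return "flood"             # unknown destination: relay everywhere
        return f"forward to {known}"

bridge = TransparentBridge()
print(bridge.handle("A", "B", "segment-1"))  # flood (B not yet learned)
print(bridge.handle("B", "A", "segment-2"))  # forward to segment-1
print(bridge.handle("A", "B", "segment-1"))  # forward to segment-2
```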

The standard type of bridge is called a local bridge. This is the simplest type of bridge because it doesn't modify the data in the packets. It simply reads the addresses in the data-link layer protocol header and passes the packet on or discards it.

A translation bridge is a data-link layer device that connects network segments using different network media or different protocols. This bridge is more complicated than a local bridge because, in addition to reading the headers in the packet, the bridge strips the data-link layer frame off the packets to be relayed to other network segments and packages them in a new frame for transmission on the other segment. The bridge can thus connect an Ethernet segment to a Fiber Distributed Data Interface (FDDI) segment or connect two different types of Ethernet segments while retaining a single broadcast domain. Because of the additional packet manipulations, translation bridging is slower than local bridging.

A remote bridge is designed to connect two network segments at distant locations using some form of wide area network (WAN) link. The link can be a modem connection, leased telephone line or any other type of WAN technology. The advantage of using a bridge in this manner is that you reduce the amount of traffic passing over the WAN link, which is usually far slower and more expensive than the local network.

What's token ring's alternative for transparent bridging?

Hubs

Hub functions and services

Repeater
A hub amplifies and repeats incoming signals before transmitting them to all other systems. The maximum segment length for a UTP cable on an Ethernet network is 100 meters. Because a hub is a repeater, the distance between two computers on a LAN with one hub can be 200 meters.

Store and forward
A hub contains buffers in which it can retain packets to retransmit them out through specific ports as needed. This is one step short of a switch, which reads the destination address from each incoming packet and transmits it only to the system for which it is intended.

Monitoring
Some intelligent hubs also include management features that enable them to monitor the operation of each of the hub's ports. In most cases, an intelligent hub uses the Simple Network Management Protocol (SNMP) to transmit periodic reports to a centralized network
management console.


Expansion
Hubs can be used to expand a network. When one connects another hub to the uplink port of a four-port hub, more than four computers can be part of the LAN.

Crossover circuit
Another function of a hub is to provide the crossover circuit that connects the transmit pins to the receive pins for each connection between two computers. The uplink port is the one port in the hub that does not have the crossover circuit. When you connect the uplink port in one hub to a regular port in another, you enable the computers on one hub to connect to those on the other, with only a single crossover between them. Without the uplink port, connecting one hub to another would cause a connection between computers on different hubs to go through two crossover circuits, canceling each other out.

UTP cables contain eight wires in four pairs, and each pair consists of a signal wire and a ground. Computers transmit data over one wire pair and receive data over another. In most cases, the other two pairs of wires are left unused. For two computers to communicate, the transmit contacts on each system must be connected to the receive contacts on the other system. In all but exceptional cases, UTP cables are wired straight through, meaning that each of the eight pins in the connector at one end of the cable is wired to the corresponding pin in the connector at the other end. If you were to use a cable like this to connect two computers, you would have the transmit pins connected to the transmit pins and the receive pins to the receive pins, making communication impossible.
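The pin logic behind straight-through versus crossover wiring can be sketched as below. 10/100Base-T uses pins 1 and 2 to transmit and pins 3 and 6 to receive; a straight-through cable maps each pin to itself, while a crossover cable (or the crossover circuit in a hub) maps the transmit pair to the receive pair and vice versa.

```python
# Pin-to-pin mapping for the two cable types (only the four used pins)
STRAIGHT_THROUGH = {pin: pin for pin in (1, 2, 3, 6)}
CROSSOVER = {1: 3, 2: 6, 3: 1, 6: 2}

def connects_transmit_to_receive(cable):
    """True if the cable wires the transmit pair (1, 2) into the
    receive pair (3, 6) at the far end."""
    return cable[1] == 3 and cable[2] == 6

print(connects_transmit_to_receive(STRAIGHT_THROUGH))  # False -- needs a hub's crossover
print(connects_transmit_to_receive(CROSSOVER))         # True  -- direct PC-to-PC link works
```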

MAUs used on Token Ring networks may look similar to Ethernet hubs, but they are not repeaters. They perform certain data-link layer functions that are crucial to network operation. The primary difference in the operation of an MAU is that it does not retransmit all incoming traffic out through the other ports simultaneously. Instead of transmitting in parallel, an MAU transmits each packet serially, to one computer at a time. This process enables the computers in a physical star topology to communicate as though they are cabled in a ring topology. Token Ring computers perform an initialization process at startup that informs the MAU of their presence. Once the MAU receives the proper signals from the NIC in the computer, it figuratively adds the system to the logical ring and begins forwarding packets to it. Ports to which no computer is connected are never added to the ring, and the MAU skips them when forwarding packets. Token Ring MAUs have dedicated Ring In and Ring Out ports that are used to connect one MAU to another.

What are MAU ports called to which no computer is connected?

Troubleshooting NICs

Checkpoints network communication error

1. Network cable (connection)
2. Network interface adapter driver
3. Network configuration parameters (installation of protocols and clients)
4. Diagnostic software that tests the functions of the card
5. Hardware resource configuration

If the NIC diagnostics indicate that the card is functioning properly and, assuming that the software providing the upper layer protocols is correctly installed and configured, the problem is probably caused by the hardware resource configuration. Either there is a resource conflict between the network interface adapter and another device in the computer or the network interface adapter is not configured to use the same resources as the network interface adapter driver. The configuration utility supplied with the adapter shows what resources the network interface adapter is physically configured to use. This information must be compared with the driver configuration. Settings of the card or the driver or even those of another device in the computer might need adjusting to accommodate the card.

What's another common cause of malfunctioning network communication?

IRQs and DMA channels

Configuring a network interface adapter is a matter of setting it to use certain hardware resources:

Interrupt requests (IRQs): hardware lines that peripheral devices use to send signals to the system processor, requesting its attention.

Input/output (I/O) port addresses: locations in memory assigned for use by particular devices to exchange information with the rest of the computer.

Memory addresses: areas of upper memory used by particular devices, usually for the installation of a special-purpose basic input/output system (BIOS).

Direct memory access (DMA) channels: system pathways used by devices to transfer information to and from system memory.

Network interface adapters do not usually use memory addresses or DMA channels, but they always require an IRQ and an I/O port address to communicate with the computer. Improper network interface adapter configuration is one of the main reasons a computer fails to communicate with the network. For a network interface adapter to communicate with the computer in which it is installed, the hardware (adapter) and the software (driver) must both be configured to use the same resources. On older NICs hardware resources are configured by installing jumper blocks or setting Dual Inline Package (DIP) switches. Newer NICs use proprietary software supplied by the manufacturer to set the card's resource settings. This makes it easier to reconfigure the settings in the event of a conflict. Determining the right resource settings for the NIC used to be a trial-and-error process. The Device Manager utility on newer systems lists the resource settings for all of the components in the computer and can even inform you when a newly installed NIC is experiencing a resource conflict. It can be used to find out which device the NIC is conflicting with and which resource needs to be adjusted.

What modern technology makes configuration of NICs easy?

Network interface card

The network interface adapter (called the NIC when installed in a computer's expansion slot) provides the link between a computer and the network. In most cases the NIC plugs into the system's Industry Standard Architecture (ISA), Peripheral Component Interconnect (PCI), or PC Card bus. The network interface itself is a cable jack, such as an RJ45 jack for UTP or a BNC or AUI connector for coaxial cable, but it can also be a wireless transmitter.

NICs perform most of their functions at the data-link and physical layers.

Data encapsulation
The network interface adapter and its driver build the frame around the network layer data. The network interface adapter also reads the contents of incoming frames and passes data to the appropriate network layer protocol.

Signal encoding and decoding
The network interface adapter implements the physical layer encoding scheme that converts the binary data generated by the network layer - now encapsulated in the frame - into electrical voltages, light pulses, or whatever other signal type the network medium uses and converts received signals to binary data for use by the upper layer protocols.

Data transmission and reception
The primary function of the network interface adapter is to generate and transmit signals of the appropriate type over the network and to receive incoming signals. The nature of the signals depends on the network medium and the data-link layer protocol. On a typical LAN, every computer receives all of the packets transmitted over the network. The network interface adapter examines the data-link layer destination address in each packet to see if it is intended for that computer. If so, the network interface adapter passes the packet to the computer for processing by the next layer in the protocol stack; if not, it discards the packet.
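The filtering step described above can be sketched in a few lines: the card keeps frames addressed to its own hardware address or to the broadcast address and silently discards everything else. The addresses are invented for the example.

```python
MY_ADDRESS = "00:11:22:33:44:55"        # this NIC's hardware address (made up)
BROADCAST = "ff:ff:ff:ff:ff:ff"          # the data-link broadcast address

def should_accept(destination):
    """Keep frames addressed to this NIC or to everyone; discard the rest."""
    return destination in (MY_ADDRESS, BROADCAST)

print(should_accept("00:11:22:33:44:55"))  # True  -- unicast to this card
print(should_accept(BROADCAST))            # True  -- broadcast
print(should_accept("66:77:88:99:aa:bb"))  # False -- silently discarded
```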

Data buffering
Network interface adapters transmit and receive data one frame at a time, so they have built-in buffers that enable them to store data arriving either from the computer or from the network until a frame is complete and ready for processing.

Serial/parallel conversion
The communication between the computer and the network interface adapter usually runs in parallel mode (either 16 or 32 bits at a time), depending on the bus the adapter uses. Only USB adapters communicate with the computer serially. Network communications, however, are serial (running one bit at a time), so the network interface adapter is responsible for performing the conversion between the two types of transmissions.

Media Access Control (MAC)
The network interface adapter also implements the MAC mechanism that the data-link layer protocol uses to regulate access to the network medium. The nature of the MAC mechanism depends on the protocol used.

What's the main disadvantage of USB NICs?
Why is PCI preferable to ISA?
What technology is common for laptops?

August 13, 2005

Coaxial, twisted-pair and fiber optic

Coaxial
Coaxial cable uses two conductors: one transmits the data and the other functions as the cable's ground. It is used in bus topologies.
RG-8 aka thick Ethernet and 10Base5: 0.405 inches; N connectors.
RG-58 aka thin Ethernet and 10Base2: 0.195 inches; Bayonet-Neill-Concelman (BNC) connectors.

Twisted-pair
The twists reduce crosstalk, and the connectors are called RJ45. They are the same as the RJ11 connectors used on standard telephone cables, except that they have eight electrical contacts instead of four or six.
Unshielded Twisted Pair (UTP): commonly used in star topologies; eight separate copper conductors, each separately insulated.
Shielded Twisted Pair (STP): commonly used in electromagnetic environments; two pairs of conductors with each pair insulated.

UTP categories by Electronics Industry Association and the Telecommunications Industry Association (EIA/TIA)

1. Voice-grade telephone networks only; not for data transmissions.
2. Voice-grade telephone networks, as well as IBM dumb-terminal connections to mainframe computers.
3. Voice-grade telephone networks, 10-Mbps Ethernet, 4-Mbps Token Ring, 100Base-T4 Fast Ethernet, and 100VG-AnyLAN.
4. 16-Mbps Token Ring networks.
5. 100Base-TX Fast Ethernet (100 Mbps), Synchronous Optical Network (SONET), and Optical Carrier (OC3) Asynchronous Transfer Mode (ATM).
5e. 1000Base-T (Gigabit Ethernet) networks.


STP types by IBM
Type 1A: long runs; two pairs of 22-gauge solid wire with foil shielding.
Type 6A: patch cables; two pairs of 26-gauge stranded wire with foil or mesh shielding.

Token Ring STP networks also use large, bulky connectors called IBM data connectors (IDCs). However, most Token Ring LANs today use UTP cable.

Fiber optic
Benefits: (1) less attenuation (weakening of signals); signals can travel up to 120 kilometers, compared to 500 meters in copper wire; (2) resistance to electromagnetic interference; (3) greater security, since tapping the data is far more difficult.

Fiber optic types
Singlemode: 8.3/125 singlemode fiber; single-wavelength laser.
Multimode: 62.5/125 multimode fiber; light-emitting diode (LED).

Fiber optic cables use one of two connectors: the straight tip (ST) connector or the subscriber connector (SC).

What's the maximum speed of fiber optic?

Topologies

There are six network cabling or wiring patterns.

Bus
Thick Ethernet networks use a single length of coaxial cable with computers connected to it using smaller individual cables called Attachment Unit Interface (AUI) cables.
Thin Ethernet networks use separate lengths of a narrower coaxial cable and each length of cable connects one computer to the next.
When a computer transmits data, the packets travel in both directions and reach every other computer; they are terminated at the ends of the cable. Nowadays bus networks aren't popular due to their lack of fault tolerance. One faulty connector, break in the cable, or faulty terminator affects the functionality of the entire network. Signals that cannot pass through a certain point in the network will not reach any system beyond that point. Additionally, a break results in signals not being terminated, and thus in signal reflection.

Star
Just like the bus topology, the star network has computers transmit data to all other network components. There are several important differences as well: (1) the main component of a star topology is the hub, which forwards all received data to all computers; (2) the cable type, unshielded twisted pair (UTP); and (3) the fault tolerance: when a single connector, cable, or computer fails, it doesn't impact the rest of the network.

Hierarchical star
When one wants to create a star network with more computers than ports in the hub, one can connect extra hubs using the uplink port (branching tree network). A standard 10 Mbps Ethernet network can support up to four hubs, and a Fast Ethernet network can generally support only two.

Ring
Although the ring topology may look like the star topology, and although it passes packets from computer to computer just like the bus topology, there are differences. The ring topology uses a special hub, called a Multistation Access Unit (MAU), that receives data from system one, transmits it to system two, receives it back from system two, and transmits it to the next system, until system one receives the data and removes it from the network. The ring topology doesn't use terminators, and the MAU doesn't transmit signals simultaneously. Since the ring topology is physically designed as a star, special circuitry in the MAU enables the network to function logically as a ring even when one computer or connector fails. The only data-link layer protocol that calls for a physical ring is FDDI. To provide fault tolerance, FDDI uses a double ring.

Mesh
For LANs, the mesh topology is a theoretical concept rather than a real option. Since a mesh topology connects each computer to every other computer, the computers would need an extra network interface adapter for every other computer in any network of more than two systems.
For internetworks, mesh is a real option. Redundant routers create multiple paths between systems and provide fault tolerance when hubs, routers, or cables fail.

Wireless
Ad hoc topologies consist of computers with wireless network interface adapters. In these small networks computers can communicate freely with each other.
In infrastructure topologies, the wireless computers do not communicate directly with each other, but with the cabled network via network access points. This topology is better suited to a larger network that has only a few wireless computers, such as laptops belonging to traveling users. These users have no need to communicate with each other. They use wireless technology to access servers and other resources on the corporate network.

What's the maximum speed of wireless networking?

August 9, 2005

Application layer

The application layer is the entrance point of programs to the OSI model. Most application layer protocols provide services that programs use to access the network, such as the Simple Mail Transfer Protocol (SMTP), which most e-mail programs use to send e-mail messages. In some cases, as with File Transfer Protocol (FTP), the application layer protocol is a program in itself.

What functions do application layer protocols often include?

Presentation layer

The presentation layer has one function: the translation of syntaxes between different systems. Computers don't necessarily always use the same (abstract) syntax. The presentation layer enables them to negotiate a common (transfer) syntax for the network communications. When called for, the systems can select a transfer syntax that provides additional services such as data compression or encryption. The receiving computer translates the transfer syntax back into its abstract syntax.

Where is the presentation layer defined?

Session layer

The session layer provides 22 services. The two most important are dialog control and dialog separation. Dialog control is the selection of a mode that systems use to exchange messages. The systems can select two-way alternate (TWA) mode or two-way simultaneous (TWS) mode. In TWA mode the systems exchange a token and only the system in possession of this token is permitted to transmit data. In TWS mode systems can transmit data at any time, even simultaneously.

Dialog separation is the process of creating checkpoints in a data stream that enable communicating systems to synchronize their functions. The difficulty of checkpointing depends on whether the dialog is in TWA or TWS mode. TWA dialogs perform minor synchronizations that require only a single exchange of checkpointing messages, but TWS dialogs perform major synchronizations using a major/activity token.

What are examples of separate session layer protocols?

Transport layer

Transport layer protocols complement network layer protocols by establishing a connection before data transmission (the three-way handshake). These protocols, for example TCP, are called connection-oriented. Furthermore, they provide additional services such as packet acknowledgment, data segmentation, flow control, and end-to-end error detection and correction. Connection-oriented protocols are used for the transmission of large amounts of data that can't tolerate a single bit error. The disadvantage of such reliable protocols is the amount of extra data traffic caused by large headers (TCP has a 20-byte header).

A well-known connectionless protocol is the User Datagram Protocol (UDP), which has an 8-byte header. These protocols are used for brief transactions that consist of single requests and responses. The data are transmitted without verifying whether the receiving system is ready.
Both types also contain destination and source information in the form of port
numbers of applications.
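The header-size difference mentioned above can be made concrete with a back-of-the-envelope calculation. This sketch counts only transport-layer headers over many small messages; it ignores acknowledgments, retransmissions, and lower-layer headers, which would widen the gap further for TCP.

```python
TCP_HEADER = 20  # bytes, minimum (options can make it larger)
UDP_HEADER = 8   # bytes, fixed

def header_overhead(messages, payload_size, header_size):
    """Fraction of transmitted transport-layer bytes spent on headers."""
    total = messages * (payload_size + header_size)
    return messages * header_size / total

# 1000 small 50-byte messages:
print(round(header_overhead(1000, 50, TCP_HEADER), 2))  # 0.29 -- ~29% header bytes
print(round(header_overhead(1000, 50, UDP_HEADER), 2))  # 0.14 -- ~14% for UDP
```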


What transport protocols does the IPX suite provide?

August 8, 2005

Network layer

The network layer (IP, IPX or NetBEUI) has four functions: addressing, fragmenting, routing and identifying the transport layer.

The network layer header contains the source address and the ultimate destination address. When one types the URL of a webpage in a browser, the network layer header specifies the address of the webserver, while the data-link layer header specifies the router that connects the PC or LAN to the Internet. The data-link layer header changes at each hop before the packet reaches its ultimate destination. IP addresses are assigned by network administrators or DHCP and identify both the network and the system on the network. IPX identifies the network and uses the hardware address to locate the system on the network. NetBEUI locates computers by using the NetBIOS names assigned during installation.

Fragmentation is splitting datagrams into smaller packets. This is necessary because datagrams pass through many different networks with different protocols. For instance, a 4500-byte packet from a Token Ring network that has to cross an Ethernet network must be split into three packets of at most 1500 bytes each, which is the maximum size for the Ethernet data-link protocol. Fragmented packets can be fragmented further and are reassembled only when they reach their final destination.
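The fragmentation step can be sketched as below: a large datagram is split into pieces no bigger than the next network's maximum size. Header overhead is ignored for simplicity, so the chunk sizes are only approximate compared to real IP fragmentation.

```python
def fragment(datagram, max_size):
    """Split a datagram into chunks of at most max_size bytes."""
    return [datagram[i:i + max_size] for i in range(0, len(datagram), max_size)]

token_ring_packet = bytes(4500)            # a 4500-byte Token Ring payload
pieces = fragment(token_ring_packet, 1500)  # Ethernet's 1500-byte maximum
print(len(pieces))                          # 3 fragments
print([len(p) for p in pieces])             # [1500, 1500, 1500]
```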

Routing enables datagrams to use the most efficient path to their ultimate destination.

The network header identifies the transport layer protocol so that the receiving end-system can pass the datagram to the correct transport layer protocol.

How does routing enable efficient data traffic?

Data-link layer

The data-link layer encapsulates data from the network layer by adding a header and a footer. The data-link frame contains the source and destination hardware addresses assigned to network interface adapters by manufacturers. These addresses always refer to systems on the same LAN.

The data-link layer (ethernet, token ring and FDDI) defines the Media Access Control (MAC) mechanism (CSMA/CD, CSMA/CA or token passing), identifies the network layer protocol that generated the data field and provides error detection in the form of a Cyclical
Redundancy Check (CRC).

The MAC mechanism provides systems with an equal opportunity to transmit data while minimising packet collisions. Packet collisions occur when two systems on a half-duplex network transmit data at the same time.

The data-link layer performs a CRC on the data from the network layer and adds the result to the footer. The receiving system performs the same CRC and compares it with the value in the footer. If the values don't match, the data is corrupt and is discarded.

What's the main difference between the data-link and network layer?

Physical layer

The physical layer of the Open Systems Interconnection (OSI) model defines: the topology (bus, star, and ring), the kind of hardware (network interface adapters, hubs etc.), the network medium (copper, optic fiber or wireless) and the signaling scheme. The signaling scheme is the pattern of electrical charges or light pulses used to encode the binary data generated by the upper layers. Ethernet uses a Manchester encoding and Token Ring uses
Differential Manchester.
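The Manchester encoding mentioned above can be sketched as follows: every bit period contains a mid-bit transition, so the signal carries its own clock. This sketch uses the convention that a 0 is encoded as high-then-low and a 1 as low-then-high, with each letter representing half a bit period.

```python
def manchester_encode(bits):
    """Encode a bit string into half-bit signal levels (H = high, L = low)."""
    signal = []
    for bit in bits:
        # A 1 is a low-to-high transition; a 0 is high-to-low.
        signal.extend(["L", "H"] if bit == "1" else ["H", "L"])
    return "".join(signal)

print(manchester_encode("1011"))  # LHHLLHLH -- one transition in the middle of every bit
```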

On LANs, the physical layer specifications are included in the data-link layer standards and in a document called EIA/TIA 568A. Data-link layer protocols for WANs, such as the Serial Line Internet Protocol (SLIP) and the Point-to-Point Protocol (PPP), don't include physical layer information.

EIA/TIA 568A aka "Commercial Building Telecommunications Cabling Standard" was published by the American National Standards Institute (ANSI), the Electronics Industry Association (EIA) and the Telecommunications Industry Association (TIA). This document includes detailed specifications for installing cables for data networks in a commercial environment, including the required distances from sources of electromagnetic interference and other general cabling policies.

Why does the data-link layer for LAN define the physical layer?

August 7, 2005

Broadband & baseband

A coaxial TV cable carries signals for dozens of TV channels simultaneously and often provides Internet access as well. A cable that carries multiple signals at the same time is a broadband connection. Another example of broadband technology is telephony. In the Public Switched Telephone Network (PSTN), the caller establishes a connection (circuit) to the receiver, and this circuit remains intact for the duration of the call. The network uses circuit switching to carry many phone calls at once.

Most LANs use baseband technology, which means that the cable connecting the systems carries only one signal at a time. Instead of circuit switching, LANs use packet switching.

What is this and why is it used on LANs?