This chapter covers the following topics:
Identifying Network Requirements and the Need for Quality of Service (QoS)
Understanding Why QoS Is Needed in Networks That Have Ample Bandwidth
Describing the IntServ and DiffServ QoS Architectures
Understanding the QoS Components: Classification, Marking, Traffic Conditioning, Congestion Management, and Congestion Avoidance
Applying QoS in Each Submodule of the Enterprise Composite Network Model
Introducing WAN QoS Features Such As Low-Latency Queuing (LLQ) and IP RTP Priority
Cisco Catalyst switches provide a wide range of QoS features that address the needs of voice, video, and data applications sharing a single infrastructure. Cisco Catalyst QoS technology lets you implement complex networks that predictably manage services to a variety of networked applications and traffic types.
Using the QoS features and services in Cisco IOS and Cisco CatOS software, you can design and implement networks that conform to either the Internet Engineering Task Force (IETF) integrated services (IntServ) model or the differentiated services (DiffServ) model. Cisco switches provide for differentiated services using QoS features such as classification and marking, traffic conditioning, congestion avoidance, and congestion management.
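For example, the DiffServ model relies on each switch either trusting or rewriting the per-hop marking carried in every packet. The following minimal sketch, using generic Cisco IOS commands (the interface name is illustrative, and defaults vary by platform and software release), enables QoS globally on a Catalyst switch and trusts the DSCP markings received on an uplink port:

  Switch(config)# mls qos
  Switch(config)# interface GigabitEthernet0/1
  Switch(config-if)# mls qos trust dscp

Without a trust statement, several Catalyst platforms re-mark ingress traffic to the port default (typically DSCP 0) once QoS is globally enabled, so where you place the trust boundary is an important design decision.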
Table 10-1 lists the predominant Catalyst switches at the time of publication. Each Catalyst switch supports each QoS component with specific restrictions and caveats. For brevity, this chapter focuses strictly on QoS using Cisco IOS. In addition, the examples, caveats, and restrictions discussed in this chapter involve the Catalyst 3550 and 6500 families of switches. Refer to the product-configuration guides or release notes for other Catalyst switches for the latest information regarding QoS-supported features and configurations. For information regarding Catalyst QoS using Cisco CatOS, refer to the configuration guides for those specific Catalyst switches on Cisco.com.
Table 10-1 Leading Catalyst Switches Applicable to This Chapter
Cisco IOS-Based Catalyst Switches
Catalyst 2940, 2950, 2955, and 2970
Catalyst 3550, 3560, and 3750
Catalyst 4000 or 4500 with Supervisor II+, III, IV, or V
Catalyst 6500 with Supervisor Engine I with MSFC or MSFC2
Catalyst 6500 with Supervisor Engine II with MSFC2
Catalyst 6500 with Supervisor Engine 720 with MSFC3
Cisco IOS on routers and switches supports many QoS capabilities, including the following:
Control over resources: You have control over which network resources (bandwidth, equipment, wide-area facilities, and so on) are being used. For example, voice, video, and data traffic may share a link, with each type competing for link bandwidth. QoS helps control the use of these resources (for example, by dropping low-priority packets), thereby preventing low-priority traffic from monopolizing link bandwidth and affecting high-priority traffic such as voice.
More efficient use of network resources: By using network analysis, management, and accounting tools, you can determine how traffic is handled and which traffic experiences latency, jitter, and packet loss. If traffic is not handled optimally, you can use QoS features to adjust switch behavior for specific traffic flows.
Tailored services: The control and visibility provided by QoS enable Internet service providers to offer carefully tailored grades of service differentiation to their customers. For example, a service provider can offer a different SLA for a customer website that receives 3000 to 4000 hits per day than for another customer site that receives only 200 to 300 hits per day.
Coexistence of mission-critical applications: QoS technologies ensure that the mission-critical applications most important to a business receive the most efficient use of the network. Time-sensitive multimedia and voice applications require bandwidth and minimized delays, whereas other applications on a link receive fair service without interfering with mission-critical traffic.
This chapter discusses the preceding features with respect to Catalyst switches. It begins by examining the need for QoS in networks with sufficient bandwidth, because it is a common misconception that QoS is unnecessary when bandwidth is plentiful. Subsequent sections discuss the QoS components and features available on Catalyst switches from the perspective of the Catalyst 6500 and 3550 families of switches, followed by recommendations for deploying those components in the submodules of the Enterprise Composite Network Model. The chapter closes with a brief look at several WAN QoS features applicable to WAN interfaces on Catalyst switch modules.
The Need for QoS
As introduced in the preceding section, several network design properties may affect performance even when adequate bandwidth is available throughout a multilayer switched network. The following design properties may result in congestion even in networks with virtually unlimited bandwidth; Figure 10-1 illustrates them.
Figure 10-1 The Need for QoS
Ethernet speed mismatch: Network congestion may occur when devices operating at different speeds communicate. For example, a Gigabit Ethernet-attached server sending traffic to a 100-Mbps Ethernet-attached server may cause congestion at the switch egress interface leading to the 100-Mbps device because of the buffer limitations of the switch.
Many-to-one switching fabrics: Network congestion may occur when many switches aggregate into one. For example, when multiple access-layer switches aggregate into a distribution-layer switch, the sum of the switching fabric bandwidth of all the access-layer switches generally exceeds the switching fabric capability of the distribution-layer switch.
Aggregation: Network congestion may occur when multiple Ethernet-attached devices communicate through a single connection or with a single network device or server.
Anomalous behavior: Network congestion may occur because of anomalous behavior or events. Faulty hardware or software on any network device may cause a broadcast storm or another type of network storm, yielding congestion on multiple interfaces. In this context, faulty software includes computer worms and viruses, which may cause packet storms that congest enterprise and even service provider networks. QoS can mitigate and control network behavior during these anomalous events well enough that VoIP phone calls continue unaffected, even during a packet storm caused by an Internet worm, until the anomaly is resolved.
Congestion greatly affects network availability and stability, but it is not the sole factor in these problem areas. All networks, including those without congestion, may experience the following three network availability and stability problems:
Delay (or latency): The amount of time it takes for a packet to reach its destination.
Delay variation (or jitter): The variation in interpacket latency within a stream over time.
Packet loss: The measure of packets lost in transit between any given source and destination.
NOTE
These factors are crucial when deploying Cisco AVVID applications. Each network service places different expectations on the network. For example, VoIP applications require low latency and low, consistent jitter, whereas storage protocols such as FCIP and iSCSI require very low packet loss but are less sensitive to jitter.
Latency, jitter, and packet loss may occur even in multilayer switched networks with adequate bandwidth. As a result, every multilayer switched network design needs to include QoS. A well-designed QoS architecture helps prevent packet loss while minimizing latency and jitter. The following sections discuss these factors in more detail, as well as other benefits of QoS, such as security and mitigating the effects of viruses and worms through traffic conditioning.
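As an illustration of how traffic conditioning can contain anomalous traffic, the following hedged sketch uses generic Cisco IOS Modular QoS CLI commands to rate-limit all otherwise-unclassified IP traffic arriving on an access port. The class name, policy name, rate, and interface are illustrative only, and supported police rates and actions vary by Catalyst platform:

  Switch(config)# access-list 100 permit ip any any
  Switch(config)# class-map match-all UNCLASSIFIED
  Switch(config-cmap)# match access-group 100
  Switch(config)# policy-map LIMIT-ANOMALOUS
  Switch(config-pmap)# class UNCLASSIFIED
  Switch(config-pmap-c)# police 1000000 8000 exceed-action drop
  Switch(config)# interface FastEthernet0/24
  Switch(config-if)# service-policy input LIMIT-ANOMALOUS

A policer such as this one cannot distinguish a worm from legitimate data, but it does cap the aggregate rate that any single access port can inject into the network during a packet storm.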
Latency
End-to-end delay, or latency, between any given sender and receiver comprises two types of delay:
Fixed-network delay: Includes encoding and decoding time and the latency required for the electrical or optical signals to travel the media en route to the receiver. Generally, applying QoS does not affect fixed-network delay because fixed-network delay is a property of the medium. Upgrading to higher-speed media, such as 10 Gigabit Ethernet, or to newer network hardware with lower encoding and decoding delays may reduce fixed-network delay, depending on the application.
Variable-network delay: Refers to network conditions, such as congestion, that affect the overall latency of a packet in transit from source to destination. Applying QoS does affect variable-network delay.
In brief, the following list details the types of delay that induce end-to-end latency (note that the first four types of delay are fixed delay):
Packetization delay: The amount of time that it takes to segment, sample, and encode signals, process data, and turn the data into packets.
Serialization delay: The amount of time that it takes to place the bits of a packet, encapsulated in a frame, onto the physical media (see the worked example following this list).
Propagation delay: The amount of time it takes for a transmitted bit to travel across the physical medium to the far end.
Processing delay: The amount of time it takes for a network device to take a frame from an input interface, place it into a receive queue, and then place it into the output queue of the output interface.
Queuing delay: The amount of time a packet resides in the output queue of an interface.
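As a simple worked example of fixed delay (numbers chosen for illustration), serialization delay is the frame size divided by the link rate: a 1500-byte frame on a 100-Mbps Fast Ethernet link takes (1500 x 8) bits / 100,000,000 bps = 120 microseconds to serialize, whereas the same frame on a Gigabit Ethernet link takes only about 12 microseconds. Propagation delay, by contrast, depends on distance rather than link speed; at roughly 5 microseconds per kilometer of fiber, a 100-km metro link adds about 500 microseconds of delay regardless of bandwidth. Faster links therefore reduce serialization delay but not propagation delay.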
Of all the delay types listed, queuing is the delay over which you have the most control with QoS features in Cisco IOS. The other types of delay are not directly affected by QoS configurations. For this reason, this chapter focuses mostly on queuing delay.
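To illustrate how queuing delay can be controlled, the following sketch assumes a Catalyst 3550 Gigabit interface; the weights and CoS-to-queue mapping are illustrative, and command support and queue architecture differ on other Catalyst platforms. It enables the egress expedite queue and assigns weighted round-robin weights to the remaining queues so that frames marked CoS 5 are serviced ahead of other traffic:

  Switch(config)# interface GigabitEthernet0/1
  Switch(config-if)# wrr-queue bandwidth 10 20 70 1
  Switch(config-if)# wrr-queue cos-map 4 5
  Switch(config-if)# priority-queue out

With priority-queue out configured, queue 4 is serviced as soon as it contains frames and its configured WRR weight is no longer used, which minimizes queuing delay for the traffic mapped to it.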
Jitter
Controlling jitter is critical to maintaining consistent data rates in network operation. All end stations and Cisco network devices use jitter buffers to smooth out variations in the arrival times of packets carrying data, voice, and video. However, jitter buffers can compensate only for small variations in the latency of arriving packets. If the gap between arriving packets grows beyond a specific threshold, a jitter buffer underrun occurs: there are no packets left in the buffer to process for a specific stream. For example, if a jitter buffer underrun occurs while you are using an audio application to listen to an Internet radio station, the application stops playing music until additional packets arrive in the jitter buffer.
In contrast, when too many packets arrive too quickly, the jitter buffer may fill and be unable to accept further traffic. This condition is called a buffer overrun. In this condition, an audio application skips parts of the audio stream: the player always has packets to play, but several packets of the stream are missing. For VoIP phone calls, jitter buffer underruns and overruns are usually intolerable because they noticeably degrade the calling experience.
Packet Loss
Packet loss is a serious issue in multilayer switched networks. On multilayer switches, packet loss generally occurs either because of a physical-layer issue, such as an Ethernet duplex mismatch, or because an interface's output queue is full. In the latter case, commonly referred to as an output queue full condition, the output queue holds as many packets as its buffer space allows, and the switch has no choice but to drop additional packets destined for that queue. This condition typically arises when a sender attached to a higher-speed interface transmits to a receiver attached to a lower-speed interface. For example, when a server attached to a Catalyst switch at Gigabit Ethernet transmits to a client workstation attached at 100-Mbps Fast Ethernet at a sustained rate higher than 100 Mbps, the output queue for the workstation's interface fills, and the switch eventually drops egress frames. Several QoS options are available to apply deterministic behavior to these packet drops.
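One such option is congestion avoidance, which begins dropping selected packets before a queue is completely full. The following sketch shows the generic Cisco IOS MQC form of DSCP-based weighted random early detection (WRED); the policy name and interface are illustrative, and Catalyst switching hardware typically implements congestion avoidance through per-queue drop thresholds rather than this router-style syntax:

  Router(config)# policy-map AVOID-CONGESTION
  Router(config-pmap)# class class-default
  Router(config-pmap-c)# random-detect dscp-based
  Router(config)# interface Serial0/0
  Router(config-if)# service-policy output AVOID-CONGESTION

Because WRED drops lower-priority packets early and at random, TCP senders back off gradually instead of all at once, which helps keep the output queue from reaching the tail-drop point.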
QoS-Enabled Solutions
In brief, QoS addresses latency, jitter, and packet-drop issues by supporting the following components and features on Cisco network devices (a brief configuration sketch follows the list):
Classifying and marking traffic such that network devices can differentiate traffic flows
Traffic conditioning to tailor traffic flows to specific traffic behavior and throughput
Marking down traffic that exceeds specific rate thresholds to a lower priority
Dropping packets when rates reach specific thresholds
Scheduling packets such that higher-priority packets transmit from output queues before lower-priority packets
Managing output queues such that lower-priority packets awaiting transmission do not monopolize buffer space
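As a brief, hedged sketch of how several of these components fit together on a Catalyst 3550 (all names, ACL numbers, and rates below are illustrative, and exact police rates and actions vary by platform), the following configuration classifies voice-bearer traffic by UDP port range, marks it DSCP 46, polices it to 128 kbps, and marks down rather than drops traffic that exceeds the rate:

  Switch(config)# mls qos
  Switch(config)# mls qos map policed-dscp 46 to 8
  Switch(config)# access-list 101 permit udp any any range 16384 32767
  Switch(config)# class-map match-all VOICE-BEARER
  Switch(config-cmap)# match access-group 101
  Switch(config)# policy-map INGRESS-POLICY
  Switch(config-pmap)# class VOICE-BEARER
  Switch(config-pmap-c)# set ip dscp 46
  Switch(config-pmap-c)# police 128000 8000 exceed-action policed-dscp-transmit
  Switch(config)# interface FastEthernet0/1
  Switch(config-if)# service-policy input INGRESS-POLICY

The policed-dscp map causes conforming packets to keep DSCP 46 while exceeding packets are remarked to DSCP 8, demonstrating classification, marking, policing, and markdown in a single ingress policy.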
Applying QoS components and features to an enterprise or service provider network provides for deterministic traffic behavior. In other words, QoS-enabled infrastructures allow you to do the following:
Predict response times for end-to-end packet flows, I/O operations, data operations, transactions, and so on
Correctly manage jitter-sensitive applications, such as audio and video applications
Streamline delay-sensitive applications such as VoIP
Control packet loss during times of inevitable congestion
Configure traffic priorities across the entire network
Support applications or network requirements that entail dedicated bandwidth
Monitor and avoid network congestion
This chapter discusses in later sections how to apply QoS features to achieve deterministic traffic behavior on Catalyst switches. The next section discusses the two QoS service models that are the building blocks of any QoS implementation.