Introduction
Quality of service (QoS) and MPLS are, at a political level, similar. They're both technologies that have been gaining popularity in recent years. They both seem to be technologies that you either love or hate: some people are huge QoS fans, and others can't stand it. The same is true of MPLS: some people like it, and others don't.
At a technical level, though, QoS and MPLS are very different.
QoS is an umbrella term that covers network performance characteristics. As discussed in Chapter 1, "Understanding Traffic Engineering with MPLS," QoS has two parts:
Finding a path through your network that can provide the service you offer
Enforcing that service
The acronym QoS with respect to IP first showed up in RFC 1006, "ISO Transport Service on Top of the TCP: Version 3," published in 1987. The concept has been around for even longer, because QoS is a general term used to describe performance characteristics in networks. In the IP and MPLS worlds, the term QoS is most often used to describe a set of techniques to manage packet loss, latency, and jitter. QoS has been rather appropriately described as "managed unfairness": If you have contention for system resources, who are you unfair to, and why?
Two QoS architectures are in use today:
Integrated Services (IntServ)
Differentiated Services (DiffServ)
For various reasons, IntServ never scaled to the level needed for Internet-sized networks. IntServ is fine for small- to medium-sized networks, but its need to maintain end-to-end, host-to-host, per-application microflow state across a network means it can't grow to the level that large service provider networks require.
DiffServ, on the other hand, has proven quite scalable. Its use of classification on the edge and per-hop queuing and discard behaviors in the core means that most of the work is done at the edge, and you don't need to keep any microflow state in the core.
This chapter assumes that you understand QoS on an IP network. It concentrates on the integration of MPLS into the IP QoS spectrum of services. This means that you should be comfortable with acronyms such as CAR, LLQ, MDRR, MQC, SLA, and WRED in order to get the most out of this chapter. This chapter briefly reviews both the DiffServ architecture and the Modular QoS CLI (MQC), but see Appendix B, "CCO and Other References," if you want to learn more about the portfolio of Cisco QoS tools.
QoS, as used in casual conversation and in the context of IP and MPLS networks, is a method of packet treatment: How do you decide which packets get what service?
MPLS, on the other hand, is a switching method used to get packets from one place to another by going through a series of hops. Which hops a packet goes through can be determined by your IGP routing or by MPLS TE.
So there you have it: MPLS is about getting packets from one hop to another, and QoS (as the term is commonly used) is what happens to packets at each hop. As you can imagine, between two complex technologies such as QoS and MPLS, a lot can be done.
This chapter covers five topics:
The DiffServ architecture
DiffServ's interaction with IP Precedence and MPLS EXP bits
The treatment of EXP values in a label stack as packets are forwarded throughout a network
A quick review of the Modular QoS CLI (MQC), which is how most QoS features on most platforms are configured
Where DiffServ and MPLS TE intersect: the emerging DiffServ-Aware Traffic Engineering (DS-TE) features and how they can be used to further optimize your network performance
DiffServ and MPLS TE
It is important to understand that the DiffServ architecture and the sections of this chapter that cover DiffServ and MPLS have nothing to do with MPLS TE. DiffServ is purely a method of treating packets differently at each hop. The DiffServ architecture doesn't care which control-plane protocol a given label assignment comes from; whether it's RSVP, LDP, BGP, or something else entirely, the forwarding plane doesn't care. Why, then, does this chapter exist if it's not about TE? Partly because MPLS TE and DiffServ treatment of MPLS packets go hand in hand in many network designs, and partly because of the existence of something called DS-TE, discussed later in this chapter.
The DiffServ Architecture
RFC 2475 defines an architecture for Differentiated Services: how to use DiffServ Code Point (DSCP) bits and various QoS mechanisms to provide different qualities of service in your network.
DiffServ has two major components:
Traffic conditioning: Includes things such as policing, coloring, and shaping. It is done only at the edge of the network.
Per-hop behaviors: Essentially consist of queuing, scheduling, and dropping mechanisms. As the name implies, they are applied at every hop in the network.
Cisco IOS Software provides all sorts of different tools to apply these architecture pieces. You can configure most services in two ways: through a host of older, disconnected, per-platform methods, or through a newer, unified configuration set called the MQC. Only MQC is covered in this chapter. For information on the older configuration mechanisms, see Appendix B or the documentation on CCO. Not all platforms support MQC, so there might be times when you need to configure a service using a non-MQC configuration method; however, MQC is where all QoS configuration services are heading, so it's definitely worth understanding.
Traffic conditioning generally involves classification, policing, and marking, and per-hop behaviors deal with queuing, scheduling, and dropping. Each of these topics is discussed briefly in the sections that follow.
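As a point of reference, here is a minimal sketch of the MQC structure the following sections assume: a class map classifies traffic, a policy map acts on the classes, and a service-policy statement attaches the policy to an interface. The class and policy names (VOICE, EDGE-IN), the DSCP value, and the interface are placeholders for this example; exact options vary by platform and IOS release.

class-map match-all VOICE
 match ip dscp ef
!
policy-map EDGE-IN
 class VOICE
  set mpls experimental 5
!
interface Serial0/0
 service-policy input EDGE-IN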
Classification
The first step in applying the DiffServ architecture is to have the capability to classify packets. Classification is the act of examining a packet to decide what sort of rules it should be run through, and subsequently what DSCP or EXP value should be set on the packet.
Classifying IP Packets
Classifying IP packets is straightforward. You can match on just about anything in the IP header. Specific match capabilities vary by platform, but generally, destination IP address, source IP address, and DSCP values can be matched against. The idea behind DSCP is discussed in the section "DiffServ and IP Packets."
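For illustration, the following class maps show this kind of IP matching. The class name, DSCP values, and addresses are invented for the example; the exact match options available depend on your platform and release.

class-map match-any BUSINESS-IN
 match ip dscp af31 af32
 match access-group 101
!
access-list 101 permit ip 10.1.1.0 0.0.0.255 any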
Classifying MPLS Packets
The big thing to keep in mind when classifying MPLS packets is that you can match only on the outermost EXP value in the label stack. There's no way to look past the MPLS header at the underlying IP packet and do any matching on or modification of that packet. You can't match on the label value at the top of the stack, and you can't match on the MPLS TTL (just as you can't match on the IP TTL). Finally, you can't match EXP values on any label other than the topmost label on the stack.
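A class map that classifies labeled packets therefore looks something like the following sketch. The class names and EXP values are arbitrary, and some newer releases spell the match as match mpls experimental topmost.

class-map match-all MPLS-PREMIUM
 match mpls experimental 5
!
class-map match-any MPLS-BULK
 match mpls experimental 0 1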
Policing
Policing involves metering traffic against a specified service contract and dealing with in-rate and out-of-rate traffic differently. One of the fundamental pieces of the DiffServ architecture is that you don't allow more traffic on your network than you have designed for, to make sure that you don't overtax the queues you've provisioned. This is generally done with policing, although it can also be done with shaping.
Policing is done on the edge of the network. As such, the packets coming into the network are very often IP packets. However, under some scenarios it is possible to receive MPLS-labeled packets on the edge of the network. For example, the Carrier Supporting Carrier architecture (see Appendix B) means that a provider receives MPLS-labeled packets from a customer.
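A hypothetical edge policing policy, reusing the BUSINESS-IN class from the earlier classification sketch, might look like the following. The rate (in bps) and burst (in bytes) values are placeholders, and the police syntax differs somewhat across platforms and IOS releases.

policy-map EDGE-POLICE
 class BUSINESS-IN
  police 2000000 375000 conform-action transmit exceed-action drop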
Marking
The marking configuration is usually very tightly tied to the policing configuration. You can mark traffic as in-rate or out-of-rate as a result of policing traffic.
However, you don't need to police in order to mark. For example, you can simply define a mapping between the IP packet's DSCP value and the MPLS EXP bits to be used when a label is imposed on these packets. Another possibility is to simply mark all traffic coming in on an interface, regardless of traffic rate. This is handy if you have some customers who are paying extra for better QoS and some who are not. For those who are not, simply set the EXP to 0 on all packets from that customer.
Being able to set the EXP on a packet, rather than having to set the IP Precedence, is one of the advantages of MPLS. This is discussed in more detail in the sections "Label Stack Treatment" and "Tunnel Modes."
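As a sketch, a marking-only policy for a customer who isn't paying for better QoS could look like the following: every packet from that customer gets EXP 0 when labels are imposed. The policy name is invented, and on some releases the set command is spelled set mpls experimental imposition.

policy-map EDGE-BEST-EFFORT
 class class-default
  set mpls experimental 0

If you do want in-rate and out-of-rate traffic marked differently, the police command's conform and exceed actions (such as set-mpls-exp-transmit, where supported) can set different EXP values instead of simply transmitting or dropping.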
Queuing
Queuing is accomplished in different ways on different platforms. However, the good news is that you can treat MPLS EXP just like IP Precedence.
Multiple queuing techniques can be applied to MPLS, depending on your platform and code version:
First In First Out (FIFO)
Modified Deficit Round Robin (MDRR) (GSR platforms only)
Class-Based Weighted Fair Queuing (CBWFQ) (most non-GSR platforms)
Low-Latency Queuing (LLQ)
FIFO exists on every platform and every interface. It is the default on almost all of those interfaces.
MDRR, CBWFQ, and LLQ are configured using the MQC, just like most other QoS mechanisms on most platforms. Just match the desired MPLS EXP values in a class map and then configure a bandwidth or latency guarantee via the bandwidth or priority commands. The underlying scheduling algorithm (MDRR, CBWFQ/LLQ) brings the guarantee to life.
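A sketch of such a core queuing policy follows. The class names, the EXP-to-class mapping, and the bandwidth and priority values (in kbps) are examples rather than recommendations, and the available commands vary by platform and release.

class-map match-all REALTIME
 match mpls experimental 5
class-map match-all PREFERRED
 match mpls experimental 3 4
!
policy-map CORE-QOS
 class REALTIME
  priority 512
 class PREFERRED
  bandwidth 1024
 class class-default
  bandwidth 512
!
interface POS3/0
 service-policy output CORE-QOS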
Queuing is one of two parts of what the DiffServ architecture calls per-hop behaviors (PHBs). A per-hop behavior is, not surprisingly, a behavior that is implemented at each hop. PHBs have two fundamental pieces: queuing and dropping.
Dropping
Dropping is the other half of DiffServ's PHB. Dropping is important not only to manage queue depth per traffic class, but also to signal transport-level backoff to TCP-based applications. TCP responds to occasional packet drops by slowing down the rate at which it sends. TCP responds better to occasional drops than to tail drop after a queue is completely filled up. See Appendix B for more information.
Weighted Random Early Detection (WRED) is the DiffServ drop mechanism implemented on most Cisco platforms. WRED works on MPLS EXP just like it does on IP Precedence. See the next section for WRED configuration details.
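As a minimal sketch, building on the CORE-QOS policy shown earlier: enabling random-detect on a class that carries labeled traffic keys off the EXP bits the same way it keys off IP Precedence, so the per-precedence thresholds below effectively apply per EXP value. The threshold and mark-probability numbers are placeholders, not recommendations.

policy-map CORE-QOS
 class PREFERRED
  bandwidth 1024
  random-detect
  random-detect precedence 3 30 40 10
  random-detect precedence 4 35 40 10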
As you can see, implementing DiffServ behavior with MPLS packets is no more and no less than implementing the same behavior with IP.