Queuing Design Principles
The only way to provide QoS service guarantees to business-critical applications is to enable queuing at every node that has the potential for congestion, regardless of how rarely or how frequently that congestion occurs. Although queuing is most often deployed at the WAN edge, this principle applies not only to congested WAN links but also within the campus network, where speed mismatches, link aggregation, and oversubscription ratios can congest network devices by filling their queuing buffers.
Because each application class has unique QoS service requirements, it is recommended that you provide a dedicated queue for each traffic class. One of the main justifications for dedicated queues is that each service class can then be assigned its own queuing behaviors, such as a bandwidth allocation and a drop policy.
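As a rough illustration of this principle, the following Python sketch models a per-class queue policy in which every traffic class receives its own queue with its own bandwidth share and drop parameters. The class names, percentages, and thresholds are hypothetical placeholders, not values taken from any standard or platform.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QueuePolicy:
    """Illustrative per-class queue parameters (field names are hypothetical)."""
    bandwidth_pct: int              # share of the link guaranteed during congestion
    max_depth: int                  # queue length (packets) before tail drop
    wred_min: Optional[int] = None  # depth where early (random) dropping may begin
    wred_max: Optional[int] = None  # depth where drop probability peaks

# Each application class gets its own queue with its own service guarantees;
# the classes and numbers below are placeholders, not recommended values.
POLICY = {
    "voice":         QueuePolicy(bandwidth_pct=10, max_depth=64),
    "transactional": QueuePolicy(bandwidth_pct=25, max_depth=256, wred_min=128, wred_max=256),
    "best_effort":   QueuePolicy(bandwidth_pct=25, max_depth=512, wred_min=256, wred_max=512),
    "scavenger":     QueuePolicy(bandwidth_pct=1,  max_depth=64,  wred_min=16,  wred_max=64),
}
```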
When deploying end-to-end QoS across the network infrastructure, it is recommended that you use a minimum of four standards-based queuing behaviors on all platforms and service provider links (a scheduler sketch follows this list):
RFC 3246 Expedited Forwarding PHB (used for real-time traffic)
RFC 2597 Assured Forwarding PHB (used for the guaranteed-bandwidth queue)
RFC 2474 Default Forwarding PHB (default nonprioritized queue, best effort)
RFC 3662 Lower Effort Per-Domain Behavior (less than best-effort queue, bandwidth constrained)
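The sketch below shows how these four behaviors might translate into a scheduler: the EF queue is serviced with strict priority, while the remaining queues share the leftover bandwidth through a deficit-round-robin pass in which the Lower Effort queue receives only a small quantum. The quanta, packet format, and traffic names are assumptions for illustration, not a representation of any particular platform's implementation.

```python
from collections import deque

# One FIFO per PHB; packets are (description, size_in_bytes) tuples in this sketch.
queues = {"EF": deque(), "AF": deque(), "DF": deque(), "LE": deque()}

# Deficit-round-robin quanta (bytes credited per round) for the non-priority
# queues. The ratios are assumptions: AF gets a guaranteed share, DF gets the
# default share, and LE gets only a small, bandwidth-constrained share.
QUANTUM = {"AF": 4000, "DF": 5000, "LE": 500}
deficit = {name: 0 for name in QUANTUM}

def scheduling_round(transmit):
    """Run one scheduler round: drain EF first, then DRR over the other queues."""
    # RFC 3246: EF is serviced with strict priority (a real platform also
    # polices this queue so it cannot starve the other classes).
    while queues["EF"]:
        transmit(queues["EF"].popleft())
    # Deficit round robin across AF, DF, and LE.
    for name in QUANTUM:
        deficit[name] += QUANTUM[name]
        while queues[name] and queues[name][0][1] <= deficit[name]:
            packet = queues[name].popleft()
            deficit[name] -= packet[1]
            transmit(packet)
        if not queues[name]:
            deficit[name] = 0   # an idle queue does not bank unused credit

# Example: the voice frame leaves immediately, but the LE backup chunk must
# wait until its small quantum accumulates enough credit (the third round here).
queues["EF"].append(("voice frame", 200))
queues["AF"].append(("ERP transaction", 1500))
queues["LE"].append(("backup chunk", 1500))
for _ in range(3):
    scheduling_round(lambda pkt: print("sent", pkt))
```

On real platforms the EF queue is also policed so that strict priority cannot starve the other classes; the sketch omits that safeguard for brevity. Giving LE a token quantum mirrors the common practice of assigning the scavenger class a minimal bandwidth share so that bulk traffic yields to every other class during congestion.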