Enterprise Deployment of CallManager Clusters
This section provides an overview of the ways in which you can deploy CallManager throughout your enterprise. It addresses network infrastructure, admissions control, and supported CallManager topologies.
The excellent Cisco Solutions Reference Network Design guide "IP Telephony SRND," available at http://www.cisco.com/go/srnd, addresses all of the content in this section in far greater detail. The contents of this section have been stolen shamelessly from it. If you are already thoroughly acquainted with the aforementioned Cisco document, you might want to skip the rest of this chapter. In any case, we strongly recommend you read the document to supplement the information contained here.
This section covers two main topics:
- "Network Topologies" describes the supported deployment strategies for a CallManager network.
- "Quality of Service (QoS)" describes the methods by which you can ensure that voice traffic does not experience degradation when the network becomes congested.
Network Topologies
CallManager can be deployed in several different topologies. This section provides an overview of the following topologies:
- Single-site model
- Multiple-site model with independent call processing
- Multiple-site IP WAN model with distributed call processing
- Multiple-site model with centralized call processing
- Combined multiple-site model
Single-Site Model
The single-site model consists of a single site or campus served by a LAN. A cluster of up to ten servers (one dedicated to the Publisher database, one dedicated to the TFTP service, and eight running the CallManager service) provides telephony service to up to 30,000 IP-enabled voice devices within the campus. Calls outside of the campus environment are served by IP-to-Public Switched Telephone Network (PSTN) gateways. Because bandwidth is often overprovisioned and undersubscribed on the LAN, there is usually no need to worry about admissions control.
Figure 1-18 depicts the single-site model.
Figure 1-18 Single-Site Model
Multiple-Site Model with Independent Call Processing
The multiple-site model consists of multiple sites or campuses, each of which runs an independent cluster of up to ten servers. Each cluster provides telephony service for up to 30,000 IP-enabled voice devices within a site. Because bandwidth is often overprovisioned and undersubscribed on the LAN, there's usually no need to worry about admissions control.
IP-to-PSTN gateways handle calls to destinations outside each site and calls between sites. The multiple-site model with independent call processing allows you to use the same infrastructure for both your voice and data traffic. However, because there is no IP WAN, you cannot take advantage of the economies of routing voice calls over a data WAN; intersite calls must pass through the PSTN.
Figure 1-19 depicts the multiple-site model with independent call processing.
Figure 1-19 Multiple-Site Model with Independent Call Processing
Multiple-Site IP WAN Model with Distributed Call Processing
From CallManager's point of view, the multiple-site IP WAN model with distributed call processing is identical to the multiple-site model with independent call processing. From a practical point of view, however, they differ markedly.
Whereas the multiple-site model with independent call processing uses only the PSTN for carrying voice calls, the multiple-site IP WAN model with distributed call processing uses the IP WAN for carrying voice calls when sufficient bandwidth is available. This allows you to take advantage of the economies of routing calls over the IP WAN rather than the PSTN.
In such a case, you can set up each site with its own CallManager cluster and interconnect the sites with PSTN-enabled H.323 routers, such as Cisco 2600, 3600, and 5300 series routers. Each cluster provides telephony service for up to 30,000 IP-enabled voice devices. You can add other clusters, which allows your network to support vast numbers of users.
This type of deployment allows you to bypass the public toll network when possible and guarantees that remote sites retain survivability should the IP WAN fail. Using an H.323 gatekeeper allows you to implement a QoS policy that guarantees the quality of voice calls between sites. The same voice codec must apply to all intersite calls. Two chief drawbacks of this approach are increased complexity of administration, because each remote site requires its own database, and less feature transparency between sites.
Because each site is an independent cluster, for all users to have access to conference bridges, music on hold (MOH), and transcoders, you must deploy these resources in each site. Figure 1-20 presents a picture of the multiple-site IP WAN model with distributed call processing.
Figure 1-20 Multiple-Site IP WAN Model with Distributed Call Processing
Multiple-Site Model with Centralized Call Processing
In a multiple-site model with centralized call processing, a CallManager cluster in a centralized campus processes calls placed by IP telephony devices both in the centralized campus and in remote sites connected by an IP WAN. This type of topology is called a hub-and-spoke topology: The centralized campus is the hub, and the branch offices sit at the end of IP WAN spokes radiating from the campus.
To CallManager, the multiple-site model with centralized call processing is nearly identical to the single-site model. However, guaranteeing voice quality between branch sites and the centralized site requires a QoS policy built around the locations feature of CallManager.
Deploying a multiple-site model with centralized call processing offers easier administration and true feature transparency between the centralized and remote sites.
Because all sites are served by one cluster, you need to deploy only voice mail, conference bridges, and transcoders in the central site, and all remote sites can access these features. Figure 1-21 depicts the multiple-site model with centralized call processing.
Figure 1-21 Multiple-Site Model with Centralized Call Processing
If the IP WAN should fail, Cisco Survivable Remote Site Telephony (SRST) can ensure that the phones in remote sites can continue to place and receive calls, both to each other and to the PSTN. SRST is a feature of Cisco IOS that allows a Cisco router to act as a CallManager when neither the primary nor the secondary CallManager node is reachable. SRST requires minimal configuration because it derives most of its settings from the CallManager database.
When the IP WAN is available, the SRST router acts as an ordinary data router for Cisco IP Phones (and as an outbound PSTN gateway for local calls), simply providing connectivity between the branch office devices and CallManager. If the IP WAN fails, however, SRST takes control of the phones, allowing them to call each other and the PSTN. While phones are registered to SRST, they have access to a reduced feature set. When the IP WAN again becomes available, the phones reregister with the CallManager cluster.
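The fallback behavior amounts to a simple preference order. The following Python sketch is only an illustration of that order, not actual phone firmware or Cisco code: the phone registers with its primary CallManager node when it can, falls back to its secondary, and registers with the local SRST router only when neither node is reachable.

```python
# Simplified illustration (not actual phone firmware) of the registration
# fallback order described above: primary CallManager, then secondary,
# then the local SRST router with its reduced feature set.

def register(primary_reachable, secondary_reachable):
    if primary_reachable:
        return "primary CallManager node"
    if secondary_reachable:
        return "secondary CallManager node"
    return "SRST router (reduced feature set)"

print(register(True, True))    # IP WAN up: phone stays with the cluster
print(register(False, False))  # IP WAN down: SRST takes control of the phone
```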
Table 1-6 shows the router platforms that support SRST and the maximum number of phones that each supports.
Table 1-6. Routers That Support SRST
| Router | Number of Phones Supported |
|---|---|
| Cisco 1751-V, Cisco 1760, Cisco 1760-V, Cisco 2801 | 24 |
| Cisco 2600XM, Cisco 2811 | 36 |
| Cisco 2650XM, Cisco 2651XM, Cisco 2821 | 48 |
| Cisco 2691, Cisco 3640, Cisco 3640A | 72 |
| Cisco 2851 | 96 |
| Cisco 3725 | 144 |
| Cisco 3660 | 240 |
| Cisco 3825 | 336 |
| Cisco 3745, Catalyst 6500 CMM | 480 |
| Cisco 3845 | 720 |
Combined Multiple-Site Model
You can deploy the centralized and distributed models in tandem. If you have several large sites with a few smaller branch offices all connected by the IP WAN, for example, you can connect the large sites using a distributed model, while serving the smaller branch offices from one of your main campuses using the centralized model. This hybrid model relies on complementary use of the locations feature of CallManager and gatekeepers for call admission control. Figure 1-22 depicts the combined multiple-site model.
Figure 1-22 Combined Multiple-Site Model
Quality of Service (QoS)
Your network's available bandwidth ultimately determines the number of VoIP calls that your network can handle. As the amount of traffic on an IP network increases, individual data streams suffer packet loss and packet latency. In the case of voice traffic, this can mean clipped, choppy, and garbled voice. QoS mechanisms safeguard your network from such conditions.
Unlike data traffic, voice traffic can survive some loss of information, because humans are good at extracting meaning from an incomplete stream, whereas computers are not. Data traffic, on the other hand, tolerates delayed transmission, whereas delay can destroy the intelligibility of a voice conversation. Traffic classification permits you to categorize your traffic into different types. Traffic classification is a prerequisite to traffic prioritization, the process of applying preferential treatment to certain types of traffic. Traffic prioritization allows you to minimize the latency that a voice connection experiences at the expense of the latency that a data connection experiences.
The design guide "Enterprise Quality of Service SRND" at http://www.cisco.com/go/srnd covers QoS in a Cisco IP Communications network in much greater detail than this section, which just provides an overview.
Call admission control (CAC) mechanisms prevent an IP network from becoming clogged with traffic to the point of being unusable. When a network's capacity is consumed, admissions control mechanisms prevent new traffic from being added to the network.
When calls traverse the WAN, admissions control assumes paramount importance. Within the LAN, on a switched network, life is good; if you classified your information properly, then either you have enough bandwidth or you do not. Links to remote sites across the IP WAN, however, can be a scarce resource. A 10-Mbps or 100-Mbps Ethernet connection can support hundreds of voice calls, but a 64-kbps ISDN link can route only a few calls before becoming overwhelmed.
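A back-of-the-envelope calculation shows why. The following Python sketch estimates per-call IP bandwidth for two common codecs, assuming standard 20-ms packetization and 40 bytes of IP/UDP/RTP headers per packet and ignoring Layer 2 overhead; the exact figures vary with the packetization interval and the link type.

```python
# Rough per-call bandwidth estimate for common codecs, assuming 20-ms
# packetization and 40 bytes of IP/UDP/RTP headers per packet.
# Layer 2 overhead (Ethernet, Frame Relay, PPP) is ignored for simplicity.

HEADER_BYTES = 40          # 20 IP + 8 UDP + 12 RTP
PACKETS_PER_SECOND = 50    # 20-ms packetization interval

def call_bandwidth_kbps(codec_rate_kbps):
    """Approximate one-way IP bandwidth of a single voice call."""
    payload_bytes = codec_rate_kbps * 1000 / 8 / PACKETS_PER_SECOND
    bits_per_packet = (payload_bytes + HEADER_BYTES) * 8
    return bits_per_packet * PACKETS_PER_SECOND / 1000

for codec, rate in [("G.711", 64), ("G.729", 8)]:
    per_call = call_bandwidth_kbps(rate)
    print(f"{codec}: ~{per_call:.0f} kbps per call; "
          f"a 64-kbps link fits {int(64 // per_call)} call(s), "
          f"a 100-Mbps link fits {int(100_000 // per_call)}")
```

Under these assumptions, a G.711 call consumes roughly 80 kbps and a G.729 call roughly 24 kbps, so a 64-kbps link is exhausted by a couple of compressed calls, while a 100-Mbps Ethernet segment can carry over a thousand.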
This section describes the mechanisms that CallManager uses to enhance voice traffic on the network. It covers the following topics:
- "Traffic Marking" discusses traffic classification and traffic prioritization, features that enable you to give voice communications preferential treatment on your network.
- "Regions" discusses how you can conserve network bandwidth over bandwidth-starved IP WAN connections.
- "CallManager Locations" describes a method of call admissions control that functions within CallManager clusters.
- "H.323 Gatekeeper" describes a method of call admissions control that functions between CallManager clusters.
Traffic Marking
Traffic marking is an important part of configuring your VoIP network. By assigning voice traffic a higher routing priority than data traffic, you can ensure that latency-intolerant voice packets pass through your IP fabric more readily than latency-tolerant data packets.
Routers that detect marked packets can place them in higher-priority queues for servicing before lower-priority packets. This strategy ensures that latency-sensitive voice and video traffic does not encounter undue delay between the endpoints. Marking voice and video streams at the highest priority helps ensure that users do not experience drops or delays in the end-to-end media stream. Marking call signaling higher than best-effort data helps ensure that users do not experience undue delay in receiving dial tone upon going off-hook.
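To make the idea of marking concrete, the short Python sketch below sets the ToS byte on UDP sockets so that outgoing packets carry DSCP EF for media and CS3 for signaling. It assumes a Linux host (where the IP_TOS socket option is available) and a network that trusts host markings; in a real deployment the IP Phones and switches perform the marking, and the destination address and port here are purely illustrative.

```python
# Minimal sketch of host-side DSCP marking on a UDP socket (Linux).
# In a real deployment the phones and switches mark and police traffic;
# this just shows how the ToS byte relates to the DSCP value.
import socket

DSCP_EF = 46    # Expedited Forwarding: recommended for voice media
DSCP_CS3 = 24   # Class Selector 3: recommended for call signaling

def open_marked_udp_socket(dscp):
    """Create a UDP socket whose outgoing packets carry the given DSCP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The ToS byte holds the 6-bit DSCP in its upper bits, hence the shift.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

media_sock = open_marked_udp_socket(DSCP_EF)       # RTP audio
signaling_sock = open_marked_udp_socket(DSCP_CS3)  # call signaling
media_sock.sendto(b"\x00" * 160, ("192.0.2.10", 16384))  # illustrative destination
```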
CallManager supports two types of traffic marking. IP Precedence is the older method; as of CallManager 4.0, a newer method called Differentiated Services (DiffServ), which is backward compatible with IP Precedence, has essentially replaced it.
IP Precedence
The Cisco 79xx series phones (as well as the older Cisco 12SP+ and 30 VIP phones) all send out 802.1Q-tagged packets with the type of service field set to 5 for the voice stream and 3 for the signaling stream. CallManager permits you to set the type of service field of its own signaling traffic to 3. In contrast, most data devices encode either no 802.1Q information or a default value of 0 for the type of service field.
When present, the type of service field permits the routers in your IP network to place incoming packets into processing queues according to the priority values encoded in the packet. By servicing the queues that hold higher-priority packets more quickly, a router can guarantee that those packets experience less delay. Because Cisco IP Phones mark their packets with a type of service value of 5 and data devices typically do not, the type of service and class of service fields in effect let you classify the traffic passing through your network and ensure that voice transmissions experience less latency. Figure 1-23 presents an example.
Figure 1-23 IP Precedence Example
Figure 1-23 depicts two devices that send information through a network router. The Cisco IP Phone 7960 categorizes its traffic with type of service 5, while the PC categorizes its traffic with type of service 0. The router reads packets from both devices from the network and places them in queues based on the type of service field. Packets classified with type of service 5 go on a priority queue; other packets go on the default queue.
When the router decides to forward the packet out to the network again, it sends packets from the priority queue in preference to those on the default queue. Therefore, even if the Cisco IP Phone 7960 and PC send their packets to the router at the same time, the router forwards all of the packets sent by the IP Phone before forwarding any of the packets from the PC. This minimizes the latency (or end-to-end trip time) required for packets from the IP Phone, but increases the latency experienced by the PC. Thus, the router properly handles the latency-intolerant voice packets.
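The following toy Python simulation mirrors the behavior shown in Figure 1-23; it is an illustration, not router code. Packets marked with type of service 5 are placed on a priority queue that is always drained before the default queue, so the phone's packets leave first even though they arrived interleaved with the PC's.

```python
# Toy simulation of the queuing behavior in Figure 1-23: packets with
# type of service 5 land on a priority queue that is always serviced
# before the default queue. This is an illustration, not IOS code.
from collections import deque

priority_queue, default_queue = deque(), deque()

def enqueue(packet):
    (priority_queue if packet["tos"] == 5 else default_queue).append(packet)

def dequeue():
    """Always drain the priority queue before touching the default queue."""
    if priority_queue:
        return priority_queue.popleft()
    return default_queue.popleft() if default_queue else None

# Phone and PC traffic arrives interleaved...
for pkt in [{"src": "PC", "tos": 0}, {"src": "7960", "tos": 5},
            {"src": "PC", "tos": 0}, {"src": "7960", "tos": 5}]:
    enqueue(pkt)

# ...but the router forwards all of the phone's packets first.
while (pkt := dequeue()) is not None:
    print(pkt["src"], pkt["tos"])
```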
Differentiated Services
Differentiated Services (or DiffServ) is a traffic classification method that has essentially superseded the older IP Precedence method. It permits finer-grained classification than IP Precedence does.
When a particular packet is marked, a field in the IP packet header is tagged with a particular value. The older IP Precedence field is 3 bits long, which permits IP Precedence values ranging from 0 to 7. The newer Differentiated Services Code Point (DSCP) values are 6 bits long, and their 3 high-order bits occupy the same position in the IP header that the IP Precedence bits occupy in the Type of Service byte. Therefore, routers that do not pay attention to the newer method of traffic classification can still give voice and video traffic preferential treatment, because the high-order bits of DSCP-marked traffic roughly correspond to the older IP Precedence values.
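A small sketch makes the relationship concrete: shifting a DSCP value right by 3 bits yields the IP Precedence that a DSCP-unaware router effectively honors. The DSCP values used here are among those listed in Table 1-7 that follows.

```python
# The IP Precedence a legacy router sees is just the top 3 bits of the
# 6-bit DSCP, so DSCP-marked packets still map onto sensible priorities.
DSCP = {"EF": 0b101110, "AF41": 0b100010, "CS3": 0b011000, "AF11": 0b001010}

for name, dscp in DSCP.items():
    ip_precedence = dscp >> 3   # keep only the 3 high-order bits
    tos_byte = dscp << 2        # where the DSCP sits in the ToS byte
    print(f"{name}: DSCP {dscp:06b} -> IP Precedence {ip_precedence}, "
          f"ToS byte 0x{tos_byte:02X}")
```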
Table 1-7. Comparison Between Traffic Classification Values
| 3 High-Order Bits | IP Precedence | CallManager-Settable DSCP Values | Comment |
|---|---|---|---|
| 000 | Default | Default | Best-effort traffic |
| 001 | IP Prec 1 | CS1 (001000), AF11 (001010), AF12 (001100), AF13 (001110) | |
| 010 | IP Prec 2 | CS2 (010000), AF21 (010010), AF22 (010100), AF23 (010110) | |
| 011 | IP Prec 3 | CS3 (011000), AF31 (011010), AF32 (011100), AF33 (011110) | Recommended call signaling |
| 100 | IP Prec 4 | CS4 (100000), AF41 (100010), AF42 (100100), AF43 (100110) | Recommended video |
| 101 | IP Prec 5 | EF (101110) | Recommended voice |
| 110 | IP Prec 6 | Reserved | |
| 111 | IP Prec 7 | Reserved | |