Design
With the different deployment models available and the various options for distributing software components based on the size of the contact center, you can design a UCCE solution with many permutations. However, you need to consider several key points when creating a design:
- Scalability: Is the solution sized correctly for the number of staff who will use the platform initially and for at least 1 year after going live?
- Survivability: Does the proposed solution take advantage of the multiple levels of redundancy available with UCCE?
- Compliance: Does the design meet or exceed the customer's business requirements? Are the hardware and software compatible with the bill of materials, the Solution Reference Network Design (SRND) Guide for UCCE, and the Cisco Assessment to Quality (A2Q) process?
Software Versions
When creating a UCCE design, one of the initial considerations is which version of UCCE should be used. It would be easy to insist that the customer deploys the latest version of UCCE. However, many customers prefer to wait for at least the first maintenance release of a product to become available. It is also wrong to select a software version in isolation. A key feature of unified communications is the capability to integrate with many different products or platforms. It is therefore important to consider the following before making a decision on which version to choose:
- Does the chosen software version have the required features? Most new features are introduced in the major software versions; however, some features do become available in the minor releases.
- Is the software version first customer shipment (FCS), or has it been available for some time and is known to be a stable, or preferred, release? Are several maintenance releases available that have fixed the majority of known bugs? Good practice is to check the release notes for any known outstanding bugs. Many of these bugs are minor, but is there anything in particular that can affect this specific customer?
- Are specific IOS versions required on the network infrastructure, in particular, voice gateways, CUBE routers and analog interfaces?
- Are there any specific IOS features that need to be enabled on the network, including QoS, firewall ports, and traffic engineering?
- Does a network management requirement exist to ensure that operational support personnel are aware of issues in real time?
- Does the organization already have a Cisco Unified CM platform? Is this software version compatible, or is an upgrade required?
- Which other Cisco Unified Communications products are proposed or are already in use? This includes IVR, voicemail, and Unified Communicator.
- Does the organization require the integration of any third-party products such as voice recording, wallboards, customer relationship management (CRM) integration, attendant consoles, and instant messaging?
- Is any legacy TDM integration required?
As detailed in Chapter 3, "Deployment Models," Cisco produces a compatibility matrix to assist when choosing software versions. Two versions of the matrix are available: a generic Unified Communications matrix that details product compatibility with Unified CM and a contact center-specific matrix for products including Customer Voice Portal (CVP) and IP IVR.
Cisco also regularly performs an IP communications systems test. This is a standard methodology for Cisco to perform systemwide testing of all Unified Communications products in a single laboratory environment. A major deliverable of this testing is a recommendation of compatible software releases that have been verified during this testing process. Organizations that plan to deploy multiple voice applications and infrastructure products can adopt these recommendations in their design.
Platform Sizing
When designing a Cisco Unified Communications solution, an essential task for the solutions architect is to work through the Unified Communications Sizing Tool (see Figure 6-2). This tool enables the design team to step through a comprehensive set of questions to specify the majority of components (CVP, IP IVR, Expert Advisor, Agent Desktop software) that make up the UC solution. As well as detailing the software components, the tool enables the design team to enter details including call-handling parameters, software versions, anticipated service levels, and redundancy options.
Figure 6-2 Unified Communications Sizing Tool
The tool prompts the user with a large number of design questions and can take considerable effort to complete correctly. The output of the sizing tool is an Adobe PDF document detailing design options and performance metrics based on the data entered. This document should also be submitted to the A2Q process.
The sizing tool can also be used for designing expansions to existing UCCE platforms.
Platform Redundancy
You can design redundancy into a UCCE solution in many different ways. The recommended methods are as follows:
- Hardware redundancy within a component: Examples of this include multiple disks within each server using a RAID mechanism or dual power supplies.
- Node redundancy: This involves distributing the different nodes (loggers, routers, PGs) over multiple servers.
- Distributed architecture: The UCCE software architecture is naturally redundant through its use of a Side A and Side B. It is common to separate the core platform over two physically diverse locations. The LAN and WAN connections are also diversely routed to separate the UCCE private and public network traffic.
Server Naming Conventions
Early versions of UICM did not have many of the supportability tools available today. When enabling trace settings, collecting logs, and performing administration duties, support engineers frequently had to establish remote control sessions to the required servers. To make it easier to identify the role of a server, it was felt that a standardized naming convention should be used. The commonly used convention combined the customer instance name and the server's role. This naming convention was not mandatory, but many enterprises chose to implement it.
The naming convention used the format of three concatenated acronyms, GEOXXXXYYYY, as detailed in Table 6-1.
Table 6-1. Server Naming Convention
| Acronym | Description |
| --- | --- |
| GEO | An abbreviation of GeoTel, the original developers of the UICM platform. |
| XXXX | An abbreviation of the customer name, usually the customer instance name. |
| YYYY | An acronym of the node type; for example, LGR (logger), RTR (router), PG (peripheral gateway), AW (administrative workstation), or HDS (historical data server), as shown in Table 6-2. |
Table 6-2 provides an example of the server naming convention, with CSO (Cisco) substituted for GEO and a customer instance name of cus01. The short sketch following the table illustrates how these names are composed.
Table 6-2. Example Server Naming for Customer Instance cus01
| Server Name | Server Type | Node | Description |
| --- | --- | --- | --- |
| csocus01lgra | Logger | A | Logger Side A |
| csocus01lgrb | Logger | B | Logger Side B |
| csocus01rtra | Router | A | Router Side A |
| csocus01rtrb | Router | B | Router Side B |
| csocus01pg1a | PG | 1A | Peripheral Gateway 1 Side A |
| csocus01pg1b | PG | 1B | Peripheral Gateway 1 Side B |
| csocus01aw1 | AW | 1 | Administrative Workstation 1 |
| csocus01hds1 | HDS | 1 | Historical Data Server 1 |
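As a simple illustration (not part of any Cisco tooling), the following Python sketch shows how the names in Table 6-2 are built from the three concatenated parts described in Table 6-1:

```python
# Minimal sketch of the legacy GeoTel-style naming convention from
# Tables 6-1 and 6-2. The prefix, instance name, and node acronyms are
# taken from the example above; this is illustrative only.

def server_name(prefix: str, instance: str, node: str) -> str:
    """Concatenate the prefix, customer instance name, and node acronym."""
    return f"{prefix}{instance}{node}".lower()

# Reproduce the example names for customer instance cus01
for node in ["lgra", "lgrb", "rtra", "rtrb", "pg1a", "pg1b", "aw1", "hds1"]:
    print(server_name("cso", "cus01", node))
# Output: csocus01lgra, csocus01lgrb, ..., csocus01hds1
```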
Unfortunately, the naming convention has not been continued in the later versions of the software, so standard acronyms for newer components such as the support tools server do not exist.
Deployment Spreadsheet
Solution design documents tend to focus on the architecture and the services that will be provided by the end solution, but they should also include the configuration settings that will be used. As UCCE is a distributed application that requires installation on several servers, the software setup process will be run many times, and potentially by different engineers, especially if the solution is distributed over several geographic locations.
Before deployment commences, it is advisable to have a node deployment spreadsheet that has been created by the solutions architect and reviewed and approved by the installation engineers.
The deployment spreadsheet is a quick reference guide with configuration details taken from the solution design that covers all the application settings to be used by the installation engineers during UCCE installation.
For simplicity, it is often easiest to represent the data in tabular form within a spreadsheet. Each sheet represents a single UCCE node and the settings required for the entire base software installation process.
Taking a logger installation as an example, the LoggerA sheet would require the configuration settings detailed in Table 6-3.
Table 6-3. LoggerA Installation Settings
| Application | Settings |
| --- | --- |
| SQL Server | SQL Server version; SQL Server settings (database and log locations, sort order, service startup accounts); SQL Server service pack version |
| UCCE Installation | Location and version of maintenance release (if required); drive destination for installation files; install OS security hardening?; install SQL Server security hardening?; a preshared key for Support Tools communication |
| UCCE Domain Manager (assuming LoggerA is the first UCCE node) | Customer instance name; domain name; customer instance number; facility name; domain usernames to assign to UCCE (Config, Setup, and WebView groups) |
| UCCE Web Setup | Administration username and password (that are assigned to the UCCE setup group). NOTE: The bracketed values that follow are example answers for a LoggerA setup: Deployment type [Enterprise]; Side [A]; Fault tolerance mode [Duplexed]; Router Side A private interface [csocus01rtrap]; Router Side B private interface [csocus01rtrbp]; Logger Side A private interface [csocus01lgrap]; Logger Side B private interface [csocus01lgrbp]; Enable historical/detail data replication [Y]; Display database purge configuration steps [N]; Enable outbound option [N]; Reboot on error [N]; Reboot on request [N]; Do not modify service account [selected]; Stop and then start the logger [N] |
This same method would need to be applied to all the UCCE components, including routers, peripheral gateways (PG), support tools server, and IVRs to provide a comprehensive deployment spreadsheet. It is easy to see that creating these spreadsheets for an entire deployment can be time consuming, but doing so ensures that consistency is attained throughout the installation.
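Although these sheets are usually built by hand, their structure is simple enough to generate programmatically. The following Python sketch is purely illustrative; the node names and settings are hypothetical examples in the spirit of Table 6-3, not a definitive template. It writes one CSV "sheet" per node plus a signoff row.

```python
# Illustrative only: write one CSV "sheet" per UCCE node from a dictionary
# of installation settings. Node names and settings are hypothetical
# examples based on Table 6-3, not a definitive template.
import csv

deployment = {
    "LoggerA": {
        "Deployment type": "Enterprise",
        "Side": "A",
        "Fault tolerance mode": "Duplexed",
        "Router Side A private interface": "csocus01rtrap",
    },
    "RouterA": {
        "Deployment type": "Enterprise",
        "Side": "A",
    },
}

for node, settings in deployment.items():
    with open(f"{node}.csv", "w", newline="") as sheet:
        writer = csv.writer(sheet)
        writer.writerow(["Setting", "Value"])
        for setting, value in settings.items():
            writer.writerow([setting, value])
        # Signoff row so the installation engineer can record date and time
        writer.writerow(["Installed by / date-time", ""])
```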
It is also advisable for the sheet to have a signoff section so that the installation engineer can record the date and time that the configuration took place. This is useful for establishing a timeline in case retrospective changes need to be applied in the future. It also helps the project manager determine the project's progress.
Network Services
Network services are the foundation protocols and applications that run on the network to provide functionality to higher-layer applications and products. UCCE relies on several underlying network services.
DNS and HOSTS
The Domain Name System (DNS) service and HOSTS files provide device hostnames to IP address resolution. The communication between UCCE nodes is through IP. When configuring the various UCCE components, hostnames are generally used to provide support engineers with human-readable server names to make troubleshooting easier. For IP communication to take place between two servers, the server names need to be resolved to an IP address.
DNS services tend to be reliable, but the loss of the DNS service could have major consequences on the reliability of the UCCE platform. It is common practice to create HOSTS files that are a static list of server hostnames and their associated IP addresses. The HOSTS files usually list both the public and private addresses of only the UCCE server interfaces and are copied to each UCCE server. The servers also have details of the DNS servers configured on the public network interface controller (NIC) so that the UCCE server has access to other non-UCCE servers and services.
A disadvantage of using a HOSTS file is that it must be manually maintained and distributed to the necessary servers whenever any UCCE IP addresses or server names change, including when new servers, such as PGs, are added. Fortunately, the core servers that comprise a UCCE solution change infrequently.
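As an illustration, a handful of HOSTS entries for the Side A servers from the earlier naming convention might look like the following. The IP addresses are purely hypothetical; the p and ph suffixes for the private and high-priority private interfaces are explained in the "Quality of Service" section that follows.

```
# Example HOSTS entries (hypothetical addresses)
10.1.1.10     csocus01rtra      # Router A, public (visible) interface
192.168.1.10  csocus01rtrap     # Router A, private interface
192.168.1.11  csocus01rtraph    # Router A, private high-priority interface
10.1.1.20     csocus01lgra      # Logger A, public (visible) interface
192.168.1.20  csocus01lgrap     # Logger A, private interface
```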
Quality of Service
Network quality of service (QoS) is the ability to provide different priorities to streams of IP traffic for different applications, users, or data flows. QoS can be implemented in different ways. Usually the traffic is classified and marked on entry to the network. It is then prioritized as it traverses the network through the use of queuing or reservation algorithms. Both LAN and WAN traffic can be prioritized.
Although the core UCCE servers do not actually touch any voice streams, an entire UCCE solution comprises three different types of traffic:
- Data traffic consists of general traffic between clients and servers, such as the communication or heartbeat traffic between the central controllers, Computer Telephony Integration (CTI) data sent to an agent desktop application, or database replication traffic.
- Voice traffic consists of Real-Time Transport Protocol (RTP) packets that actually contain the packetized voice streams between IP phones, gateways, or IVRs.
- Call control traffic consists of the protocols used to perform control functions such as the communication between a Unified CM subscriber and an IP IVR, or the SCCP control messages sent when an IP phone goes off-hook.
To understand QoS for UCCE, it is also necessary to understand the two independent communications networks used in a UCCE deployment.
Figure 6-3 shows the distributed components of a standard UCCE deployment. The central controllers and PGs each have two or more NICs. One NIC is connected to the public network (sometimes called the visible network), and the other is connected to the private network. The private network carries synchronization and heartbeat traffic. The public network carries all other traffic. Components such as the Unified CM servers and admin workstation historical data services (AW HDS) do not need a connection to the UCCE private network.
Figure 6-3 UCCE Independent Communications Networks
It is important to understand that the PG private network is different from the central controller's private network. The private interfaces for the PGs do not need to communicate with the private interfaces for the central controllers. This, however, does not mean that two separate private networks are required for architectures such as clustering over the WAN. In many cases, the two private networks are combined, but just sized correctly to support the necessary bandwidth requirements. For distributed deployments with both halves of the central controllers located on different physical sites, the private network does need to be physically separate from the public network. This often results in a private point-to-point link being installed purely for the private traffic. This diverse network is required so that two network connections exist between each side of the central controllers to ensure that the central controllers can still communicate and process calls in the event that one network connection fails. If both network connections fail, Side A or B attempts to continue based on an algorithm determined by the side of the router and the number of PGs with which the router is in communication. The requirement for the private link is often the most discussed point during the A2Q process!
For many smaller single-site deployments that combine the logger and router onto the same server, the private network is achieved by using a crossover cable between both machines.
The UCCE solution supports QoS tagging in the application. This means that the UCCE component marks the traffic with the assigned QoS classification as it leaves the application and therefore requires no further marking in the network. Assuming that the network trusts the QoS tagging, the QoS marking will be retained until it reaches its destination.
In UCCE version 8.5, QoS tagging is currently supported only for the private networks of the router and PG processes. Figure 6-4 shows an example of the default QoS tagging for Router A defined during web setup.
Figure 6-4 Router A QoS Marking for the Private Network
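UCCE performs this marking itself, configured through web setup as shown in Figure 6-4. As a general illustration of what application-level marking means, the following Python sketch sets a DSCP value on an outbound socket; the DSCP class, peer address, and port are arbitrary examples and do not reflect UCCE's defaults.

```python
# Illustrative only: an application marking its own traffic by setting the
# IP ToS byte on a socket. The DSCP value (CS3), peer address, and port are
# arbitrary examples, not UCCE's actual defaults.
import socket

DSCP_CS3 = 24                      # example DSCP class selector 3
tos_byte = DSCP_CS3 << 2           # DSCP occupies the upper six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# IP_TOS is available on most platforms (for example, Linux)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
sock.sendto(b"heartbeat", ("192.0.2.10", 39500))   # hypothetical peer and port
sock.close()
```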
Prior to application-based QoS marking (and for networks capable of marking packets only at the network edge), UCCE used additional IP addresses assigned to the public and private NICs. UCCE applications now use three traffic priorities (low, medium, and high), but previously only two priorities were used. These priorities were based on the source and destination IP addresses, which allowed the traffic to be marked and routed within the network rather than at the application. The traffic from these IP addresses also used specific TCP/UDP ports based on an algorithm that incorporates the instance number, so that hosted systems with multiple customer instances would not clash.
The IP addresses for the different interfaces (public/visible or private) and their priority (normal or high) were allocated to hostnames for easy configuration within the application. These names would also be detailed in the HOSTS file. Figure 6-5 shows a screen shot for a router setup that uses the visible, private, and high-priority private (termed private high) host names. Notice the p and ph at the end of the hostname to signify private and private high, respectively.
Figure 6-5 Router A Hostnames for the Private and Public Networks
Databases
An important aspect of designing a UCCE solution is to ensure that the databases are sized correctly. Database sizing has a direct impact on the amount of data that can be retained in all the databases.
The logger database is required to store all the configuration data and a limited amount of historical data. The HDS database stores all the long-term historical reporting data and call detail records. Many organizations want to retain at least 3 years' worth of historical data so that they can analyze call trends over a long period. Both the logger and the HDS databases are created by the installation engineer using the ICM Database Administrator (ICMDBA) tool, as shown in Figure 6-6.
Figure 6-6 The ICMDBA Tool Used for Creating the Logger and HDS Databases
The Administration and Data Server also has a configuration and real-time database, but this is not created with ICMDBA during installation; instead, the installer program automatically creates this database.
The server specification detailed in the UCCE bill of materials document typically specifies disk sizes that can easily meet the requirements of all but the largest contact centers. However, it is the responsibility of the designer and installation engineer to correctly size the databases during installation. Unfortunately, the UCCE Sizing Tool does not provide any guideline sizes, but you can use the ICM System Sizing Estimator tool to obtain database sizing approximations.
Figure 6-7 shows a screen shot of the Sizing Estimator running on an Administration and Data Server.
Figure 6-7 ICM System Sizing Estimator
The tool works by allowing the designer to enter approximate figures for the configuration data and the required retention periods. The tool then indicates an estimated required database size. As with all estimations, it is good practice to factor in an amount of anticipated growth.
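The arithmetic behind such an estimate can be sketched as follows. This is a rough illustration only, not the Sizing Estimator's actual algorithm; the call volume, bytes-per-call figure, and growth factor are hypothetical assumptions.

```python
# Rough, illustrative database-sizing arithmetic (not the ICM System
# Sizing Estimator's algorithm). All figures are hypothetical assumptions.
calls_per_day = 20_000               # assumed daily call volume
bytes_per_call = 5_000               # assumed average detail-record data per call
retention_days = 3 * 365             # e.g., 3 years of historical data
growth_factor = 1.25                 # allow 25 percent anticipated growth

estimated_bytes = calls_per_day * bytes_per_call * retention_days * growth_factor
print(f"Estimated HDS size: {estimated_bytes / 1024**3:.1f} GB")
```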
Cisco A2Q Process
An important part of the design and ordering process is for the high-level solution to be approved by Cisco. Previously called Bid Assurance, the Assessment to Quality (A2Q) is a high-level design review of the proposed solution that takes place between the Cisco ATP partner responsible for deployment and the Cisco A2Q review team.
The A2Q process usually consists of the ATP partner submitting a series of documents to the review team. As the process is only a high-level review, the documents submitted include the following:
- The A2Q questionnaire, which comprises a series of questions to give the review team the necessary background and design overview.
- The bill of materials (BOM), which forms the kit list of the actual components to be ordered from Cisco to build the UCCE platform.
- A network design showing all the network connectivity, physical site locations, bandwidth, and QoS.
- A statement of work (SOW), which explains the deployment process and the teams, partners, and possibly Cisco professional services personnel who can perform the deployment.
After the documents have been submitted, the review team schedules a conference call with the Cisco partner to discuss the design. The call typically lasts for approximately 30 minutes, and the conference participants include the following:
- Representatives from the ATP partner including the technical project manager and solutions architect responsible for the design.
- The A2Q review team composed of several Cisco engineers from the Contact Center Business Unit (CCBU) familiar with all technologies relevant to the design, including Unified Communications Manager, IVRs, and network infrastructure.
- The Cisco sales engineer assigned to work with the ATP partner, who typically has a detailed knowledge of the partner's capabilities and the customer's requirements.
The A2Q conference calls are not open to the end customer.
During the conference call, the review team works through the submitted documents to confirm and validate various design items and the rationale behind them. The review team also seeks clarification on who will actually perform the deployment and the methodology that will be followed. Although the ATP partner is the direct interface into Cisco, it is common for large partners to subcontract work to smaller companies. Accountability for the solution rests with the ATP partner, not the subcontractor.
The A2Q process can take place at any point during the sales cycle. Cisco recommends that A2Q takes place as early as possible because the process is not just a formality to receive approval. A2Q is an important step in the deployment process, and without A2Q approval, the ATP partner cannot purchase the UCCE software and licenses required for deployment.
As discussed, the A2Q process is a high-level design review performed during a brief conference call. A detailed design review could take days or weeks. Items discussed on the call typically involve the following:
- That the solution is sized correctly to meet the requirements, such as the number of Cisco Unified Communications Manager (Unified CM) servers to support a given number of agents, or the number of IVR ports to provide IVR resources for a defined number of busy-hour call attempts (BHCA).
- That the correct kit list is ordered. Often the wrong part codes are listed in the bill of materials, or some items are accidentally missed.
- That adequate and skilled resourcing has been scheduled to perform the deployment and that realistic time frames have been proposed.
The review team takes part in a large number of design reviews on a regular basis, so they are familiar with a wide range of UCCE deployments. Because they see so many proposed designs, they can also offer comprehensive feedback and advice to the ATP partner on potential enhancements and ways to improve the solution, perhaps even reducing the overall cost of deployment.
From the A2Q reviews I have taken part in, I would recommend the following points:
- Clearly document and detail the private and public network connectivity. The survivability of the UCCE platform during a failure depends on the correct deployment of a segregated private and public network. With complex deployments such as clustering over the WAN and split peripheral gateways, it is important to demonstrate to the review team that the platform can still transmit heartbeat and synchronization traffic during events of failure.
- Check the kit list thoroughly. Often the bill of materials (BOM) is produced early in the design process to provide the sales team with a figure to quote to the end customer. During the design process, the BOMs might change to suit new requirements. Be sure to include the correct agent license part codes, media kits, and the required support products such as Essential Operate Service (ESW) and the Unified Communications Software Subscription (UCSS).
- Ensure that the proposed solution is sized correctly to meet just slightly more than the minimum requirements, but is not overengineered. Many of the customers I have worked with have expanded their platforms over time to add more agents. Designing a solution to meet only the bare minimum is a false economy and can lead to performance issues.
The A2Q process is not a guarantee that the solution will be error-free when it is deployed, but it is a review that adds great value to all designs and has been proven to ensure that the end customer receives a solution that is fit for purpose.