Enterprise Campus Design
The next subsections detail key enterprise campus design concepts. The access, distribution, and core layers introduced earlier in this chapter are expanded on with applied examples. Later subsections of this chapter define a model for implementing and operating a network.
The tasks of implementing and operating a network are two components of the Cisco Lifecycle model. In this model, the life of the network and its components is approached in a structured way, from preparing the network design through optimizing the implemented network. This structured approach is key to ensuring that the network always meets the requirements of the end users. A later section of this chapter describes the Cisco Lifecycle approach and its impact on network implementation.
The enterprise campus architecture can be applied at the campus scale or at the building scale to allow flexibility in network design and to facilitate implementation and troubleshooting. When applied to a building, the Cisco campus architecture naturally divides the network into the building access, building distribution, and building core layers, as follows:
- Building access layer: This layer grants end users and devices access to the network. In a campus network, the building access layer generally incorporates switched LAN devices with ports that provide connectivity to workstations and servers. In the WAN environment, the building access layer at remote sites can provide access to the corporate network across WAN technologies.
- Building distribution layer: Aggregates the wiring closets and uses switches to segment workgroups and isolate network problems.
- Building core layer: Also known as the campus backbone, this is a high-speed backbone designed to switch packets as fast as possible. Because the core is critical for connectivity, it must provide a high level of availability and adapt to changes quickly.
Figure 1-12 illustrates a sample enterprise network topology that spans multiple buildings.
Figure 1-12 Enterprise Network with Applied Hierarchical Design
The enterprise campus architecture divides the enterprise network into physical, logical, and functional areas. These areas enable network designers and engineers to associate specific network functionality with equipment based on its placement and function in the model.
Access Layer In-Depth
The building access layer aggregates end users and provides uplinks to the distribution layer. With the proper use of Cisco switches, the access layer can provide the following benefits:
- High availability: The access layer is supported by many hardware and software high-availability features. System-level redundancy using redundant supervisor engines and redundant power supplies for critical user groups is an available option within the Cisco switch portfolio. In addition, Cisco switch software provides default gateway redundancy through dual connections from access switches to redundant distribution layer switches running a first-hop redundancy protocol (FHRP) such as the Hot Standby Router Protocol (HSRP). Note that FHRPs such as HSRP run only on Layer 3 switches; Layer 2 access switches do not participate in the FHRP itself but simply forward the associated frames toward the distribution layer.
- Convergence: Cisco switches deployed in an access layer optionally support inline Power over Ethernet (PoE) for IP telephony and wireless access points, enabling customers to converge voice onto their data network and providing roaming WLAN access for users.
- Security: Cisco switches used in the access layer optionally provide additional security against unauthorized access to the network through tools such as port security, DHCP snooping, Dynamic ARP Inspection (DAI), and IP Source Guard. These features are discussed in later chapters of this book; a brief configuration sketch follows this list.
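For illustration only, the following sketch combines several of these access layer features on a hypothetical Catalyst access switch running Cisco IOS; the interface numbers, VLAN IDs, and port-security thresholds are assumptions rather than recommendations.

```
! Hypothetical access port supporting a PoE phone and a workstation
interface GigabitEthernet1/0/1
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 110
 power inline auto
 switchport port-security
 switchport port-security maximum 3
 switchport port-security violation restrict
 spanning-tree portfast
!
! Layer 2 security for the user and voice VLANs (illustrative values)
ip dhcp snooping
ip dhcp snooping vlan 10,110
ip arp inspection vlan 10,110
!
! Uplink toward the distribution layer is trusted
interface TenGigabitEthernet1/0/49
 switchport mode trunk
 ip dhcp snooping trust
 ip arp inspection trust
```

With this kind of configuration, untrusted user ports accept DHCP replies and ARP bindings only when they match the snooping database, while the trusted uplink carries legitimate responses from the rest of the network.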
Figure 1-13 illustrates an access layer deployment with redundant upstream connections to the distribution layer.
Figure 1-13 Access Layer Depicting Two Upstream Connections
Distribution Layer
Availability, fast path recovery, load balancing, and QoS are the important considerations at the distribution layer. High availability is typically provided through dual paths from the distribution layer to the core, and from the access layer to the distribution layer. Layer 3 equal-cost load sharing enables both uplinks from the distribution to the core layer to be utilized.
The distribution layer is where routing and packet manipulation are performed, and it can be a routing boundary between the access and core layers. The distribution layer represents a redistribution point between routing domains or the demarcation between static and dynamic routing protocols. The distribution layer performs tasks such as controlled routing decision making and filtering to implement policy-based connectivity and QoS. To further improve routing protocol performance, the distribution layer summarizes routes from the access layer. For some networks, the distribution layer offers a default route to access layer routers and runs dynamic routing protocols when communicating with core routers.
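As a sketch of these roles, and assuming EIGRP as the campus IGP with purely illustrative addressing and interface numbers, a distribution switch might summarize its access layer subnets toward the core and advertise only a default route toward a routed access switch:

```
router eigrp 100
 network 10.1.0.0 0.0.255.255
!
! Advertise one summary for all access layer subnets toward the core
interface TenGigabitEthernet1/1
 description Uplink to core
 ip summary-address eigrp 100 10.1.0.0 255.255.0.0
!
! Offer only a default route to a routed access layer switch
interface TenGigabitEthernet1/2
 description Downlink to routed access switch
 ip summary-address eigrp 100 0.0.0.0 0.0.0.0
```

The summaries keep the core and access routing tables small, so a flapping access subnet does not ripple through the rest of the campus.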
The distribution layer uses a combination of Layer 2 and multilayer switching to segment workgroups and isolate network problems, preventing them from affecting the core layer. The distribution layer is commonly used to terminate VLANs from access layer switches. The distribution layer connects network services to the access layer and implements policies for QoS, security, traffic loading, and routing. The distribution layer provides default gateway redundancy by using an FHRP such as HSRP, Gateway Load Balancing Protocol (GLBP), or Virtual Router Redundancy Protocol (VRRP) to allow for the failure or removal of one of the distribution nodes without affecting endpoint connectivity to the default gateway.
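A minimal HSRP sketch for a pair of distribution switches follows; the VLAN, group number, and addresses are assumptions used only to show the mechanism.

```
! Distribution switch A: preferred active gateway for VLAN 10
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
!
! Distribution switch B: standby gateway for VLAN 10
interface Vlan10
 ip address 10.1.10.3 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 90
 standby 10 preempt
```

Endpoints in VLAN 10 point to the virtual address 10.1.10.1 as their default gateway; if switch A fails or is removed, switch B assumes the virtual address without any reconfiguration on the endpoints.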
In review, the distribution layer provides the following enhancements to the campus network design:
- Aggregates access layer switches
- Segments the access layer for simplicity
- Summarizes routes from the access layer
- Connects to the upstream core layer with redundant (dual) links
- Optionally applies packet filtering, security features, and QoS features
Figure 1-14 illustrates the distribution layer interconnecting several access layer switches.
Figure 1-14 Distribution Layer Interconnecting the Access Layer
Core Layer
The core layer is the backbone for campus connectivity and is the aggregation point for the other layers and modules in the enterprise network. The core must provide a high level of redundancy and adapt to changes quickly. Core devices are most reliable when they can accommodate failures by rerouting traffic and can respond quickly to changes in the network topology. The core devices must be able to implement scalable protocols and technologies, alternative paths, and load balancing. The core layer helps in scalability during future growth.
The core should be a high-speed Layer 3 switching environment that uses hardware-accelerated services, such as 10 Gigabit Ethernet. For fast convergence around a link or node failure, the core uses redundant point-to-point Layer 3 interconnections because this design yields the fastest and most deterministic convergence results. The core layer should not perform any packet manipulation in software, such as checking access lists and filtering, which would slow down the switching of packets. Catalyst and Nexus switches support access lists and filtering without affecting switching performance by implementing these features in the hardware switching path.
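For example, one side of a redundant core interconnect could be built as a routed point-to-point link, as in the hypothetical snippet below; the interface, addressing, and OSPF parameters are assumptions.

```
! Routed point-to-point link toward a peer core switch
interface TenGigabitEthernet1/1
 description Link to CORE-2
 no switchport
 ip address 10.0.255.1 255.255.255.252
 ip ospf network point-to-point
!
router ospf 1
 network 10.0.255.0 0.0.0.3 area 0
```

Because the link is routed rather than switched, convergence after a failure depends only on the routing protocol and not on spanning-tree recalculation.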
Figure 1-15 depicts the core layer aggregating multiple distribution layer switches and subsequently access layer switches.
Figure 1-15 Core Layer Aggregating Distribution and Access Layers
In review, the core layer provides the following functions to the campus and enterprise network:
- Aggregates the distribution layer switches and connects them to the remainder of the enterprise network
- Provides the aggregation points with redundancy through fast convergence and high availability
- Scales as the distribution layer, and consequently the access layer, grows with the network
The Need for a Core Layer
Without a core layer, the distribution layer switches need to be fully meshed. This design is difficult to scale and increases the cabling requirements because each new building distribution switch needs full-mesh connectivity to all the distribution switches. This full-mesh connectivity requires a significant amount of cabling for each distribution switch. The routing complexity of a full-mesh design also increases as you add new neighbors.
In Figure 1-16, the distribution module in the second building, which consists of two interconnected switches, requires four additional links for full-mesh connectivity to the first module. A third distribution module to support the third building would require eight additional links to connect to all the existing distribution switches, for a total of 12 links. A fourth module supporting the fourth building would require 12 new links, for a total of 24 links between the distribution switches. Four distribution modules impose eight interior gateway protocol (IGP) neighbors on each distribution switch.
Figure 1-16 Scaling Without Distribution Layer
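One way to see the scaling, assuming each distribution module contains two switches: a full mesh between m modules requires 4 × m(m − 1)/2 = 2m(m − 1) inter-module links, so 2 modules need 4 links, 3 modules need 12, and 4 modules need 24, matching the counts above and growing quadratically with each new building. A dedicated core replaces this growth with a fixed pair of uplinks from each distribution switch.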
As a recommended practice, deploy a dedicated campus core layer to connect three or more physical segments, such as buildings in the enterprise campus, or four or more pairs of building distribution switches in a large campus. The campus core helps make scaling the network easier when you use Cisco switches with the following properties:
- 10 Gigabit and 1 Gigabit Ethernet port density to support scaling
- Seamless data, voice, and video integration
- LAN convergence optionally with additional WAN and MAN convergence
Campus Core Layer as the Enterprise Network Backbone
The core layer is the backbone for campus connectivity and optionally the aggregation point for the other layers and modules in the enterprise campus architecture. The core provides a high level of redundancy and can adapt to changes quickly. Core devices are most reliable when they can accommodate failures by rerouting traffic and can respond quickly to changes in the network topology. The core devices implement scalable protocols and technologies, alternative paths, and load balancing. The core layer helps in scalability during future growth. The core layer simplifies the organization of network device interconnections. This simplification also reduces the complexity of routing between physical segments such as floors and between buildings.
Figure 1-17 illustrates the core layer as a backbone interconnecting the data center and Internet edge portions of the enterprise network. Beyond its logical position in the enterprise network architecture, the core layer constituents and functions depend on the size and type of the network. Not all campus implementations require a campus core. Optionally, campus designs can combine the core and distribution layer functions at the distribution layer for a smaller topology. The next section discusses one such example.
Figure 1-17 Core Layer as Interconnect for Other Modules of Enterprise Network
Small Campus Network Example
A small campus network or large branch network is defined as a network of fewer than 200 end devices, where the network servers and workstations might be physically connected to the same wiring closet. Switches in a small campus network design might not require high-end switching performance or future scaling capability.
In many cases, with a network of fewer than 200 end devices, the core and distribution layers can be combined into a single layer. To keep costs down, this design limits scale to a few access layer switches. Low-end multilayer switches such as the Cisco Catalyst 3560E can provide routing services closer to the end user when there are multiple VLANs. For a small office, one low-end access switch such as the Cisco Catalyst 2960G might support the Layer 2 LAN access requirements for the entire office, whereas a router such as the Cisco 1900 or 2900 might interconnect the office to the branch/WAN portion of a larger enterprise network.
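As a rough sketch of such a collapsed design, inter-VLAN routing on a small multilayer switch could be enabled as follows; the VLANs and addresses are purely illustrative.

```
! Enable Layer 3 switching and route between two user VLANs
ip routing
!
interface Vlan10
 description Data VLAN
 ip address 10.1.10.1 255.255.255.0
!
interface Vlan20
 description Server VLAN
 ip address 10.1.20.1 255.255.255.0
```

The same switch then acts as the default gateway for both VLANs, removing the need for a separate distribution or core device in a very small office.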
Figure 1-17 depicts a sample small campus network with a campus backbone that interconnects the data center. In this example, the backbone could be deployed with Catalyst 3560E switches, and the access layer and data center could use Catalyst 2960G switches, with limited future scalability and limited high availability.
Medium Campus Network Example
For a medium-sized campus with 200 to 1000 end devices, the network infrastructure typically consists of access layer switches with uplinks to distribution multilayer switches that can support the performance requirements of a medium-sized campus network. If redundancy is required, you can attach redundant multilayer switches to the building access switches to provide full link redundancy. In a medium-sized campus network, it is best practice to use at least the Catalyst 4500 series or Catalyst 6500 family of switches because they offer high availability, security, and performance characteristics not found in the Catalyst 3000 and 2000 families of switches.
Figure 1-18 shows a sample medium campus network topology. The example depicts physical distribution segments as buildings. However, physical distribution segments might be floors, racks, and so on.
Figure 1-18 Sample Medium Campus Network Topology
Large Campus Network Design
Large campus networks are installations of more than 2000 end users. Because there is no upper bound on the size of a large campus, the design might incorporate many scaling technologies throughout the enterprise. Specifically, in the campus network, the designs generally adhere to the access, distribution, and core layers discussed in earlier sections. Figure 1-17 illustrates a sample large campus network, scaled for size in this publication.
Large campus networks strictly follow Cisco best practices for design. The best practices listed in this chapter, such as following the hierarchical model, deploying Layer 3 switches, and utilizing the Catalyst 6500 and Nexus 7000 switches in the design, scratch only the surface of features required to support such a scale. Many of these features are still used in small and medium-sized campus networks but not to the scale of large campus networks.
Moreover, because large campus networks require more staff to design, implement, and maintain the environment, the work is generally segmented. The sections of the enterprise network previously mentioned in this chapter (campus, data center, branch/WAN, and Internet edge) are the first-level division of work among network engineers in large campus networks. Later chapters discuss many of the features that might be optional for smaller campuses but become requirements for larger networks. In addition, large campus networks require sound design and implementation plans. Design and implementation plans are discussed in upcoming sections of this chapter.
Data Center Infrastructure
The data center design as part of the enterprise network is based on a layered approach to improve scalability, performance, flexibility, resiliency, and maintenance. There are three layers of the data center design:
- Core layer: Provides a high-speed packet switching backplane for all flows going in and out of the data center.
- Aggregation layer: Provides important functions, such as service module integration, Layer 2 domain definitions, spanning tree processing, and default gateway redundancy.
- Access layer: Connects servers physically to the network.
Multitier HTTP-based applications supporting web, application, and database tiers of servers dominate the multitier data center model. The access layer network infrastructure can support both Layer 2 and Layer 3 topologies and can meet the Layer 2 adjacency requirements imposed by various server broadcast domains or administrative needs. Layer 2 at the access layer is more prevalent in the data center because some applications depend on low-latency Layer 2 adjacency between servers. Most servers in the data center are single- and dual-attached one-rack-unit (1RU) servers, blade servers with integrated switches, blade servers with pass-through cabling, clustered servers, and mainframes with a mix of oversubscription requirements. Figure 1-19 illustrates a sample data center topology at a high level.
Figure 1-19 Data Center Topology
Multiple aggregation modules in the aggregation layer support connectivity scaling from the access layer. The aggregation layer supports integrated service modules providing services such as security, load balancing, content switching, firewall, SSL offload, intrusion detection, and network analysis.
As previously noted, this book focuses on the campus network design portion of the enterprise network, exclusive of data center design. However, many of the topics presented in this text, such as the use of VLANs, overlap with topics applicable to data center design. Data center designs differ in approach and requirements. For the purposes of CCNP SWITCH, focus primarily on campus network design concepts.
The next section discusses a lifecycle approach to network design. This section does not cover specific campus or switching technologies but rather a best-practice approach to design. Some readers might opt to skip this section because of its lack of technical content; however, it is an important section for CCNP SWITCH and practical deployments.