This chapter reviews the hierarchical network model and introduces Cisco’s Enterprise Architecture model. This architecture model separates network design into more manageable modules. This chapter also addresses the use of device, media, and route redundancy to improve network availability.
“Do I Know This Already?” Quiz
The “Do I Know This Already?” quiz helps you identify your strengths and deficiencies in this chapter’s topics.
The eight-question quiz, derived from the major sections in the “Foundation Topics” portion of the chapter, helps you determine how to spend your limited study time.
Table 2-1 outlines the major topics discussed in this chapter and the “Do I Know This Already?” quiz questions that correspond to those topics.
Figure 2-1 Hierarchical network design has three layers: core, distribution, and access
Each layer provides necessary functionality to the enterprise campus network. You do not need to implement the layers as distinct physical entities. You can implement each layer in one or more devices or as cooperating interface components sharing a common chassis. Smaller networks can “collapse” multiple layers to a single device with only an implied hierarchy. Maintaining an explicit awareness of hierarchy is useful as the network grows.
Core Layer
The core layer is the network’s high-speed switching backbone that is crucial to corporate communications. It is also referred to as the backbone. The core layer should have the following characteristics:
Fast transport
High reliability
Redundancy
Fault tolerance
Low latency and good manageability
Avoidance of CPU-intensive packet manipulation caused by security, inspection, quality of service (QoS) classification, or other processes
Limited and consistent diameter
QoS
When a network uses routers, the number of router hops from edge to edge is called the diameter. As noted, it is considered good practice to design for a consistent diameter within a hierarchical network. The trip from any end station to another end station across the backbone should have the same number of hops. The distance from any end station to a server on the backbone should also be consistent.
Limiting the internetwork’s diameter provides predictable performance and ease of troubleshooting. You can add distribution layer routers and client LANs to the hierarchical model without increasing the core layer’s diameter. Use of a block implementation isolates existing end stations from most effects of network growth.
Distribution Layer
The network’s distribution layer is the isolation point between the network’s access and core layers. The distribution layer can have many roles, including implementing the following functions:
Policy-based connectivity (for example, ensuring that traffic sent from a particular network is forwarded out one interface while all other traffic is forwarded out another interface)
Redundancy and load balancing
Aggregation of LAN wiring closets
Aggregation of WAN connections
QoS
Security filtering
Address or area aggregation or summarization
Departmental or workgroup access
Broadcast or multicast domain definition
Routing between virtual LANs (VLANs)
Media translations (for example, between Ethernet and Token Ring)
Redistribution between routing domains (for example, between two different routing protocols)
Demarcation between static and dynamic routing protocols
You can use several Cisco IOS Software features to implement policy at the distribution layer:
Filtering by source or destination address
Filtering on input or output ports
Hiding internal network numbers by route filtering
Static routing
QoS mechanisms, such as priority-based queuing
The distribution layer aggregates routes, providing route summarization toward the core. In campus LANs, the distribution layer also provides routing between VLANs and applies security and QoS policies.
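The following is a minimal sketch of how some of these distribution-layer policy functions might be configured on a multilayer switch; the VLANs, addresses, EIGRP autonomous system number, and interface names are hypothetical and shown only to illustrate the concepts.

! Policy-based filtering on a user VLAN interface (hypothetical addresses)
ip access-list extended DENY-TO-SERVER-FARM
 deny   ip 10.10.20.0 0.0.0.255 10.10.50.0 0.0.0.255
 permit ip any any
!
interface Vlan20
 ip address 10.10.20.1 255.255.255.0
 ip access-group DENY-TO-SERVER-FARM in
!
! Route summarization toward the core on the uplink interface
router eigrp 100
 network 10.10.0.0 0.0.255.255
!
interface TenGigabitEthernet1/1
 description Uplink to campus core
 ip summary-address eigrp 100 10.10.0.0 255.255.0.0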
Access Layer
The access layer provides user access to local segments on the network. The access layer is characterized by switched LAN segments in a campus environment. Microsegmentation using LAN switches provides high bandwidth to workgroups by reducing the number of devices on Ethernet segments. Functions of the access layer include the following:
Layer 2 switching
High availability
Port security
Broadcast suppression
QoS classification and marking and trust boundaries
Rate limiting/policing
Address Resolution Protocol (ARP) inspection
Virtual access control lists (VACLs)
Spanning tree
Trust classification
Power over Ethernet (PoE) and auxiliary VLANs for VoIP
Network Access Control (NAC)
Auxiliary VLANs
You implement high availability models at the access layer. The section “High Availability Network Services” covers availability models. The LAN switch in the access layer can control access to a port and limit the rate at which traffic is sent to and from the port. Access can be controlled by identifying hosts by MAC address (with ARP inspection), by trust classification, and by access lists.
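As an illustration, the following is a minimal access-layer port configuration sketch that combines several of the functions listed above (port security, an auxiliary voice VLAN for VoIP, broadcast suppression, spanning-tree protection, and QoS trust); the VLAN numbers, interface, and thresholds are hypothetical, and command availability varies by platform.

interface GigabitEthernet1/0/10
 description User access port (hypothetical)
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 110
 switchport port-security
 switchport port-security maximum 2
 switchport port-security violation restrict
 storm-control broadcast level 20.00
 spanning-tree portfast
 spanning-tree bpduguard enable
 mls qos trust device cisco-phone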
Other chapters of this book cover the other functions in the list.
For small office/home office (SOHO) environments, the entire hierarchy collapses to interfaces on a single device. Remote access to the central corporate network is through traditional WAN technologies such as ISDN, Frame Relay, and leased lines. You can implement features such as dial-on-demand routing (DDR) and static routing to control costs. Remote access can include virtual private network (VPN) technology.
Table 2-2 summarizes the hierarchical layers.
Figure 2-2 Switched Hierarchical Design
Figure 2-3 shows an example of a routed hierarchical design. In this design, the Layer 3 boundary is pushed toward the access layer. Layer 3 switching occurs in the access, distribution, and core layers. Route filtering is configured on interfaces toward the access layer, and route summarization is configured on interfaces toward the core layer. The benefit of this design is that load balancing occurs from the access layer because the links to the distribution switches are routed.
Figure 2-3 Routed hierarchical design
Another solution for providing redundancy between the access and distribution switching is the Virtual Switching System (VSS). VSS solves the STP looping problem by converting the distribution switching pair into a single logical switch. It removes the dependence on STP and negates the need for Hot Standby Router Protocol (HSRP), Virtual Router Redundancy Protocol (VRRP), or Gateway Load Balancing Protocol (GLBP).
With VSS, the topology changes: each access switch has a single logical upstream distribution switch instead of two upstream distribution switches. VSS was originally supported on Cisco Catalyst 6500 switches using the VSS Supervisor 720-10G. As shown in Figure 2-4, the two switches are connected via 10GE links called virtual switch links (VSLs), which make them appear as a single switch. The key benefits of VSS include the following (a brief configuration sketch follows Figure 2-4):
Layer 3 switching can be used toward the access layer, enhancing nonstop communication.
Scales system bandwidth up to 1.44 Tbps.
Simplified management: a single configuration is maintained for the VSS distribution switch pair.
Better return on investment (ROI) via increased bandwidth between the access layer and the distribution layer.
Supported on Catalyst 4500, 6500, and 6800 switches.
Figure 2-4 VSS
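To illustrate how a VSS pair is formed, the following is a brief configuration sketch for one member of the pair; the virtual switch domain number, port channel, and interfaces are hypothetical, and a matching configuration (with switch 2) would be applied to the peer chassis before conversion.

! On the first switch of the pair
switch virtual domain 100
 switch 1
!
interface Port-channel10
 description Virtual switch link (VSL)
 switch virtual link 1
!
interface TenGigabitEthernet5/4
 channel-group 10 mode on
!
! Conversion is completed from privileged EXEC mode
switch convert mode virtual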
Hub-and-Spoke Design
For network designs, the hub-and-spoke topology provides better convergence times than a ring topology. The hub-and-spoke design, illustrated in Figure 2-5, also scales better and is easier to manage than ring or mesh topologies. For example, implementing security policies in a full-mesh topology would become unmanageable because you would have to configure policies at each location.
Figure 2-5 Hub-and-spoke design
Collapsed Core Design
One alternative to the three-layer hierarchy is the collapsed core design, a two-layer hierarchy used in smaller networks. It is commonly used at sites consisting of a single building with multiple floors. As shown in Figure 2-6, the core and distribution layers are merged, and the combined layer provides all the services of both. Indicators that you need to migrate to the three-layer hierarchy include insufficient capacity and throughput at the distribution layer and requirements for greater network resiliency or geographic dispersion.
Figure 2-6 Collapsed core design
Cisco Enterprise Architecture Model
The Cisco Enterprise Architecture model facilitates the design of larger, more scalable networks.
As networks become more sophisticated, it is necessary to use a more modular approach to design than just WAN and LAN core, distribution, and access layers. The architecture divides the network into functional network areas and modules. These areas and modules of the Cisco Enterprise Architecture are
Enterprise campus area
Enterprise edge area
Service provider (SP) edge area
Enterprise data center module
Enterprise branch module
Enterprise teleworker module
The Cisco Enterprise Architecture model maintains the concept of distribution and access components connecting users, WAN services, and server farms through a high-speed campus backbone. The modular approach in design should be a guide to the network architect. In smaller networks, the layers can collapse into a single layer, even a single device, but the functions remain.
Figure 2-7 shows the Cisco Enterprise Architecture model. The enterprise campus area contains a campus infrastructure that consists of core, building distribution, and building access layers, with a data center module. The enterprise edge area consists of the Internet, e-commerce, VPN, and WAN modules that connect the enterprise to the service provider’s facilities. The SP edge area provides Internet, public switched telephone network (PSTN), and WAN services to the enterprise.
Figure 2-7 Cisco Enterprise Architecture model
The network management servers reside in the campus infrastructure but have tie-ins to all the components in the enterprise network for monitoring and management.
The enterprise edge connects to the edge-distribution module of the enterprise campus. In small and medium sites, the edge distribution can collapse into the campus backbone component. It provides connectivity to outbound services that are further described in later sections.
Enterprise Campus Module
The enterprise campus consists of the following submodules:
Campus core
Building distribution and aggregation switches
Building access
Server farm/data center
Figure 2-8 shows the Enterprise Campus model. The campus infrastructure consists of the campus core, building distribution, and building access layers. The campus core provides a high-speed switched backbone between buildings, to the server farm, and towards the enterprise edge. This segment consists of redundant and fast-convergence connectivity. The building distribution layer aggregates all the closet access switches and performs access control, QoS, route redundancy, and load balancing. The building access switches provide VLAN access, PoE for IP phones and wireless access points, broadcast suppression, and spanning tree.
Figure 2-8 Enterprise Campus model
The server farm or data center provides high-speed access and high availability (redundancy) to the servers. Enterprise servers such as file and print servers, application servers, email servers, Dynamic Host Configuration Protocol (DHCP) servers, and Domain Name System (DNS) servers are placed in the server farm. Cisco Unified CallManager servers are placed in the server farm for IP telephony networks. Network management servers are located in the server farm, but these servers link to each module in the campus to provide network monitoring, logging, trending, and configuration management.
An enterprise campus infrastructure can apply to small, medium, and large locations. In most instances, large campus locations have a three-tier design with a wiring-closet component (building access layer), a building distribution layer, and a campus core layer. Small campus locations likely have a two-tier design with a wiring-closet component (Ethernet access layer) and a backbone core (collapsed core and distribution layers). It is also possible to configure distribution functions in a multilayer building access device to maintain the focus of the campus backbone on fast transport. Medium-sized campus network designs sometimes use a three-tier implementation or a two-tier implementation, depending on the number of ports, service requirements, manageability, performance, and availability required.
Enterprise Edge Area
As shown in Figure 2-9, the enterprise edge consists of the following submodules:
Business web applications and databases, e-commerce networks and servers
Internet connectivity and demilitarized zone (DMZ)
VPN and remote access
Enterprise WAN connectivity
Figure 2-9 Enterprise Edge module
E-Commerce Module
The e-commerce submodule of the enterprise edge provides highly available networks for business services. It uses the high availability designs of the server farm module with the Internet connectivity of the Internet module. Design techniques are the same as those described for these modules. Devices located in the e-commerce submodule include the following:
Web and application servers: Primary user interface for e-commerce navigation
Database servers: Contain the application and transaction information
Firewall and firewall routers: Govern the communication between users of the system
Network intrusion prevention systems (IPS): Provide monitoring of key network segments in the module to detect and respond to attacks against the network
Multilayer switch with IPS modules: Provide traffic transport and integrated security monitoring
Internet Connectivity Module
The Internet submodule of the enterprise edge provides services such as public servers, email, and DNS. Connectivity to one or several Internet service providers (ISPs) is also provided. Components of this submodule include the following:
Firewall and firewall routers: Provide protection of resources, stateful filtering of traffic, and VPN termination for remote sites and users
Internet edge routers: Provide basic filtering and multilayer connectivity
FTP and HTTP servers: Provide for web applications that interface the enterprise with the world via the public Internet
SMTP relay servers: Act as relays between the Internet and the intranet mail servers
DNS servers: Serve as authoritative external DNS servers for the enterprise and relay internal requests to the Internet
Several models connect the enterprise to the Internet. The simplest form is to have a single circuit between the enterprise and the SP, as shown in Figure 2-10. The drawback is that you have no redundancy or failover if the circuit fails.
Figure 2-10 Simple Internet connection
You can use multihoming solutions to provide redundancy or failover for Internet service. Figure 2-11 shows four Internet multihoming options:
Option 1: Single router, dual links to one ISP
Option 2: Single router, dual links to two ISPs
Option 3: Dual routers, dual links to one ISP
Option 4: Dual routers, dual links to two ISPs
Figure 2-11 Internet multihoming options
Option 1 provides link redundancy but does not provide ISP and local router redundancy. Option 2 provides link and ISP redundancy but does not provide redundancy for a local router failure. Option 3 provides link and local router redundancy but does not provide for an ISP failure. Option 4 provides for full redundancy of the local router, links, and ISPs.
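Although this chapter does not prescribe a routing protocol for multihoming, BGP is the protocol typically used when peering with one or more ISPs. The following is a minimal sketch for a single edge router dual-homed to two ISPs (Option 2); the AS numbers, neighbor addresses, and advertised prefix are hypothetical.

router bgp 65001
 ! ISP A and ISP B peerings (hypothetical addresses and AS numbers)
 neighbor 203.0.113.1 remote-as 64500
 neighbor 198.51.100.1 remote-as 64501
 ! Advertise the enterprise public prefix to both providers
 network 192.0.2.0 mask 255.255.255.0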
VPN/Remote Access
The VPN/remote access module of the enterprise edge provides remote-access termination services, including authentication for remote users and sites. Components of this submodule include the following:
Firewalls: Provide stateful filtering of traffic, authenticate trusted remote sites, and provide connectivity using IPsec tunnels
Dial-in access concentrators: Terminate legacy dial-in connections and authenticate individual users
Cisco Adaptive Security Appliances (ASAs): Terminate IPsec tunnels, authenticate individual remote users, and provide firewall and intrusion prevention services
Network intrusion prevention system (IPS) appliances
If you use a remote-access terminal server, this module connects to the PSTN. Today’s networks often prefer VPNs over remote-access terminal servers and dedicated WAN links. VPNs reduce communication expenses by leveraging the infrastructure of SPs. For critical applications, the cost savings might be offset by a reduction in enterprise control and the loss of deterministic service. Remote offices, mobile users, and home offices access the Internet using the local SP with secured IPsec tunnels to the VPN/remote access submodule via the Internet submodule.
Figure 2-12 shows a VPN design. Branch offices obtain local Internet access from an ISP. Teleworkers also obtain local Internet access. VPN software creates secured VPN tunnels to the VPN server that is located in the VPN submodule of the enterprise edge.
Figure 2-12 VPN architecture
Enterprise WAN
The enterprise WAN submodule of the enterprise edge provides access to WAN services. WAN technologies include the following:
Multiprotocol Label Switching (MPLS)
Metro Ethernet
Leased lines
Synchronous Optical Network (SONET) and Synchronous Digital Hierarchy (SDH)
PPP
Frame Relay
ATM
Cable
Digital subscriber line (DSL)
Wireless
Chapter 6, “WAN Technologies and the Enterprise Edge,” and Chapter 7, “WAN Design,” cover these WAN technologies. Routers in the enterprise WAN provide WAN access, QoS, routing, redundancy, and access control to the WAN. Of these technologies, MPLS is the most widely used WAN technology today. For MPLS networks, the WAN routers prioritize IP packets based on configured differentiated services code point (DSCP) values to use one of several MPLS QoS levels. Figure 2-13 shows the WAN module connecting to a Frame Relay SP edge. The enterprise edge routers in the WAN module connect to the SP’s Frame Relay switches.
Figure 2-13 WAN module
Use the following guidelines when designing the enterprise edge:
Determine the connections needed to connect the corporate network to the Internet, and assign these connections to the Internet connectivity module.
Create the e-commerce module for customers and partners that require Internet access to business and database applications.
Design the remote access/VPN module for VPN access to the internal network from the Internet. Implement the security policy and configure authentication and authorization parameters.
Identify the edge sections that have permanent connections to remote branch offices, and assign these to the WAN, metro area network (MAN), and VPN modules.
Service Provider Edge Module
The SP edge module, shown in Figure 2-14, consists of SP edge services such as the following:
Internet services
PSTN services
WAN services
Figure 2-14 WAN/Internet SP edge module
Enterprises use SPs to acquire network services. ISPs offer enterprises access to the Internet. ISPs can route the enterprise’s networks to their own network and to upstream and peer Internet providers. ISPs can provide Internet services via Ethernet, DSL, or T1/DS3 access. It is now common for the SP to place its router at the customer site and provide Ethernet access to the customer. Connectivity with multiple ISPs was described in the section “Internet Connectivity Module.”
For voice services, PSTN providers offer access to the global public voice network. For the enterprise network, the PSTN lets dialup users access the enterprise via analog or cellular wireless technologies. It is also used for WAN backup using ISDN services.
WAN SPs offer MPLS, Frame Relay, ATM, and other WAN services for enterprise site-to-site connectivity with permanent connections. These and other WAN technologies are described in Chapter 6.
Remote Modules
The remote modules of the Cisco Enterprise Architecture model are the enterprise branch, enterprise data center, and enterprise teleworker modules.
Enterprise Branch Module
The enterprise branch normally consists of remote offices or sales offices. These branch offices rely on the WAN to use the services and applications provided in the main campus. Infrastructure at the remote site usually consists of a WAN router and a small LAN switch, as shown in Figure 2-15. As an alternative to MPLS, it is common to use site-to-site IPsec VPN technologies to connect to the main campus.
Figure 2-15 Enterprise branch module
Enterprise Data Center Module
The enterprise data center uses the network to enhance server, storage, and application services. The offsite data center provides disaster recovery and business continuance services for the enterprise. Highly available WAN services are used to connect the enterprise campus to the remote enterprise data center. The data center components include the following:
Network infrastructure: Gigabit and 10 Gigabit Ethernet, InfiniBand, optical transport, and storage switching
Interactive services: Computer infrastructure services, storage services, security, and application optimization
DC management: Cisco Fabric Manager and Cisco VFrame for server and service management
The enterprise data center is covered in detail in Chapter 4, “Data Center Design.”
Enterprise Teleworker Module
The enterprise teleworker module consists of a small office or a mobile user who needs to access services of the enterprise campus. As shown in Figure 2-16, mobile users connect from their homes, hotels, or other locations using dialup or Internet access lines. VPN clients are used to allow mobile users to securely access enterprise applications. The Cisco Virtual Office solution provides a solution for teleworkers that is centrally managed using small integrated service routers (ISRs) in the VPN solution. IP phone capabilities are also provided in the Cisco Virtual Office solution, providing corporate voice services for mobile users.
Figure 2-16 Enterprise teleworker solution
Table 2-3 summarizes the Cisco Enterprise Architecture.
Figure 2-17 HSRP: The phantom router represents the real routers
In Figure 2-17, the following sequence occurs (a minimal HSRP configuration sketch follows the list):
The workstation is configured to use the phantom router (192.168.1.1) as its default router.
Upon booting, the routers elect Router A as the HSRP active router. The active router does the work for the HSRP phantom. Router B is the HSRP standby router.
When the workstation sends an ARP frame to find its default router, Router A responds with the phantom router’s MAC address.
If Router A goes offline, Router B takes over as the active router, continuing the delivery of the workstation’s packets. The change is transparent to the workstation.
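A minimal HSRP configuration sketch for the two routers in Figure 2-17 might look like the following; the interface, group number, and priority values are assumptions made for illustration.

! Router A (HSRP active)
interface GigabitEthernet0/0
 ip address 192.168.1.2 255.255.255.0
 standby 1 ip 192.168.1.1
 standby 1 priority 110
 standby 1 preempt
!
! Router B (HSRP standby, default priority 100)
interface GigabitEthernet0/0
 ip address 192.168.1.3 255.255.255.0
 standby 1 ip 192.168.1.1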
VRRP
VRRP is a router redundancy protocol defined in RFC 3768. RFC 5798 defines VRRPv3 for both IPv4 and IPv6 networks. VRRP is based on Cisco’s HSRP but is not compatible with it. VRRP specifies an election protocol that dynamically assigns responsibility for a virtual router to one of the VRRP routers on a LAN. The VRRP router controlling the IP addresses associated with a virtual router is called the master, and it forwards packets sent to those IP addresses. The election process provides dynamic failover of the forwarding responsibility should the master become unavailable. This allows any of the virtual router IP addresses on the LAN to be used as the default first-hop router by end hosts. A VRRP backup router assumes the forwarding responsibility for the virtual router should the master fail.
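For comparison, a minimal VRRP sketch of the same first-hop design follows; the interface, group number, and priority values are hypothetical.

! VRRP master candidate
interface GigabitEthernet0/0
 ip address 192.168.1.2 255.255.255.0
 vrrp 1 ip 192.168.1.1
 vrrp 1 priority 110
!
! VRRP backup (default priority 100)
interface GigabitEthernet0/0
 ip address 192.168.1.3 255.255.255.0
 vrrp 1 ip 192.168.1.1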
GLBP
GLBP protects data traffic from a failed router or circuit, as HSRP and VRRP do, while also allowing packet load sharing among a group of redundant routers. Methods for load balancing with HSRP and VRRP work for small networks, but GLBP allows first-hop load balancing on larger networks.
GLBP differs from HSRP in that it provides load balancing among multiple redundant routers (up to four gateways in a GLBP group). It load-balances by using a single virtual IP address and multiple virtual MAC addresses. Each host is configured with the same virtual IP address, and all routers in the virtual router group participate in forwarding packets. By default, all routers within a group forward traffic and load-balance automatically. GLBP members communicate with each other through hello messages sent every 3 seconds to the multicast address 224.0.0.102, User Datagram Protocol (UDP) port 3222. GLBP benefits include the following (a minimal configuration sketch follows the list):
Load sharing: GLBP can be configured in a way that traffic from LAN clients can be shared by multiple routers.
Multiple virtual routers: GLBP supports up to 1024 virtual routers (GLBP groups) on each physical interface of a router.
Preemption: GLBP enables you to preempt an active virtual gateway with a higher-priority backup.
Authentication: Simple text password authentication is supported.
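The following is a minimal GLBP sketch for two gateways sharing the same virtual IP address; the group number, load-balancing method, addresses, and priority values are hypothetical.

! Gateway 1
interface GigabitEthernet0/0
 ip address 192.168.1.2 255.255.255.0
 glbp 1 ip 192.168.1.1
 glbp 1 priority 110
 glbp 1 preempt
 glbp 1 load-balancing round-robin
!
! Gateway 2 (default priority 100)
interface GigabitEthernet0/0
 ip address 192.168.1.3 255.255.255.0
 glbp 1 ip 192.168.1.1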
Server Redundancy
Some environments need fully redundant (mirrored) file and application servers. For example, in a brokerage firm where traders must access data to buy and sell stocks, two or more redundant servers can replicate the data. Also, you can deploy Cisco Unified Communications Manager (CUCM) servers in clusters for redundancy. The servers should be on different networks and use redundant power supplies. To provide high availability in the server farm module, you have the following options:
Single attachment: This is not recommended because it requires alternate mechanisms (HSRP, GLBP) to dynamically find an alternate router.
Dual attachment: This solution increases availability by using redundant network interface cards (NICs).
Fast EtherChannel (FEC) and Gigabit EtherChannel (GEC) port bundles: This solution bundles 2 or 4 Fast or Gigabit Ethernet links to increase bandwidth.
Route Redundancy
Designing redundant routes has two purposes: balancing loads and increasing availability.
Load Balancing
Most IP routing protocols can balance loads across parallel links that have equal cost. Use the maximum-paths command to change the number of links that the router will balance over for IP; the default is four, and the maximum is six. To support load balancing, keep the bandwidth consistent within a layer of the hierarchical model so that all paths have the same cost. (Cisco Enhanced Interior Gateway Routing Protocol [EIGRP] is an exception because it can load-balance traffic across multiple routes that have different metrics by using a feature called variance.)
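As a sketch, the maximum-paths and variance commands might be applied as follows under the routing process; the EIGRP autonomous system number and network statement are hypothetical.

router eigrp 100
 network 10.0.0.0
 ! Load-balance across up to four equal-cost paths (the default)
 maximum-paths 4
 ! EIGRP only: also use unequal-cost paths with metrics up to 2x the best path
 variance 2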
A hop-based routing protocol does load balancing over unequal-bandwidth paths as long as the hop count is equal. After the slower link becomes saturated, packet loss at the saturated link prevents full utilization of the higher-capacity links; this scenario is called pinhole congestion. You can avoid pinhole congestion by designing and provisioning equal-bandwidth links within one layer of the hierarchy or by using a routing protocol that takes bandwidth into account.
IP load balancing in a Cisco router depends on which switching mode the router uses. Process switching load balances on a packet-by-packet basis. Fast, autonomous, silicon, optimum, distributed, and NetFlow switching load balances on a destination-by-destination basis because the processor caches information used to encapsulate the packets based on the destination for these types of switching modes.
Increasing Availability
In addition to facilitating load balancing, redundant routes increase network availability.
You should keep bandwidth consistent within a given design component to facilitate load balancing. Another reason to keep bandwidth consistent within a layer of a hierarchy is that routing protocols converge much faster on multiple equal-cost paths to a destination network.
By using redundant, meshed network designs, you can minimize the effect of link failures. Depending on the convergence time of the routing protocols, a single link failure should not have a catastrophic effect.
You can design redundant network links to provide a full mesh or a well-connected partial mesh. In a full-mesh network, every router has a link to every other router, as shown in Figure 2-18. A full-mesh network provides complete redundancy and also provides good performance because there is just a single-hop delay between any two sites. The number of links in a full mesh is n(n–1)/2, where n is the number of routers; for example, a full mesh of 6 routers requires 6(5)/2 = 15 links. A well-connected partial-mesh network provides every router with links to at least two other routing devices in the network.
Figure 2-18 Full-mesh network: Every router has a link to every other router in the network.
A full-mesh network can be expensive to implement in WANs because of the required number of links. In addition, groups of routers that broadcast routing updates or service advertisements have practical limits to scaling. As the number of routing peers increases, the amount of bandwidth and CPU resources devoted to processing broadcasts increases.
A suggested guideline is to keep broadcast traffic at less than 20 percent of the bandwidth of each link; this amount limits the number of peer routers that can exchange routing tables or service advertisements. When designing for link bandwidth, reserve 80 percent of it for data, voice, and video traffic so that the rest can be used for routing and other link traffic. When planning redundancy, follow guidelines for simple, hierarchical design. Figure 2-19 illustrates a classic hierarchical and redundant enterprise design that uses a partial-mesh rather than a full-mesh topology. For LAN designs, links between the access and distribution layers can be Fast Ethernet, with links to the core at Gigabit Ethernet speeds.
Figure 2-19 Partial-mesh design with redundancy
Link Media Redundancy
In mission-critical applications, it is often necessary to provide redundant media.
In switched networks, switches can have redundant links to each other. This redundancy is good because it minimizes downtime, but it can result in broadcasts continuously circling the network, which is called a broadcast storm. Because Cisco switches implement the IEEE 802.1D Spanning Tree Protocol (STP), you can avoid this looping. The spanning-tree algorithm guarantees that only one path is active between two network stations. The algorithm permits redundant paths that are automatically activated when the active path experiences problems.
STP has a design limitation of only allowing one of the redundant paths to be active. VSS can be used with Catalyst 6500 switches to overcome this limitation.
You can use EtherChannel to bundle links for load balancing. Links are bundled in powers of 2 (2, 4, or 8 links per group), and the bandwidth of the links is aggregated; hence, two 10GE ports bundled together provide 20 Gbps of bandwidth. For more granular load balancing, use a combination of source and destination per-port load balancing if it is available on the switch. In current networks, EtherChannel uses LACP, a standards-based negotiation protocol defined in IEEE 802.3ad (an older solution used the Cisco-proprietary PAgP protocol). LACP helps protect against Layer 2 loops caused by misconfiguration. One downside is that it introduces overhead and delay when setting up the bundle.
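A minimal EtherChannel sketch using LACP follows; the interfaces, port-channel number, and load-balancing hash are hypothetical, and the available load-balancing options vary by platform.

interface range TenGigabitEthernet1/1 - 2
 channel-protocol lacp
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
!
! Hash on source and destination IP addresses for more granular load balancing
port-channel load-balance src-dst-ip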
Because WAN links are often critical pieces of the internetwork, WAN environments often deploy redundant media. As shown in Figure 2-20, you can provision backup links so that they become active when a primary link goes down or becomes congested.
Figure 2-20 Backup links can provide redundancy.
Often, backup links use a different technology. For example, it is common to use Internet VPNs to back up primary MPLS links in today’s networks. By using floating static routes, you can give the backup route a higher administrative distance (the value Cisco routers use to select routing information) so that it is not used unless the primary route goes down.
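A floating static route sketch follows; the prefix, next hop, and administrative distance of 250 are hypothetical, with the assumption that the primary path is learned dynamically with a lower administrative distance.

! Backup route over the Internet VPN; installed only if the dynamically learned
! primary route through the MPLS WAN disappears from the routing table
ip route 10.20.0.0 255.255.0.0 172.16.1.1 250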
Cisco supports Multilink Point-to-Point Protocol (MPPP), an Internet Engineering Task Force (IETF) standard for ISDN B-channel (or asynchronous serial interface) aggregation that bonds multiple WAN links into a single logical channel. MPPP is defined in RFC 1990. MPPP does not specify how a router should decide when to bring up extra channels. Instead, it seeks to ensure that packets arrive in sequence at the receiving router. The data is encapsulated within PPP, and each datagram is given a sequence number. At the receiving router, PPP uses this sequence number to re-create the original data stream. Multiple channels appear as one logical link to upper-layer protocols. For Frame Relay networks, FRF.16.1 Multilink Frame Relay performs a similar function.
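A minimal Multilink PPP sketch that bonds two serial links into one logical bundle follows; the interface numbers, group number, and addressing are hypothetical.

interface Multilink1
 ip address 10.1.1.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
interface Serial0/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial0/1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1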
Table 2-4 summarizes the four main redundancy models.
Figure 2-21 Scenario for questions 28–33
Which is the campus core layer?
Block 1
Block 2
Block 3
Block 4
Block 5
Block 6
Which is the enterprise edge?
Block 1
Block 2
Block 3
Block 4
Block 5
Block 6
Which is the campus access layer?
Block 1
Block 2
Block 3
Block 4
Block 5
Block 6
Which is the enterprise edge distribution?
Block 1
Block 2
Block 3
Block 4
Block 5
Block 6
Which is the campus distribution layer?
Block 1
Block 2
Block 3
Block 4
Block 5
Block 6
Which is the campus data center?
Block 1
Block 2
Block 3
Block 4
Block 5
Block 6
Which solution supports the enterprise teleworker?
IP telephony
Enterprise campus
Cisco Virtual Office
SP edge
Hierarchical design
Data Center 3.0
Which are two benefits of using a modular approach?
Simplifies the network design
Reduces the amount of network traffic on the network
Often reduces the cost and complexity of the network
Makes the network simple by using full mesh topologies
Which three modules provide infrastructure for remote users? (Select three.)
Teleworker module
WAN module
Enterprise branch module
Campus module
Enterprise data center
Core, distribution, access layers
Which are borderless networks infrastructure services? (Select three.)
IP telephony
Security
QoS
SP edge
High availability
Routing
Which module contains devices that support AAA and store passwords?
WAN module
VPN module
Server farm module
Internet connectivity module
SP edge
TACACS
Which topology is best used for connectivity in the building distribution layer?
Full mesh
Partial mesh
Hub and spoke
Dual ring
EtherChannel
What are two ways that wireless access points are used? (Choose two.)
Function as a hub for wireless end devices
Connect to the enterprise network
Function as a Layer 3 switch for wireless end devices
Provide physical connectivity for wireless end devices
Filter out interference from microwave devices
In which ways do application network services help resolve application issues? (Choose two.)
It can compress, cache, and optimize content.
Optimizes web streams, which can reduce latency and offload the web server.
Having multiple data centers increases productivity.
Improves application response times by using faster servers.
Which are key features of the distribution layer? (Select three.)
Aggregates access layer switches
Provides a routing boundary between access and core layers
Provides connectivity to end devices
Provides fast switching
Provides transport to the enterprise edge
Provides VPN termination
Which Cisco solution allows a pair of switches to act as a single logical switch?
HSRP
VSS
STP
GLBP
Which module or layer connects the server layer to the enterprise edge?
Campus distribution layer
Campus data center access layer
Campus core layer
Campus MAN module
WAN module
Internet connectivity module
Which server type is used in the Internet connectivity module?
Corporate
Private
Public
Internal
Database
Application
Which server types are used in the e-commerce module for users running applications and storing data? (Select three.)
Corporate
Private
Public
Internet
Database
Application
Web
Which are submodules of the enterprise campus module? (Select two.)
WAN
LAN
Server farm/data center
Enterprise branch
VPN
Building distribution
Which are the three layers of the hierarchical model? (Select three.)
WAN layer
LAN layer
Core layer
Aggregation layer
Access layer
Distribution layer
Edge layer
You need to design for packet load sharing among a group of redundant routers. Which protocol allows you to do this?
HSRP
GLBP
VRRP
AARP
Which is a benefit of using network modules for network design?
Network availability increases.
Network becomes more secure.
Network becomes more scalable.
Network redundancy is higher.
The Cisco Enterprise Architecture takes which approach to network design?
It takes a functional modular approach.
It takes a sectional modular approach.
It takes a hierarchical modular approach.
It takes a regional modular approach.
Which is the recommended design geometry for routed networks?
Design linear point-to-point networks
Design in rectangular networks
Design in triangular networks
Design in circular networks
Which layer performs rate limiting, network access control, and broadcast suppression?
Core layer
Distribution layer
Access layer
Data link layer
Which layer performs routing between VLANs, filtering, and load balancing?
Core layer
Distribution layer
Access layer
Application layer
Which topology allows for maximum growth?
Triangles
Collapsed core-distribution
Full mesh
Core-distribution-access
Which layer performs port security and DHCP snooping?
Core layer
Distribution layer
Access layer
Application layer
Which layer performs Active Directory and messaging?
Core layer
Distribution layer
Access layer
Application layer
Which layers perform redundancy? (Select two.)
Core layer
Distribution layer
Access layer
Data Link Layer
Which statement is true regarding hierarchical network design?
Makes the network harder since there are many submodules to use
Provides better performance and network scalability
Prepares the network for IPv6 migration from IPv4
Secures the network with access filters in all layers
Based on Figure 2-22, and assuming that devices may be in more than one layer, list which devices are in each layer.
Figure 2-22 Question 60
Access layer:
Distribution layer:
Core:
Use Figure 2-23 to answer questions 61–63.
Figure 2-23 Scenario for questions 61–63
Which section(s) belong(s) to the core layer?
Which section(s) belong(s) to the distribution layer?
Which section(s) belong(s) to the access layer?