The elements of network design (hierarchy, redundancy, addressing, and summarization) have been addressed in relative isolation up to this point. The following list groups them together:
Hierarchy: Provides a logical foundation, the "skeleton" on which addresses "hang."
Addressing: Isn't just for finding networks and hosts; it also provides points of summarization.
Summarization: The primary tool used to bound the area affected by network changes.
Stability/Reliability: Provided by bounding the area affected by changes in the network.
Redundancy: Provides alternate routes around single points of failure.
Figure 4-1 shows the traffic and routing table patterns throughout a well-designed hierarchical network. (You may recognize Figure 4-1 because you have seen pieces of it in previous chapters.) Note that the routing table size is managed through summarization, so no single layer has an overwhelming number of routes, and no single router must recompute routes to every destination in the network when a change occurs.
Figure 4-1 Traffic and Routes in a Well-Designed Network
How do you design a network so that the routes and traffic are this well behaved? By managing the size of the routing table, which is critical in large-scale network design.
The primary means of controlling the routing table size in a network is through summarization, which was covered in detail in Chapter 2, "Addressing & Summarization." Summarization is highly dependent on correct addressing. Therefore, the routing table size, summarization, and addressing (the three basics of highly scalable networks) are closely related.
To illustrate these principles, this chapter begins with a network that is experiencing stability problems and "reforms" it to make it stable and scalable. This exercise applies the principles discussed in the first three chapters of this book.
Reforming an Unstable Network
This section of the chapter reforms the network shown in Figure 4-2. Because this is a rather large network, only one small section is tackled at a time. This chapter covers how to implement changes in the topology and addressing, which can improve this network. Chapter 5, "OSPF Network Design," Chapter 6, "IS-IS Network Design," and Chapter 7, "EIGRP Network Design" address how to implement routing protocols on this network.
Figure 4-2 An Unstable Network
This exercise begins with the core of the network and works outward to the distribution and access layers as detailed in the following sections.
Examining the Network Core
As you consider the core of this network, it's good to remember the design goals that you worked through for network cores back in Chapter 1, "Hierarchical Design Principles." As your primary concerns, focus on switching speed and providing full reachability without policy implementations in the network core.
The first problem in the network illustrated in Figure 4-2 is that the core has too much redundancy: this is a fully meshed design with 5 × (5 − 1) = 20 paths. The primary exercise here is to determine which links can be eliminated. To do this, you need to narrow your focus a bit; Figure 4-3 shows only the core and its direct connections.
Figure 4-3 The Network Core
Network traffic in the network illustrated in Figure 4-3 flows between the common services and external connections to and from the HQ VLANs and the networks behind the distribution layer. A diagram of this network traffic reveals that most traffic flows:
From the networks behind Routers A, C, and D to the networks behind Router E
From the networks behind Routers A, C, and D to the networks behind Router B
Because there won't be much traffic flowing between Router A and Router C or Router A and Router D, these are the two best links to remove. Removing these two links will reduce the core to a partially-meshed network with fewer paths and more stability. The total number of paths through the core will be cut from 20 to 6, at most, for any particular destination.
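If you want to check this count yourself, the following short Python sketch enumerates the loop-free paths through the trimmed core. The router names and the adjacency list are assumptions drawn from Figure 4-3, with the Router A-Router C and Router A-Router D links already removed; counting the paths from Router A to Router E yields the six paths cited above.

    # Core adjacencies after removing the A-C and A-D links (assumed from Figure 4-3).
    links = {
        "A": ["B", "E"],
        "B": ["A", "C", "D", "E"],
        "C": ["B", "D", "E"],
        "D": ["B", "C", "E"],
        "E": ["A", "B", "C", "D"],
    }

    def simple_paths(node, dest, visited=()):
        """Yield every loop-free path from node to dest."""
        if node == dest:
            yield visited + (node,)
            return
        for neighbor in links[node]:
            if neighbor not in visited:
                yield from simple_paths(neighbor, dest, visited + (node,))

    paths = list(simple_paths("A", "E"))
    for path in paths:
        print(" -> ".join(path))
    print(len(paths), "loop-free paths from Router A to Router E")  # prints 6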
Beyond the hyper-redundancy, there are also network segments with hosts connected directly to Router A: the corporate LAN VLAN trunks. Terminating the corporate VLANs directly into Router A means:
Router A must react to any change in the status of a corporate VLAN.
Any access controls that need to be applied to hosts attached to one of the corporate VLANs must be configured (and managed) on a core router.
For these reasons, a router will be placed between Router A and the corporate VLANs. Adding this router moves summarization and policy implementation onto the new router, which helps to maintain the goals of the core. Remember, the core's primary function should be switching packets and not summarization or policy implementation.
Finally, after dealing with the physical topology issues, you can examine the IP addresses used in the core of the network; they are all in the 172.16.3.x range of addresses. Can you summarize this address space out toward the distribution layer (and the other outlying pieces of the network)?
To answer this question, you'll need to see if other networks are in the same range of addresses. In this case, 172.16.2.x and 172.16.4.x are both corporate VLANs (refer to Figure 4-2), which effectively eliminates the capability to summarize not only links in and around the core of the network but also the networks within the corporate VLAN.
You have two options: Leave the addresses as they are, which could actually work in this situation, or renumber the links in the core. Because you don't want to worry about this problem again, readdressing the links between the core routers is the preferred option. You need to replace the 172.16.3.x address space that is currently used in the core with something that isn't used elsewhere in the network and that won't affect the capability to summarize in any other area of the network. Unfortunately, choosing a good address space in a network that is already in daily use is difficult.
A quick perusal of the IP addresses in use shows the following:
172.16.0.x through 172.16.15.x are corporate VLANs; to make this a block that can be summarized, you can end it at 172.16.15.x, summarized to 172.16.0.0/20.
172.16.17.x through 172.16.19.x consist of server farm and mainframe connectivity; to make this a block that can be summarized, you can end it at 172.16.23.x, summarized to 172.16.16.0/21.
Subnets of 172.16.20.x are all used for connections to external networks.
172.16.22.x is used for dial-in clients and other connections.
172.16.25.x through 172.16.43.x are used for one set of remote sites.
172.16.66.x through 172.16.91.x are used for another set of remote sites.
These are all of the 172.16.x.x networks currently in use. The point-to-point links in and around the core use 30-bit masks, so you need a block of only 256 addresses (a block that can be summarized as a single /24, the size of a Class C network). The lowest such block not currently in use is 172.16.21.0/24; therefore, the links in and around the core should be renumbered into this address space.
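Before committing to the renumbering, a quick sanity check confirms that the candidate block really is unused. The sketch below uses Python's ipaddress module; the in-use list is a rough transcription of the ranges surveyed above and is only illustrative.

    import ipaddress

    # Rough transcription of the address ranges listed above (illustrative only).
    in_use = (
        [ipaddress.ip_network(f"172.16.{third}.0/24") for third in range(0, 16)]    # corporate VLANs
        + [ipaddress.ip_network(f"172.16.{third}.0/24") for third in (17, 18, 19)]  # server farm/mainframe
        + [ipaddress.ip_network("172.16.20.0/24"),                                  # external connections
           ipaddress.ip_network("172.16.22.0/24")]                                  # dial-in clients
        + [ipaddress.ip_network(f"172.16.{third}.0/24") for third in range(25, 44)] # first remote block
        + [ipaddress.ip_network(f"172.16.{third}.0/24") for third in range(66, 92)] # second remote block
    )

    candidate = ipaddress.ip_network("172.16.21.0/24")
    if any(candidate.overlaps(existing) for existing in in_use):
        print(candidate, "collides with deployed address space")
    else:
        print(candidate, "is free and can be carved into /30s for the core links")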
If You Didn't Readdress the Core Links...
It's possible to rely on the way routers choose the best path to overcome the overlapping address space between the core and the HQ VLANs without readdressing the links in the network core.
You do, however, need to summarize the routes advertised from the HQ VLANs anyway. Because the routers within the core are going to have more specific (longer prefix) routes to any destination within the core, everything will work.
Relying on leaked, longer prefixes to provide correct routing is not recommended because the prefixes can be difficult to maintain, and simple configuration mistakes can cause major side effects. But it is useful to consider this option if you are in a position where networks can't be renumbered to summarize correctly.
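The behavior this shortcut relies on is ordinary longest-prefix matching: a more specific route always wins over a covering summary. Here is a minimal sketch of that decision, assuming a hypothetical /30 taken from the existing 172.16.3.x core range alongside the 172.16.0.0/20 corporate VLAN summary.

    import ipaddress

    # Hypothetical routing table: one /30 core link plus the HQ VLAN summary.
    routes = {
        ipaddress.ip_network("172.16.3.8/30"): "core point-to-point link",
        ipaddress.ip_network("172.16.0.0/20"): "summary toward the HQ VLANs",
    }

    def lookup(address):
        """Return the matching route with the longest prefix."""
        dest = ipaddress.ip_address(address)
        matches = [net for net in routes if dest in net]
        return max(matches, key=lambda net: net.prefixlen) if matches else None

    best = lookup("172.16.3.9")
    print(best, "->", routes[best])  # the /30 wins over the /20 summary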
Figure 4-4 provides an illustration of what the redesigned core from Figure 4-2 looks like after these changes:
Removing the excessive redundancy in the core by removing two point-to-point links
Adding a single router between the core and the HQ VLANs to move policy implementation and summarization out of the core
Renumbering the point-to-point links in the core
Figure 4-4 Redesigned Network Core
After redesigning the core and improving network stability for the network shown in Figure 4-2, you need to look at the distribution and access layers for possible improvements.
Distribution Layer and Access Layer Topology
As you work through the access and distribution area of this network, keep the goals of the layers in mind. The goals for the distribution layer are as follows:
Control the routing table size by isolating topology changes through summarization.
Aggregate traffic.
The goals for the access layer are as follows:
Feed traffic into the network.
Control access into the network, implement any network policies, and perform other edge services as needed.
Because the design of the distribution and access layers is so tightly coupled, you need to examine them together. Figure 4-5 focuses on the distribution and access layers and the Frame Relay links that connect them. This way you can more easily understand them in context with the discussion that follows.
Figure 4-5 Distribution and Access Layers
At the distribution layer, Routers A, B, C, and D are currently cross connected, and they each have only one connection to the core. This produces major problems in summarization and the number of paths to a given network within the core. For example, to reach 172.16.98.0/24, a router in the core has the following possible paths:
- Core, Router B, Cloud H
- Core, Router A, Router B, Cloud H
- Core, Router C, Router B, Cloud H
- Core, Router D, Router C, Router B, Cloud H
- Core, Router C, Cloud J
- Core, Router D, Router C, Cloud J
- Core, Router B, Router C, Cloud J
- Core, Router A, Router B, Router C, Cloud J
Furthermore, if a host that is connected to the 172.16.98.0/24 network sends a packet toward the 172.16.66.0/24 network, it will most likely end up traveling across the link between Router C and Router B rather than traversing the core. This can defeat traffic engineering and cause other stability problems.
The most obvious solution is to simply dual home each of the distribution layer routers to the core rather than connecting directly between them. (Dual home means to connect each distribution layer router to two core routers rather than one.)
After this change, there is still a single point of failure to consider: If Router A fails, the remote networks 172.16.25.0/24 through 172.16.43.0/24 will lose all connectivity to the rest of the network. You can resolve this problem by simply providing these networks with another link to the distribution layer through Router B.
Adding this link means Router B now has three Frame Relay connections; Router A and Router C have two; and Router D has one. Depending on the type of router and traffic handling factors, you may need to even out how many connections each router has. The following adjustments to where the frame links connect leave two connections per distribution layer router:
Move the link between Cloud H and Router B to Router C; this leaves Router B with only two Frame Relay connections.
Move the link between Cloud J and Router C to Router D; this leaves Router C with two Frame Relay connections and adds one to Router D for a total of two.
Note that moving these links around is necessary only if there are issues with traffic handling or port density on the distribution layer routers; it can also improve load balancing by evening out the links attached to each router. Figure 4-6 illustrates what the network looks like after making these link changes.
Figure 4-6 Modified Distribution and Access Layers
These modifications leave a plethora of paths; normally, there are four ways to reach any access layer network from the core. For example, the 172.16.25.0/24 network has the following paths:
- Cloud E, Router A, Core (through 172.16.21.12/30)
- Cloud E, Router A, Core (through 10.1.1.26/26)
- Cloud M, Router B, Core (through 172.16.21.8/30)
- Cloud M, Router B, Core (through the alternate link)
A single failure (for example, Router A) leaves two paths through Router B. A second failure (Frame Relay Cloud M, for example) isolates the remote networks. If the second failure isolates the remote network anyway, why leave in the extra redundancy?
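One way to convince yourself of this is to model the remaining connectivity and remove pieces of it. In the sketch below, the node names and adjacencies are assumptions based on Figure 4-6, reduced to a single remote site; a breadth-first search checks whether the remote network is still reachable from the core after each failure.

    from collections import deque

    # Assumed adjacencies between the core, one pair of distribution routers,
    # the Frame Relay clouds, and a single remote site (based on Figure 4-6).
    topology = {
        "core":    ["routerA", "routerB"],
        "routerA": ["core", "cloudE"],
        "routerB": ["core", "cloudM"],
        "cloudE":  ["routerA", "remote"],
        "cloudM":  ["routerB", "remote"],
        "remote":  ["cloudE", "cloudM"],
    }

    def reachable(src, dst, failed):
        """Breadth-first search that ignores failed nodes."""
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                return True
            for neighbor in topology[node]:
                if neighbor not in seen and neighbor not in failed:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return False

    print(reachable("core", "remote", failed={"routerA"}))            # True: Router B still works
    print(reachable("core", "remote", failed={"routerA", "cloudM"}))  # False: the remote is isolated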
Figure 4-7 shows the network after removing the extra (redundant) links between the core and the distribution layer routers, which leaves two paths between the core and any remote network.
Figure 4-7 Final Topology Modifications in Distribution and Access Layers
So far, then, you have moved some links around between the distribution layer and the core to provide better points of summarization, and you have removed some redundancy that turned out to be overkill. The next step is to make any possible changes to addressing in the distribution and access layers to improve stability.
Overhead in Routing Protocols
There are two things engineers yearn for in a good routing protocol: instantaneous convergence and no overhead. Because neither is possible, you have to settle for a low-overhead protocol with very fast convergence. But what defines low overhead?
One major component of routing protocol overhead is interruption due to updates. You don't want to use a routing protocol that interrupts every host on the network every 30 seconds with a routing update (like Routing Information Protocol [RIP] does). To combat update overhead, routing protocols attempt to reduce the scope and the frequency of interruptions.
One technique used by routing protocols is to reduce the scope of the updates, which means to reduce the number of hosts that will hear the update packet. Broadcast is the worst possible medium for sending updates: every host on the wire is forced to look at the packet and decide whether or not it is interesting. Only a few hosts on a network are interested in the routing updates, so using the broadcast mechanism to send routing updates is a massive waste of time and resources.
To get around this problem, routing protocols use either multicast or unicast routing updates. Open Shortest Path First (OSPF), Enhanced Interior Gateway Routing Protocol (EIGRP), and Intermediate System-to-Intermediate System (IS-IS) all use well-known multicast addresses for their routing updates so that hosts and other computers that aren't interested in the updates can filter them out at the hardware layer. Border Gateway Protocol (BGP) uses unicast routing updates, which is even better, but does require special configuration to work (neighbor statements).
Another technique used to reduce the overhead in a routing protocol is to reduce the frequency of the updates. RIP, which advertises all known destinations every 30 seconds, uses a great deal of bandwidth.
OSPF is periodic, timing its table out every 30 minutes; 30 minutes is much more efficient than 30 seconds. In between these 30-minute intervals, OSPF counts on flooding unreachables as a mechanism for discovering invalid paths. EIGRP and BGP never time their tables out. BGP relies on a withdraw mechanism to discover invalid paths, and EIGRP relies on a system of queries to discover invalid paths.
Routing protocols reduce network overhead by reducing the number of packets required to provide other routers with the routing information they need. Routing protocols use fancy encoding schemes to fit more information into each packet. For example, whereas RIP can fit 25 route updates in a single routing update packet, IGRP can fit 104.
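As a rough illustration of what that packing difference means, the following arithmetic computes how many update packets each protocol needs to carry a full table. The 25- and 104-route figures come from the text; the 500-route table size is an arbitrary example.

    import math

    routes_in_table = 500   # arbitrary example table size
    rip_per_packet = 25     # routes carried in one RIP update packet
    igrp_per_packet = 104   # routes carried in one IGRP update packet

    print("RIP packets: ", math.ceil(routes_in_table / rip_per_packet))    # 20
    print("IGRP packets:", math.ceil(routes_in_table / igrp_per_packet))   # 5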
Routing protocols also use incremental updates to reduce the number of packets required to do the job. Rather than a router advertising its full routing table every so often, it only advertises changes in its routing table. This reduces the amount of processing time required to recalculate when changes occur in the network, and it also reduces the amount of bandwidth the routing protocol consumes.
For more information on how OSPF, EIGRP, and BGP operate, please see Appendix A, "OSPF Fundamentals;" Appendix C, "EIGRP Fundamentals;" and Appendix D, "BGP Fundamentals." These appendixes explain in further detail how each of these protocols decides when to send routing updates.
In general, routing protocol overhead should be considered when choosing which protocol to use. Because the design of the network has some bearing on what the overhead will be, there is no absolute answer. You need to understand the burden that every protocol will place on your network before deciding.
Distribution and Access Layer Addressing
Now that you've built good physical connectivity, you need to address the distribution and access layers. The addressing of the links between the core and the distribution layer looks okay; these links are addressed from the core's address space. Because the only real summarization that can take place is the summarization of the entire core into one advertisement for all the outlying areas of the network, the addressing that's in place will work.
The addressing between the access and distribution routers, however, is a mess. Some of the Frame Relay clouds are using 172.16.x.x addresses, which fit into the same address space as the dial-in clients, while other clouds are using address space that isn't used anyplace else in the network, such as 192.168.10.0/26.
How do you make sense out of this? If you number these links from an address space not already in use someplace else, as you did for the core, you won't be able to summarize them into, or group them with, anything else at the distribution layer. In this case, not being able to summarize these networks means only six extra routes in the core; but if this network grows (remember that the entire objective of network design is to make it possible to grow), this becomes a problem.
One solution is to steal addresses from the remote site address space to number these links. The remote sites are grouped into blocks that can be summarized as follows:
- 172.16.25.0/24 through 172.16.43.0/24 can be summarized to 172.16.24.0/21 and 172.16.32.0/20.
- 172.16.66.0/24 through 172.16.91.0/24 can be summarized to 172.16.64.0/19.
- 172.16.98.0/24 through 172.16.123.0/24 can be summarized to 172.16.96.0/19.
- 172.17.1.0/24 through 172.17.27.0/24 can be summarized to 172.17.0.0/19.
Note that the first set of addresses cannot be summarized into a single block; it requires two. Looking for summarizations like this when reworking a network is useful because the address space probably wasn't parceled out with summarization in mind.
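A quick way to double-check proposed summaries like these is to verify that every component /24 actually falls inside one of the aggregates. The sketch below does exactly that with Python's ipaddress module; the third-octet ranges are transcribed from the preceding list.

    import ipaddress

    # Proposed aggregates and the /24 ranges they are meant to cover.
    proposals = [
        (["172.16.24.0/21", "172.16.32.0/20"], "172.16", range(25, 44)),
        (["172.16.64.0/19"],                   "172.16", range(66, 92)),
        (["172.16.96.0/19"],                   "172.16", range(98, 124)),
        (["172.17.0.0/19"],                    "172.17", range(1, 28)),
    ]

    for summaries, base, thirds in proposals:
        nets = [ipaddress.ip_network(s) for s in summaries]
        uncovered = [
            f"{base}.{third}.0/24"
            for third in thirds
            if not any(ipaddress.ip_network(f"{base}.{third}.0/24").subnet_of(net) for net in nets)
        ]
        print(summaries, "misses:", uncovered or "nothing")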
The easiest way to find addresses for the Frame Relay clouds is to steal addresses from the summarizable blocks cited in the preceding list. For instance:
- Cloud E can be addressed using 172.16.24.0/26.
- Cloud M can be addressed using 172.16.24.64/26.
- Cloud F can be addressed using 172.16.64.0/26.
- Cloud G can be addressed using 172.16.64.64/26.
- Cloud H can be addressed using 172.16.96.0/26.
- Cloud J can be addressed using 172.16.96.64/26.
- Cloud K can be addressed using 172.17.0.0/26.
- Cloud L can be addressed using 172.17.0.64/26.
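The same style of check confirms that each stolen /26 falls inside its summary block and does not collide with a remote /24 that is already deployed. The pairings below simply mirror the preceding two lists and are otherwise only illustrative.

    import ipaddress

    # Each Frame Relay cloud /26 paired with the summary block it is stolen from.
    cloud_links = {
        "Cloud E": ("172.16.24.0/26",  "172.16.24.0/21"),
        "Cloud M": ("172.16.24.64/26", "172.16.24.0/21"),
        "Cloud F": ("172.16.64.0/26",  "172.16.64.0/19"),
        "Cloud G": ("172.16.64.64/26", "172.16.64.0/19"),
        "Cloud H": ("172.16.96.0/26",  "172.16.96.0/19"),
        "Cloud J": ("172.16.96.64/26", "172.16.96.0/19"),
        "Cloud K": ("172.17.0.0/26",   "172.17.0.0/19"),
        "Cloud L": ("172.17.0.64/26",  "172.17.0.0/19"),
    }

    # Remote-site /24s already deployed (from the earlier address survey).
    remotes = [ipaddress.ip_network(f"172.16.{t}.0/24")
               for t in list(range(25, 44)) + list(range(66, 92)) + list(range(98, 124))]
    remotes += [ipaddress.ip_network(f"172.17.{t}.0/24") for t in range(1, 28)]

    for name, (link, summary) in cloud_links.items():
        link_net = ipaddress.ip_network(link)
        inside = link_net.subnet_of(ipaddress.ip_network(summary))
        unused = not any(link_net.overlaps(remote) for remote in remotes)
        print(f"{name}: {link} inside summary: {inside}, unused: {unused}")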
Whereas stealing addresses from the remote network address space to number the links between the access and distribution layer routers is good for summarization, it does have one possible drawback: You can lose connectivity to a remote network even though a working path to that network still exists.
As an example, consider the remote router and its paths to the network core as illustrated in Figure 4-8.
Figure 4-8 An Individual Remote Router and Its Connections to the Network Core
Assume that both Routers A and B are advertising a summary of 172.16.24.0/21, which is the address space from 172.16.24.0 through 172.16.31.255. Therefore, the summary covers the remote network and the links between the access and distribution layer routers shown in Figure 4-8. Furthermore, assume that Router B is used by the core routers as the preferred path to this summary for whatever reason (link speed, and so forth).
Given these conditions, if the remote router's link onto Frame Relay Cloud M fails, all connectivity with the remote network 172.16.25.0/24 will be lost, even though the alternate path through Router A is still available. This might be unlikely, of course, but it is possible and worth considering.
The only solution to this type of problem is for Router A to recognize the condition and advertise the more specific route to the remote network. Unfortunately, this capability doesn't exist today in any Interior Gateway Protocol (IGP); you simply have to be aware that this type of problem can occur and know what to look for.
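To make the failure mode concrete, here is a toy longest-prefix lookup for the scenario in Figure 4-8. The prefixes, next hops, and metrics are assumptions; the point is simply that the core keeps forwarding toward Router B on the strength of the /21 summary even after Router B has lost the specific /24.

    import ipaddress

    # What the core sees: both distribution routers advertise only the summary,
    # and Router B is preferred (assumed lower metric).
    core_rib = [
        (ipaddress.ip_network("172.16.24.0/21"), "Router B", 10),
        (ipaddress.ip_network("172.16.24.0/21"), "Router A", 20),
    ]

    # Router B's view after its link onto Cloud M fails: the specific remote
    # network 172.16.25.0/24 is no longer reachable through it.
    router_b_reachable = set()

    dest = ipaddress.ip_address("172.16.25.1")
    candidates = [entry for entry in core_rib if dest in entry[0]]
    net, hop, metric = min(candidates, key=lambda entry: entry[2])
    print("core forwards toward", hop, "using", net)
    if ipaddress.ip_network("172.16.25.0/24") in router_b_reachable:
        print("delivered")
    else:
        print("black-holed at Router B, even though Router A still has a working path")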
External Connections
This section separately examines the external connections to the network, as was done for the network core and distribution and access layers (see Figure 4-9).
Figure 4-9 External Connections
It only takes a quick look to see that there are too many links between the core of this network and the external networks: three connections to four partner networks, an Internet connection, and a bank of dial-in clients. Having this many connections to external networks causes problems in two areas: addressing and routing.
External Connection Addressing
If one of the partners illustrated in Figure 4-9 installs a network that happens to use the same address space as an internal network, how do you handle it? You must either coordinate the use of address space with the other network partners, use only registered addresses, or use Network Address Translation (NAT) (refer to Chapter 2, "Addressing & Summarization"). Because this network uses private address space, you're probably already using NAT to get to the Internet. Therefore, it's logical to use NAT to get to external partner networks as well.
But with this many connections to partner networks, where do you run NAT? It's never a good idea to run it on a core router; don't even consider that. You can run it on Routers B, C, and D, but this arrangement is very difficult to configure and maintain (especially considering that you may need to translate addresses in both directions).
It is much easier to connect the external partner networks to the DeMilitarized Zone (DMZ) and put the network translation on the routers there. You can translate the internal addresses to a registered address space on the way out (as you are most likely already doing) and translate the external addresses, if needed, into something acceptable for the internal address space on Routers B, C, and D. From an addressing perspective, the best solution is to attach Routers B, C, and D to the DMZ.
External Connection Routing
The routing side of the equation is this: Even if the internal and external address spaces don't overlap, you don't want to carry routes to these external networks in all your routers. It is much better to carry a single default route from all external networks into the core of the network.
Once again, from a routing perspective, the best solution is to connect Routers B, C, and D to the DMZ.
Dial-In Clients
What about the dial-in clients? Should you connect these to the DMZ as well? Because these clients are assigned addresses within the internal address space, the addressing problems and routing problems outlined for the network partners don't exist for these clients.
Remember that these clients will likely want to connect to internal hosts that other externally connected clients aren't allowed to see, which means special security considerations are necessary on Router A.
All in all, it's better to leave the dial-in clients directly connected to the core. However, you should not allow the link between Router E and the core to be a single point of failure. For this reason, you need to add a dial backup link from Router E to the core.
You also need to renumber the link between Router E and the core so that it fits into the addressing scheme for the core. Figure 4-10 illustrates the network originally shown in Figure 4-2 with all the changes covered thus far in this chapter.
Figure 4-10 The Revised Network with Changes to the Core, Distribution Layer, Access Layer, and External Connections