Tools and Techniques
Network design is both an art and a science. The science involves exploiting various methodologies to meet all the requirements within the given constraints. Each of these methods trades one constrained resource for another. The art involves choosing the best balance between constrained resources, resulting in a network that is future-proof: one that will grow to meet increased, or even radically new, requirements.
Modularization and Layering
Two of the most common design and implementation methodologies are modularization and layering. Both break the network problem down into something more manageable, and both involve defining interfaces that enable one module or layer to be modified without affecting the others. These benefits usually outweigh any inefficiency caused by information hidden between layers or modules. Nevertheless, when designing the interfaces between modules or layers, it is good practice to optimize the common case. For example, if there is a large flow of traffic between two distribution networks, perhaps this flow should be optimized by introducing a new dedicated link into the core network.
Layering typically implies a hierarchical relationship. This is a fundamental technique in network protocol design, as exemplified by the ubiquitous OSI reference model. Modularization typically implies a peer relationship, although a hierarchy certainly can exist between modules. In an upcoming section, "Hierarchy Issues," as well as in many of the remaining chapters in this book, the text continues to emphasize and develop the practice of hierarchy and modularization in network design.
Layering the network control plane above a redundant physical infrastructure is a vital part of resilient network design. Critical control information, such as network management or routing updates, should be exchanged using the IP address of a virtual interface on the router rather than one associated with a physical interface. In Cisco routers, this can be achieved using loopback interfaces: virtual interfaces that are always active, independent of the state of any physical interface.
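To make the idea concrete, the following Python sketch (illustrative only and entirely hypothetical; the address 10.0.0.1 is an assumption, and routers do not run Python) sources a management session from a stable virtual address rather than letting the operating system pick the address of a physical interface:

import socket

# Hypothetical stable virtual address assigned to this node. On a
# Cisco router this role is played by a loopback interface address.
STABLE_ADDR = "10.0.0.1"

def open_control_session(peer_addr: str, peer_port: int) -> socket.socket:
    """Open a TCP session for control traffic, explicitly sourced
    from the stable virtual address so the session does not depend
    on any one physical interface."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((STABLE_ADDR, 0))  # port 0: let the OS pick a source port
    s.connect((peer_addr, peer_port))
    return s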
Another common approach, used when a physical address is required for routing, is to permit two or more routers to own the same IP address, but never concurrently. A control protocol, such as Cisco's Hot Standby Router Protocol (HSRP), arbitrates which router uses the IP address for routing purposes at any given time.
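The arbitration can be pictured as a simple election. The toy Python model below (not the real HSRP state machine, which uses hello messages, timers, and preemption rules this sketch omits; the names, priorities, and shared address are invented) picks the highest-priority live router to own the shared address:

from dataclasses import dataclass

SHARED_ADDR = "192.168.1.1"  # illustrative shared gateway address

@dataclass
class Router:
    name: str
    priority: int
    alive: bool = True

def elect_active(routers):
    """The live router with the highest priority owns the shared
    address; all others stand by."""
    live = [r for r in routers if r.alive]
    return max(live, key=lambda r: r.priority) if live else None

routers = [Router("r1", priority=110), Router("r2", priority=100)]
print(elect_active(routers).name)  # r1 owns the shared address
routers[0].alive = False           # r1 fails...
print(elect_active(routers).name)  # ...and r2 takes over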
Network Design Elements
Multiplexing is a fundamental element of network design. Indeed, you could argue that a network is typically one huge multiplexing system. More specifically, however, multiplexing is a tool that provides economies of scale: multiple users share one large resource rather than a number of individual resources.
NOTE
Multiplexing is the aggregation of multiple independent traffic flows into one large traffic flow. A useful analogy is the freeway system, which multiplexes traffic from many smaller roads into one large flow. At any time, traffic on the freeway may exit onto smaller roads (and thus be de-multiplexed) when it approaches its final destination.
As an added benefit, if the multiplexing is statistical in nature, one user may consume the unused resources of someone else. During periods of congestion, however, this statistical sharing of the resource might need to be predictable to ensure that basic requirements are met. In IP networks, bandwidth is the resource, and routers provide the multiplexing.
Traditionally, multiplexing has been a best-effort process. However, increasingly deterministic behavior is required; you can read about such techniques in Chapter 14, "Quality of Service Features." For now, it suffices to say that multiplexing saves money and can provide performance improvements while guaranteeing a minimum level of service.
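The economics of statistical multiplexing are easy to demonstrate. The short simulation below (all figures are invented for illustration) shows that the worst-case aggregate demand of many bursty sources stays far below the sum of their peak rates:

import random

SOURCES = 50       # bursty users sharing one link
PEAK = 1.0         # peak rate per source, in arbitrary units
DUTY_CYCLE = 0.2   # each source transmits only 20% of the time
TRIALS = 10_000

random.seed(1)
worst = 0.0
for _ in range(TRIALS):
    # Aggregate demand in one interval: each source is either silent
    # or transmitting at its peak rate.
    demand = sum(PEAK for _ in range(SOURCES) if random.random() < DUTY_CYCLE)
    worst = max(worst, demand)

print("sum of peak rates:", SOURCES * PEAK)  # 50.0
print("worst aggregate seen:", worst)        # typically in the mid-20s

The shared link can therefore be provisioned well below the sum of the peaks and still almost never congest; this gap is the economy of scale that multiplexing buys.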
Randomization is the process of applying random behavior to an otherwise predictable mechanism. It is an important technique for avoiding the synchronization of network data or control traffic, which can lead to cyclic congestion or instability. Although critical to the design of routing protocols, congestion control, and multiplexing algorithms, randomization is not currently a major factor in network topology design. However, this may change if load sharing of IP traffic through random path selection is ever shown to be a practical routing algorithm.
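A common application is adding jitter to periodic timers. The sketch below (the interval and jitter range are illustrative assumptions) spreads update timers so that routers that happen to start in lockstep drift apart rather than emitting synchronized bursts:

import random

BASE_INTERVAL = 30.0  # nominal update period, in seconds

def next_interval() -> float:
    """Delay before the next periodic update, jittered by +/-25%
    so independent routers do not remain synchronized."""
    return BASE_INTERVAL * random.uniform(0.75, 1.25)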
Soft state is the control of network functions through the use of control messages that are periodically refreshed. If the soft state is not refreshed, it is removed (or timed out). Soft state is also extremely important to routing functions. When routers crash, it becomes difficult to advise other routers that the associated routing information is invalidated. Nearly all routing information is kept as soft state: if it is not refreshed, or at the very least reconfirmed in some way, it is eventually removed.
Soft state can be circumvented by the use of static or "hard-coded" routes, which are never invalidated. Static routes should therefore be used with extreme caution.
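A minimal soft-state table can be sketched as follows (the 90-second hold time is an assumption for illustration). A route that is not refreshed simply ages out, with no explicit invalidation required:

import time

HOLD_TIME = 90.0  # seconds a route survives without a refresh

routes: dict[str, float] = {}  # prefix -> time of last refresh

def refresh(prefix: str) -> None:
    """Install or reconfirm a route; refreshing resets its timer."""
    routes[prefix] = time.monotonic()

def expire_stale() -> None:
    """Remove any route not refreshed within HOLD_TIME. If the
    advertising router crashes silently, its routes age out here."""
    now = time.monotonic()
    for prefix in [p for p, t in routes.items() if now - t > HOLD_TIME]:
        del routes[prefix]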
Some level of hysteresis or dampening is useful whenever there is the possibility of unbounded oscillation. These techniques are often used when processing routing updates in Interior Gateway Protocols (IGPs). If a route is withdrawn, a router may "hold down" that route for several minutes, even if the route is subsequently re-advertised by the IGP. This prevents an unstable route from rapidly oscillating between the used and unused states because the route can change state only once per hold-down period.
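In outline, a hold-down can be as simple as the following sketch (the 180-second period is illustrative; real IGP timers vary):

import time

HOLD_DOWN = 180.0  # seconds; IGP hold-downs are typically minutes

held_down: dict[str, float] = {}  # prefix -> time the hold-down began

def withdraw(prefix: str) -> None:
    held_down[prefix] = time.monotonic()

def accept_advertisement(prefix: str) -> bool:
    """Ignore re-advertisements of a route still in hold-down, so a
    flapping route changes state at most once per hold-down period."""
    began = held_down.get(prefix)
    if began is not None and time.monotonic() - began < HOLD_DOWN:
        return False
    held_down.pop(prefix, None)
    return True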
Similarly, the Border Gateway Protocol (BGP), which handles external routing, applies dampening to external routes. This prevents the CPU saturation that can occur when large numbers of routes must be repeatedly recalculated.
Stabilizing routes in this manner also can improve network throughput because the congestion control mechanisms of TCP do not favor environments with oscillating or rapidly changing values of round-trip time or throughput on the network.
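The flavor of BGP-style dampening can be captured in a few lines. In the sketch below, each flap adds a penalty that decays exponentially; the route is suppressed above one threshold and reused below a lower one. The figures are loosely modeled on common flap-damping defaults, and the code is illustrative, not a faithful implementation:

import time

FLAP_PENALTY = 1000.0
SUPPRESS_AT = 2000.0   # stop using the route above this penalty
REUSE_BELOW = 750.0    # use it again once decay drops below this
HALF_LIFE = 900.0      # the penalty halves every 15 minutes

class DampenedRoute:
    def __init__(self) -> None:
        self.penalty = 0.0
        self.stamp = time.monotonic()
        self.suppressed = False

    def _decay(self) -> None:
        # Exponentially decay the penalty since the last update.
        now = time.monotonic()
        self.penalty *= 0.5 ** ((now - self.stamp) / HALF_LIFE)
        self.stamp = now

    def flap(self) -> None:
        """Each withdrawal or re-advertisement adds to the penalty."""
        self._decay()
        self.penalty += FLAP_PENALTY

    def usable(self) -> bool:
        """A flapping route stays suppressed until its penalty decays."""
        self._decay()
        if self.suppressed and self.penalty < REUSE_BELOW:
            self.suppressed = False
        elif not self.suppressed and self.penalty >= SUPPRESS_AT:
            self.suppressed = True
        return not self.suppressed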
Localization and caching represent a variation on the earlier technique of optimizing the common case. Even in today's peer-to-peer networking model, many extremely popular data repositories (such as major Web farms) still exist. By caching commonly accessed Web data (in other words, making a localized copy of this data) it is possible to save long-distance network traffic and improve performance. Such caches can form a natural part of the network hierarchy.
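A cache of this kind reduces long-distance traffic with only a few moving parts. The sketch below (the 300-second freshness window and the fetch callback are assumptions for illustration) serves repeat requests from a local copy:

import time

TTL = 300.0  # seconds a cached copy is considered fresh

class WebCache:
    """Serve popular objects from a nearby copy instead of fetching
    them across the long-distance network on every request."""

    def __init__(self, fetch):
        self._fetch = fetch  # callback that performs the remote fetch
        self._store = {}     # url -> (payload, time cached)

    def get(self, url: str):
        hit = self._store.get(url)
        if hit and time.monotonic() - hit[1] < TTL:
            return hit[0]           # fresh local copy: no long-haul trip
        payload = self._fetch(url)  # miss or stale: go to the origin
        self._store[url] = (payload, time.monotonic())
        return payload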
Finally, any network topology should be carefully analyzed for its behavior during the failure of the design's various components; such failures are usually known as failure modes. The topology should be engineered for graceful degradation. In particular, the failure of links is the most common failure mode, followed by the failure of critical routing nodes.
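Link-failure analysis lends itself to simple tooling. The sketch below (a toy model on an abstract graph, not a network management tool) removes each link in turn and reports those whose loss partitions the topology:

def connected(nodes, links):
    """Return True if every node is reachable from every other."""
    adj = {n: set() for n in nodes}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == set(nodes)

def critical_links(nodes, links):
    """Yield each link whose loss partitions the topology."""
    for failed in links:
        if not connected(nodes, [l for l in links if l != failed]):
            yield failed

# In a star, every spoke hangs off the hub, so every link is critical:
nodes = {"hub", "a", "b", "c"}
links = [("hub", "a"), ("hub", "b"), ("hub", "c")]
print(list(critical_links(nodes, links)))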
Topologies
There are essentially four topological building blocks: rings, buses, stars, and meshes. (See Figure 4-1.) A large, well-designed network normally will exploit the benefits of each building block, either alone or in combination, at various points within its architecture.
Figure 4-1 Mesh, Star, Ring, and Bus Topologies (from top)
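One quick way to compare the building blocks is by the number of links each requires, which drives both cost and redundancy. The small sketch below makes the comparison (a bus is omitted because it shares a single medium rather than using point-to-point links):

def link_counts(n: int) -> dict[str, int]:
    """Point-to-point links needed to connect n nodes."""
    return {
        "ring": n,                      # every node has two neighbors
        "star": n - 1,                  # cheapest, but the hub is critical
        "full mesh": n * (n - 1) // 2,  # most redundant, scales worst
    }

print(link_counts(8))  # {'ring': 8, 'star': 7, 'full mesh': 28}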
Although initially attractive due to minimal dependence on complex electronics, the use of bus media, such as repeated Ethernet segments, is decreasing. For the most part, this is due to an increase in the reliability and flexibility of the technology that is implementing rings, stars, and meshes. In particular, bus LAN topologies are typically converted into stars using a LAN switch. This offers increased aggregate bandwidth and superior diagnostic capabilities.
Operational experience has also shown that a passive shared broadcast medium does not necessarily create a more reliable environment because a single misbehaving Ethernet card can render a bus LAN useless for communication purposes.