Anatomy of a Network
Figure 5-2 depicts the Internet's architecture from the ISP network to the home or enterprise network.
Figure 5-2 Anatomy of a Network
Some have referred to this architecture as a dumbbell or a Q-Tip: both ends are large while the center is very skinny, much like the Internet itself. The Internet service provider's (ISP's) backbone runs at OC-12/OC-48, Gigabit Ethernet, or 10 Gigabit Ethernet speeds, while its interconnections with other ISPs are often oversubscribed, smaller DS-3, OC-3, or OC-12 connections. Because of that oversubscription and smaller bandwidth, these connections, known as peering points, form the skinny middle. Looking to the right in Figure 5-2, the next area of interconnection is between the ISP and the enterprise or home. These connections, known as last mile connections, are provided by the incumbent local exchange carrier (ILEC) or competitive local exchange carrier (CLEC) and range from 56-kbps dialup to multiple T1 (1.544-Mbps) circuits. After the last mile connection is terminated, bandwidth increases again on the Ethernet side of the enterprise or home network, where speeds range from 10 to 100 to 1000 Mbps, and soon 10,000 Mbps.
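To make the imbalance concrete, the short Python sketch below models one path from an enterprise LAN out to the ISP backbone and finds its bottleneck. The link rates are assumptions chosen to mirror the speeds just described, not figures taken from Figure 5-2.

    # Illustrative only: end-to-end throughput is limited by the slowest link,
    # which is why the "skinny" middle of the dumbbell dominates performance.
    # All rates are in Mbps and are assumed values for this sketch.
    path_mbps = {
        "enterprise LAN (Fast Ethernet)": 100.0,
        "last mile (single T1)": 1.544,
        "peering point (assumed effective share of a DS-3)": 5.0,
        "ISP backbone (OC-48)": 2488.0,
    }

    bottleneck = min(path_mbps, key=path_mbps.get)
    print(f"Bottleneck link: {bottleneck} at {path_mbps[bottleneck]} Mbps")
    # Bottleneck link: last mile (single T1) at 1.544 Mbps

No matter how fast the backbone becomes, the user never sees more than the slowest link along the way.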
Bottleneck Points
What does the connection speed data mean? It means that low bandwidth, oversubscription, and congestion exist at the skinny points in the picture: the peering points and the last mile. Those are the areas where Internet congestion lives. Most of the time, upgrading this slower infrastructure is cost prohibitive. So, looking at the world as it is, you see an Internet infrastructure with well-known areas of congestion and web site/content store architectures that have ballooned in size. Often that content store sits in just one location while many users try to reach it over slower last mile connections. What if you could alleviate the known congestion points without spending the resources to upgrade bandwidth in the last mile or at the peering points? What if you could also significantly improve the end users' performance at the same time? You can do both with a CDN.
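The following back-of-the-envelope sketch, using assumed subscriber counts and link sizes rather than real deployment data, shows how quickly a peering point becomes oversubscribed when many last mile customers share it.

    # Rough illustration of peering-point oversubscription; every figure here
    # is an assumption for the example, not a measurement.
    peering_capacity_mbps = 45.0    # a single DS-3 peering connection
    subscribers = 500               # assumed customers funneling through it
    access_rate_mbps = 1.544        # each on a T1 last mile connection

    aggregate_demand_mbps = subscribers * access_rate_mbps
    ratio = aggregate_demand_mbps / peering_capacity_mbps
    print(f"Potential demand: {aggregate_demand_mbps:.0f} Mbps "
          f"against {peering_capacity_mbps:.0f} Mbps of peering capacity "
          f"({ratio:.1f}:1 oversubscription)")
    # Potential demand: 772 Mbps against 45 Mbps of peering capacity (17.2:1 oversubscription)

Oversubscription works as long as users do not all pull traffic at once; when they do, the peering point is exactly where the queueing and packet loss show up.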
Cisco Content Delivery Networks Are the Solution
The number one barrier to e-business applications is the bandwidth bottleneck. The solution is a CDN, because it gets content to the user faster. A CDN addresses both of the congestion problems previously discussed by sitting on top of the Layer 2/Layer 3 infrastructure, locating content near the end user, and routing each user request to the best source for content delivery. Large streaming media files, streaming audio, and images are some of the file types being pushed close to the end user so that requests no longer have to traverse the slower connections to retrieve the data; instead, the data can be retrieved over a faster local connection. Figure 5-3 shows what happens to the network bottlenecks when a CDN is put in place. As you can see, those bottlenecks disappear, and everyone benefits from newly optimized Internet applications that deliver higher performance at a lower cost.
Figure 5-3 Demonstrating the Pushing of Content to the Edge
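To quantify the kind of improvement the figure suggests, the sketch below compares the time to fetch a large media file across the congested peering and last mile path with the time to fetch it from a nearby content engine. The file size and effective rates are assumed values for illustration, not measured results.

    # Hedged comparison: same file, two delivery paths. All numbers are assumptions.
    file_size_mb = 50.0          # a streaming media clip, in megabytes
    origin_path_mbps = 1.0       # assumed effective rate across the congested path
    edge_cache_mbps = 10.0       # assumed effective rate from a local edge cache

    def transfer_seconds(size_mb, rate_mbps):
        # megabytes converted to megabits, divided by rate in megabits per second
        return (size_mb * 8) / rate_mbps

    print(f"From the origin site: {transfer_seconds(file_size_mb, origin_path_mbps):.0f} seconds")
    print(f"From the edge cache:  {transfer_seconds(file_size_mb, edge_cache_mbps):.0f} seconds")
    # From the origin site: 400 seconds
    # From the edge cache:  40 seconds

Under these assumed rates, serving the file from the edge cuts the transfer time by an order of magnitude while keeping that traffic off the peering points entirely.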