Case Studies
This section contains descriptions and solutions for a number of issues that do not fit easily into the main troubleshooting section.
MPLS MTU Is Too Small in the MPLS VPN Backbone
If large packets are sent across the MPLS backbone with the Don't Fragment (DF) bit set in the IP packet header, and LSR interfaces and Ethernet switches are not configured to support large labeled packets, the packets will be dropped.
In an MPLS network, link MTU sizes must take the label stack into account. In a simple MPLS VPN network without MPLS TE, a label stack depth of two is used (TDP/LDP signaled IGP label + VPN label). If MPLS traffic engineering (TE) is being used between P routers in an MPLS VPN backbone, a label stack depth of three is used (RSVP signaled TE label + TDP/LDP signaled IGP label + VPN label). And if you are using Fast Reroute with MPLS TE, that is four labels. Each label is 4 bytes, so the total size of the label stack is the number of labels multiplied by 4 bytes.
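In a backbone carrying full 1500-byte customer packets, core-facing link MTUs therefore need to allow for 1500 bytes plus the label stack. The following is a minimal configuration sketch only (the interface name is illustrative; choose the value that matches the deepest label stack expected on the link):

! Size the MPLS MTU for the deepest label stack expected on this link
! (1500 bytes + 4 bytes per label).
interface FastEthernet1/0
 mpls mtu 1508
! Use 1512 for a three-label stack (MPLS VPN + TE between P routers),
! or 1516 for a four-label stack (MPLS VPN + TE + Fast Reroute).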
In this scenario, large packets are being dropped in the MPLS VPN backbone. Figure 6-41 illustrates the customer VPN and MPLS backbone topology used in this scenario.
Figure 6-41. Customer and MPLS VPN Backbone Topology
Path MTU across the MPLS VPN backbone is verified using the extended ping vrf vrf_name command, as shown in Example 6-140.
Example 6-140 Extended ping vrf Command Output
HongKong_PE#ping vrf mjlnet_VPN
Protocol [ip]:
Target IP address: 172.16.4.1
Repeat count [5]: 1
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface: 172.16.8.1
Type of service [0]:
Set DF bit in IP header? [no]: y
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]: y
Sweep min size [36]: 1450
Sweep max size [18024]: 1500
Sweep interval [1]:
Type escape sequence to abort.
Sending 51, [1450..1500]-byte ICMP Echos to 172.16.4.1, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!M.M.
Success rate is 92 percent (47/51), round-trip min/avg/max = 12/13/16 ms
HongKong_PE#
Highlighted lines 1 and 3 show the destination and source IP addresses used with the extended ping. In this case, the source is the VRF mjlnet_VPN interface on HongKong_PE (172.16.8.1), and the destination is the VRF mjlnet_VPN interface on Chengdu_PE (172.16.4.1).
Repeat count is set to 1 packet in highlighted line 2. This is the repeat count per packet size, which is set in highlighted lines 5 to 7.
In highlighted line 4, the Don't Fragment (DF) bit is set. In highlighted lines 5 to 7, a ping sweep of packet sizes 1450 to 1500 is entered. Highlighted line 8 shows that ping is successful for most packet sizes, but that as the packet size nears 1500 bytes, the pings fail.
Note that the "M" character here indicates reception of an ICMP destination unreachable message (ICMP message type 3) from a router in the path across the network. This ICMP unreachable message carries code 4, which indicates that fragmentation is required on the (ping) packet, but that the Don't Fragment bit is set.
The MPLS MTU size for backbone LSRs is examined using the show mpls forwarding-table prefix detail command.
When the MPLS MTU size is examined on Chengdu_P, it is revealed that it is too small (see Example 6-141).
Example 6-141 Verifying the MPLS MTU Size Using the show mpls forwarding-table Command
Chengdu_P#show mpls forwarding-table 10.1.1.1 detail
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id      switched   interface
18     Pop tag     10.1.1.1/32       1544       Fa1/0      10.20.10.1
        MAC/Encaps=14/14, MTU=1500, Tag Stack{}
        00049BD60C1C00D06354701C8847
        No output feature configured
    Per-packet load-sharing
Chengdu_P#
The IP address (BGP update source) of the egress PE router (Chengdu_PE) is specified in highlighted line 1. This address corresponds to the next-hop of all mjlnet_VPN site 1 routes.
Highlighted line 2 shows that the outgoing interface for this prefix is interface Fast Ethernet 1/0.
Highlighted line 3 shows that the maximum packet size that can be label switched out of interface Fast Ethernet 1/0 without being fragmented is 1500 bytes. This is clearly not a sufficient maximum packet size once a two-label stack (IGP + VPN) is included (1500 + 8 = 1508). Note, however, that in this case, Chengdu_P is the penultimate hop router, so it will pop the IGP label; even so, it is still a very good idea to accommodate a minimum of two labels here.
Chengdu_P's interface Fast Ethernet 1/0 is then configured to support large labeled packets, using the mpls mtu command as shown in Example 6-142.
Example 6-142 Configuration of the mpls mtu Command on Interface fastethernet 1/0 on Chengdu_P
Chengdu_P#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Chengdu_P(config)#interface fastethernet 1/0
Chengdu_P(config-if)#mpls mtu 1508
Chengdu_P(config-if)#end
Chengdu_P#
The highlighted line indicates that interface fastethernet 1/0 is configured to support a label stack depth of two (1500 + [2 * 4] = 1508). In this scenario, Cisco 6500 switches are being used in the POPs (in Chengdu and HongKong), so they must also be configured for jumbo frame support.
To enable support for jumbo frames on the Cisco 6500 series switch Ethernet ports, use the set port jumbo mod/port enable command, as shown in Example 6-143.
Example 6-143 Configuration of Jumbo Frame Support on the Cisco 6500
Chengdu_POP1> (enable) set port jumbo 3/1 enable
Jumbo frames enabled on port 3/1.
Enabling jumbo frame support increases the MTU to 9216 bytes on most line cards.
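Once the changes have been made, the new MTU values can be double-checked. The following is a sketch only (the interface and port numbers are those used in this scenario; output details vary by software release):

! On the LSR, verify the MPLS MTU on the core-facing interface;
! the detail output includes the MPLS MTU configured with the mpls mtu command.
Chengdu_P#show mpls interfaces fastethernet 1/0 detail

! On the CatOS-based Cisco 6500, confirm which ports have jumbo frames enabled.
Chengdu_POP1> (enable) show port jumbo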
After the MPLS MTU on all the applicable LSRs is reconfigured and jumbo frame support on the Cisco 6500 switches is enabled, extended ping is again used to verify that 1500-byte packets can be carried across the backbone without fragmentation.
Example 6-144 shows the output of the extended ping vrf vrf_name command after support for large labeled packets has been enabled in the MPLS VPN backbone.
Example 6-144 1500-Byte Packets Can Now Be Carried Across the MPLS VPN Backbone
HongKong_PE#ping vrf mjlnet_VPN
Protocol [ip]:
Target IP address: 172.16.4.1
Repeat count [5]:
Datagram size [100]: 1500
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface: 172.16.8.1
Type of service [0]:
Set DF bit in IP header? [no]: y
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 1500-byte ICMP Echos to 172.16.4.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 12/14/16 ms
HongKong_PE#
Highlighted lines 1 and 3 show the destination and source addresses of the ping packets. These are again the VRF mjlnet_VPN interface on Chengdu_PE and the VRF mjlnet_VPN interface on HongKong_PE, respectively.
In highlighted line 2, the packet size is 1500 bytes, and in highlighted line 4, the Don't Fragment (DF) bit is set.
In highlighted line 5, a success rate of 100 percent is shown.
It is also worth noting that if you are using IOS 12.0(27)S or later in your network, you can use the trace mpls command (part of the MPLS Embedded Management features) to verify the MTU that can be supported (without fragmentation) over an LSP in the MPLS backbone. This command can display the maximum receive unit (MRU, the maximum labeled packet size) at each hop across the MPLS backbone.
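A sketch of how this might be used in this scenario is shown below (syntax and output detail vary by release; the target is the BGP update source of the egress PE, Chengdu_PE):

! MPLS LSP traceroute toward the egress PE's loopback (12.0(27)S or later).
! The per-hop output includes the MRU, so an undersized link in the LSP
! shows up as a lower MRU at that hop.
HongKong_PE#traceroute mpls ipv4 10.1.1.1/32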
Summarization of PE Router Loopback Addresses Causes VPN Packets to Be Dropped
Routes to the next hops of VPN routes (the BGP update sources on PE routers) should not be summarized in the MPLS VPN backbone; otherwise, VPN packets will be dropped.
Figure 6-42 illustrates the topology used in this scenario.
Figure 6-42. PE Router Loopback Addresses Are Summarized in the MPLS VPN Backbone
In this scenario, traffic from mjlnet_VPN site 1 transiting the MPLS VPN backbone to mjlnet_VPN site 2 is dropped.
Example 6-145 shows the output of a ping between the VRF mjlnet_VPN interface on Chengdu_PE and host 172.16.5.1 at site 2.
Example 6-145 Ping from the mjlnet_VPN Interface on Chengdu_PE to Host 172.16.5.1
Chengdu_PE#ping vrf mjlnet_VPN 172.16.5.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.5.1, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Chengdu_PE#
In highlighted line 1, a ping test is conducted between the VRF mjlnet_VPN interface on Chengdu_PE and host 172.16.5.1. As you can see, the ping test failed with a success rate of 0 percent (highlighted line 2).
Figure 6-43 shows the routing protocol configuration within the MPLS VPN backbone.
Figure 6-43. Routing Protocol Configuration Within the MPLS VPN Backbone
In this scenario, OSPF is the backbone routing protocol, with Chengdu_PE, Chengdu_P, and HongKong_P in area 0, and HongKong_P and HongKong_PE in area 1.
When mjlnet_VPN packets transiting the network from mjlnet_VPN site 1 to mjlnet_VPN site 2 reach ingress PE Chengdu_PE, a two-label stack is imposed. The outer label is the IGP label (which corresponds to BGP next-hop 10.1.1.4 on egress PE router HongKong_PE), and the inner label is the VPN label.
VPN traffic is not successfully transiting the backbone. But is the problem with the underlying LSP between Chengdu_PE and HongKong_PE, with VPN route exchange, or with something else?
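One quick way to rule out VPN route exchange (a sketch only; the address and VRF name are those used in this scenario) is to confirm that the remote VRF route and its VPN label are present on the ingress PE:

! If this returns the 172.16.5.0/24 route learned from next hop 10.1.1.4
! together with an output (VPN) label, VPNv4 route exchange is working and
! attention can turn to the underlying LSP.
Chengdu_PE#show ip bgp vpnv4 vrf mjlnet_VPN 172.16.5.1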
The next step is to verify the LSP between Chengdu_PE (10.1.1.1) and HongKong_PE (10.1.1.4) using the traceroute command, as shown in Example 6-146.
Example 6-146 traceroute to Egress Router HongKong_PE
Chengdu_PE#traceroute 10.1.1.4
Type escape sequence to abort.
Tracing the route to 10.1.1.4
  1 10.20.10.2 [MPLS: Label 17 Exp 0] 0 msec 0 msec 4 msec
  2 10.20.20.2 0 msec 0 msec 4 msec
  3 10.20.30.2 32 msec 0 msec *
Chengdu_PE#
In highlighted line 1, Chengdu_PE imposes (IGP) label 17 on the traceroute packet and forwards it to Chengdu_P.
In highlighted line 2, Chengdu_P forwards the packet to HongKong_P. There is, however, no evidence of a label stack at all. The same is true in highlighted line 3 as the packet is forwarded from HongKong_P to HongKong_PE.
Apparently there is a problem with the LSP. But what exactly is going on here? To track the answer down, it is useful to examine the LFIB/LIBs on the routers in the path. The IGP label (for next-hop 10.1.1.4) is verified on ingress PE router Chengdu_PE using the show mpls forwarding-table prefix detail command. This command displays the LFIB.
Example 6-147 shows the LFIB entry for prefix 10.1.1.4/32 on ingress PE router Chengdu_PE.
Example 6-147 LFIB Entry for Prefix 10.1.1.4
Chengdu_PE#show mpls forwarding-table 10.1.1.4 detail
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id      switched   interface
18     17          10.0.0.0/8        0          Fa1/0      10.20.10.2
        MAC/Encaps=14/18, MRU=1500, Tag Stack{17}
        00502AFE080000049BD60C1C8847 00011000
        No output feature configured
    Per-packet load-sharing, slots: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Chengdu_PE#
As you can see, the IGP label used for BGP next-hop 10.1.1.4 is 17 (highlighted line 2). This label corresponds to prefix 10.0.0.0/8 (highlighted line 1). This is a bit of a mystery. Why is there no label corresponding directly to 10.1.1.4/32?
The routing table is then examined using the show ip route command, as shown in Example 6-148.
Example 6-148 Global Routing Table on Chengdu_PE
Chengdu_PE#show ip route
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
       * - candidate default, U - per-user static route, o - ODR

Gateway of last resort is not set

     10.0.0.0/8 is variably subnetted, 5 subnets, 3 masks
O       10.1.1.2/32 [110/2] via 10.20.10.2, 01:09:42, FastEthernet1/0
O       10.20.20.0/24 [110/65] via 10.20.10.2, 01:09:42, FastEthernet1/0
O IA    10.0.0.0/8 [110/66] via 10.20.10.2, 01:09:42, FastEthernet1/0
C       10.1.1.1/32 is directly connected, Loopback0
C       10.20.10.0/24 is directly connected, FastEthernet1/0
Chengdu_PE#
Highlighted line 1 shows a summary route (10.0.0.0/8). Notice the absence of a route to the loopback 0 interface on HongKong_PE (10.1.1.4/32). This is the reason that there is no label for prefix 10.1.1.4/32 in the LFIB.
The label stack for mjlnet_VPN site 2 destination 172.16.5.1 is now examined using the show mpls forwarding-table vrf vrf_name prefix detail command, as shown in Example 6-149. Note that destination 172.16.5.1 is used here only for illustrative purposes.
Example 6-149 Label Stack for mjlnet_VPN Site 2 Destination 172.16.5.1
Chengdu_PE#show mpls forwarding-table vrf mjlnet_VPN 172.16.5.1 detail
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id      switched   interface
None   17          172.16.5.0/24     0          Fa1/0      10.20.10.2
        MAC/Encaps=14/22, MRU=1496, Tag Stack{17 21}
        00502AFE080000049BD60C1C8847 0001100000015000
        No output feature configured
Chengdu_PE#
The highlighted portion shows the label stack (17, 21) used for mjlnet_VPN site 2 destination 172.16.5.1. The outer label (17) is the IGP label, and the inner label (21) is the VPN label.
The LFIB is then checked on Chengdu_P (the downstream LSR in the LSP to egress PE router HongKong_PE) using the show mpls forwarding-table labels label_value detail command, as shown in Example 6-150.
Example 6-150 Verifying the LFIB on Chengdu_P
Chengdu_P#show mpls forwarding-table labels 17 detail
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id      switched   interface
17     Pop tag     10.0.0.0/8        900        Se1/1      point2point
        MAC/Encaps=4/4, MRU=1504, Tag Stack{}
        FF030281
        No output feature configured
    Per-packet load-sharing
Chengdu_P#
The highlighted portion shows that local label 17 (the IGP label on Chengdu_PE) is popped on Chengdu_P. This means that the downstream LSR (HongKong_P) is advertising an implicit-null label for prefix 10.0.0.0/8 (the summary route).
This is verified using the show mpls ldp bindings prefix mask_length detail command on HongKong_P as shown in Example 6-151.
Example 6-151 Verifying the LIB on HongKong_P
HongKong_P#show mpls ldp bindings 10.0.0.0 8 detail
  tib entry: 10.0.0.0/8, rev 12
        local binding:  tag: imp-null
          Advertised to:
          10.1.1.4:0        10.1.1.2:0
        remote binding: tsr: 10.1.1.2:0, tag: 17
HongKong_P#
The output in Example 6-151 confirms that an implicit-null label (highlighted line 1) is advertised to LSR Chengdu_P (10.1.1.2, highlighted line 2) for prefix 10.0.0.0/8.
The effect of Chengdu_P popping the IGP label is that it forwards mjlnet_VPN packets (including those for 172.16.5.1) with a label stack consisting of only the VPN label to HongKong_P.
Unfortunately, HongKong_P has no knowledge of VPN labels (only PE routers do). This is confirmed using the show mpls forwarding-table labels label_value command. Remember that the VPN label is 21 (see Example 6-149).
Example 6-152 shows the LFIB entry corresponding to label value 21.
Example 6-152 LFIB on HongKong_P
HongKong_P#show mpls forwarding-table labels 21
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id      switched   interface
HongKong_P#
As you can see, there is no entry. The debug mpls packets command that follows in Example 6-153 is used here to illustrate what happens to mjlnet_VPN packets as they transit HongKong_P from mjlnet_VPN site 1 to mjlnet_VPN site 2.
CAUTION
The debug mpls packets command is used here to illustrate label switching on HongKong_P. Note that you should be especially careful when using this command because it can produce copious output.
Example 6-153 shows the output of the debug mpls packets command on HongKong_P.
Example 6-153 debug mpls packets Command Output on HongKong_P
HongKong_P#debug mpls packets
MPLS packet debugging is on
HongKong_P#
01:10:26: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=21
01:10:28: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=21
01:10:30: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=21
01:10:32: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=21
01:10:34: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=21
HongKong_P#
In highlighted line 1, a packet is received (recvd) with label 21 (the VPN label) on interface serial 1/0 (from Chengdu_P). Notice that the packet is not transmitted (xmit) onward to HongKong_PE (on interface Fast Ethernet 1/0).
Figure 6-44 illustrates label switching of packets across the MPLS VPN backbone when summarization is configured.
Figure 6-44. Label Switching Across the MPLS VPN Backbone When Summarization Is Configured
So, summary route 10.0.0.0/8 is causing VPN packets to be dropped by HongKong_P. The OSPF configuration on HongKong_P is examined using the show running-config command as shown in Example 6-154. Note that only the relevant portion of the output is shown.
Example 6-154 Configuration of OSPF on HongKong_P
HongKong_P#show running-config | begin router ospf
router ospf 100
 log-adjacency-changes
 area 1 range 10.0.0.0 255.0.0.0
 passive-interface Loopback0
 network 10.1.1.3 0.0.0.0 area 1
 network 10.20.20.0 0.0.0.255 area 0
 network 10.20.30.0 0.0.0.255 area 1
!
The highlighted line indicates the cause of the problem. Summary route 10.0.0.0/8 is configured for area 1 addresses. This summary blocks the advertisement of the route 10.1.1.4/32 (the BGP next-hop for mjlnet_VPN site 2 routes) to Chengdu_P and Chengdu_PE.
The summary route is then removed on HongKong_P, as shown in Example 6-155.
Example 6-155 The Summary Route Is Removed on HongKong_P
HongKong_P#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
HongKong_P(config)#router ospf 100
HongKong_P(config-router)#no area 1 range 10.0.0.0 255.0.0.0
HongKong_P(config-router)#end
HongKong_P#
In highlighted line 1, the summary route is removed on HongKong_P. After the summary route is removed, mjlnet_VPN traffic transits the MPLS VPN backbone successfully from site 1 to site 2.
Example 6-156 shows the output of a ping test from the VRF mjlnet_VPN interface on Chengdu_PE to host 172.16.5.1.
Example 6-156 Ping Now Succeeds Across the MPLS VPN Backbone
Chengdu_PE#ping vrf mjlnet_VPN 172.16.5.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.5.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 148/148/152 ms
Chengdu_PE#
Highlighted line 1 shows a ping test from the VRF mjlnet_VPN interface on Chengdu_PE to site 2 host 172.16.5.1. Highlighted line 2 indicates the test had a 100 percent success rate.
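Note that removing the summary altogether is the simplest fix, but it is not the only one. If summarization of the area 1 link addresses is still desired, one possible approach (a sketch only, assuming the addressing used in this scenario) is to summarize just the transit subnets so that the PE loopback /32s (the BGP next hops) continue to be advertised individually:

! Sketch: summarize only the 10.20.x.x link subnets in area 1, leaving the
! 10.1.1.x/32 PE loopbacks unsummarized.
router ospf 100
 no area 1 range 10.0.0.0 255.0.0.0
 area 1 range 10.20.0.0 255.255.0.0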
MPLS VPN Traffic Is Dropped on TE Tunnels Between P Routers
If TE tunnels are configured between P routers in the MPLS VPN backbone, you must take care to ensure that VPN traffic is not dropped.
Figure 6-45 shows the network topology and TE tunnel configuration used in this scenario.
Figure 6-45. Network Topology and TE Tunnel Configuration
In this scenario, a TE tunnel is configured from Chengdu_P via Shanghai_P to HongKong_P. A TE tunnel is also configured in the opposite direction from HongKong_P to Chengdu_P.
Unfortunately, when connectivity is tested from Chengdu_PE's VRF mjlnet_VPN interface to HongKong_PE's VRF mjlnet_VPN interface using ping, the success rate is 0 percent.
Example 6-157 shows the results of a ping test from Chengdu_PE's VRF mjlnet_VPN interface to HongKong_PE's VRF mjlnet_VPN interface (172.16.8.1).
Example 6-157 Ping from Chengdu_PE's VRF mjlnet_VPN Interface to HongKong_PE's VRF mjlnet_VPN Interface Fails
Chengdu_PE#ping vrf mjlnet_VPN 172.16.8.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.8.1, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)
Chengdu_PE#
The next step is to verify the LSP from Chengdu_PE to HongKong_PE using traceroute, as shown in Example 6-158.
Example 6-158 Verifying the LSP Using traceroute
Chengdu_PE#traceroute 10.1.1.4
Type escape sequence to abort.
Tracing the route to 10.1.1.4
  1 10.20.10.2 [MPLS: Label 25 Exp 0] 4 msec 4 msec 4 msec
  2 10.20.40.2 [MPLS: Label 23 Exp 0] 8 msec 4 msec 8 msec
  3 10.20.50.2 4 msec 4 msec 4 msec
  4 10.20.30.2 8 msec 4 msec *
Chengdu_PE#
In highlighted line 1, Chengdu_PE imposes TDP/LDP signaled IGP label 25 on the packet and forwards it to Chengdu_P. Chengdu_P then swaps label 25 for (RSVP signaled TE) label 23 and forwards the packet to Shanghai_P over the TE tunnel (highlighted line 2).
Then in highlighted line 3, something strange happens: Shanghai_P forwards the packet unlabeled to HongKong_P. To track down what is happening, the LSP for mjlnet_VPN traffic is examined hop-by-hop from Chengdu_PE to HongKong_P.
The first thing to do is to verify the label stack for prefix 172.16.8.1/32 (the mjlnet_VPN interface on HongKong_PE) on Chengdu_PE using the show mpls forwarding-table vrf vrf_name detail command as shown in Example 6-159.
Example 6-159 Label Stack on Chengdu_PE for Packets to 172.16.8.1
Chengdu_PE#show mpls forwarding-table vrf mjlnet_VPN 172.16.8.1 detail
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id      switched   interface
None   25          172.16.8.0/24     0          Fa1/0      10.20.10.2
        MAC/Encaps=14/22, MRU=1496, Tag Stack{25 34}
        00D06354701C00049BD60C1C8847 0001900000022000
        No output feature configured
    Per-packet load-sharing, slots: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Chengdu_PE#
The highlighted portion shows the label stack for mjlnet_VPN destination 172.16.8.1. The outer label is the TDP/LDP signaled IGP label (25), and the inner label is the VPN label (34).
The CEF entry for prefix 10.1.1.4/32 (the next hop for 172.16.8.1) is then examined on Chengdu_P using the show ip cef prefix detail command as shown in Example 6-160.
Example 6-160 CEF Entry for Prefix 10.1.1.4/32 on Chengdu_P
Chengdu_P#show ip cef 10.1.1.4 detail
10.1.1.4/32, version 25, 0 packets, 0 bytes
  tag information set
    local tag: 25
    fast tag rewrite with Tu0, point2point, tags imposed: {23}
  via 10.1.1.3, Tunnel0, 0 dependencies
    next hop 10.1.1.3, Tunnel0
    valid adjacency
    tag rewrite with Tu0, point2point, tags imposed: {23}
Chengdu_P#
Highlighted lines 1 and 2 show that (TDP/LDP signaled IGP) label 25 is removed and outgoing (RSVP signaled TE) label 23 is imposed as packets are switched over the TE tunnel (Tu0, tunnel 0).
The label stack for VPN packets at this point is as follows: the outer label is the TE label (23), and the inner label is the VPN label (34).
The TE tunnel is routed via Shanghai_P, and so the LFIB is now examined on Shanghai_P as shown in Example 6-161.
Example 6-161 LFIB on Shanghai_P
Shanghai_P#show mpls forwarding-table label 23 detail
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id      switched   interface
23     Pop tag     10.1.1.2 0 [71]   14618931   Se1/1      point2point
        MAC/Encaps=4/4, MTU=1504, Tag Stack{}
        FF030281
        No output feature configured
Shanghai_P#
The highlighted portion shows that TE tunnel label 23 is popped on Shanghai_P. This is OK because the TE tunnel tail-end is HongKong_P (Shanghai_P is the penultimate hop for the tunnel).
Because the outer (TE) label has now been removed, the label stack at this point consists of only the VPN label (34). Packets are then forwarded to HongKong_P.
The output of the debug mpls packets command on Shanghai_P is shown in Example 6-162.
CAUTION
The debug mpls packets command is used here to illustrate label switching on Shanghai_P. Note that you should exercise extra caution when using this command because it can produce copious output and severely impact router performance.
Example 6-162 Label Switching on Shanghai_P
Shanghai_P#debug mpls packets
Tagswitch packet debugging is on
Shanghai_P#
*Mar  1 05:06:03.562 UTC: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=23/34
*Mar  1 05:06:03.562 UTC: TAG: Se1/1: xmit: CoS=0, TTL=253, Tag(s)=34
*Mar  1 05:06:05.562 UTC: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=23/34
*Mar  1 05:06:05.562 UTC: TAG: Se1/1: xmit: CoS=0, TTL=253, Tag(s)=34
*Mar  1 05:06:07.562 UTC: TAG: Se1/0: recvd: CoS=0, TTL=254, Tag(s)=23/34
*Mar  1 05:06:07.562 UTC: TAG: Se1/1: xmit: CoS=0, TTL=253, Tag(s)=34
In highlighted line 1, you can see an mjlnet_VPN packet received on interface serial 1/0 with label stack 23/34 (the TE and VPN labels, respectively). Then in highlighted line 2, you can see that the TE label is popped, and the packet is forwarded with only VPN label 34.
When the LFIB on HongKong_P is examined using the show mpls forwarding-table label label_value command in Example 6-163, there is no entry for incoming (VPN) label 34.
Example 6-163 LFIB on HongKong_P
HongKong_P#show mpls forwarding-table label 34 detail
Local  Outgoing    Prefix            Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id      switched   interface
HongKong_P#
The absence of an LFIB entry for VPN label 34 is no surprise because HongKong_P is not a PE router; remember that only PE routers have knowledge of VPN labels. The upshot is that all mjlnet_VPN packets arriving on HongKong_P are dropped.
Figure 6-46 illustrates label switching of packets across the MPLS VPN backbone when LDP is not enabled on a TE tunnel configured between P routers.
Figure 6-46. Packets Are Dropped on HongKong_P
To solve this issue, you must find a solution where packets do not arrive at the TE tunnel tail-end (HongKong_P) with a label stack consisting of only the VPN label. You can resolve this issue by enabling MPLS (LDP) on the TE tunnel itself. MPLS (LDP) should be enabled on both the tunnel from Chengdu_P to HongKong_P and the tunnel from HongKong_P to Chengdu_P.
Example 6-164 shows the configuration of MPLS (LDP) on the TE tunnels between Chengdu_P and HongKong_P.
Example 6-164 Configuration of MPLS (LDP) on the TE Tunnels
! On Chengdu_P:
Chengdu_P#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Chengdu_P(config)#interface tunnel 0
Chengdu_P(config-if)#mpls ip
Chengdu_P(config-if)#end
Chengdu_P#

! On HongKong_P:
HongKong_P#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
HongKong_P(config)#interface tunnel 0
HongKong_P(config-if)#mpls ip
HongKong_P(config-if)#end
HongKong_P#
Once MPLS has been enabled on the TE tunnels, Chengdu_P and HongKong_P discover each other over the TE tunnel (via LDP neighbor discovery). Crucially, this means that the TDP/LDP signaled IGP label is swapped instead of removed from mjlnet_VPN packets as they enter the TE tunnel.
When packets reach TE tunnel tail-end HongKong_P, it pops the IGP label (because of penultimate hop popping, requested by HongKong_PE) and forwards the packets to HongKong_PE with a label stack consisting only of the VPN label.
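Before re-testing the end-to-end path, you can confirm that LDP has come up across the tunnel. The following is a sketch only (standard IOS commands; 10.1.1.3 is HongKong_P's router ID in this scenario):

! Verify that Chengdu_P and HongKong_P have discovered each other over Tunnel0
! and that an LDP session is established between them.
Chengdu_P#show mpls ldp discovery
Chengdu_P#show mpls ldp neighbor 10.1.1.3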
The path is now verified from Chengdu_PE's VRF mjlnet_VPN interface to HongKong_PE's VRF mjlnet_VPN interface using VRF traceroute, as shown in Example 6-165.
Example 6-165 traceroute Succeeds from Chengdu_PE's VRF mjlnet_VPN Interface to HongKong_PE's VRF mjlnet_VPN Interface
Chengdu_PE#traceroute vrf mjlnet_VPN
Protocol [ip]:
Target IP address: 172.16.8.1
Source address: 172.16.4.1
Numeric display [n]:
Timeout in seconds [3]:
Probe count [3]:
Minimum Time to Live [1]:
Maximum Time to Live [30]:
Port Number [33434]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Type escape sequence to abort.
Tracing the route to 172.16.8.1
  1 10.20.10.2 [MPLS: Labels 25/34 Exp 0] 4 msec 4 msec 4 msec
  2 10.20.40.2 [MPLS: Labels 23/16/34 Exp 0] 4 msec 4 msec 4 msec
  3 10.20.50.2 [MPLS: Labels 16/34 Exp 0] 4 msec 4 msec 4 msec
  4 172.16.8.1 0 msec 0 msec *
Chengdu_PE#
In highlighted line 1, Chengdu_PE imposes label stack 25/34 (TDP/LDP signaled IGP and VPN label, respectively) on the packet and forwards it to Chengdu_P.
In highlighted line 2, Chengdu_P swaps IGP label 25 for TDP/LDP signaled IGP label 16 and additionally imposes RSVP signaled TE label 23. VPN label 34 is unmodified. The packet is then forwarded over the TE tunnel to Shanghai_P.
Because the next hop, HongKong_P, is the tunnel tail-end, Shanghai_P pops the TE label. IGP label 16 and VPN label 34 are unmodified (highlighted line 3). The packet is then forwarded to HongKong_P.
HongKong_P then pops IGP label 16 because it is the penultimate hop. The VPN label is unmodified (not shown), and the packet is forwarded to HongKong_PE. VPN traffic is now successfully transiting the TE tunnel between Chengdu_P and HongKong_P.
Figure 6-47 illustrates label switching of packets across the MPLS VPN backbone when TE tunnels are configured between P routers with MPLS (LDP) enabled.
Figure 6-47. Label Switching Across the MPLS VPN Backbone When LDP Is Enabled over TE Tunnels Between P Routers
MVPN Fails When TE Tunnels Are Configured in the MPLS VPN Backbone
When configuring MVPN in an MPLS backbone with TE tunnels, the IGP (IS-IS or OSPF) must be configured to ensure that Reverse Path Forwarding (RPF) checks for multicast traffic do not fail.
In this scenario, MVPN is configured for VRF mjlnet_VPN on PE routers Chengdu_PE, HongKong_PE, and Shanghai_PE. Additionally, TE tunnels are configured between P routers Chengdu_P, HongKong_P, and Shanghai_P. These TE tunnels are configured with autoroute. The backbone is configured for PIM sparse mode, with the Rendezvous Point (RP) on Chengdu_P.
There is a multicast server at mjlnet_VPN site 1 and multicast receivers at mjlnet_VPN sites 2 and 3. Unfortunately, the receivers are unable to receive any multicast traffic from the multicast server.
Figure 6-48 illustrates the MVPN and TE configuration in the MPLS VPN backbone.
Figure 6-48. MVPN and TE Configuration in the MPLS VPN Backbone
The first step in troubleshooting this issue is to check whether PE routers Chengdu_PE, HongKong_PE, and Shanghai_PE are correctly advertising participation in the MVPN to each other. Each PE router is checked in turn using the show ip pim mdt bgp command, as shown in Example 6-166.
Example 6-166 show ip pim mdt bgp Command Output on the PE Routers
! On Chengdu_PE:
Chengdu_PE#show ip pim mdt bgp
Peer (Route Distinguisher + IPv4)          Next Hop
  MDT group 239.0.0.1
   2:64512:100:10.1.1.4                    10.1.1.4
   2:64512:100:10.1.1.6                    10.1.1.6
Chengdu_PE#

! On HongKong_PE:
HongKong_PE#show ip pim mdt bgp
Peer (Route Distinguisher + IPv4)          Next Hop
  MDT group 239.0.0.1
   2:64512:100:10.1.1.1                    10.1.1.1
   2:64512:100:10.1.1.6                    10.1.1.6
HongKong_PE#

! On Shanghai_PE:
Shanghai_PE#show ip pim mdt bgp
Peer (Route Distinguisher + IPv4)          Next Hop
  MDT group 239.0.0.1
   2:64512:100:10.1.1.1                    10.1.1.1
   2:64512:100:10.1.1.4                    10.1.1.4
Shanghai_PE#
Highlighted line 1 shows the default MDT address (239.0.0.1). Highlighted lines 2 and 3 show that Chengdu_PE has received advertisements signaling participation in the MVPN from both HongKong_PE (10.1.1.4) and Shanghai_PE (10.1.1.6). Note the RD used here (2:ASN:XX). This is a type 2 RD used to advertise MVPN participation.
As you can see, HongKong_PE has received advertisements from Chengdu_PE and Shanghai_PE (10.1.1.1 and 10.1.1.6). Shanghai_PE has received advertisements from Chengdu_PE and HongKong_PE (10.1.1.1 and 10.1.1.4). Participation in the MVPN is being correctly advertised between the PE routers.
Starting at Chengdu_PE, the multicast state for the default MDT is then verified using the show ip mroute command, as shown in Example 6-167.
Example 6-167 Verifying the Multicast State for Chengdu_PE
Chengdu_PE#show ip mroute 239.0.0.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.0.0.1), 07:28:15/stopped, RP 10.1.1.2, flags: SJCFZ
  Incoming interface: FastEthernet1/0, RPF nbr 10.20.10.2
  Outgoing interface list:
    MVRF mjlnet_VPN, Forward/Sparse-Dense, 07:28:15/00:00:00

(10.1.1.1, 239.0.0.1), 07:14:24/00:02:53, flags: PFTZ
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list: Null
Chengdu_PE#
Highlighted line 1 shows that a (*, G) entry has been created for the default MDT group address (*, 239.0.0.1). An (S, G) entry for source 10.1.1.1 (Chengdu_PE itself) is shown in highlighted line 2. Notice that the P (Pruned) flag is set. Finally, in highlighted line 3, you can see that the outgoing interface list for source 10.1.1.1 is null. This is consistent with the fact that the P flag is set. Also note the Z flag, which indicates that this entry corresponds to a multicast tunnel.
Two things are wrong here:
- (S, G) entries should exist for sources 10.1.1.4 (HongKong_PE) and 10.1.1.6 (Shanghai_PE).
- The outgoing interface list should be populated for source 10.1.1.1.
The next thing to check is whether Chengdu_PE has discovered PIM neighbors (specifically, Chengdu_P).
To verify PIM neighbor discovery on Chengdu_PE, use the show ip pim neighbor command as shown in Example 6-168.
Example 6-168 Verifying PIM Neighbor Discovery
Chengdu_PE#show ip pim neighbor
PIM Neighbor Table
Neighbor          Interface                Uptime/Expires     Ver   DR
Address                                                             Priority/Mode
10.20.10.2        FastEthernet1/0          02:27:46/00:01:43  v2    N / DR
Chengdu_PE#
The highlighted line shows that Chengdu_PE has discovered one neighbor (10.20.10.2): Chengdu_P, which is the Rendezvous Point (RP) in the backbone network.
Moving on to Chengdu_P, the multicast state for the default MDT (group 239.0.0.1) is again checked using the show ip mroute command, as shown in Example 6-169.
Example 6-169 Multicast State for the Default MDT on Chengdu_P
Chengdu_P#show ip mroute 239.0.0.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned
       R - RP-bit set, F - Register flag, T - SPT-bit set, J - Join SPT
       M - MSDP created entry, X - Proxy Join Timer Running
       A - Candidate for MSDP Advertisement
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.0.0.1), 02:40:37/00:03:06, RP 10.1.1.2, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse-Dense, 02:40:32/00:03:06
    Serial1/0, Forward/Sparse-Dense, 02:19:01/now
    Serial1/1, Forward/Sparse-Dense, 01:57:48/now

(10.1.1.1, 239.0.0.1), 02:40:25/00:01:27, flags: PT
  Incoming interface: FastEthernet1/0, RPF nbr 10.20.10.1
  Outgoing interface list: Null

(10.1.1.4, 239.0.0.1), 02:19:13/00:02:29, flags: T
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse-Dense, 02:19:13/00:02:36

(10.1.1.6, 239.0.0.1), 02:40:06/00:02:36, flags: T
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse-Dense, 02:40:06/00:02:34
Chengdu_P#
Things now get very mysterious. Highlighted line 1 shows the (*, G) entry for the default MDT: (*, 239.0.0.1).
Highlighted line 2 shows the (S, G) entry for source 10.1.1.1 (Chengdu_PE). The incoming interface is Fast Ethernet 1/0, and the RPF neighbor is 10.20.10.1, which is logical because 10.20.10.1 is Chengdu_PE. Notice, however, that the P (Pruned) flag is set and that, consistent with this flag, the outgoing interface list is null. This is no good: the outgoing interface list should contain interfaces serial 1/0 and serial 1/1 (toward HongKong_P and Shanghai_P, respectively).
Highlighted lines 3 and 4 show the (S, G) entries for sources 10.1.1.4 (HongKong_PE) and 10.1.1.6 (Shanghai_PE). Notice that the incoming interface is null, and the RPF neighbor is 0.0.0.0. This is very strange. The incoming interface for 10.1.1.4 should be serial 1/0, and the RPF neighbor should be 10.1.1.3 (HongKong_P). Similarly, the incoming interface for 10.1.1.6 should be serial 1/1, and the RPF neighbor 10.1.1.5 (Shanghai_P).
Next, PIM neighbor discovery is checked on Chengdu_P using the show ip pim neighbor command as shown in Example 6-170.
Example 6-170 PIM Neighbor Discovery Is Checked on Chengdu_P
Chengdu_P#show ip pim neighbor
PIM Neighbor Table
Neighbor Address  Interface                Uptime    Expires   Ver  Mode
10.20.10.1        FastEthernet0/0          03:12:17  00:01:25  v2
10.20.20.2        Serial1/0                02:50:47  00:01:15  v2
10.20.40.2        Serial1/1                02:29:29  00:01:19  v2
Chengdu_P#
The highlighted lines show that HongKong_P (10.20.20.2) and Shanghai_P (10.20.40.2) are PIM neighbors, which is as it should be.
The debug ip pim command is then used to check which PIM messages are being received by Chengdu_P from the PE routers, as shown in Example 6-171 (note that only the relevant portions of the output are shown).
Example 6-171 debug ip pim Command Output on Chengdu_P
Chengdu_P#debug ip pim
PIM debugging is on
Chengdu_P#
04:35:56: PIM: Received v2 Register on Serial1/0 from 10.20.30.2
04:35:56:      (Data-header) for 10.1.1.4, group 239.0.0.1
04:35:56: PIM: RPF lookup failed to source 10.1.1.4
04:36:03: PIM: Received v2 Register on Serial1/1 from 10.20.60.2
04:36:03:      (Data-header) for 10.1.1.6, group 239.0.0.1
04:36:03: PIM: RPF lookup failed to source 10.1.1.6
Chengdu_P#
Here are some answers. Highlighted lines 1 to 3 show that HongKong_PE (10.1.1.4) is sending PIM Register messages to Chengdu_P (remember that Chengdu_P is the RP). HongKong_PE is trying to notify Chengdu_P that it is an active source for the default MDT (239.0.0.1). Unfortunately, the RPF check on the encapsulated multicast packet fails. Highlighted lines 4 to 6 show that exactly the same thing is happening for Shanghai_PE (10.1.1.6).
The RPF check state on Chengdu_P can also be verified for sources 10.1.1.4 and 10.1.1.6 using the show ip rpf command, as shown in Example 6-172.
Example 6-172 show ip rpf Command Output on Chengdu_P
! For HongKong_PE (10.1.1.4):
Chengdu_P#show ip rpf 10.1.1.4
RPF information for ? (10.1.1.4) failed, no route exists
Chengdu_P#

! For Shanghai_PE (10.1.1.6):
Chengdu_P#show ip rpf 10.1.1.6
RPF information for ? (10.1.1.6) failed, no route exists
Chengdu_P#
The highlighted lines show that the RPF check fails for both source 10.1.1.4 and source 10.1.1.6 because no route exists for these sources.
This is verified using the show ip route command as shown in Example 6-173.
Example 6-173 show ip route Command Output
! For source 10.1.1.4 (HongKong_PE):
Chengdu_P#show ip route 10.1.1.4
Routing entry for 10.1.1.4/32
  Known via "isis", distance 115, metric 20, type level-2
  Redistributing via isis
  Last update from 10.1.1.3 on Tunnel20, 03:34:51 ago
  Routing Descriptor Blocks:
  * 10.1.1.3, from 10.1.1.4, via Tunnel20
      Route metric is 20, traffic share count is 1
Chengdu_P#

! For source 10.1.1.6 (Shanghai_PE):
Chengdu_P#show ip route 10.1.1.6
Routing entry for 10.1.1.6/32
  Known via "isis", distance 115, metric 20, type level-2
  Redistributing via isis
  Last update from 10.1.1.5 on Tunnel10, 03:13:46 ago
  Routing Descriptor Blocks:
  * 10.1.1.5, from 10.1.1.6, via Tunnel10
      Route metric is 20, traffic share count is 1
Chengdu_P#
Highlighted line 1 reveals that there is in fact a route to 10.1.1.4/32 via interface tunnel 20 (a TE tunnel). Similarly, highlighted line 2 shows that there is a route to 10.1.1.6/32 via interface tunnel 10. Why does the show ip rpf command not show a route?
The answer is that multicast packets (Register messages, in this case) are received on interfaces serial 1/0 and serial 1/1, but the unicast routes back to the source of the packets are via the TE tunnels. This causes the RPF check failure. When the RPF check fails, multicast packets are dropped.
You might think that the solution to this problem is to enable PIM on the TE tunnels. Unfortunately, TE tunnels are unidirectional, so that will not work.
The answer is to allow unicast traffic to be forwarded over the TE tunnels, while ensuring that multicast uses the physical interfaces. This can be achieved by configuring the mpls traffic-eng multicast-intact command on the head-end router of each TE tunnel. This command can be configured under either the IS-IS or OSPF process, depending on which IGP you are using. This command is enabled on Chengdu_P, HongKong_P, and Shanghai_P (the TE tunnel head-ends) as shown in Example 6-174.
Example 6-174 Configuration of mpls traffic-eng multicast-intact on Chengdu_P
Chengdu_P#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Chengdu_P(config)#router isis
Chengdu_P(config-router)#mpls traffic-eng multicast-intact
Chengdu_P(config-router)#end
Chengdu_P#
In Example 6-174, mpls traffic-eng multicast-intact is enabled on Chengdu_P. This command is similarly enabled on HongKong_P and Shanghai_P.
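If OSPF rather than IS-IS were running in the backbone, the equivalent configuration would be applied under the OSPF process. A sketch only (the process number is an assumption for illustration):

router ospf 100
 mpls traffic-eng multicast-intact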
RPF information for sources 10.1.1.4 (HongKong_PE) and 10.1.1.6 (Shanghai_PE) is now rechecked on Chengdu_P using the show ip rpf command, as shown in Example 6-175.
Example 6-175 Rechecking RPF Information for Sources 10.1.1.4 (HongKong_PE) and 10.1.1.6 (Shanghai_PE)
! For 10.1.1.4 (HongKong_PE):
Chengdu_P#show ip rpf 10.1.1.4
RPF information for ? (10.1.1.4)
  RPF interface: Serial1/0
  RPF neighbor: ? (10.20.20.2)
  RPF route/mask: 10.1.1.4/32
  RPF type: unicast (isis)
  RPF recursion count: 0
  Doing distance-preferred lookups across tables
Chengdu_P#

! For 10.1.1.6 (Shanghai_PE):
Chengdu_P#show ip rpf 10.1.1.6
RPF information for ? (10.1.1.6)
  RPF interface: Serial1/1
  RPF neighbor: ? (10.20.40.2)
  RPF route/mask: 10.1.1.6/32
  RPF type: unicast (isis)
  RPF recursion count: 0
  Doing distance-preferred lookups across tables
Chengdu_P#
As you can see, the RPF check for source 10.1.1.4 (HongKong_PE) now uses interface serial 1/0, and the RPF check for source 10.1.1.6 (Shanghai_PE) now uses interface serial 1/1.
The multicast routing table is then examined using the show ip mroute command, as shown in Example 6-176.
Example 6-176 Examining the Multicast Routing Table
Chengdu_P#show ip mroute 239.0.0.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned
       R - RP-bit set, F - Register flag, T - SPT-bit set, J - Join SPT
       M - MSDP created entry, X - Proxy Join Timer Running
       A - Candidate for MSDP Advertisement
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.0.0.1), 00:19:55/00:03:29, RP 10.1.1.2, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse-Dense, 00:19:26/00:02:44
    Serial1/0, Forward/Sparse-Dense, 00:07:51/00:03:29
    Serial1/1, Forward/Sparse-Dense, 00:07:44/00:02:40

(10.1.1.1, 239.0.0.1), 00:19:41/00:02:58, flags: T
  Incoming interface: FastEthernet1/0, RPF nbr 10.20.10.1
  Outgoing interface list:
    Serial1/0, Forward/Sparse-Dense, 00:07:44/00:02:40
    Serial1/1, Forward/Sparse-Dense, 00:07:51/00:03:29

(10.1.1.4, 239.0.0.1), 00:08:31/00:03:25, flags: T
  Incoming interface: Serial1/0, RPF nbr 10.20.20.2
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse-Dense, 00:08:33/00:02:42
    Serial1/1, Forward/Sparse-Dense, 00:07:46/now

(10.1.1.6, 239.0.0.1), 00:08:45/00:03:28, flags: T
  Incoming interface: Serial1/1, RPF nbr 10.20.40.2
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse-Dense, 00:08:45/00:02:42
Chengdu_P#
Highlighted line 1 shows the (*, G) entry for the default MDT (239.0.0.1). In highlighted line 2, the (S, G) entry for source 10.1.1.1 (Chengdu_PE) is shown. The incoming interface is now FastEthernet1/0, and the RPF neighbor is 10.20.10.1 (Chengdu_PE). The outgoing interface list is now Serial1/1 (toward Shanghai_PE) and Serial1/0 (toward HongKong_PE).
Highlighted line 3 shows the (S, G) entry for source 10.1.1.4 (HongKong_PE). The incoming interface is Serial1/0. The RPF neighbor is 10.20.20.2 (HongKong_P). The outgoing interface list is Serial1/1 and FastEthernet1/0.
Finally, the entry for source 10.1.1.6 (Shanghai_PE) is shown in highlighted line 4. The incoming interface is Serial1/1. The RPF neighbor is 10.20.40.2 (Shanghai_P). The outgoing interface list is FastEthernet1/0.
Default MDT traffic is now being forwarded correctly across the MPLS VPN backbone, and multicast traffic from the server at site 1 is now being received by multicast receivers at sites 2 and 3.
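If you also want to confirm the customer-facing multicast state, the per-VRF multicast routing table can be inspected on the PE routers. This is a sketch only (the VRF name is from this scenario; the customer group address is not given in the text, so no specific group is shown):

! Check the mjlnet_VPN multicast state on the ingress PE. Once traffic is
! flowing, the outgoing interface list for the customer (S, G) entries
! typically includes the multicast tunnel interface toward the default MDT.
Chengdu_PE#show ip mroute vrf mjlnet_VPN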