Virtual Extensible LAN (VXLAN) Overview
In partnership with other leading vendors, Cisco proposed the VXLAN standard to the Internet Engineering Task Force (IETF) as a solution to the data center network challenges posed by traditional VLAN technology. The VXLAN standard provides flexible workload placement and the greater Layer 2 segmentation scalability that modern applications demand. VXLAN is an extension of the Layer 2 VLAN; it was designed to provide the same VLAN functionality with greater extensibility and flexibility. VXLAN offers the following benefits:
VLAN flexibility in multitenant segments: It provides a solution to extend Layer 2 segments over the underlying network infrastructure so that tenant workloads can be placed across physical pods in the data center.
Higher scalability: VXLAN uses a 24-bit segment ID known as the VXLAN network identifier (VNID), which enables up to 16 million VXLAN segments to coexist in the same administrative domain.
Improved network utilization: VXLAN overcomes the limitations of Layer 2 Spanning Tree Protocol (STP). VXLAN packets are transferred through the underlying network based on their Layer 3 headers and can take complete advantage of Layer 3 routing, equal-cost multipath (ECMP) routing, and link aggregation protocols to use all available paths.
VXLAN Encapsulation and Packet Format
VXLAN is a solution to support a flexible, large-scale multitenant environment over a shared common physical infrastructure. The transport protocol over the physical data center network is IP plus UDP.
VXLAN defines a MAC-in-UDP encapsulation scheme where the original Layer 2 frame has a VXLAN header added and is then placed in a UDP-IP packet. With this MAC-in-UDP encapsulation, VXLAN tunnels the Layer 2 network over the Layer 3 network. The VXLAN packet format is shown in Figure 3-1.
Figure 3-1 VXLAN Packet Format
As shown in Figure 3-1, VXLAN introduces an 8-byte VXLAN header that consists of a 24-bit VNID and a few reserved bits. The VXLAN header together with the original Ethernet frame goes in the UDP payload. The 24-bit VNID is used to identify Layer 2 segments and to maintain Layer 2 isolation between the segments. With all 24 bits in VNID, VXLAN can support 16 million LAN segments.
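For reference, the encapsulation described above can be sketched as follows, based on the packet format defined in RFC 7348 (the IANA-assigned destination UDP port for VXLAN is 4789; the I flag must be set to 1 for the VNI to be valid):

```
Outer Ethernet | Outer IP | Outer UDP (dst 4789) | VXLAN Header (8 bytes) | Original L2 Frame | FCS

VXLAN Header:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|R|R|R|R|I|R|R|R|            Reserved (24 bits)                 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|       VXLAN Network Identifier (VNI) (24 bits)    | Reserved  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```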
VXLAN Tunnel Endpoint
VXLAN uses the VXLAN tunnel endpoint (VTEP) to map tenants’ end devices to VXLAN segments and to perform VXLAN encapsulation and decapsulation. Each VTEP function has two interfaces: one is a switch interface on the local LAN segment to support local endpoint communication, and the other is an IP interface to the transport IP network.
The IP interface has a unique IP address that identifies the VTEP device on the transport IP network. The VTEP device uses this IP address to encapsulate Ethernet frames and transmit the encapsulated packets to the transport network through the IP interface.
A VTEP device also discovers the remote VTEPs for its VXLAN segments and learns remote MAC Address-to-VTEP mappings through its IP interface. The functional components of VTEPs and the logical topology that is created for Layer 2 connectivity across the transport IP network are shown in Figure 3-2.
Figure 3-2 VXLAN Tunnel Endpoint (VTEP)
The VXLAN segments are independent of the underlying network topology; conversely, the underlying IP network between VTEPs is independent of the VXLAN overlay. It routes the encapsulated packets based on the outer IP address header, which has the initiating VTEP as the source IP address and the terminating VTEP as the destination IP address.
Virtual Network Identifier
A virtual network identifier (VNI) is a value that identifies a specific virtual network in the data plane. It is a 24-bit value that is part of the VXLAN header, which can support up to 16 million individual network segments. (Valid VNI values are from 4096 to 16,777,215.) There are two main VNI scopes:
Network-wide scoped VNIs: The same value is used to identify the specific Layer 3 virtual network across all network edge devices. This network scope is useful in environments such as within the data center where networks can be automatically provisioned by central orchestration systems.
Having a uniform VNI per VPN is a simple approach that also eases network operations (such as troubleshooting). It likewise simplifies the requirements on network edge devices, both physical and virtual. A critical requirement for this approach is a very large number of network identifier values, given the network-wide scope.
Locally assigned VNIs: In an alternative approach supported as per RFC 4364, the identifier has local significance to the network edge device that advertises the route. In this case, the virtual network scale impact is determined on a per-node basis versus a network basis.
When it is locally scoped and uses the same existing semantics as an MPLS VPN label, the same forwarding behaviors as specified in RFC 4364 can be employed. This scope thus allows a seamless stitching together of a VPN that spans both an IP-based network overlay and an MPLS VPN.
This situation can occur, for instance, at the data center edge where the overlay network feeds into an MPLS VPN. In this case, the identifier may be dynamically allocated by the advertising device.
It is important to support both cases and, in doing so, ensure that the scope of the identifier be clear and the values not conflict with each other.
VXLAN Control Plane
Two widely adopted control planes are used with VXLAN: the VXLAN flood and learn multicast-based control plane and the VXLAN MP-BGP EVPN control plane.
VXLAN Flood and Learn Multicast-Based Control Plane
Cisco Nexus switches utilize existing Layer 2 flooding mechanisms and dynamic MAC address learning to
Transport broadcast, unknown unicast, and multicast (BUM) traffic
Discover remote VTEPs
Learn remote-host MAC addresses and MAC-to-VTEP mappings for each VXLAN segment
IP multicast is used to reduce the flooding scope of the set of hosts that are participating in the VXLAN segment. Each VXLAN segment, or VNID, is mapped to an IP multicast group in the transport IP network. Each VTEP device is independently configured and joins this multicast group as an IP host through the Internet Group Management Protocol (IGMP). The IGMP joins trigger Protocol Independent Multicast (PIM) joins and signaling through the transport network for the particular multicast group. The multicast distribution tree for this group is built through the transport network based on the locations of participating VTEPs. The multicast tunnel of a VXLAN segment through the underlying IP network is shown in Figure 3-3.
Figure 3-3 VXLAN Multicast Group in Transport Network
The multicast group shown in Figure 3-4 is used to transmit VXLAN broadcast, unknown unicast, and multicast traffic through the IP network, limiting Layer 2 flooding to those devices that have end systems participating in the same VXLAN segment. VTEPs communicate with one another through the flooded or multicast traffic in this multicast group.
Figure 3-4 VXLAN Multicast Control Plane
As an example, if End System A wants to talk to End System B, it does the following:
End System A generates an ARP request to discover the End System B MAC address.
When the ARP request arrives at SW1, SW1 looks up its local table; if an entry is not found, it encapsulates the ARP request over VXLAN and sends it over the multicast group configured for the specific VNI.
The multicast RP receives the packet, and it forwards a copy to every VTEP that has joined the multicast group.
Each VTEP receives and decapsulates the VXLAN packet and learns the End System A MAC address, which points to the remote VTEP address.
Each VTEP forwards the ARP request to its local destinations.
End System B generates the ARP reply. When SW2 VTEP2 receives it, it looks up its local table and finds an entry indicating that traffic destined for End System A must be sent to the VTEP1 address. VTEP2 encapsulates the ARP reply with a VXLAN header and unicasts it to VTEP1.
VTEP1 receives and decapsulates the packet and delivers it to End System A.
After the MAC address information is learned, additional packets are sent directly to the corresponding VTEP address.
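The flood-and-learn behavior described above depends on each VTEP mapping the VNI to the same underlay multicast group. A minimal NX-OS sketch of that mapping follows (VLAN 10, VNI 160010, and group 239.1.1.1 are hypothetical values; a full working configuration appears in Example 3-2):

```
feature nv overlay
feature vn-segment-vlan-based

vlan 10
  vn-segment 160010

interface nve1
  source-interface loopback1
  member vni 160010 mcast-group 239.1.1.1
  no shutdown
```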
VXLAN MP-BGP EVPN Control Plane
The EVPN overlay specifies adaptations to the BGP MPLS-based EVPN solution so that it is applied as a network virtualization overlay with VXLAN encapsulation where
The PE node role described in BGP MPLS EVPN is equivalent to the VTEP/network virtualization edge (NVE) device.
VTEP information is distributed via BGP.
VTEPs use control plane learning/distribution via BGP for remote MAC addresses instead of data plane learning.
Broadcast, unknown unicast, and multicast (BUM) data traffic is sent using a shared multicast tree.
A BGP route reflector (RR) is used to reduce the full mesh of BGP sessions among VTEPs to a single BGP session between a VTEP and the RR.
Route filtering and constrained route distribution are used to ensure that the control plane traffic for a given overlay is distributed only to the VTEPs that are in that overlay instance.
The host (MAC) mobility mechanism ensures that all the VTEPs in the overlay instance know the specific VTEP associated with the MAC.
Virtual network identifiers (VNIs) are globally unique within the overlay.
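As a rough sketch of how these pieces come together on a Cisco Nexus VTEP, the following shows BGP-based host reachability on the NVE interface (the AS number, route-reflector address, and VNI are hypothetical, and exact syntax varies by platform and release):

```
feature bgp
nv overlay evpn

router bgp 65000
  neighbor 192.168.0.100 remote-as 65000
    address-family l2vpn evpn
      send-community extended

interface nve1
  host-reachability protocol bgp
  member vni 160010
    suppress-arp
```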
The EVPN overlay solution for VXLAN can also be applied as a network virtualization overlay for Layer 3 traffic segmentation. The adaptations for Layer 3 VXLAN are similar to those for Layer 2 VXLAN, except for the following:
VTEPs use control plane learning/distribution via BGP of IP addresses (instead of MAC addresses).
The virtual routing and forwarding instances are mapped to the VNI.
The inner destination MAC address in the encapsulated frame belongs not to the host but to the receiving VTEP, which routes the VXLAN payload. This MAC address is distributed via a BGP attribute along with the EVPN routes.
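On NX-OS, this Layer 3 adaptation is typically expressed by associating a VRF with a Layer 3 VNI on the NVE interface; a sketch with hypothetical names and values:

```
vrf context Tenant-A
  vni 50001
  rd auto
  address-family ipv4 unicast
    route-target both auto evpn

interface nve1
  member vni 50001 associate-vrf
```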
VXLAN Gateways
VXLAN gateways are used to connect VXLAN and classic VLAN segments to create a common forwarding domain so that tenant devices can reside in both environments. The types of VXLAN gateways are
Layer 2 Gateway: A Layer 2 VXLAN gateway is a device that encapsulates a classical Ethernet (CE) frame into a VXLAN frame and decapsulates a VXLAN frame into a CE frame. A gateway device transparently provides VXLAN benefits to a device that does not support VXLAN; that device could be a physical host or a virtual machine. The physical hosts or VMs are completely unaware of the VXLAN encapsulation.
VXLAN Layer 3 Gateway: Similar to traditional routing between different VLANs, a VXLAN router is required for communication between devices that are in different VXLAN segments. The VXLAN router translates frames from one VNI to another. Depending on the source and destination, this process might require decapsulation and re-encapsulation of a frame. The Cisco Nexus device supports all combinations of decapsulation, route, and encapsulation. The routing can also be done across native Layer 3 interfaces and VXLAN segments.
You can enable VXLAN routing at the aggregation layer, on Cisco Nexus aggregation nodes. The spine forwards only IP-based traffic and ignores the encapsulated packets. To help scaling, a few leaf nodes (a pair of border leaves) perform routing between VNIs. A set of VNIs can be grouped into a virtual routing and forwarding (VRF) instance (tenant VRF) to enable routing among those VNIs. If routing must be enabled among a large number of VNIs, you might need to split the VNIs between several VXLAN routers. Each router is responsible for a set of VNIs and a respective subnet. Redundancy is achieved with a First-Hop Redundancy Protocol (FHRP).
VXLAN High Availability
For high availability, a pair of virtual port channel (vPC) switches can be used as a logical VTEP device sharing an anycast VTEP address (shown in Figure 3-5).
Figure 3-5 VXLAN High Availability
The vPC switches provide vPCs for redundant host connectivity while individually running Layer 3 protocols with the upstream devices in the underlay network. Both will join the multicast group for the same VXLAN VNI and use the same anycast VTEP address as the source to send VXLAN-encapsulated packets to the devices in the underlay network, including the multicast rendezvous point and the remote VTEP devices. The two vPC VTEP switches appear to be one logical VTEP entity.
vPC peers must have the following identical configurations:
Consistent mapping of the VLAN to the virtual network segment (VN-segment)
Consistent NVE binding to the same loopback secondary IP address (anycast VTEP address)
Consistent VNI-to-group mapping
For the anycast IP address, vPC VTEP switches must use a secondary IP address on the loopback interface bound to the VXLAN NVE tunnel. The two vPC switches need to have the exact same secondary loopback IP address.
Both devices will advertise this anycast VTEP address on the underlay network so that the upstream devices learn the /32 route from both vPC VTEPs and can load-share VXLAN unicast-encapsulated traffic between them.
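As an illustration of the anycast VTEP addressing described above (all addresses are hypothetical; the secondary address must be identical on both vPC peers):

```
! vPC peer 1
interface loopback1
  ip address 192.168.0.11/32
  ip address 192.168.0.100/32 secondary

! vPC peer 2
interface loopback1
  ip address 192.168.0.12/32
  ip address 192.168.0.100/32 secondary

! On both peers: bind the NVE source to the shared loopback
interface nve1
  source-interface loopback1
```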
In the event of vPC peer-link failure, the vPC operational secondary switch will shut down its loopback interface bound to VXLAN NVE. This shutdown will cause the secondary vPC switch to withdraw the anycast VTEP address from its IGP advertisement so that the upstream devices in the underlay network start to send all traffic just to the primary vPC switch. The purpose of this process is to avoid a vPC active-active situation when the peer link is down. With this mechanism, the orphan devices connected to the secondary vPC switch will not be able to receive VXLAN traffic when the vPC peer link is down.
VXLAN Tenant Routed Multicast
Tenant Routed Multicast (TRM) brings the efficiency of multicast delivery to VXLAN overlays. It is based on the standards-based next-generation multicast VPN (ngMVPN) control plane described in IETF RFC 6513 and RFC 6514. TRM enables the delivery of customer Layer 3 multicast traffic in a multitenant fabric in an efficient and resilient manner.
While BGP EVPN provides a control plane for unicast routing, as shown in Figure 3-6, ngMVPN provides scalable multicast routing functionality. It follows an “always route” approach where every edge device (VTEP) with distributed IP Anycast Gateway for unicast becomes a designated router (DR) for multicast. Bridged multicast forwarding is present only on the edge devices (VTEP) where IGMP snooping optimizes the multicast forwarding to interested receivers. All other multicast traffic beyond local delivery is efficiently routed.
Figure 3-6 Tenant Routed Multicast (TRM)
With TRM enabled, multicast forwarding in the underlay is leveraged to replicate VXLAN-encapsulated routed multicast traffic. A Default Multicast Distribution Tree (Default-MDT) is built per VRF, in addition to the existing multicast groups used for Layer 2 VNI broadcast, unknown unicast, and Layer 2 multicast replication. The individual multicast group addresses in the overlay are mapped to the respective underlay multicast address for replication and transport. The advantage of using a BGP-based approach is that TRM can operate as a fully distributed overlay rendezvous point (RP), with RP presence on every edge device (VTEP).
A multicast-enabled data center fabric is typically part of an overall multicast network. Multicast sources, receivers, and even the multicast RP might reside inside the data center but might also be inside the campus or externally reachable via the WAN. TRM allows seamless integration with existing multicast networks. It can leverage multicast RPs external to the fabric. Furthermore, TRM allows for tenant-aware external connectivity using Layer 3 physical interfaces or subinterfaces.
VXLAN Configurations and Verifications
VXLAN requires a license. Table 3-2 shows the NX-OS feature license required for VXLAN. For more information, visit the Cisco NX-OS Licensing Guide.
Table 3-2 VXLAN Feature-Based Licenses for Cisco NX-OS
Platform | Feature License | Feature Name |
---|---|---|
Cisco Nexus 9000 Series switches | LAN_ENTERPRISE_SERVICES_PK | Cisco programmable fabric spine, leaf, or border leaf |
Tables 3-3 through 3-6 show the most-used VXLAN configuration commands along with their purpose. For full commands, refer to the Nexus VXLAN Configuration Guide.
Table 3-3 VXLAN Global-Level Commands
Command | Purpose |
---|---|
feature nv overlay | Enables the VXLAN feature. |
feature vn-segment-vlan-based | Configures the global mode for all VXLAN bridge domains. |
vlan vlan-id | Specifies VLAN. |
vn-segment vnid | Specifies VXLAN virtual network identifier (VNID). |
bridge-domain domain | Enters the bridge domain configuration mode. It will create a bridge domain if it does not yet exist. Use from the global configuration mode. |
dot1q vlan vni vni | Creates mapping between VLAN and VNI. Use from the encapsulation profile configuration mode. |
encapsulation profile name_of_profile default | Applies an encapsulation profile to a service profile. Use from the service instance configuration mode. |
encapsulation profile vni name_of_profile | Creates an encapsulation profile. Use from the global configuration mode. |
service instance instance vni | Creates a service instance. Use from the interface configuration mode. |
interface nve x | Creates a VXLAN overlay interface that terminates VXLAN tunnels. NOTE: Only 1 NVE interface is allowed on the switch. |
mac address-table static mac-address vni vni-id interface nve x peer-ip ip-address | Specifies the MAC address pointing to the remote VTEP. |
ip igmp snooping vxlan | Enables IGMP snooping for VXLAN VLANs. You have to explicitly configure this command to enable snooping for VXLAN VLANs. |
ip igmp snooping disable-nve-static-router-port | Configures IGMP snooping over VXLAN so that it does not include NVE as a static multicast router (mrouter) port using this global CLI command. The NVE interface for IGMP snooping over VXLAN is the mrouter port by default. |
Table 3-4 Interface-Level Commands
Command | Purpose |
---|---|
switchport vlan mapping enable | Enables VLAN translation on the switch port. VLAN translation is disabled by default. NOTE: Use the no form of this command to disable VLAN translation. |
switchport vlan mapping vlan-id translated-vlan-id | Translates a VLAN to another VLAN. The range for both the vlan-id and translated-vlan-id arguments is from 1 to 4094. NOTE: Use the no form of this command to clear the mappings between a pair of VLANs. |
no switchport vlan mapping all | Removes all VLAN mappings configured on the interface. |
Table 3-5 Network Virtual Interface (NVE) Config Commands
Command | Purpose |
---|---|
source-interface src-if | The source interface must be a loopback interface that is configured on the switch with a valid /32 IP address. The transient devices in the transport network and the remote VTEPs must know this /32 IP address. This is accomplished by advertising it through a dynamic routing protocol in the transport network. |
member vni vni | Associates VXLAN virtual network identifiers (VNIs) with the NVE interface. |
mcast-group start-address [end-address] | Assigns a multicast group to the VNIs. NOTE: Used only for BUM traffic. |
ingress-replication protocol bgp | Enables BGP EVPN with ingress replication for the VNI. |
ingress-replication protocol static | Enables static ingress replication for the VNI. |
peer-ip n.n.n.n | Enables peer IP for static ingress-replication protocol. |
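As an alternative to an underlay multicast group for BUM traffic, the ingress-replication commands in Table 3-5 combine as follows (the VNI and peer address are hypothetical):

```
interface nve1
  source-interface loopback1
  member vni 160010
    ingress-replication protocol static
      peer-ip 192.168.0.18
```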
Table 3-6 VXLAN Global-Level Verification Commands
Command | Purpose |
---|---|
show tech-support vxlan [platform] | Displays related VXLAN tech-support information. |
show bridge-domain | Shows the bridge domain. |
show logging level nve | Displays the logging level. |
show tech-support nve | Displays related NVE tech-support information. |
show run interface nve x | Displays NVE overlay interface configuration. |
show nve interface | Displays NVE overlay interface status. |
show nve peers | Displays NVE peer status. |
show nve peers peer_IP_address interface interface_ID counters | Displays per-NVE peer statistics. |
clear nve peer-ip peer-ip-address | Clears stale NVE peers. Stale NVE peers are those that do not have MAC addresses learned behind them. |
show nve vni | Displays VXLAN VNI status. |
show nve vni ingress-replication | Displays the mapping of VNI to an ingress-replication peer list and uptime for each peer. |
show nve vni vni_number counters | Displays per-VNI statistics. |
show nve vxlan-params | Displays VXLAN parameters, such as VXLAN destination or UDP port. |
Figure 3-7 shows the VXLAN network topology with configurations.
Figure 3-7 VXLAN Control Plane Topology
Example 3-1 shows the spine router (Spine-1 and Spine-2) OSPF and multicast routing configuration, VTEP (VTEP-1 and VTEP-3) multicast routing configuration, and multicast routing verification.
Example 3-1 PIM Multicast Configurations and Verifications
Spine-1 Config

Spine-1(config)# feature pim
Spine-1(config)# interface loopback1
Spine-1(config-if)# ip address 192.168.0.100/32
Spine-1(config-if)# ip pim sparse-mode
Spine-1(config-if)# ip router ospf 1 area 0.0.0.0
Spine-1(config)# ip pim rp-address 192.168.0.100
Spine-1(config)# ip pim anycast-rp 192.168.0.100 192.168.0.6
Spine-1(config)# ip pim anycast-rp 192.168.0.100 192.168.0.7
Spine-1(config)# interface E1/1
Spine-1(config-if)# ip pim sparse-mode
Spine-1(config)# interface E1/2
Spine-1(config-if)# ip pim sparse-mode
Spine-1(config)# interface E1/3
Spine-1(config-if)# ip pim sparse-mode
Spine-1(config)# interface loopback0
Spine-1(config-if)# ip pim sparse-mode

Spine-2 Config (PIM Redundancy)

Spine-2(config)# feature pim
Spine-2(config)# interface loopback1
Spine-2(config-if)# ip address 192.168.0.100/32
Spine-2(config-if)# ip pim sparse-mode
Spine-2(config-if)# ip router ospf 1 area 0.0.0.0
Spine-2(config)# ip pim rp-address 192.168.0.100
Spine-2(config)# ip pim anycast-rp 192.168.0.100 192.168.0.6
Spine-2(config)# ip pim anycast-rp 192.168.0.100 192.168.0.7
Spine-2(config)# interface E1/1
Spine-2(config-if)# ip pim sparse-mode
Spine-2(config)# interface E1/2
Spine-2(config-if)# ip pim sparse-mode
Spine-2(config)# interface E1/3
Spine-2(config-if)# ip pim sparse-mode
Spine-2(config)# interface loopback0
Spine-2(config-if)# ip pim sparse-mode

VTEP-1 PIM Config

VTEP-1(config)# feature pim
VTEP-1(config)# ip pim rp-address 192.168.0.100
VTEP-1(config)# interface E1/1
VTEP-1(config-if)# ip pim sparse-mode
VTEP-1(config)# interface E1/2
VTEP-1(config-if)# ip pim sparse-mode
VTEP-1(config)# interface loopback0
VTEP-1(config-if)# ip pim sparse-mode
VTEP-1(config)# interface loopback1
VTEP-1(config-if)# ip pim sparse-mode

VTEP-3 PIM Config

VTEP-3(config)# feature pim
VTEP-3(config)# ip pim rp-address 192.168.0.100
VTEP-3(config)# interface E1/1
VTEP-3(config-if)# ip pim sparse-mode
VTEP-3(config)# interface E1/2
VTEP-3(config-if)# ip pim sparse-mode
VTEP-3(config)# interface loopback0
VTEP-3(config-if)# ip pim sparse-mode
VTEP-3(config)# interface loopback1
VTEP-3(config-if)# ip pim sparse-mode

Spine 1 Verifications

Spine-1# show ip pim neighbor
PIM Neighbor Status for VRF "default"
Neighbor        Interface    Uptime    Expires   DR       Bidir-   BFD
                                                 Priority Capable  State
10.0.0.22       Ethernet1/1  00:02:21  00:01:23  1        yes      n/a
10.0.0.26       Ethernet1/2  00:01:50  00:01:20  1        yes      n/a
10.0.0.30       Ethernet1/3  00:00:37  00:01:38  1        yes      n/a

Spine-1# show ip pim rp
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
Anycast-RP 192.168.0.100 members:
  192.168.0.6*  192.168.0.7
RP: 192.168.0.100*, (0), uptime: 00:04:29  priority: 255,
  RP-source: (local), group ranges: 224.0.0.0/4

Spine 2 Verifications

Spine-2# show ip pim neighbor
PIM Neighbor Status for VRF "default"
Neighbor        Interface    Uptime    Expires   DR       Bidir-   BFD
                                                 Priority Capable  State
10.0.128.6      Ethernet1/1  00:02:21  00:01:23  1        yes      n/a
10.0.128.10     Ethernet1/2  00:01:50  00:01:20  1        yes      n/a
10.0.128.14     Ethernet1/3  00:00:37  00:01:38  1        yes      n/a

Spine-2# show ip pim rp
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
Anycast-RP 192.168.0.100 members:
  192.168.0.6  192.168.0.7*
RP: 192.168.0.100*, (0), uptime: 00:04:16  priority: 255,
  RP-source: (local), group ranges: 224.0.0.0/4

VTEP-1 Verifications

VTEP-1# show ip pim neighbor
PIM Neighbor Status for VRF "default"
Neighbor        Interface    Uptime    Expires   DR       Bidir-   BFD
                                                 Priority Capable  State
10.0.0.21       Ethernet1/1  00:03:47  00:01:32  1        yes      n/a
10.0.128.5      Ethernet1/2  00:03:46  00:01:37  1        yes      n/a

VTEP-1# show ip pim rp
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
RP: 192.168.0.100, (0), uptime: 00:03:53  priority: 255,
  RP-source: (local), group ranges: 224.0.0.0/4

VTEP-3 Verifications

VTEP-3# show ip pim neighbor
PIM Neighbor Status for VRF "default"
Neighbor        Interface    Uptime    Expires   DR       Bidir-   BFD
                                                 Priority Capable  State
10.0.0.29       Ethernet1/1  00:03:06  00:01:21  1        yes      n/a
10.0.128.13     Ethernet1/2  00:02:48  00:01:35  1        yes      n/a

VTEP-3# show ip pim rp
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
RP: 192.168.0.100, (0), uptime: 00:03:11  priority: 255,
  RP-source: (local), group ranges: 224.0.0.0/4
Example 3-2 shows the VTEP (VTEP-1 and VTEP-3) VXLAN and VXLAN network virtual interface (NVE) configuration and status verification.
Example 3-2 VXLAN Configurations and Verifications
VTEP-1 Config

VTEP-1(config)# feature vn-segment-vlan-based
VTEP-1(config)# feature nv overlay
VTEP-1(config)# vlan 10
VTEP-1(config-vlan)# vn-segment 160010
VTEP-1(config)# vlan 20
VTEP-1(config-vlan)# vn-segment 160020
VTEP-1(config)# interface nve1
VTEP-1(config-if)# source-interface loopback1
VTEP-1(config-if)# member vni 160010 mcast-group 231.1.1.1
VTEP-1(config-if)# member vni 160020 mcast-group 231.1.1.1
VTEP-1(config-if)# no shutdown

VTEP-3 Config

VTEP-3(config)# feature vn-segment-vlan-based
VTEP-3(config)# feature nv overlay
VTEP-3(config)# vlan 10
VTEP-3(config-vlan)# vn-segment 160010
VTEP-3(config)# vlan 20
VTEP-3(config-vlan)# vn-segment 160020
VTEP-3(config)# interface nve1
VTEP-3(config-if)# source-interface loopback1
VTEP-3(config-if)# member vni 160010 mcast-group 231.1.1.1
VTEP-3(config-if)# member vni 160020 mcast-group 231.1.1.1
VTEP-3(config-if)# no shutdown

VTEP-1 Verifications

VTEP-1# show nve vni
Codes: CP - Control Plane        DP - Data Plane
       UC - Unconfigured         SA - Suppress ARP
       SU - Suppress Unknown Unicast

Interface VNI      Multicast-group   State Mode Type [BD/VRF]      Flags
--------- -------- ----------------- ----- ---- ------------------ -----
nve1      160010   231.1.1.1         Up    DP   L2 [10]
nve1      160020   231.1.1.1         Up    DP   L2 [20]

VTEP-1# show vxlan
Vlan            VN-Segment
====            ==========
10              160010
20              160020

VTEP-1# ping 10.10.10.3
PING 10.10.10.3 (10.10.10.3): 56 data bytes
64 bytes from 10.10.10.3: icmp_seq=0 ttl=254 time=8.114 ms
64 bytes from 10.10.10.3: icmp_seq=1 ttl=254 time=5.641 ms
64 bytes from 10.10.10.3: icmp_seq=2 ttl=254 time=6.213 ms
64 bytes from 10.10.10.3: icmp_seq=3 ttl=254 time=6.119 ms

VTEP-1# show nve peers
Interface Peer-IP         State LearnType Uptime   Router-Mac
--------- --------------- ----- --------- -------- -----------------
nve1      192.168.0.110   Up    DP        00:09:08 n/a

VTEP-1# show ip mroute
IP Multicast Routing Table for VRF "default"

(*, 231.1.1.1/32), uptime: 00:10:38, nve ip pim
  Incoming interface: Ethernet1/1, RPF nbr: 10.0.0.29
  Outgoing interface list: (count: 1)
    nve1, uptime: 00:10:38, nve

(192.168.0.18/32, 231.1.1.1/32), uptime: 00:02:34, ip mrib pim
  Incoming interface: Ethernet1/2, RPF nbr: 10.0.128.13
  Outgoing interface list: (count: 1)
    nve1, uptime: 00:02:34, mrib

(*, 232.0.0.0/8), uptime: 00:17:03, pim ip
  Incoming interface: Null, RPF nbr: 0.0.0.0
  Outgoing interface list: (count: 0)

VTEP-3 Verifications

VTEP-3# show nve vni
Codes: CP - Control Plane        DP - Data Plane
       UC - Unconfigured         SA - Suppress ARP
       SU - Suppress Unknown Unicast

Interface VNI      Multicast-group   State Mode Type [BD/VRF]      Flags
--------- -------- ----------------- ----- ---- ------------------ -----
nve1      160010   231.1.1.1         Up    DP   L2 [10]
nve1      160020   231.1.1.1         Up    DP   L2 [20]

VTEP-3# show vxlan
Vlan            VN-Segment
====            ==========
10              160010
20              160020

VTEP-3# ping 10.10.10.1
PING 10.10.10.1 (10.10.10.1): 56 data bytes
64 bytes from 10.10.10.1: icmp_seq=0 ttl=254 time=7.212 ms
64 bytes from 10.10.10.1: icmp_seq=1 ttl=254 time=6.243 ms
64 bytes from 10.10.10.1: icmp_seq=2 ttl=254 time=5.268 ms
64 bytes from 10.10.10.1: icmp_seq=3 ttl=254 time=6.397 ms

VTEP-3# show nve peers
Interface Peer-IP         State LearnType Uptime   Router-Mac
--------- --------------- ----- --------- -------- -----------------
nve1      192.168.0.18    Up    DP        00:09:08 n/a

VTEP-3# show ip mroute
IP Multicast Routing Table for VRF "default"

(*, 231.1.1.1/32), uptime: 00:10:38, nve ip pim
  Incoming interface: Ethernet1/1, RPF nbr: 10.0.0.29
  Outgoing interface list: (count: 1)
    nve1, uptime: 00:10:38, nve

(192.168.0.18/32, 231.1.1.1/32), uptime: 00:02:34, ip mrib pim
  Incoming interface: Ethernet1/2, RPF nbr: 10.0.128.13
  Outgoing interface list: (count: 1)
    nve1, uptime: 00:02:34, mrib

(192.168.0.110/32, 231.1.1.1/32), uptime: 00:10:38, nve mrib ip pim
  Incoming interface: loopback1, RPF nbr: 192.168.0.110
  Outgoing interface list: (count: 1)
    Ethernet1/2, uptime: 00:09:39, pim

(*, 232.0.0.0/8), uptime: 00:17:03, pim ip
  Incoming interface: Null, RPF nbr: 0.0.0.0
  Outgoing interface list: (count: 0)