VMM Integration with ACI at Multiple Locations
In a single ACI fabric with a single APIC cluster, whether located at one site or stretched between multiple sites using the transit leaf, multi-pod, or remote leaf design options, a given VMM integration can be leveraged with the same policy model in any of the locations the fabric reaches. This is possible because a single control and data plane is stretched between the data center locations. In dual-fabric or multi-site environments, a separate APIC cluster is deployed at each location, and therefore a separate VMM domain is created for each site.
Multi-Site
To integrate VMM domains into a Cisco ACI multi-site architecture, as mentioned earlier, you need to create a separate VMM domain at each site because each site has its own APIC cluster. Those VMM domains can then be exposed to the ACI multi-site policy manager so that they can be associated with the EPGs defined at each site.
Two deployment models are possible:
▪ Multiple VMMs can be used across separate sites, each paired with the local APIC cluster.
▪ A single VMM can be used to manage hypervisors deployed across sites and paired with the different local APIC clusters.
The next two sections provide more information about these models.
Multiple Virtual Machine Managers Across Sites
In a multi-site deployment, multiple VMMs are commonly deployed in separate sites to manage the local clusters of hypervisors. Figure 6-13 shows this scenario.
The VMM at each site manages the local hosts and peers with the local APIC domain to create a local VMM domain. This model is available with all the VMM options that Cisco ACI supports: VMware vCenter Server, Microsoft SCVMM, and the OpenStack controller.
The configuration of the VMM domains is performed at the local APIC level. The created VMM domains can then be imported into the Cisco ACI multi-site policy manager and associated with the EPG specified in the centrally created templates. If, for example, EPG 1 is created at the multi-site level, it can then be associated with VMM domain DC 1 and with VMM domain DC 2 before the policy is pushed to Sites 1 and 2 for local implementation.
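Because the VMM domain configuration is performed on each local APIC, it can also be scripted against the APIC REST API. The following Python sketch assumes the vmmDomP and vmmCtrlrP classes of the APIC object model; the APIC address, credentials, vCenter details, and all names are hypothetical placeholders, and the VLAN pool and vCenter credential associations that a complete configuration also requires are omitted for brevity.

import requests

APIC = "https://apic-site1.example.com"   # hypothetical Site 1 APIC
session = requests.Session()
session.verify = False                    # lab-style self-signed certificate

# Authenticate; the session keeps the returned token cookie
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# VMM domain "DC1-vC" peered with the local vCenter server; the APIC then
# creates the corresponding VDS on the ESXi hosts managed by that vCenter
vmm_domain = {
    "vmmDomP": {
        "attributes": {"name": "DC1-vC"},
        "children": [
            {"vmmCtrlrP": {"attributes": {
                "name": "vcenter-site1",
                "hostOrIp": "192.0.2.10",       # vCenter address (hypothetical)
                "rootContName": "Datacenter-1"  # vCenter datacenter object
            }}}
        ]
    }
}
session.post(f"{APIC}/api/mo/uni/vmmp-VMware/dom-DC1-vC.json",
             json=vmm_domain).raise_for_status()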
The creation of separate VMM domains across sites usually restricts the mobility of virtual machines across sites to cold migration scenarios. However, in specific designs using VMware vSphere 6.0 and later, you can perform hot migration between clusters of hypervisors managed by separate vCenter servers. Figure 6-14 and the list that follows demonstrate and describe the steps required to create such a configuration.
Step 1. Create a VMM domain in each fabric by peering the local vCenter server and the APIC. This peering results in the creation of local vSphere distributed switches (VDS 1 at Site 1 and VDS 2 at Site 2) in the ESXi clusters.
Step 2. Expose the created VMM domains to the Cisco ACI multi-site policy manager.
Step 3. Define a new Web EPG in a template associated with both Sites 1 and 2. The EPG is mapped to a corresponding Web bridge domain, which must be configured as stretched with flooding across sites enabled. At each site, the EPG then is associated with the previously created local VMM domain.
Step 4. Push the template policy to Sites 1 and 2.
Step 5. Create the EPGs in each fabric; because they are associated with VMM domains, each APIC communicates with its local vCenter server, which pushes an associated Web port group to its VDS (see the sketch that follows this list).
Step 6. Connect the Web virtual machines to the newly created Web port groups. At this point, live migration can be performed across sites.
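On each APIC, the policy pushed in Steps 4 and 5 resolves to an EPG object carrying a bridge domain relation and a VMM domain association. The following sketch reuses the authenticated session from the earlier example and assumes the fvAEPg, fvRsBd, and fvRsDomAtt classes with hypothetical tenant, application profile, and domain names; it is the fvRsDomAtt child that triggers the APIC-to-vCenter call creating the Web port group on the VDS.

# Web EPG bound to the stretched Web bridge domain and associated with the
# local VMM domain (names are hypothetical; session/APIC come from the
# earlier sketch)
web_epg = {
    "fvAEPg": {
        "attributes": {"name": "Web"},
        "children": [
            # Relation to the stretched Web bridge domain
            {"fvRsBd": {"attributes": {"tnFvBDName": "Web-BD"}}},
            # VMM domain association; this is what drives the creation of the
            # Web port group on the local VDS through vCenter
            {"fvRsDomAtt": {"attributes": {
                "tDn": "uni/vmmp-VMware/dom-DC1-vC",
                "resImedcy": "immediate"        # resolution immediacy
            }}}
        ]
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Prod/ap-App1/epg-Web.json",
             json=web_epg).raise_for_status()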
Single Virtual Machine Manager Across Sites
Figure 6-15 depicts the scenario in which a single VMM domain is used across sites.
In this scenario, a VMM is deployed at Site 1 but manages clusters of hypervisors deployed both within the same fabric and in separate fabrics. Note that this configuration still leads to the creation of a different VMM domain in each fabric, and a different VDS is pushed to the ESXi hosts deployed locally in each fabric. This scenario therefore raises the same considerations discussed in the previous section regarding support for cold and hot migration of virtual machines across fabrics.
Remote Leaf
The ACI fabric allows integration with multiple VMM domains. With this integration, the APIC pushes the ACI policy configuration (such as networking, telemetry monitoring, and troubleshooting) to switches based on the location of virtual instances, and it can push that policy to a remote leaf switch in the same way as to a local leaf switch. A single VMM domain can be created for compute resources connected to both the ACI main DC pod and remote leaf switches. VMM/APIC integration is also used to push a VDS to the hosts managed by the VMM and to dynamically create port groups as a result of the creation of EPGs and their association with the VMM domain. This allows you to enable mobility (“live” or “cold”) for virtual endpoints across different compute hypervisors.
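One way to see the effect of this integration is to read the compute inventory that the APIC builds from the VMM, which covers hypervisors and virtual machines regardless of whether the hosts sit behind the main DC pod or a remote leaf. The sketch below reuses the session from the earlier examples and assumes the compHv and compVm inventory classes; it simply lists what the APIC has learned.

# List the hypervisors and virtual machines learned through the VMM
# integration (compHv and compVm classes assumed; session/APIC come from
# the earlier sketches)
hypervisors = session.get(f"{APIC}/api/class/compHv.json").json()
virtual_machines = session.get(f"{APIC}/api/class/compVm.json").json()

for mo in hypervisors.get("imdata", []):
    print("host:", mo["compHv"]["attributes"]["name"])
for mo in virtual_machines.get("imdata", []):
    print("vm:  ", mo["compVm"]["attributes"]["name"])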
Virtual instances in the same EPG or Layer 2 domain (VLAN) can sit behind a local leaf as well as a remote leaf. When a virtual instance moves from a remote leaf to a local leaf or vice versa, the APIC detects the leaf switches to which the virtual instance has moved and pushes the associated policies to the new leaf switches. All VMM and container domain integrations supported for local leaf switches are supported for remote leaf switches as well.
Figure 6-16 shows the process of vMotion with the ACI fabric.
The following events happen during a vMotion event:
Step 1. The VM has IP address 10.10.10.100 and is part of the Web EPG and the Web bridge domain with subnet 10.10.10.1/24. When the VM comes up, the ACI fabric programs the encapsulation VLAN (vlan-100) and the switch virtual interface (SVI), which is the default gateway of the VM on the leaf switches where the VM is connected. The APIC pushes the contract and other associated policies based on the location of the VM.
Step 2. When the VM moves from a remote leaf to a local leaf, the ACI fabric detects the new location of the VM through the VMM integration.
Step 3. Depending on the EPG-specific configuration, the APIC may need to push the ACI policy on the leaf for successful VM mobility, or a policy may already exist on the destination leaf.
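To confirm where the fabric has learned the VM after such a move, you can query the endpoint database on the APIC. The following sketch reuses the session from the earlier examples and assumes the fvCEp endpoint class and its standard attribute filter syntax; the IP address matches the example in Figure 6-16.

# Look up the endpoint 10.10.10.100 in the fabric endpoint database
# (fvCEp class assumed; session/APIC come from the earlier sketches)
params = {
    "query-target-filter": 'eq(fvCEp.ip,"10.10.10.100")',
    "rsp-subtree": "children"    # include the leaf path the endpoint is on
}
resp = session.get(f"{APIC}/api/class/fvCEp.json", params=params)
for mo in resp.json().get("imdata", []):
    ep = mo["fvCEp"]["attributes"]
    # encap should show vlan-100; the dn and the fvRsCEpToPathEp children
    # indicate the EPG and the leaf interface where the VM is now attached
    print(ep["ip"], ep["encap"], ep["dn"])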