Kubernetes Integration
Kubernetes is a portable, extensible open-source platform that automates the deployment, scaling, and management of container-based workloads and services in a network. Beginning with Cisco APIC Release 3.0(1), you can integrate Kubernetes on bare-metal servers into Cisco ACI.
To integrate Kubernetes with Cisco ACI, you need to execute a series of tasks. Some of them you perform in the network to set up the Cisco APIC; others you perform on the Kubernetes servers. Once you have integrated Kubernetes, you can use the Cisco APIC to view the Kubernetes objects in the Cisco ACI fabric.
The following are the basic tasks involved in integrating Kubernetes into the Cisco ACI fabric:
Step 1. Prepare for the integration and set up the subnets and VLANs in the network.
Step 2. Fulfill the prerequisites.
Step 3. To provision the Cisco APIC to integrate with Kubernetes, download the provisioning tool, which includes a sample configuration file, and update the configuration file with information you previously gathered about your network. Then run the provisioning tool with the information about your network.
Step 4. Set up networking for the node to support Kubernetes installation. This includes configuring an uplink interface, subinterfaces, and static routes.
Step 5. Install Kubernetes and Cisco ACI containers.
Step 6. Use the Cisco APIC GUI to verify that Kubernetes has been integrated into Cisco ACI.
The following sections provide details on these steps.
Planning for Kubernetes Integration
Various network resources are required to provide capabilities to a Kubernetes cluster, including several subnets and routers. You need the following subnets:
▪ Node subnet: This subnet is used for Kubernetes control traffic. It is where the Kubernetes API services are hosted. Make the node subnet a private subnet and make sure that it has access to the Cisco APIC management address.
▪ Pod subnet: This is the subnet from which the IP addresses of Kubernetes pods are allocated. Make the pod subnet a private subnet.
▪ Node service subnet: This subnet is used for internal routing of load-balanced service traffic. Make the node service subnet a private subnet.
▪ External service subnets: These subnets are pools from which load-balanced services are allocated as externally accessible service IP addresses.
You need the following VLANs for local fabric use:
▪ Node VLAN: This VLAN is used by the physical domain for Kubernetes nodes.
▪ Service VLAN: This VLAN is used for delivery of load-balanced service traffic.
▪ Infrastructure VLAN: This is the infrastructure VLAN used by the Cisco ACI fabric.
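The subnets and VLANs you plan here map directly to the net_config section of the provisioning tool configuration file described later in this section. The following is a minimal sketch with hypothetical values (matching the sample configuration file shown later); substitute the subnets and VLANs allocated in your own environment:

net_config:
  node_subnet: 10.1.0.1/16       # Node subnet (Kubernetes control traffic)
  pod_subnet: 10.2.0.1/16        # Pod subnet
  extern_dynamic: 10.3.0.1/24    # External service subnet (dynamic allocation)
  extern_static: 10.4.0.1/24     # External service subnet (static allocation)
  node_svc_subnet: 10.5.0.1/24   # Node service subnet
  kubeapi_vlan: 4001             # Node VLAN
  service_vlan: 4003             # Service VLAN
  infra_vlan: 4093               # ACI infrastructure VLAN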
Prerequisites for Integrating Kubernetes with Cisco ACI
Ensure that the following prerequisites are in place before you try to integrate Kubernetes with the Cisco ACI fabric:
▪ A working Cisco ACI installation
▪ An attachable entity profile (AEP) set up with interfaces that are desired for the Kubernetes deployment
▪ An L3Out connection, along with a Layer 3 external network to provide external access
▪ A virtual routing and forwarding (VRF) instance
▪ Any required route reflector configuration for the Cisco ACI fabric
▪ A next-hop router that is connected to the Layer 3 external network and that is capable of appropriate external access and configured with the required routes
In addition, the Kubernetes cluster must be up through the fabric-connected interface on all the hosts, and the default route on each host should point to the ACI node subnet bridge domain. Pointing the default route at the node subnet bridge domain is not mandatory, but it simplifies the routing configuration on the hosts and is the recommended configuration. If you choose not to use this design, all Kubernetes-related traffic must still go through the fabric.
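As a quick sanity check on a host (a sketch, assuming a Linux node whose node VLAN subinterface is named ens192.4001 as in the interface example later in this section), you can confirm that the default route points to the node subnet gateway on the ACI bridge domain:

ip route show default
# Expected output similar to:
# default via 12.1.0.1 dev ens192.4001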
Provisioning Cisco ACI to Work with Kubernetes
You can use the acc_provision tool to provision the fabric for the Kubernetes VMM domain and generate a .yaml file that Kubernetes uses to deploy the required Cisco ACI container components. The procedure to accomplish this is as follows:
Step 1. Download the provisioning tool from
https://software.cisco.com/download/type.html?mdfid=285968390&i=rm and then follow these steps:
a. Click APIC OpenStack and Container Plugins.
b. Choose the package that you want to download.
c. Click Download.
Step 2. Generate a sample configuration file that you can edit by entering the following command:
terminal$ acc-provision --sample
This command generates the aci-containers-config.yaml configuration file, which looks as follows:
#
# Configuration for ACI Fabric
#
aci_config:
  system_id: mykube             # Every opflex cluster must have a distinct ID
  apic_hosts:                   # List of APIC hosts to connect for APIC API
  - 10.1.1.101
  vmm_domain:                   # Kubernetes VMM domain configuration
    encap_type: vxlan           # Encap mode: vxlan or vlan
    mcast_range:                # Every opflex VMM must use a distinct range
      start: 225.20.1.1
      end: 225.20.255.255
  # The following resources must already exist on the APIC,
  # they are used, but not created by the provisioning tool.
  aep: kube-cluster             # The AEP for ports/VPCs used by this cluster
  vrf:                          # This VRF used to create all Kubernetes EPs
    name: mykube-vrf
    tenant: common              # This can be system-id or common
  l3out:
    name: mykube_l3out          # Used to provision external IPs
    external_networks:
    - mykube_extepg             # Used for external contracts
#
# Networks used by Kubernetes
#
net_config:
  node_subnet: 10.1.0.1/16      # Subnet to use for nodes
  pod_subnet: 10.2.0.1/16       # Subnet to use for Kubernetes Pods
  extern_dynamic: 10.3.0.1/24   # Subnet to use for dynamic external IPs
  extern_static: 10.4.0.1/24    # Subnet to use for static external IPs
  node_svc_subnet: 10.5.0.1/24  # Subnet to use for service graph
                                # (not the same as the Kubernetes
                                # service-cluster-ip-range; use different subnets)
  kubeapi_vlan: 4001            # The VLAN used by the physdom for nodes
  service_vlan: 4003            # The VLAN used by LoadBalancer services
  infra_vlan: 4093              # The VLAN used by ACI infra
#
# Configuration for container registry
# Update if a custom container registry has been setup
#
registry:
  image_prefix: noiro           # e.g: registry.example.com/noiro
  # image_pull_secret: secret_name    # (if needed)
Step 3. Edit the sample configuration file, providing information from your network, and save the file.
Step 4. Provision the Cisco ACI fabric by using the following command:
acc-provision -c aci-containers-config.yaml -o aci-containers.yaml -f kubernetes-<version> -a -u [apic username] -p [apic password]
This command generates the file aci-containers.yaml, which you use after installing Kubernetes. It also creates the files user-[system id].key and user-[system id].crt, which contain the certificate used to access the Cisco APIC. Save these files in case you change the configuration later and want to avoid disrupting a running cluster because of a key change.
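For example, a completed invocation might look like the following (the Kubernetes flavor and APIC credentials shown here are hypothetical; use a flavor supported by your acc-provision release and your own APIC administrator account):

acc-provision -c aci-containers-config.yaml -o aci-containers.yaml -f kubernetes-1.12 -a -u admin -p MyAPICPassword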
Step 5. (Optional) Configure advanced parameters if your environment deviates from the ACI default values or the base provisioning assumptions. For example, if your VMM’s multicast address for the fabric is different from 225.1.2.3, you can configure it by using the following:
aci_config:
  vmm_domain:
    mcast_fabric: 225.1.2.3
If you are using VLAN encapsulation, you can specify the VLAN pool for it, as follows:
aci_config:
  vmm_domain:
    encap_type: vlan
    vlan_range:
      start: 10
      end: 25
If you want to use an existing user, key, and certificate, add the following:
aci_config:
  sync_login:
    username: <name>
    certfile: <pem-file>
    keyfile: <pem-file>
If you are provisioning in a system nested inside virtual machines, enter the name of an existing preconfigured VMM domain in Cisco ACI in the aci_config section of the configuration file, under vmm_domain:
nested_inside:
  type: vmware
  name: myvmware
Preparing the Kubernetes Nodes
When you are done provisioning Cisco ACI to work with Kubernetes, you can start preparing the networking construct for the Kubernetes nodes by following this procedure:
Step 1. Configure your uplink interface, with or without NIC bonding, depending on how your AEP is configured. Set the MTU on this interface to 1600.
Step 2. Create a subinterface on your uplink interface on your infrastructure VLAN. Configure this subinterface to obtain an IP address by using DHCP. Set the MTU on this interface to 1600.
Step 3. Configure a static route for the multicast subnet 224.0.0.0/4 through the uplink interface used for VXLAN traffic.
Step 4. Create a subinterface on the uplink interface on your node VLAN (kubeapi_vlan in the configuration file). Configure an IP address on this interface in your node subnet. Then set this interface and the corresponding node subnet router as the default route for the node.
Step 5. Create the /etc/dhcp/dhclient-eth0.4093.conf file with the following content, inserting the MAC address of the Ethernet interface for each server on the first line of the file:
send dhcp-client-identifier 01:<mac-address of infra VLAN interface>;
request subnet-mask, domain-name, domain-name-servers, host-name;
send host-name <server-host-name>;

option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
option ms-classless-static-routes code 249 = array of unsigned integer 8;
option wpad code 252 = string;

also request rfc3442-classless-static-routes;
also request ms-classless-static-routes;
also request static-routes;
also request wpad;
also request ntp-servers;
The network interface on the infrastructure VLAN requests a DHCP address from the APIC infrastructure network for OpFlex communication. Make sure the server has a dhclient configuration for this interface to receive all the correct DHCP options with the lease.
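To confirm that the subinterface obtained a lease (a sketch, assuming the infrastructure VLAN subinterface is named ens192.3095 as in the interface example below), check that it holds an address from the APIC infrastructure network:

ip addr show ens192.3095
# Look for an "inet" address assigned from the APIC infra subnet
# (10.0.0.0/16 by default, but this depends on your fabric setup)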
Step 6. If you have a separate management interface for the node being configured, configure any static routes that you need to access your management network on the management interface.
Step 7. Ensure that OVS is not running on the node.
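A minimal way to check this (assuming a systemd-based host; the service name varies by distribution, for example openvswitch-switch on Ubuntu/Debian and openvswitch on RHEL/CentOS):

systemctl status openvswitch-switch     # should report inactive or not found
systemctl stop openvswitch-switch       # stop it if it is running
systemctl disable openvswitch-switch    # prevent it from starting at boot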
Here is an example of the interface configuration (in /etc/network/interfaces):
# Management network interface (not connected to ACI)
auto ens160
iface ens160 inet static
  address 192.168.66.17
  netmask 255.255.255.0
  up route add -net 10.0.0.0/8 gw 192.168.66.1
  dns-nameservers 192.168.66.1

# Interface connected to ACI
auto ens192
iface ens192 inet manual
  mtu 1600

# ACI Infra VLAN
auto ens192.3095
iface ens192.3095 inet dhcp
  mtu 1600
  up route add -net 224.0.0.0/4 dev ens192.3095
  vlan-raw-device ens192

# Node Vlan
auto ens192.4001
iface ens192.4001 inet static
  address 12.1.0.101
  netmask 255.255.0.0
  mtu 1600
  gateway 12.1.0.1
  vlan-raw-device ens192
Installing Kubernetes and Cisco ACI Containers
After you provision Cisco ACI to work with Kubernetes and prepare the Kubernetes nodes, you can install Kubernetes and ACI containers. You can use any installation method you choose, as long as it is appropriate to your environment. This procedure provides guidance and high-level instruction for installation; for details, consult Kubernetes documentation.
When installing Kubernetes, ensure that the API server is bound to the IP addresses on the node subnet and not to management or other IP addresses. Misconfigured node routing tables and API server advertisement addresses are the most common problems during installation, so if you run into issues, check these first.
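For example, if you install with kubeadm (an assumption; any installer appropriate to your environment works), you can pin the API server to the node's address on the node subnet and align the pod CIDR with the pod_subnet you provisioned, reusing the example addresses from this section:

kubeadm init \
  --apiserver-advertise-address=12.1.0.101 \
  --pod-network-cidr=10.2.0.0/16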
Install Kubernetes so that it is configured to use a Container Network Interface (CNI) plug-in, but do not install a specific CNI plug-in configuration through your installer. Instead, deploy the Cisco ACI CNI plug-in afterward by using the following command:
kubectl apply -f aci-containers.yaml
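After applying the file, you can confirm that the Cisco ACI container components start up (a sketch; the namespace and pod names can vary by release, but they typically run in kube-system):

kubectl get pods -n kube-system -o wide
# Look for pods such as aci-containers-controller and aci-containers-host
# reaching the Running state on each node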
Verifying the Kubernetes Integration
After you have performed the steps described in the preceding sections, you can verify the integration in the Cisco APIC GUI. The integration creates a tenant, three EPGs, and a VMM domain. The procedure to do this is as follows:
Step 1. Log in to the Cisco APIC.
Step 2. Go to Tenants > tenant_name, where tenant_name is the name you specified in the configuration file that you edited and used in installing Kubernetes and the ACI containers.
Step 3. In the tenant navigation pane, expand the following: tenant_name > Application Profiles > application_profile_name > Application EPGs. You should see three folders inside the Application EPGs folder:
▪ kube-default: The default EPG for containers that are otherwise not mapped to any specific EPG.
▪ kube-nodes: The EPG for the Kubernetes nodes.
▪ kube-system: The EPG for the kube-system Kubernetes namespace. This typically contains the kube-dns pods, which provide DNS services for a Kubernetes cluster.
Step 4. In the tenant navigation pane, expand the Networking and Bridge Domains folders. You should see two bridge domains:
▪ node-bd: The bridge domain used by the node EPG
▪ pod-bd: The bridge domain used by all pods
Step 5. If you deploy Kubernetes with a load balancer, go to Tenants > common, expand L4-L7 Services, and perform the following steps:
▪ Open the L4-L7 Service Graph Templates folder; you should see a template for Kubernetes.
▪ Open the L4-L7 Devices folder; you should see a device for Kubernetes.
▪ Open the Deployed Graph Instances folder; you should see an instance for Kubernetes.
Step 6. Go to VM Networking > Inventory, and in the Inventory navigation pane, expand the Kubernetes folder. You should see a VMM domain, with the name you provided in the configuration file, and in that domain you should see folders called Nodes and Namespaces.
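As a cross-check from the Kubernetes side (a sketch, not part of the APIC GUI verification), you can confirm that the nodes report Ready once the Cisco ACI CNI plug-in is running:

kubectl get nodes
# All nodes should show STATUS Ready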