OpenShift Integration
OpenShift is a container application platform built on top of Docker and Kubernetes. It makes it easy for developers to create applications and gives operators a platform that simplifies the deployment of containers for both development and production workloads. Beginning with Cisco APIC Release 3.1(1), OpenShift can be integrated with Cisco ACI by leveraging the ACI CNI plug-in.
To integrate Red Hat OpenShift with Cisco ACI, you must perform a series of tasks. Some tasks are performed by the ACI fabric administrator directly on the APIC, and others are performed by the OpenShift cluster administrator. After you have integrated the Cisco ACI CNI plug-in for Red Hat OpenShift, you can use the APIC to view OpenShift endpoints and constructs within the fabric.
The following is a high-level look at the tasks required to integrate OpenShift with the Cisco ACI fabric:
Step 1. To prepare for the integration, identify the subnets and VLANs that you will use in your network.
Step 2. Perform the required Day 0 fabric configurations.
Step 3. Configure the Cisco APIC for the OpenShift cluster. Many of the required fabric configurations are performed with a provisioning tool (acc-provision), which is included in the plug-in files available from www.cisco.com. After downloading and installing the tool, modify its configuration file with the information from the planning phase and run it.
Step 4. Set up networking for the node to support OpenShift installation. This includes configuring an uplink interface, subinterfaces, and static routes.
Step 5. Install OpenShift and Cisco ACI containers.
Step 6. Update the OpenShift router to use the ACI fabric.
Step 7. Use the Cisco APIC GUI to verify that OpenShift has been integrated into the Cisco ACI fabric.
The following sections provide details on these steps.
Planning for OpenShift Integration
The OpenShift cluster requires various network resources, all of which are provided by the ACI fabric integrated overlay. The OpenShift cluster requires the following subnets:
▪ Node subnet: This is the subnet used for OpenShift control traffic. This is where the OpenShift API services are hosted. The acc-provision tool configures a private subnet. Ensure that it has access to the Cisco APIC management address.
▪ Pod subnet: This is the subnet from which the IP addresses of OpenShift pods are allocated. The acc-provision tool configures a private subnet.
▪ Node service subnet: This is the subnet used for internal routing of load-balanced service traffic. The acc-provision tool configures a private subnet.
▪ External service subnets: These are pools from which load-balanced services are allocated externally accessible service IP addresses.
The externally accessible service IP addresses could be globally routable. Configure the next-hop router to send traffic destined for these IP addresses to the fabric. There are two such pools: one is used for dynamically allocated IP addresses, and the other is available for services that request a specific fixed external IP address.
All of the aforementioned subnets must be specified in the acc-provision configuration file. The node and pod subnets are provisioned on corresponding ACI bridge domains that are created by the provisioning tool. The endpoints on these subnets are learned as fabric endpoints and can communicate directly with any other fabric endpoint without NAT, provided that contracts allow the communication. The node service subnet and the external service subnets are not seen as fabric endpoints; they are instead used to manage the cluster IP addresses and the load balancer IP addresses, respectively, and are programmed on Open vSwitch via OpFlex. As mentioned earlier, the external service subnets must be routable outside the fabric.
OpenShift nodes need to be connected to an EPG using VLAN encapsulation. Pods can connect to one or multiple EPGs and can use either VLAN or VXLAN encapsulation. In addition, PBR-based load balancing requires the use of a VLAN encapsulation to reach the OpFlex service endpoint IP address of each OpenShift node. The following VLAN IDs are therefore required (an example set of planning values follows the list):
▪ Node VLAN ID: The VLAN ID used for the EPG mapped to a physical domain for OpenShift nodes
▪ Service VLAN ID: The VLAN ID used for delivery of load-balanced external service traffic
▪ Fabric infrastructure VLAN ID: The infrastructure VLAN used to extend OpFlex to the OVS on the OpenShift nodes
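These planning decisions map directly to the net_config section of the acc-provision configuration file discussed later in this chapter. The following is a minimal sketch with purely illustrative values; the extern_static field name is an assumption (it does not appear in the sample file shown later), so verify it against the sample file generated by your version of the tool:

net_config:
  node_subnet: 10.1.0.1/16      # Node subnet (control traffic and API services)
  pod_subnet: 10.2.0.1/16       # Pod subnet
  extern_dynamic: 10.3.0.1/24   # Pool for dynamically allocated external service IPs
  extern_static: 10.4.0.1/24    # Pool for fixed external service IPs (field name assumed)
  node_svc_subnet: 10.5.0.1/24  # Node service subnet for internal service routing
  kubeapi_vlan: 4001            # Node VLAN ID
  service_vlan: 4003            # Service VLAN ID
  infra_vlan: 4093              # Fabric infrastructure VLAN ID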
Prerequisites for Integrating OpenShift with Cisco ACI
Ensure that the following prerequisites are in place before you try to integrate OpenShift with the Cisco ACI fabric:
▪ A working Cisco ACI fabric running a release that is supported for the desired OpenShift integration
▪ An attachable entity profile (AEP) set up with the interfaces desired for the OpenShift deployment (When running in nested mode, this is the AEP for the VMM domain on which OpenShift will be nested.)
▪ An L3Out connection, along with a Layer 3 external network to provide external access
▪ VRF
▪ Any required route reflector configuration for the Cisco ACI fabric
In addition, ensure that the subnet used for external services is routed by the next-hop router that is connected to the selected ACI L3Out interface. This subnet is not announced to the outside by default, so you may need to configure static routes or other appropriate routing configuration on the next-hop router.
In addition, the OpenShift cluster must communicate through the fabric-connected interface on all the hosts. The default route on the OpenShift nodes should point to the ACI node subnet bridge domain. This is not mandatory, but it simplifies the routing configuration on the hosts and is the recommended design. If you do not follow this design, ensure that routing on the OpenShift nodes is configured so that all OpenShift cluster traffic is routed through the ACI fabric.
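As a quick sanity check, you can confirm on each node that the default route uses the fabric-connected node VLAN subinterface. This is a minimal sketch; the interface name eth1.4001 and the gateway 12.1.0.1 are illustrative values taken from the node configuration example later in this chapter:

# Verify that the default route points to the node subnet bridge domain gateway
ip route show default
# Expected output (illustrative): default via 12.1.0.1 dev eth1.4001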
Provisioning Cisco ACI to Work with OpenShift
You can use the acc-provision tool to provision the fabric for the OpenShift VMM domain and generate a .yaml file that OpenShift uses to deploy the required Cisco ACI container components. This tool takes a configuration file as input and performs two actions as output:
▪ It configures relevant parameters on the ACI fabric.
▪ It generates a YAML file that OpenShift administrators can use to install the ACI CNI plug-in and containers on the cluster.
The procedure to provision Cisco ACI to work with OpenShift is as follows:
Step 1. Download the provisioning tool from https://software.cisco.com/download/type.html?mdfid=285968390&i=rm and then follow these steps:
a. Click APIC OpenStack and Container Plugins.
b. Choose the package that you want to download.
c. Click Download.
Step 2. Generate a sample configuration file that you can edit by entering the following command:
terminal$ acc-provision --sample
This command generates the aci-containers-config.yaml configuration file, which looks as follows:
#
# Configuration for ACI Fabric
#
aci_config:
  system_id: mykube             # Every opflex cluster must have a distinct ID
  apic_hosts:                   # List of APIC hosts to connect for APIC API
  - 10.1.1.101
  vmm_domain:                   # Kubernetes VMM domain configuration
    encap_type: vxlan           # Encap mode: vxlan or vlan
    mcast_range:                # Every opflex VMM must use a distinct range
      start: 225.20.1.1
      end: 225.20.255.255
  # The following resources must already exist on the APIC,
  # they are used, but not created by the provisioning tool.
  aep: kube-cluster             # The AEP for ports/VPCs used by this cluster
  vrf:                          # This VRF used to create all kubernetes EPs
    name: mykube-vrf
    tenant: common              # This can be system-id or common
  l3out:
    name: mykube_l3out          # Used to provision external IPs
    external_networks:
    - mykube_extepg             # Used for external contracts

#
# Networks used by Kubernetes
#
net_config:
  node_subnet: 10.1.0.1/16      # Subnet to use for nodes
  pod_subnet: 10.2.0.1/16       # Subnet to use for Kubernetes Pods
  extern_dynamic: 10.3.0.1/24   # Subnet to use for dynamic external IPs
  node_svc_subnet: 10.5.0.1/24  # Subnet to use for service graph (this is not
                                # the same as openshift_portal_net; use
                                # different subnets)
  kubeapi_vlan: 4001            # The VLAN used by the physdom for nodes
  service_vlan: 4003            # The VLAN used by LoadBalancer services
  infra_vlan: 4093              # The VLAN used by ACI infra

#
# Configuration for container registry
# Update if a custom container registry has been setup
#
registry:
  image_prefix: noiro           # e.g: registry.example.com/noiro
  # image_pull_secret: secret_name  # (if needed)
Step 3. Edit the sample configuration file with the relevant values for each of the subnets, VLANs, and so on, as appropriate to your planning, and then save the file.
Step 4. Provision the Cisco ACI fabric by using the following command:
acc-provision -f openshift-<version> -c aci-containers-config.yaml -o aci-containers.yaml \
  -a -u [apic username] -p [apic password]
This command generates the file aci-containers.yaml, which you use after installing OpenShift. It also creates the files user-[system id].key and user-[system id].crt, which contain the certificate that is used to access the Cisco APIC. Save these files in case you change the configuration later and want to avoid disrupting a running cluster because of a key change.
Step 5. (Optional) Configure advanced parameters if your environment differs from the ACI default values or from the base provisioning assumptions. For example, if the multicast address your VMM domain uses for the fabric is different from 225.1.2.3, you can configure it by adding the following:
aci_config:
  vmm_domain:
    mcast_fabric: 225.1.2.3
If you are using VLAN encapsulation, you can specify the VLAN pool for it, as follows:
aci_config:
  vmm_domain:
    encap_type: vlan
    vlan_range:
      start: 10
      end: 25
If you want to use an existing user, key, and certificate, add the following:
aci_config:
  sync_login:
    username: <name>
    certfile: <pem-file>
    keyfile: <pem-file>
If you are provisioning in a system nested inside virtual machines, add the name of an existing, preconfigured VMM domain in Cisco ACI to the aci_config section of the configuration file, under vmm_domain:
nested_inside:
  type: vmware
  name: myvmware
Preparing the OpenShift Nodes
After you provision Cisco ACI, you prepare networking for the OpenShift nodes by following this procedure:
Step 1. Configure your uplink interface, with or without NIC bonding, depending on how your AEP is configured. Set the MTU on this interface to 1600.
Step 2. Create a subinterface on your uplink interface on your infrastructure VLAN. Configure this subinterface to obtain an IP address by using DHCP. Set the MTU on this interface to 1600.
Step 3. Configure a static route for the multicast subnet 224.0.0.0/4 through the uplink interface that is used for VXLAN traffic.
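For example, on a node whose infrastructure VLAN subinterface is eth1.4093 (an illustrative name that matches the configuration example later in this section), the route can be added at runtime as follows; the persistent equivalent appears in the route-4093 file shown later:

# Send multicast (VXLAN) traffic out of the infra VLAN subinterface
ip route add 224.0.0.0/4 dev eth1.4093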
Step 4. Create a subinterface on the uplink interface for your node VLAN (kubeapi_vlan in the configuration file). Configure an IP address on this interface in your node subnet. Then set this interface and the corresponding node subnet router as the default route for the node.
Step 5. Create the /etc/dhcp/dhclient-eth0.4093.conf file with the following content, inserting the MAC address of the Ethernet interface for each server on the first line of the file:
send dhcp-client-identifier 01:<mac-address of infra VLAN interface>;
request subnet-mask, domain-name, domain-name-servers, host-name;
send host-name <server-host-name>;

option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
option ms-classless-static-routes code 249 = array of unsigned integer 8;
option wpad code 252 = string;

also request rfc3442-classless-static-routes;
also request ms-classless-static-routes;
also request static-routes;
also request wpad;
also request ntp-servers;
The network interface on the infrastructure VLAN requests a DHCP address from the Cisco APIC infrastructure network for OpFlex communication. The server must have a dhclient configuration for this interface to receive all the correct DHCP options with the lease.
Step 6. If you have a separate management interface for the node being configured, configure any static routes required to access your management network on the management interface.
Step 7. Ensure that OVS is not running on the node.
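A quick way to confirm this on a systemd-based node is shown below. This is a hedged sketch; the service name openvswitch may differ depending on your distribution and how OVS was packaged:

# Confirm that Open vSwitch is not running before the ACI containers manage it
systemctl status openvswitch     # should report inactive or not found
systemctl stop openvswitch       # stop it if it is running
systemctl disable openvswitch    # prevent it from starting at boot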
Here is an example of the interface configuration (in /etc/sysconfig/network-scripts):
# Management network interface (not connected to ACI)
# /etc/sysconfig/network-scripts/ifcfg-eth0
NAME=eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
IPADDR=192.168.66.17
NETMASK=255.255.255.0
PEERDNS=no
DNS1=192.168.66.1

# /etc/sysconfig/network-scripts/route-eth0
ADDRESS0=10.0.0.0
NETMASK0=255.0.0.0
GATEWAY0=192.168.66.1

# Interface connected to ACI
# /etc/sysconfig/network-scripts/ifcfg-eth1
NAME=eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
MTU=1600

# ACI Infra VLAN
# /etc/sysconfig/network-scripts/ifcfg-4093
VLAN=yes
TYPE=Vlan
PHYSDEV=eth1
VLAN_ID=4093
REORDER_HDR=yes
BOOTPROTO=dhcp
DEFROUTE=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=4093
DEVICE=eth1.4093
ONBOOT=yes
MTU=1600

# /etc/sysconfig/network-scripts/route-4093
ADDRESS0=224.0.0.0
NETMASK0=240.0.0.0
METRIC0=1000

# Node VLAN
# /etc/sysconfig/network-scripts/ifcfg-node-vlan-4001
VLAN=yes
TYPE=Vlan
PHYSDEV=eth1
VLAN_ID=4001
REORDER_HDR=yes
BOOTPROTO=none
IPADDR=12.1.0.101
PREFIX=24
GATEWAY=12.1.0.1
DNS1=192.168.66.1
DEFROUTE=yes
IPV6INIT=no
NAME=node-vlan-4001
DEVICE=eth1.4001
ONBOOT=yes
MTU=1600
Installing OpenShift and Cisco ACI Containers
After you provision Cisco ACI and prepare the OpenShift nodes, you can install OpenShift and the ACI containers. You can use any installation method appropriate to your environment, but we recommend the procedure described in this section for installing OpenShift and the Cisco ACI containers.
When installing OpenShift, ensure that the API server is bound to the IP addresses on the node subnet and not to management or other IP addresses. Issues with node routing table configuration, API server advertisement addresses, and proxies are the most common problems during installation. If you have problems, therefore, check these issues first.
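For example, on an OpenShift 3.9 master you can confirm where the API server is listening and which bind address is configured. The port, path, and field names below are the defaults for that release, but verify them against your installation; this is a sketch, not an exhaustive check:

# Check the address and port the API server is actually listening on
ss -tlnp | grep 8443
# Inspect the configured bind address (servingInfo in master-config.yaml)
grep -A3 servingInfo /etc/origin/master/master-config.yaml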
The procedure for installing OpenShift and Cisco ACI containers is as follows:
Step 1. Install OpenShift by using the following command:
git clone https://github.com/noironetworks/openshift-ansible.git
cd openshift-ansible
git checkout release-3.9
Follow the installation procedure provided at https://docs.openshift.com/container-platform/3.9/install_config/install/advanced_install.html. Also consider the configuration overrides listed at https://github.com/noironetworks/openshift-ansible/tree/release-3.9/roles/aci.
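The exact overrides are documented in the roles/aci directory linked above. As an illustration only (verify the variable names and values against that documentation and your release), an OpenShift 3.9 Ansible inventory that hands pod networking to the ACI CNI plug-in typically disables the built-in SDN along these lines:

[OSEv3:vars]
# Use the CNI plug-in instead of the default OpenShift SDN (illustrative values)
openshift_use_openshift_sdn=false
os_sdn_network_plugin_name=cni
# Keep the cluster network aligned with the pod subnet planned for acc-provision
osm_cluster_network_cidr=10.2.0.0/16
# openshift_portal_net must not overlap node_svc_subnet, as noted in the sample file
openshift_portal_net=172.30.0.0/16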
Step 2. Install the CNI plug-in by using the following command:
oc apply -f aci-containers.yaml
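After applying the manifest, you can verify that the ACI container components start. The namespace and pod names below reflect how the components are commonly deployed in this release, but they may vary by plug-in version, so treat this as a sketch:

# Watch the ACI CNI components come up
oc get pods -n kube-system -o wide | grep aci-containers
# Expect an aci-containers-controller pod plus aci-containers-host and
# aci-containers-openvswitch pods on every node (names may vary by version)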
Updating the OpenShift Router to Use the ACI Fabric
To update the OpenShift router to use the ACI fabric, follow these steps:
Step 1. Remove the old router by entering commands such as the following:
oc delete svc router
oc delete dc router
Step 2. Create the container networking router by entering a command such as the following:
oc adm router --service-account=router --host-network=false
Step 3. Expose the router service externally by entering a command such as the following:
oc patch svc router -p '{"spec":{"type": "LoadBalancer"}}'
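You can then confirm that the router service has been assigned an external IP address from the external service pool configured earlier. The namespace below is illustrative:

# The EXTERNAL-IP should come from the external service subnet
oc get svc router -n default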
Verifying the OpenShift Integration
After you have performed the steps described in the preceding sections, you can verify the integration in the Cisco APIC GUI. The integration creates a tenant, three EPGs, and a VMM domain. The verification procedure is as follows:
Step 1. Log in to the Cisco APIC.
Step 2. Go to Tenants > tenant_name, where tenant_name is the name you specified in the configuration file that you edited and used in installing OpenShift and the ACI containers.
Step 3. In the tenant navigation pane, expand the following: tenant_name > Application Profiles > application_profile_name > Application EPGs. You should see three folders inside the Application EPGs folder:
▪ kube-default: The default EPG for containers that are otherwise not mapped to any specific EPG.
▪ kube-nodes: The EPG for the OpenShift nodes.
▪ kube-system: The EPG for the kube-system OpenShift namespace. This typically contains the kube-dns pods, which provide DNS services for an OpenShift cluster.
Step 4. In the tenant navigation pane, expand the Networking and Bridge Domains folders, and you should see two bridge domains:
▪ node-bd: The bridge domain used by the node EPG
▪ pod-bd: The bridge domain used by all pods
Step 5. If you deploy OpenShift with a load balancer, go to Tenants > common, expand L4-L7 Services, and perform the following steps:
a. Open the L4-L7 Service Graph Templates folder; you should see a template for OpenShift.
b. Open the L4-L7 Devices folder; you should see a device for OpenShift.
c. Open the Deployed Graph Instances folder; you should see an instance for OpenShift.
Step 6. Go to VM Networking > Inventory, and in the Inventory navigation pane, expand the OpenShift folder. You should see a VMM domain, with the name you provided in the configuration file, and in that domain you should see folders called Nodes and Namespaces.