Intent-Based Networking Designs
Intent-Based Networking is a perspective on the Cisco Digital Network Architecture. This means that specific designs and technologies are still required to enable a campus network to become Intent-enabled. The following sections describe two common Intent-Based designs that can be used to enable IBN on a campus network: Software-Defined Access and a design using classic VLANs (also known as non-fabric).
Before the two Intent-Based designs are described, it is important to be aware that both designs have a certain set of requirements in common:
Policy-centric network: First and foremost is the requirement that an Intent-Based design is based on a policy-centric environment and not a port-centric design. In other words, the network is not configured on a port-by-port basis, but uses a central policy server that pushes the required network port configuration as a policy to a network port.
All policies for endpoints are pushed from this policy server into the network. This is key to enable intent onto a network, as the intent for an endpoint can change over time based on circumstances.
For a specific policy to be set on an access port (or wireless network), it is necessary to know which endpoint is connecting to the network. Network Access Control (using the IEEE 802.1X standard or MAC Authentication Bypass) is required to identify the endpoint requesting access to the network and to provide it with the proper authorization (by sending specific policies to the switch via RADIUS). A RADIUS deployment, such as Cisco Identity Services Engine, is thus required for IBN.
Microsegmentation: To greatly enhance security and tightly integrate it within Cisco DNA (and thus IBN), you should be able to segment a network into smaller units than an IP subnet based on specific policies. This mechanism, already used in the datacenter, is called microsegmentation. Microsegmentation creates the possibility of having a single IP network for all IoT devices while having a policy that only IoT sensors are allowed to communicate with a local storage device where the data for those sensors is stored, and other IoT devices do not have access to that storage device. This microsegmentation must be based on a policy and be able to be programmatically applied to the network. Scalable Group Tags (SGT, formerly known as Security Group Tags) are used within a Software-Defined Access (SDA) network (more on SDA in the next section) to provide this microsegmentation. Appendix A, “Campus Network Technologies,” describes in more detail how SGTs facilitate the required microsegmentation.
Feedback from network: One of the true distinctions between a classic campus network, as described in Chapter 1, “Classic Campus Network Deployments,” and IBN is the feedback of the network’s status back to the controller. In other words, within an IBN the network devices provide feedback to the controller about the network’s state. This feedback, received programmatically or via telemetry, is used to validate whether the network is accomplishing the required intent. Several methods and technologies are available to provide this feedback; the technical details are described in Appendix A, “Campus Network Technologies.”
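The microsegmentation requirement above can be illustrated with a minimal sketch, in which policy is expressed between group tags rather than between IP addresses or subnets. The tag names and policy entries below are hypothetical:

```python
# Sketch of SGT-based microsegmentation (hypothetical tag names): all IoT
# devices can share one IP subnet, yet only sensors reach the storage
# device, because the policy is evaluated on group tags, not addresses.

# Permit matrix: (source SGT, destination SGT) pairs that are allowed
PERMITTED = {
    ("IoT-Sensor", "IoT-Storage"),   # sensors may write their data
    ("Employee", "IoT-Storage"),     # employees may read the data
}

def allowed(src_sgt: str, dst_sgt: str) -> bool:
    """Enforce the microsegmentation policy between two group tags."""
    return (src_sgt, dst_sgt) in PERMITTED

print(allowed("IoT-Sensor", "IoT-Storage"))  # True
print(allowed("IoT-Camera", "IoT-Storage"))  # False: same subnet, still blocked
```

In a real deployment this matrix lives on the policy server and is pushed to the enforcing switch; the sketch only shows the decision logic.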
SDA
Software-Defined Access (SDA) is one of the latest technologies introduced in campus networks. It is the most complete technology (or actually a combination of technologies) that can enable Intent-Based Networking on your network.
The key concept of SDA is that there is a single, fixed underlay network and one or more overlay networks running on top of the underlay. This concept in itself is not new; encapsulating and decapsulating data so that it is abstracted from the different OSI layers is a founding principle of many networks. The same principle is used for VPNs over the Internet, CAPWAP tunnels for wireless communication, and within datacenters.
The principle of an underlay and overlay network can best be described using a common technology in enterprise networks—the Cisco Remote Access VPN solution based on Cisco AnyConnect. This technology allows end users to connect securely to the enterprise network via a less secure network (Internet).
This is realized by creating specific group policies (and pools of IP addresses) on the VPN headend device (a Cisco ASA firewall or Cisco Firepower Threat Defense firewall).
Users use the AnyConnect client to connect to the VPN headend over the Internet. Based on the authentication and authorization, users are allocated a specific internal IP address and the policies determining their access to the enterprise network. The user’s endpoint uses the internal IP address to communicate with the enterprise network. This is accomplished by encapsulating the internal IP addresses into an outer packet destined to the VPN headend.
At the VPN headend, the packet is decapsulated and routed into the enterprise network. A similar path is realized for return traffic. The enterprise network only knows that the IP address of that user needs to be sent to the VPN headend. The VPN headend takes the internal traffic and encapsulates it in an outer packet with the destination of the public IP address of the end user.
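Conceptually, the client and the VPN headend perform the same operation: the inner packet with internal addresses is carried unchanged inside an outer packet with public addresses. A minimal sketch, with packets modeled as dictionaries and illustrative documentation addresses:

```python
# Conceptual sketch of overlay encapsulation: the inner (overlay) packet
# is carried intact inside an outer (underlay) packet; only the outer
# addresses are visible while the packet is in transit.

def encapsulate(inner: dict, outer_src: str, outer_dst: str) -> dict:
    """Wrap an overlay packet in an underlay header."""
    return {"src": outer_src, "dst": outer_dst, "payload": inner}

def decapsulate(outer: dict) -> dict:
    """Strip the underlay header, recovering the original packet."""
    return outer["payload"]

# The inner packet uses internal (overlay) addresses...
inner = {"src": "10.1.1.10", "dst": "172.16.0.5", "data": "GET /"}
# ...while the outer packet travels over the Internet to the VPN headend.
outer = encapsulate(inner, "198.51.100.7", "203.0.113.1")

assert decapsulate(outer) == inner
print(outer["dst"])  # only the headend address is routed on the underlay
```

SDA applies exactly this wrap-and-unwrap pattern between fabric nodes, as the next sections describe.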
In this example, the underlay network is the Internet, and the overlay network is the specific VPN group policy (with its IP pool) that the user is assigned to. SDA takes the same principle but applies it internally to the campus network. SDA calls the underlay network the campus network, and it uses virtual networks on top of the underlay to logically separate endpoints. In other words, there are no VLANs within an SDA fabric. Figure 5-2 provides an overview of an SDA network.
FIGURE 5-2 Overview of an SDA Network
SDA uses its own terminology to describe the roles and functions the switches (or in some cases routers) perform within the SDA fabric:
Virtual Network: A virtual network is used to logically separate devices from each other. It can be compared with the way VLANs are used to logically separate devices on a switched network. A virtual network can be IPv4 or IPv6 based with one or more pools of IP addresses, but it can also be used to create a logical Layer 2 network. Each virtual network has its own routing and forwarding table within the fabric, comparable with VRF-Lite on switches. This principle provides the logical separation of the virtual networks.
Fabric: A fabric is the foundation of the overlay network, which is used to implement the different virtual networks that run within a network. A fabric is a logically defined grouping of a set of switches within the campus network, for example, a single location. The fabric encompasses the protocols and technologies to transport data from the different virtual networks over an underlay network. Because the underlay network is IP based, it is relatively easy to stretch the underlay network across fiber connections on the campus (connecting multiple buildings into a single fabric) or even across a WAN (such as MPLS or an SD-WAN), factoring in specific requirements for SDA. These requirements are explained in Appendix A, “Campus Network Technologies.”
The underlay network: The underlay network is an IPv4 network that connects all nodes within the fabric. An interior routing protocol (within an SDA campus, IS-IS is commonly used, although OSPF is also possible) exchanges route information between the nodes in the fabric. The underlay network is used to transport the data from the different virtual networks to the different nodes.
Edge node: The edge node is used to allow endpoints to connect to the fabric. It essentially provides the same function as the access switch layer in a classic campus network topology. From an SDA perspective, the edge node is responsible for encapsulating and decapsulating the traffic for that endpoint in the appropriate virtual network. It also provides the primary role of forwarding traffic from the endpoint to the rest of the network.
Border node: A fabric is always connected with external networks. The border node is used to connect the different virtual networks to external networks. It is essentially the default gateway of the virtual network to external networks. As each virtual network is logically separated, the border node maintains a connection to the external network for each individual virtual network. All traffic from the external network is encapsulated and decapsulated to a specific virtual network, so that the underlay network can be used to transport that data to the correct edge node.
Control node: The control node is a function that has no counterpart in a classic campus network topology. The control node is responsible for maintaining a database of all endpoints connected to the fabric. The database contains information on which endpoint is connected to which edge node and within which virtual network. It is the glue that connects the different roles: edge nodes and border nodes use the control node to look up the destination of a packet on the underlay network so they can forward the inner packet to the right edge node.
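The control node's database can be sketched as a simple mapping from (virtual network, endpoint IP) to the underlay address of the edge node where that endpoint is connected. The entries below are illustrative:

```python
from typing import Optional

# Minimal sketch of a control node's endpoint database (illustrative
# addresses): each entry records where an endpoint lives in the underlay.
ENDPOINT_DB = {
    ("Green", "10.0.0.4"): "192.168.0.2",  # endpoint behind one edge node
    ("Green", "10.0.0.5"): "192.168.0.6",  # endpoint behind another edge node
}

def lookup(vn: str, ip: str) -> Optional[str]:
    """Ask the control node for the underlay address serving an endpoint."""
    return ENDPOINT_DB.get((vn, ip))

print(lookup("Green", "10.0.0.5"))  # 192.168.0.6
print(lookup("Green", "10.0.0.9"))  # None: endpoint not registered
```

Edge and border nodes consult this database on demand, then encapsulate the original packet toward the returned underlay address.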
How SDA Works
Now that the roles, functions, and concept of an underlay/overlay network are known, how does SDA operate? What does an SDA network look like? The following paragraphs describe the way endpoints within a virtual network communicate with each other. Figure 5-3 provides an example topology of an SDA network.
FIGURE 5-3 Sample SDA Network
In this SDA fabric there are three switches. The CSW1 switch provides the border and control functionality, while SW1 and SW2 are edge nodes in this fabric. Both SW1 and SW2 have an IP link to CSW1, using the 192.168.0.0/30 and 192.168.0.4/30 subnets. There is a virtual network (VN) named Green on top of the underlay network, which uses the IP network 10.0.0.0/24 for clients. PC1 has IP address 10.0.0.4, and PC2 has IP address 10.0.0.5. The default gateway for VN Green is 10.0.0.1.
CSW1 maintains a table of endpoints connected to the fabric and how to reach them. To explain the concept and operations, Table 5-3 describes the required contents for this example.
Table 5-3 Overview of Fabric-Connected Devices in CSW1
Endpoint Name | IP Network | SGT | VN ID | Reachable Via
PC1 | 10.0.0.4 | Employee | Green | 192.168.0.2
PC2 | 10.0.0.5 | Guest | Green | 192.168.0.6
Internet | 0.0.0.0 | Any | Green | 192.168.0.1, 192.168.0.5
In this network, if PC1 wants to communicate with www.myserver.com (IP 209.165.200.225), the following would happen:
1. After DNS resolution, PC1 sends a TCP SYN packet to the default gateway (10.0.0.1) for destination 209.165.200.225.
2. SW1, as edge switch, receives this packet and, because it is the anycast gateway (see Appendix A, “Campus Network Technologies,” for more details), analyzes the packet.
3. SW1 performs a lookup on CSW1 (as control node) for the destination 209.165.200.225.
4. CSW1 returns 192.168.0.1 (the IP address of the border node) as the lookup response.
5. SW1 then encapsulates the complete TCP SYN packet in an SDA underlay packet with source IP address 192.168.0.2 and destination IP address 192.168.0.1 and uses the global routing table to forward this new packet.
6. CSW1 receives the encapsulated underlay packet from SW1 and decapsulates it. Then, as border node, it uses the routing table of VN Green to forward the traffic to the Internet.
7. The server www.myserver.com receives the TCP SYN packet and generates a response with a SYN-ACK packet back to 10.0.0.4.
8. The incoming SYN-ACK packet is received by CSW1 in the VN Green network. The destination of the packet is 10.0.0.4.
9. CSW1 performs a lookup on the control node for VN Green and IP address 10.0.0.4 and gets 192.168.0.2 as the underlay destination.
10. CSW1 encapsulates the SYN-ACK packet for 10.0.0.4 into an underlay packet with destination 192.168.0.2.
11. The underlay packet is routed to SW1.
12. SW1 decapsulates the packet, recognizes that it is for PC1 (IP 10.0.0.4) on VN Green, and forwards the packet, based on a local table, to the proper access port.
13. PC1 receives the SYN-ACK packet and responds with an ACK to further establish the TCP flow. This lookup principle is repeated by SW1 for each packet received from or sent to PC1.
The preceding steps provide a conceptual overview of how communication is established and packets are encapsulated/decapsulated onto the underlay network. The same mechanism is used for communication within VN Green itself. The control node is used as a lookup to ask where a specific IP address is located, and then the original packet is encapsulated in an underlay packet destined for the specific node in the fabric. If the microsegmentation policy did not allow communication from SGT Employee to SGT Guest, an access list on the edge node would prevent that communication.
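The lookup logic in the steps above can be sketched end to end using the entries from Table 5-3. In this minimal sketch, the control node answers lookups from a prefix table, and destinations with no specific entry fall through to the default entry (0.0.0.0), which points at the border node:

```python
import ipaddress

# (VN, prefix) -> underlay address of the fabric node to tunnel to,
# based on the example entries of Table 5-3.
FABRIC_DB = {
    ("Green", "10.0.0.4/32"): "192.168.0.2",  # PC1 via edge node SW1
    ("Green", "10.0.0.5/32"): "192.168.0.6",  # PC2 via edge node SW2
    ("Green", "0.0.0.0/0"):   "192.168.0.1",  # default: border node CSW1
}

def control_node_lookup(vn: str, dst_ip: str) -> str:
    """Return the underlay destination for a packet in virtual network vn,
    preferring the most specific matching prefix."""
    matches = [
        (ipaddress.ip_network(prefix), node)
        for (v, prefix), node in FABRIC_DB.items()
        if v == vn and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(prefix)
    ]
    # Longest-prefix match, as a routing table would do
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# PC1 -> www.myserver.com: no specific entry, so tunnel to the border node
print(control_node_lookup("Green", "209.165.200.225"))  # 192.168.0.1
# Return traffic to PC1: tunnel to edge node SW1
print(control_node_lookup("Green", "10.0.0.4"))         # 192.168.0.2
```

The actual protocols SDA uses for this lookup and encapsulation are covered in Appendix A; the sketch only mirrors the decision each node makes.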
An SDA-based topology is very powerful in enabling IBN. The underlay network is set up only once, when the SDA network is created, and there is increased flexibility in adding or removing edge nodes when required (as each is essentially a router in the underlay). All endpoints are connected to one or more virtual networks, and these virtual networks can easily be added to or removed from the SDA network without ever impacting the underlying network. This process of addition or removal can easily be programmed in small building blocks that automation can use. The Cisco DNA Center solution is used to deploy and manage an SDA-based network.
Classic VLAN
Although SDA was designed and built for Cisco DNA and is meant to solve several problems of classic campus networks, not all enterprises can readily implement SDA in their network. One of the main reasons is that SDA places requirements on the network hardware and topology. It requires not only Cisco DNA Center but also a full Identity Services Engine deployment, as well as specific hardware such as the Cisco Catalyst 9000 series access switches. Although the Catalyst 3650/3850 are capable of running an SDA network, that platform has some limitations, such as requiring an IP Services license and supporting a limited number of virtual networks.
However, if you look at SDA through a conceptual looking glass, it is possible to replicate, with some limitations, the same concepts using classic VLANs and VRF-Lite. This allows an organization to transform to IBN while also preparing the infrastructure for SDA and taking advantage of the concepts within SDA. Table 5-4 provides an overview of the concepts used in SDA compared to the technologies that can be used for IBN within a classic VLAN-based campus network.
Table 5-4 Overview of Design Choices for SDA and Campus Alternative
SDA Network | Classic Campus Network
Endpoints are, based on identity, assigned to a virtual network and an SGT. | Endpoints are, based on identity, assigned to a VLAN and an SGT.
Each virtual network has its own routing table and IP space. | VRF-Lite can be used to logically separate IP networks and give each VRF instance its own routing table.
The provisioning of virtual networks is easy because the underlay is created only once and virtual networks can be added and removed without interrupting the underlay. | With automation tools, it is easy to programmatically add and remove VLANs on uplinks as well as SVIs on the distribution switch.
Routed links in the underlay are used to remove Spanning Tree and Layer 2 complexities. | In a collapsed-core campus network, there is no need for Spanning Tree, or a single Spanning Tree instance can be run to prevent loops.
An underlay network is used to stretch a fabric over multiple physical locations. | This is not possible without an encapsulation protocol in classic networks.
A control node is used for lookup of endpoints. | This is not required, as existing protocols such as ARP can be used.
With some limitations (specific conditions) it is possible to enable an IBN design using classic VLAN technologies. Limitations for such a design are a collapsed-core design, the ability to assign SGTs and VLANs using a policy server, and preferably no Spanning Tree or a single Spanning Tree instance. With these limitations in mind, Figure 5-4 provides an Intent-Based design based on a classic collapsed-core campus network topology and VRF-Lite.
FIGURE 5-4 Intent Design Based on Classic Campus Collapsed-Core Topology
In this design PC1 and PC2 still have the same IP address but are now assigned into VLAN 201 instead of virtual network green. VLAN 201 is configured on the DSW1 with IP network 10.0.0.0/24 and a default gateway of 10.0.0.1 for the endpoints. The SGTs have remained the same: Employee for PC1 and Guest for PC2.
Just as in the previous example, if PC1 were to communicate with www.myserver.com on 209.165.200.225, it would send its TCP SYN packet to the default gateway on DSW1, which in turn would forward it to the Internet, while return traffic would be sent via Ethernet to PC1. ARP is used to map IP addresses to MAC addresses.
The principle of SGT ACLs to restrict traffic within a VRF instance is the same. In both SDA and classic designs, the SGT ACL is pushed from the policy server to the access switch where the endpoint is connected.
Although the end goal of logically separating traffic between endpoints using SGTs for microsegmentation is the same, a classic VLAN topology has some limitations and restrictions compared to an SDA topology.
Spanning Tree: It is preferable not to run Spanning Tree on the network, as each change in a VLAN can trigger a Spanning Tree recalculation, resulting in blocked traffic for a period of time. If it is required to run Spanning Tree, run a single instance in MST mode, so that adding a VLAN does not trigger a new STP topology calculation, as it would with per-VLAN Spanning Tree.
Management VLAN and VRF: A dedicated management VLAN and management VRF instance are required to be able to create or remove VLANs. This VLAN may never be removed from trunks and networks, as it is essentially the underlay network. The automation tool that generates and provisions the configuration communicates with all devices over this management VLAN.
Configuration via automation tool only: The configuration of the campus network can only be executed via the automation tool. It is generally true for any environment that there should be a single source of truth for the provisioning of a network. In an IBN based on classic VLANs, this is even more important, as the automation tool generates the VLAN identifiers automatically based on the virtual networks to be deployed. Although it is common in enterprises to statically define and assign VLANs, in this design that practice needs to be abandoned for automation to work.
Standardized building blocks only: It is important to allow only standardized building blocks, defined via the automation tool, on the campus network, where policy is assigned policy-centrically using IEEE 802.1X and RADIUS. The building blocks can then be standardized in such a way that small pieces of configuration code can be generated on the fly to create or remove the required compartments on the network. This is realized by creating small, repetitive blocks of command-line configuration to be executed, for example, for the creation of a new compartment on the access switch:
vlan $vlanid
 name $vrfname
interface $PortChannelUplink
 switchport trunk allowed vlan add $vlanid
If the campus network configuration cannot be standardized, it will not be possible to enable an Intent-Based Network using VLANs.
Build your own automation: With SDA, a lot of automation and configuration is executed by Cisco DNA Center in the background. With this design, an automation tool needs to be installed and configured by the network team to provide similar functionality. This can require some custom coding and testing before running the solution in production. This could be Cisco DNA Center with templates or another tool that provides automation functionality.
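The building-block template shown earlier can be rendered on the fly by such an automation tool. A minimal sketch using Python's string.Template, where the variable names match the snippet and the substituted values are illustrative:

```python
from string import Template

# Standardized building block as a template; $-variables are filled in
# per compartment by the automation tool.
BLOCK = Template("""\
vlan $vlanid
 name $vrfname
interface $PortChannelUplink
 switchport trunk allowed vlan add $vlanid
""")

# Render the block for a hypothetical compartment "Green" on VLAN 201
config = BLOCK.substitute(
    vlanid=201,
    vrfname="Green",
    PortChannelUplink="Port-channel1",
)
print(config)
```

A real tool would push the rendered configuration to the switch (for example, over SSH or NETCONF) and track it as the single source of truth; the sketch covers only the generation step.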
In summary, both mechanisms (SDA and classic VLAN) work quite similarly, and when you take certain precautions and keep the limitations in mind, it is feasible to start with IBN based on a classic collapsed-core topology. Part 2, “Transforming to an Intent-Based Network,” provides more details on limitations and drawbacks and on which technology fits best when transforming a campus network to IBN.