SD-WAN and the Cloud
Over the past several years, enterprises have moved heavily into the cloud. This is true even of many of the largest enterprises that were traditionally cloud averse. The strong push toward a hybrid work environment, as well as the ability to integrate with Secure Access Service Edge (SASE) architectures, has helped facilitate the migration to the cloud. Additionally, SD-WAN not only participates in SASE but also offers various solutions via Cloud OnRamp to assist in deploying into the cloud.
Cisco SD-WAN offers three virtual platforms for extending the SD-WAN environment into the cloud: the CSR 1000V, the vEdge Cloud, and the Catalyst 8000V. The first two are approaching end of life, so the virtual platform of choice moving forward should be the Catalyst 8000V. Both Amazon Web Services (AWS) and Azure offer the Cat8kv with multiple compute options across various zones and regions. As with any virtual cloud deployment, the compute requirements should be weighed carefully against throughput requirements as well as overall cost. For instance, there are scenarios in which doubling the compute for a virtual SD-WAN Edge in AWS doubles the cost of the VM, yet the throughput of the SD-WAN Edge itself is not doubled. In that scenario, it is more cost-effective to double the number of virtual SD-WAN Edges deployed in AWS instead. Doing so doubles not only the cost and the compute resources but also the total throughput of the virtual environment. Horizontal scaling in the cloud, therefore, is a useful practice not just for applications but also for virtual network functions.
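The scaling trade-off can be made concrete with a quick cost-per-throughput comparison. The following sketch uses hypothetical hourly prices and throughput figures, not actual AWS pricing or Cat8kv performance data; the point is the arithmetic, not the numbers.

```python
# Illustrative comparison of vertical vs. horizontal scaling for virtual
# SD-WAN Edges. All prices and throughput values are hypothetical examples.

def cost_per_gbps(hourly_cost_usd: float, throughput_gbps: float) -> float:
    """Cost efficiency of a deployment option, in dollars per Gbps per hour."""
    return hourly_cost_usd / throughput_gbps

# Baseline: one Edge at $0.50/hr pushing 1.0 Gbps.
# Option A (vertical): double the compute -- cost doubles, but assume
# throughput only rises to 1.5 Gbps.
vertical = cost_per_gbps(hourly_cost_usd=2 * 0.50, throughput_gbps=1.5)

# Option B (horizontal): two Edges at the original size -- cost and
# throughput both double.
horizontal = cost_per_gbps(hourly_cost_usd=2 * 0.50, throughput_gbps=2 * 1.0)

print(f"vertical:   ${vertical:.2f} per Gbps-hour")    # $0.67 per Gbps-hour
print(f"horizontal: ${horizontal:.2f} per Gbps-hour")  # $0.50 per Gbps-hour
```

Under these assumed numbers, the horizontal option delivers the same spend at a lower cost per Gbps, which is the reasoning behind preferring additional Edge instances over larger ones.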
Planning a cloud deployment is not dissimilar to planning any other network deployment. Is the environment a greenfield or a brownfield deployment? In SD-WAN, that question is even more important than usual because current Catalyst SD-WAN Manager versions support only greenfield integration for certain Cloud OnRamp features. If, for instance, the enterprise already has VPCs in AWS into which it wants to deploy SD-WAN virtual routers, the Catalyst SD-WAN Manager Cloud OnRamp workflows will not work. Whether the environment is brownfield or greenfield, however, the overall design remains the same; the differences lie in how the virtual routers are deployed and maintained.
Either way, the virtual cloud SD-WAN Edge is configured from Catalyst SD-WAN Manager via templates, just like any other SD-WAN router. The cloud environment itself can be considered another site in the SD-WAN environment. As with all routers, a virtual router supports a finite number of tunnels and a finite amount of throughput, so the control policy should be defined to ensure those thresholds are not exceeded.
SIG
One of the fundamental pieces of SASE is the Secure Internet Gateway (SIG). As applications such as Microsoft Office 365 have moved to the cloud, the traditional paradigm of direct Internet access from the data center or another centralized location has created bottlenecks in network throughput, because the Internet circuits at the centralized location were not sized for all of the application traffic. As a result, enterprises look to offload Internet-bound application traffic at the remote site. Doing so, however, raises new security concerns, especially because the data center environment is normally built with security inspection and defense in depth in mind.
How, then, do we secure the remote-site Internet edge, ensuring that application traffic is inspected without additional hardware? The first part of the answer is SIG. With SIG, the SD-WAN Edge uses API calls to the cloud service, commonly Cisco Umbrella or a third-party vendor solution, to create a direct point-to-point encrypted tunnel to the service provider. With the addition of a SIG service route to steer Internet-destined traffic, or specific application traffic, across the SIG tunnel, the remote-site traffic matched by the policy is sent encrypted to the provider. Depending on the policy and service offering, the provider then performs the required inspection on the application traffic. The provider applies NAT to the application traffic so that return traffic for the application comes back to the cloud service before being sent to the remote site over the encrypted tunnel.
As with almost all networking technologies, SIG supports redundancy. We may configure active/standby tunnel pairs, where one tunnel terminates in one zone or region and the other tunnel in the pair terminates in another zone or region of the provider. The SD-WAN solution also probes across the tunnel to monitor its state, so application traffic can be steered through the data center if the SIG pathway is not viable. Up to four active/standby tunnel pairs may be configured on a single SD-WAN Edge to maximize SIG throughput, because the throughput of a single tunnel is capped, depending on the software version.
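On an IOS-XE SD-WAN Edge, a SIG deployment of this kind generally reduces to a handful of configuration elements: the tunnel interface, the SIG tunnel options, the active/standby pairing, and the SIG service route. The fragment below is a simplified sketch only; the interface names, tunnel numbers, and VRF are hypothetical, and the exact syntax varies by software version (on recent releases these tunnels are typically generated by the Catalyst SD-WAN Manager SIG templates rather than entered by hand).

```
! Automatic SIG tunnel toward Cisco Umbrella (sketch; names are examples)
interface Tunnel100001
 ip unnumbered GigabitEthernet1
 tunnel source GigabitEthernet1
 tunnel destination dynamic
 tunnel mode sdwan
!
sdwan
 interface Tunnel100001
  tunnel-options tunnel-set secure-internet-gateway-umbrella tunnel-dc-preference primary-dc source-interface GigabitEthernet1
 exit
 ! Active/standby pairing: Tunnel100001 active, Tunnel100002 backup
 service sig vrf global
  ha-pairs
   interface-pair Tunnel100001 active-interface-weight 1 Tunnel100002 backup-interface-weight 1
!
! SIG service route steering Internet-bound traffic from service VPN 10
ip sdwan route vrf 10 0.0.0.0/0 service sig
```

If the tunnel tracker marks both members of a pair as down, traffic matching the SIG service route falls back according to the remaining routing configuration, for example across the fabric to the data center.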
In Figure 3-8, traffic destined to the enterprise uses the SD-WAN fabric across the various service providers following the various SD-WAN policies; however, traffic that is destined for the Internet follows the encrypted SIG tunnel to the SIG service provider.
Figure 3-8 SD-WAN SIG Traffic
Cloud OnRamp
The Cisco SD-WAN solution offers several enhancements as part of the Cloud OnRamp (CoR) features that facilitate SD-WAN cloud connectivity. Cloud OnRamp for SaaS allows the SD-WAN solution to integrate with and properly steer application traffic for select cloud-hosted applications, such as Office 365, Dropbox, and others. With CoR SaaS, the solution probes the pathway through the direct Internet access (DIA) circuit at the site, as well as the pathway through the data center via the normal SD-WAN tunnels. Based on probe performance and the configured policy, the SaaS application traffic is steered appropriately between the options. Cloud OnRamp for IaaS handles the provisioning of virtual SD-WAN Edge devices within the cloud provider (AWS or Azure). As part of provisioning the environment, the appropriate VPCs or VNets are configured based on the workflow. Additionally, Software-Defined Cloud Interconnect (SDCI), which evolved from the Cloud OnRamp for Multicloud workflow, allows for the creation of middle-mile topologies. In these workflows, the SD-WAN Edges at remote sites create SD-WAN tunnels to one of the two supported providers, Equinix or Megaport. The interconnect provider then extends SD-WAN tunnels directly to the cloud provider over its own infrastructure, reducing the reliance on Internet traversal. All of these Cloud OnRamp options may be used separately or together. The Cloud OnRamp for SaaS scenario is illustrated in Figure 3-9.
Figure 3-9 SD-WAN Cloud OnRamp for SaaS
In this figure, user application traffic destined for one of the SaaS providers uses the DIA circuit at the SD-WAN site; all other traffic follows the SD-WAN fabric pathways. Configuring CoR SaaS within Catalyst SD-WAN Manager is fairly straightforward. From the Administration Settings page within Catalyst SD-WAN Manager, enable Cloud OnRamp for SaaS. Additionally, Cloud Services and Catalyst SD-WAN Analytics must be enabled from the same page; this requires entering a one-time password and a cloud gateway URL, both provided at the time of system setup. After the feature is enabled, you can use the Cloud OnRamp for SaaS configuration pages to view and manage how the SaaS applications should be monitored. Additionally, support for SaaS can be rolled out across the environment on a per-site basis as required.
Setting up Cloud OnRamp for IaaS or Cloud OnRamp for Multicloud requires associating the cloud service provider account. As of the 20.9 Catalyst SD-WAN Manager UI, the CoR IaaS functionality has been moved into the Cloud OnRamp Multicloud page. Because these are enterprise accounts, it is again recommended to follow best practices and security operations requirements by creating a dedicated service account for this purpose. After the appropriate account has been configured within Catalyst SD-WAN Manager using the Associate Cloud Account workflow, the UI allows the user to associate and tag the VPCs that will then be used within Intent Management. The Intent Management piece of the workflow is where branch-to-cloud connectivity is defined.
The same workflows allow the user to create middle-mile connectivity through either Megaport or Equinix via the Software-Defined Cloud Interconnect controls. Just as the workflows allow cloud SD-WAN Edges to be provisioned in AWS or Azure, they also allow the circuits between middle-mile locations to be allocated as required. Figure 3-10 shows the various cloud and on-premises environments that may be interconnected via SDCI.
Figure 3-10 SD-WAN Software-Defined Cloud Interconnect
As shown in the figure, with the SDCI provider in the middle of the architecture, SD-WAN is capable of creating dynamic tunnels between sites and the nearest colocation facilities. The facilities themselves then provide direct peering to application providers, direct connections to other cloud services, or global connectivity to other regions and colocation facilities.