Software-Defined Networking (SDN) and SDN Security
In the last decade there have been several shifts in networking technologies. Many of these changes are driven by the demands of modern applications, increasingly diverse environments, and the cloud. This complexity introduces risks, including network configuration errors that can cause significant downtime, as well as new network security challenges.
Consequently, networking functions such as routing, optimization, and security have also changed. The next generation of hardware and software components in enterprise networks must support both the rapid introduction and the rapid evolution of new technologies and solutions. Network infrastructure solutions must keep pace with the business environment and support modern capabilities that help drive simplification within the network.
These factors fueled the creation of software-defined networking (SDN). SDN was originally created to decouple the control plane from the forwarding (data) plane in networking equipment so that software can centrally manage and “program” hardware and virtual networking appliances to perform forwarding.
Traditional Networking Planes
In traditional networking, there are three different “planes” or elements that allow network devices to operate: the management, control, and data planes. Figure 3-1 shows a high-level explanation of each of the planes in traditional networking.
Figure 3-1 The Management, Control, and Data Planes
Although the control plane and the data plane have always been logically distinct, in traditional networking both reside on each device; there was no central brain (or controller) that controlled configuration and forwarding. Let’s take a look at the example shown in Figure 3-2. Routers, switches, and firewalls were managed through the command-line interface (CLI), graphical user interfaces (GUIs), and custom Tcl scripts. For instance, the firewalls were managed by the Adaptive Security Device Manager (ASDM), while the routers were managed by the CLI.
Figure 3-2 Traditional Network Management Solutions
Each device in Figure 3-2 has its “own brain” and does not really exchange any intelligent information with the rest of the devices.
So What’s Different with SDN?
SDN introduced the notion of a centralized controller. The SDN controller has a global view of the network, and it uses a common management protocol to configure the network infrastructure devices. The SDN controller can also calculate reachability information from many systems in the network and push a set of flows to the switches. The flows are used by the hardware to perform the forwarding. Here you can see a clear transition from a distributed “semi-intelligent brain” approach to a “central and intelligent brain” approach.
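To make this more concrete, the following is a minimal sketch of an application pushing a flow entry to a switch through an SDN controller’s REST interface. It assumes a Ryu controller running the ryu.app.ofctl_rest application on localhost port 8080; the datapath ID, match fields, and output port are hypothetical lab values, not part of any specific production design.

# Push a flow entry to an OpenFlow switch via a Ryu controller's REST API
# (assumes ryu-manager is running with ryu.app.ofctl_rest loaded).
import requests

flow = {
    "dpid": 1,                                     # datapath ID of the target switch (hypothetical)
    "priority": 100,
    "match": {"eth_type": 0x0800, "ipv4_dst": "10.0.0.10"},
    "actions": [{"type": "OUTPUT", "port": 2}],    # forward matching traffic out port 2
}

resp = requests.post("http://127.0.0.1:8080/stats/flowentry/add", json=flow, timeout=5)
resp.raise_for_status()
print("Flow entry pushed to the switch")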
SDN changed a few things in the management, control, and data planes. However, the big change was in the control and data planes in software-based switches and routers (including virtual switches inside of hypervisors). For instance, the Open vSwitch project started some of these changes across the industry.
SDN also provides numerous benefits in the management plane, for both physical and virtual switches. SDN is now widely adopted in data centers; a great example of this is Cisco Application Centric Infrastructure (ACI).
Introduction to the Cisco ACI Solution
Cisco ACI provides the ability to automate setting networking policies and configurations in a very flexible and scalable way. Figure 3-3 illustrates the concept of a centralized policy and configuration management in the Cisco ACI solution.
The Cisco ACI scenario shown in Figure 3-3 uses a leaf-and-spine topology. Each leaf switch is connected to every spine switch in the network with no interconnection between leaf switches or spine switches.
Figure 3-3 Cisco APIC Configuration and Policy Management
The leaf switches have ports connected to traditional Ethernet devices (for example, servers, firewalls, and routers). Leaf switches are typically deployed at the edge of the fabric and provide the Virtual Extensible LAN (VXLAN) tunnel endpoint (VTEP) function. VXLAN, defined in RFC 7348, is a network virtualization technology that provides VLAN-like segmentation by encapsulating Layer 2 Ethernet frames within UDP packets (over UDP port 4789, by default).
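The following is a minimal sketch of what this encapsulation looks like, built with the Scapy packet-crafting library (assumed to be installed). The VTEP addresses, inner MAC/IP addresses, and the VNID are hypothetical; the structure is simply outer Ethernet/IP/UDP (port 4789), the VXLAN header, and the original Layer 2 frame.

# Build a VXLAN-encapsulated frame with Scapy to illustrate the header stack.
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP
from scapy.layers.vxlan import VXLAN

inner = Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") / IP(dst="192.168.10.20")

outer = (
    Ether()
    / IP(src="10.1.1.1", dst="10.1.1.2")   # VTEP-to-VTEP outer header
    / UDP(sport=49152, dport=4789)         # default VXLAN UDP destination port
    / VXLAN(vni=5001)                      # the VNID identifying the logical segment
    / inner                                # original Layer 2 frame carried as payload
)

outer.show()                               # print the layered structure of the frame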
In Cisco ACI, the IP address that represents the leaf VTEP is called the physical tunnel endpoint (PTEP). The leaf switches are responsible for routing or bridging tenant packets and for applying network policies.
Spine nodes interconnect leaf devices, and they can also be used to establish connections from a Cisco ACI pod to an IP network or to interconnect multiple Cisco ACI pods. Spine switches store all the endpoint-to-VTEP mapping entries. All leaf nodes connect to all spine nodes within a Cisco ACI pod. However, no direct connectivity is allowed between spine nodes or between leaf nodes.
The Cisco Application Policy Infrastructure Controller (APIC) can be considered a policy and topology manager. The APIC manages the distributed policy repository responsible for the definition and deployment of the policy-based configuration of the Cisco ACI infrastructure. The APIC also manages the topology and inventory information of all devices within the Cisco ACI pod.
The following are additional functions of the APIC:
The APIC “observer” function monitors the health, state, and performance information of the Cisco ACI pod.
The “boot director” function is in charge of the booting process and firmware updates of the spine switches, leaf switches, and the APIC components.
The “appliance director” APIC function manages the formation and control of the APIC appliance cluster.
The “virtual machine manager (VMM)” is an agent between the policy repository and a hypervisor. The VMM interacts with hypervisor management systems (for example, VMware vCenter).
The “event manager” manages and stores all the events and faults initiated from the APIC and the Cisco ACI fabric nodes.
The “appliance element” maintains the inventory and state of the local APIC appliance.
VXLAN and Network Overlays
Modern networks and data centers need to provide load balancing, better scalability, elasticity, and faster convergence. Many organizations use the overlay network model. Deploying an overlay network allows you to tunnel Layer 2 Ethernet frames, using different encapsulations, over a Layer 3 network. The overlay network uses “tunnels” to carry the traffic across the Layer 3 fabric. The solution also needs to separate network flows between different “tenants” (administrative domains), switch packets within the same Layer 2 broadcast domain, route traffic between broadcast domains, and provide IP separation, which is traditionally done via virtual routing and forwarding (VRF) instances.
There have been multiple IP tunneling mechanisms introduced throughout the years. The following are a few examples of tunneling mechanisms:
Virtual Extensible LAN (VXLAN)
Network Virtualization using Generic Routing Encapsulation (NVGRE)
Stateless Transport Tunneling (STT)
Generic Network Virtualization Encapsulation (GENEVE)
All of the aforementioned tunneling protocols carry an Ethernet frame inside an IP packet. The main difference between them is the transport encapsulation used. For instance, VXLAN uses UDP, and STT uses a TCP-like header.
The use of UDP in VXLAN enables routers to apply hashing algorithms to the outer IP/UDP header (the VXLAN UDP source port carries per-flow entropy) to load balance network traffic. Traffic riding the overlay network tunnels is therefore load balanced over multiple links using equal-cost multipath (ECMP) routing. This is an improvement over traditional network designs, in which access switches connect to distribution switches and redundant links are blocked by spanning tree.
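The following toy sketch illustrates the idea (it is not any vendor’s actual hashing algorithm): because two overlay flows between the same pair of VTEPs get different outer UDP source ports, an ECMP hash over the outer header can place them on different equal-cost links.

# Illustrative only: hash the outer 5-tuple fields to pick one of several equal-cost links.
import hashlib

def ecmp_link(src_ip, dst_ip, src_port, dst_port, links=4):
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % links

# Same VTEP pair, different outer UDP source ports -> potentially different uplinks.
print(ecmp_link("10.1.1.1", "10.1.1.2", 49152, 4789))
print(ecmp_link("10.1.1.1", "10.1.1.2", 53120, 4789))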
VXLAN uses an identifier, or tag, called the VXLAN Network Identifier (VNID) to represent a logical segment. The logical segment identified by the VNID is a Layer 2 broadcast domain that is tunneled between VTEPs.
Figure 3-4 shows an example of an overlay network that provides Layer 2 capabilities.
Figure 3-4 Overlay Network Providing Layer 2 Capabilities
Figure 3-5 shows an example of an overlay network that provides Layer 3 routing capabilities.
Figure 3-5 Overlay Network Providing Layer 3 Routing Capabilities
Figure 3-6 illustrates the VXLAN frame format for your reference.
Figure 3-6 VXLAN Frame Format
Micro-Segmentation
For decades, servers were assigned subnets and VLANs. Sounds pretty simple, right? Well, this introduced a lot of complexity because application segmentation and policies were physically restricted to the boundaries of the VLAN within the same data center (or even “the campus”). In virtual environments, the problem became harder. Nowadays applications can move between servers to balance load for performance or to maintain high availability upon failures. They can also move between different data centers and even different cloud environments.
Traditional segmentation based on VLANs forces you to maintain the policies that define which application needs to talk to which application (and who can access those applications) in centralized firewalls. This is ineffective because most traffic in data centers is now “East-West” traffic, and a lot of that traffic never even hits the traditional firewall. In virtual environments, a lot of the traffic does not even leave the physical server.
Let’s define what people refer to as “East-West” traffic and “North-South” traffic. “East-West” traffic is network traffic between servers (virtual servers or physical servers, containers, and so on).
“North-South” traffic is network traffic flowing in and outside the data center. Figure 3-7 illustrates the concepts of “East-West” and “North-South” traffic.
Figure 3-7 “East-West” and “North-South” Traffic
Many vendors have created solutions where policies applied to applications are independent from the location or the network tied to the application.
For example, let’s suppose that you have different applications running in separate VMs and those applications also need to talk to a database (as shown in Figure 3-8).
Figure 3-8 Applications in VMs
You need to apply policies to restrict if application A needs or does not need to talk to application B, or which application should be able to talk to the database. These policies should not be bound by which VLAN or IP subnet the application belongs to and whether it is in the same rack or even in the same data center. Network traffic should not make multiple trips back and forth between the applications and centralized firewalls to enforce policies between VMs.
Containers make this a little harder because they move and change more often. Figure 3-9 illustrates a high-level representation of applications running inside of containers (for example, Docker containers).
Figure 3-9 Applications in Containers
The ability to enforce network segmentation in those environments is called “micro-segmentation.” Micro-segmentation applies policy at the VM or container level, regardless of VLAN or subnet. Micro-segmentation solutions need to be “application aware”: the segmentation process starts and ends with the application itself.
Most micro-segmentation environments apply a “zero-trust model.” This model dictates that users cannot talk to applications, and applications cannot talk to other applications unless a defined set of policies permits them to do so.
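The following toy sketch (with hypothetical group names, ports, and rules) shows the essence of that default-deny evaluation: communication between workloads is blocked unless an explicit policy entry permits it.

# Illustrative zero-trust policy check: deny by default, allow only explicit entries.
ALLOWED = {
    ("users", "web-app", 443),        # users may reach the web application over HTTPS
    ("web-app", "database", 3306),    # the web application may reach the database
}

def is_permitted(src_group, dst_group, dst_port):
    return (src_group, dst_group, dst_port) in ALLOWED

print(is_permitted("web-app", "database", 3306))   # True: explicitly allowed
print(is_permitted("app-a", "app-b", 8080))        # False: denied by default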
Open-Source Initiatives
Several open-source projects are trying to provide micro-segmentation and other modern networking benefits. Examples include the following:
Neutron from OpenStack
Open vSwitch (OVS)
Open Virtual Network (OVN)
OpenDaylight (ODL)
Open Platform for Network Function Virtualization (OPNFV)
Contiv
The concept of SDN is very broad, and every open-source project and commercial vendor takes it in a different direction. The networking component of OpenStack is called Neutron. Neutron is designed to provide “networking as a service” in private, public, and hybrid cloud environments. Other OpenStack components, such as Horizon (the web UI) and Nova (the compute service), interact with Neutron using a set of APIs to configure the networking services. Neutron uses plug-ins to deliver advanced networking capabilities and to allow third-party vendor integration. Neutron has two main components: the neutron server, which uses a database for persistent storage, and plug-ins that provide additional services. Additional information about Neutron and OpenStack can be found at https://docs.openstack.org/neutron/latest.
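As a rough sketch of what “networking as a service” looks like to a consumer of these APIs, the following uses the openstacksdk Python library (assumed to be installed, with credentials defined in a clouds.yaml entry) to create a network and a subnet through Neutron. The cloud name, network name, and CIDR are hypothetical.

# Create a network and subnet through the Neutron API using openstacksdk.
import openstack

conn = openstack.connect(cloud="my-cloud")     # credentials loaded from clouds.yaml

network = conn.network.create_network(name="app-tier-net")
subnet = conn.network.create_subnet(
    name="app-tier-subnet",
    network_id=network.id,
    ip_version=4,
    cidr="172.16.50.0/24",
)

print("Created network", network.id, "with subnet", subnet.cidr)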
OVN was originally created by the folks behind Open vSwitch (OVS) for the purpose of bringing an open-source solution for virtual network environments and SDN. Open vSwitch is an open-source implementation of a multilayer virtual switch inside the hypervisor.
OVN is often used in OpenStack implementations together with OVS, and OVN programs OVS by using the OpenFlow protocol. OpenStack Neutron has traditionally used OVS as its default virtual switch (data plane), and OVN is increasingly used as the Neutron networking backend.
OpenDaylight (ODL) is another popular open-source project that is focused on the enhancement of SDN controllers to provide network services across multiple vendors. OpenDaylight participants also interact with the OpenStack Neutron project and attempt to solve the existing inefficiencies.
OpenDaylight interacts with Neutron via a northbound interface and manages multiple interfaces southbound, including the Open vSwitch Database Management Protocol (OVSDB) and OpenFlow.
So, what are northbound and southbound APIs? In an SDN architecture, southbound APIs are used to communicate between the SDN controller and the switches and routers within the infrastructure. These APIs can be open or proprietary.
Southbound APIs enable SDN controllers to dynamically make changes based on real-time demands and scalability needs. OpenFlow and Cisco OpFlex provide southbound API capabilities.
Northbound APIs (SDN northbound APIs) are typically RESTful APIs used for communication between the SDN controller and the services and applications running over the network. Northbound APIs can be used for the orchestration and automation of the network components to align with the needs of different applications via SDN network programmability. In short, northbound APIs are the link between the applications and the SDN controller. In modern environments, applications can tell the network devices (physical or virtual) what type of resources they need and, in turn, the SDN solution can provide the necessary resources to the application.
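The following is a minimal sketch of a northbound call in practice: an application asking an SDN controller for the topology it has learned through its southbound protocols. It assumes an OpenDaylight instance running the classic RESTCONF northbound interface on port 8181 with default lab credentials; the address and credentials are placeholders.

# Query an OpenDaylight controller's operational topology over its northbound REST API.
import requests

ODL = "http://127.0.0.1:8181"   # hypothetical controller address
url = f"{ODL}/restconf/operational/network-topology:network-topology"

resp = requests.get(url, auth=("admin", "admin"), timeout=10)
resp.raise_for_status()

for topology in resp.json()["network-topology"]["topology"]:
    print(topology.get("topology-id"), "nodes:", len(topology.get("node", [])))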
Cisco has the concept of intent-based networking. On different occasions, you may see northbound APIs referred to as “intent-based APIs.”
More About Network Function Virtualization
Network virtualization is used for logical groupings of nodes on a network. The nodes are abstracted from their physical locations so that VMs and any other assets can be managed as if they were all on the same physical segment of the network. This is not a new technology, but it remains a key one in virtual environments where systems are created and moved regardless of their physical location.
Network Functions Virtualization (NFV) is a technology that addresses the virtualization of Layer 4 through Layer 7 services, such as load balancing and security capabilities (for example, firewall-related features). In short, with NFV, you convert certain types of network appliances into VMs. NFV was created to address the inefficiencies of deploying these functions as dedicated hardware appliances.
NFV allows you to create a virtual instance of a network node, such as a firewall, that can be deployed where it is needed, in a flexible way similar to what you do with a traditional VM.
Open Platform for Network Function Virtualization (OPNFV) is an open-source solution for NFV services. It aims to be the base infrastructure layer for running virtual network functions. You can find detailed information about OPNFV at opnfv.org.
NFV nodes such as virtual routers and firewalls need an underlying infrastructure:
A hypervisor to separate the virtual routers, switches, and firewalls from the underlying physical hardware. The hypervisor is the underlying virtualization platform that allows the physical server (system) to operate multiple VMs (including traditional VMs and network-based VMs).
A virtual forwarder to connect individual instances.
A network controller to control all of the virtual forwarders in the physical network.
A VM manager to manage the different network-based VMs.
Figure 3-10 demonstrates the high-level components of the NFV architecture.
Figure 3-10 NFV Architecture
Several NFV infrastructure components have been created in open community efforts. However, the actual integration has traditionally remained a “private” task: you either did it yourself, outsourced it, or bought a pre-integrated system from a vendor, keeping in mind that this systems integration is not a one-time task. OPNFV was created to change the ongoing NFV integration task from a private solution into an open community solution.
NFV MANO
NFV changes the way networks are managed. NFV management and network orchestration (MANO) is a framework and working group within the European Telecommunications Standards Institute (ETSI) Industry Specification Group for NFV (ETSI ISG NFV). NFV MANO is designed to provide flexible onboarding of network components. NFV MANO is divided into the three functional components listed in Figure 3-11.
Figure 3-11 NFV MANO Functional Components
The NFV MANO architecture integrates with existing systems through open application programming interfaces (APIs). The MANO layer works with templates for standard virtual network functions (VNFs) and allows implementers to pick and choose from existing NFV resources to deploy their platform or element.
Contiv
Contiv is an open-source project that allows you to deploy micro-segmentation policy-based services in container environments. It offers a higher level of networking abstraction for microservices by providing a policy framework. Contiv has built-in service discovery and service routing functions to allow you to scale out services.
With Contiv you can assign an IP address to each container. This feature eliminates the need for host-based port NAT. Contiv can operate in different network environments such as traditional Layer 2 and Layer 3 networks, as well as overlay networks.
Contiv can be deployed with all major container orchestration platforms (or schedulers) such as Kubernetes and Docker Swarm. For instance, Kubernetes can provide compute resources to containers and then Contiv provides networking capabilities.
The Netmaster and Netplugin (Contiv host agent) are the two major components in Contiv. Figure 3-12 illustrates how the Netmaster and the Netplugin interact with all the underlying components of the Contiv solution.
Figure 3-12 Contiv Netmaster and Netplugin (Contiv Host Agent) Components
ThousandEyes Integration
ThousandEyes, a network intelligence Software as a Service (SaaS) platform, has deepened its integration with Cisco. The integration of ThousandEyes into the Cisco Nexus 9000 Series data center switches (powered by NX-OS and Data Center Network Manager [DCNM]) and into Cisco ACI fabrics delivers a powerful combination of network visibility and control.
With the Cisco ThousandEyes Enterprise Agent (TEA), users can monitor their network’s performance from a global perspective, using a range of tests to assess BGP routing, DNS resolution, browser response times, network paths and connectivity, routing status, and VoIP streaming quality. This insight helps organizations optimize their network performance. ThousandEyes provides numerous monitoring capabilities, including the following (a short example of querying the ThousandEyes API follows the list):
API Monitoring
BGP Monitoring
CDN Monitoring
Customer Digital Experience
DDoS Monitoring
DNS Monitoring
Enterprise Digital Experience
Hybrid WAN Monitoring
Network Device Monitoring
Network Monitoring
IaaS Monitoring
ISP Monitoring
Multi-cloud Monitoring
SaaS Monitoring
SD-WAN Monitoring
VPN Monitoring
Website Monitoring
Wi-Fi and LAN Monitoring
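Many of these capabilities can also be consumed programmatically. The following is a minimal sketch of pulling the list of configured tests from the ThousandEyes REST API (version 6, which uses HTTP basic authentication with the account e-mail address and an API token). The e-mail address and token shown are placeholders.

# List configured ThousandEyes tests through the v6 REST API.
import requests

EMAIL = "user@example.com"     # hypothetical account e-mail
API_TOKEN = "REPLACE_ME"       # hypothetical API token

resp = requests.get(
    "https://api.thousandeyes.com/v6/tests.json",
    auth=(EMAIL, API_TOKEN),
    timeout=10,
)
resp.raise_for_status()

for test in resp.json().get("test", []):
    print(test.get("type"), test.get("testName"))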
Cisco Digital Network Architecture (DNA)
Cisco DNA is Cisco’s intent-based networking solution. Cisco DNA provides automation and assurance services across campus networks, wide area networks (WANs), and branch networks. Cisco DNA is based on an open and extensible platform and provides policy, automation, and analytics capabilities, as illustrated in Figure 3-13.
Figure 3-13 Cisco DNA High-Level Architecture
The heart of the Cisco DNA solution is Cisco DNA Center (DNAC). DNAC is a command-and-control element that provides centralized management via dashboards and APIs. Figure 3-14 shows one of the many dashboards of Cisco DNA Center (the Network Hierarchy dashboard).
Figure 3-14 Cisco DNA Center Network Hierarchy Dashboard
Cisco DNA Center can be integrated with external network and security services such as the Cisco Identity Services Engine (ISE). Figure 3-15 shows how the Cisco ISE is configured as an authentication, authorization, and accounting (AAA) server in the Cisco DNA Center Network Settings screen.
Figure 3-15 Cisco DNA Center Integration with Cisco ISE for AAA Services
Cisco DNA Policies
The following are the policies you can create in the Cisco DNA Center:
Group-based access control policies
IP-based access control policies
Application access control policies
Traffic copy policies
Figure 3-16 shows the Cisco DNA Center Policy Overview dashboard matrix visualization. Here, you can see the number of active policies based on the security groups, Cisco Identity Services Engine (ISE) profiles, and Cisco Secure Network Analytics (formerly known as Stealthwatch) host groups. Using the dashboard shown in Figure 3-16, you can create new policies.
Figure 3-16 Cisco DNA Center Policy Overview Dashboard
Figure 3-17 shows the policy analytics for the ISE profiles. Cisco DNA Center provides intelligence and analytics to help you make informed decisions about your network. With its visual representation of the communication between assets, you can create group-based policies, evaluate the effects of new access controls, and determine the precise protocols that should be included in your policies.
Figure 3-17 Policy Analytics for the ISE Profiles in DNA Center
Figure 3-18 shows the policy matrix. The matrix view enables you to have a comprehensive overview of all the source and destination policies and grasp the overall policy structure. You can view, create, and modify access control policies from the policy matrix view itself.
Figure 3-18 DNA Center Policy Matrix
The matrix view has two components:
Source axis: The vertical axis displays a list of all the source security groups.
Destination axis: The horizontal axis presents a list of all the destination security groups.
By hovering over a cell, you can view the policy for a specific combination of source and destination security groups. The color of a cell represents the policy that is in effect, with the following color coding:
Allow: Green
Block: Red
Custom: Gold
Default: Gray
Cisco DNA Group-Based Access Control Policy
When you configure group-based access control policies, you need to integrate the Cisco ISE with Cisco DNA Center, as you learned previously in this chapter. In Cisco ISE, you configure the work process setting as “Single Matrix” so that there is only one policy matrix for all devices in the TrustSec network. You will learn more about Cisco TrustSec and Cisco ISE in Chapter 4, “Authentication, Authorization, Accounting (AAA) and Identity Management.”
Depending on your organization’s environment and access requirements, you can segregate your groups into different virtual networks to provide further segmentation.
After Cisco ISE is integrated in Cisco DNA Center, the scalable groups that exist in Cisco ISE are propagated to Cisco DNA Center. If a scalable group that you need does not exist, you can create it in Cisco ISE.
Cisco DNA Center has the concept of access control contracts. A contract specifies a set of rules that allow or deny network traffic based on such traffic matching particular protocols or ports. Figure 3-19 shows a new contract being created in Cisco DNA Center to allow SSH access (TCP port 22).
To create a contract, navigate to Policy > Group-Based Access Control > Access Contract and click Add Contract. The dialog box shown in Figure 3-19 will be displayed.
Figure 3-19 Adding a Cisco DNA Center Contract
Figure 3-20 shows an example of how to create a group-based access control policy.
Figure 3-20 Adding a Cisco DNA Center Group-Based Access Control Policy
In Figure 3-20, an access control policy named omar_policy_1 is configured to deny traffic from all users and related devices in the group called Guests to any user or device in the Finance group.
Cisco DNA IP-Based Access Control Policy
You can also create IP-based access control policies in Cisco DNA Center. To create IP-based access control policies, navigate to Policy > IP Based Access Control > IP Based Access Control Policies, as shown in Figure 3-21.
Figure 3-21 Adding a Cisco DNA Center IP-Based Access Control Policy
In the example shown in Figure 3-21, a policy is configured to permit Omar’s PC to communicate with h4cker.org.
You can also associate these policies to specific wireless SSIDs. The corp-net SSID is associated to the policy entry in Figure 3-21.
Cisco DNA Application Policies
Application policies can be configured in Cisco DNA Center to provide Quality of Service (QoS) capabilities. The following are the Application Policy components you can configure in Cisco DNA Center:
Applications
Application sets
Application policies
Queuing profiles
Applications in Cisco DNA Center are the software programs or network signaling protocols that are being used in your network.
Applications can be grouped into logical groups called application sets. These application sets can be assigned a business relevance within a policy.
You can also map applications to industry standard-based traffic classes, as defined in RFC 4594.
Cisco DNA Traffic Copy Policy
You can also use an Encapsulated Remote Switched Port Analyzer (ERSPAN) configuration in Cisco DNA Center so that the IP traffic flowing between two entities is copied to a given destination for monitoring or troubleshooting. To configure ERSPAN using Cisco DNA Center, you need to create a traffic copy policy that defines the source and destination of the traffic flow you want to copy. To configure a traffic copy policy, navigate to Policy > Traffic Copy > Traffic Copy Policies, as shown in Figure 3-22.
Figure 3-22 Adding a Traffic Copy Policy
You can also define a traffic copy contract that specifies the device and interface where the copy of the traffic is sent.
Cisco DNA Center Assurance Solution
The Cisco DNA Center Assurance solution allows you to get contextual visibility into network functions with historical, real-time, and predictive insights across users, devices, applications, and the network. The goal is to provide automation capabilities to reduce the time spent on network troubleshooting.
Figure 3-23 shows the Cisco DNA Center Assurance Overall Health dashboard.
Figure 3-23 The Cisco DNA Center Assurance Overall Health Dashboard
The Cisco DNA Center Assurance solution allows you to investigate different networkwide (global) issues, as shown in Figure 3-24.
Figure 3-24 The Cisco DNA Center Assurance Top 10 Issues Types
The Cisco DNA Center Assurance solution also allows you to configure sensors to test the health of wireless networks. A wireless network includes access point (AP) radios, WLAN configurations, and wireless network services. Sensors can be dedicated or on-demand. A dedicated sensor is an AP that is converted into a sensor and stays in sensor mode (it is not used by wireless clients) until it is manually converted back into AP mode. An on-demand sensor is an AP that is temporarily converted into a sensor to run tests; after the tests are complete, it returns to AP mode. Figure 3-25 shows statistics about wired and wireless clients.
Figure 3-25 The Cisco DNA Center Wireless and Wired Client Statistics
Cisco DNA Center APIs
One of the key benefits of Cisco DNA Center is its comprehensive set of APIs (also known as Intent APIs). The Intent APIs are northbound REST APIs that expose specific capabilities of the Cisco DNA Center platform. These APIs provide policy-based abstraction of business intent, allowing you to focus on the outcome you want to achieve instead of struggling with the mechanisms that implement that outcome. The APIs conform to the REST architectural style and are simple, extensible, and secure to use.
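The following is a minimal sketch of consuming the Intent APIs with Python: authenticate to obtain a token and then list managed network devices. The host name and credentials are placeholders, and certificate verification is disabled only because lab deployments commonly use self-signed certificates.

# Authenticate to Cisco DNA Center and list network devices via the Intent API.
import requests

DNAC = "https://dnac.example.com"    # hypothetical DNA Center address

token = requests.post(
    f"{DNAC}/dna/system/api/v1/auth/token",
    auth=("admin", "REPLACE_ME"),    # placeholder credentials
    verify=False,
    timeout=10,
).json()["Token"]

devices = requests.get(
    f"{DNAC}/dna/intent/api/v1/network-device",
    headers={"X-Auth-Token": token},
    verify=False,
    timeout=10,
).json()["response"]

for device in devices:
    print(device.get("hostname"), device.get("managementIpAddress"), device.get("softwareVersion"))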
Cisco DNA Center also has several integration APIs. These integration capabilities are part of its westbound interfaces and allow Cisco DNA Center to exchange information with third-party systems. Support for managing non-Cisco devices is covered in the “Cisco DNA Multivendor Support” section later in this chapter.
Cisco DNA Center also has several events and notifications services that allow you to capture and forward Cisco DNA Assurance and Automation events (for example, software image management [SWIM] events) to third-party applications via a webhook URL.
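The following is a minimal sketch of the receiving end of that webhook integration: a small Flask application (Flask is assumed to be installed) listening for event notifications POSTed by Cisco DNA Center. The URL path, port, and the payload fields printed here are illustrative assumptions, not an exhaustive event schema.

# Minimal webhook receiver for Cisco DNA Center event notifications.
from flask import Flask, request

app = Flask(__name__)

@app.route("/dnac-events", methods=["POST"])
def dnac_events():
    event = request.get_json(force=True, silent=True) or {}
    # Print a few commonly present fields; real payloads vary by event type.
    print("Received event:", event.get("eventId"), event.get("severity"), event.get("description"))
    return "", 204

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=9000)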
You can view information about all the Cisco DNA Center APIs by clicking the Platform tab and navigating to Developer Toolkit > APIs.
Cisco is always expanding the capabilities of the Cisco DNA Center APIs. Please study and refer to the following API documentation and tutorials for the most up-to-date capabilities: https://developer.cisco.com/docs/dna-center and https://developer.cisco.com/site/dna-center-rest-api.
Cisco DNA Security Solution
The Cisco DNA Security solution supports several other security products and operations that allow you to detect and contain cybersecurity threats. One of the components of the Cisco DNA Security solution is the Encrypted Traffic Analytics (ETA) solution. Cisco ETA allows you to detect security threats in encrypted traffic without decrypting the packets. It does this by applying machine learning to metadata about the encrypted flows, such as the initial data packet and the sequence of packet lengths and times. To use Encrypted Traffic Analytics, you need one of the following network devices along with Cisco Secure Network Analytics (formerly known as Stealthwatch):
Catalyst 9000 switches
ASR 1000 Series routers
ISR 4000 Series routers
CSR 1000V Series virtual routers
ISR 1000 Series routers
Catalyst 9800 Series wireless controllers
Cisco Secure Network Analytics provides network visibility and security analytics to rapidly detect and contain threats. You will learn more about the Cisco Secure Network Analytics solution in Chapter 5, “Network Visibility and Segmentation.”
As you learned in previous sections of this chapter, the Cisco TrustSec solution and Cisco ISE enable you to control networkwide access, enforce security policies, and help meet compliance requirements.
Cisco DNA Multivendor Support
Cisco DNA Center now allows customers to manage their non-Cisco devices. Multivendor support comes to Cisco DNA Center through the use of an SDK that can be used to create device packages for third-party devices. A device package enables Cisco DNA Center to communicate with third-party devices by mapping Cisco DNA Center features to their southbound protocols. Multivendor support capabilities are based on southbound interfaces. These interfaces interact directly with network devices by means of CLI, SNMP, or NETCONF.