Available SDN plug-ins
OKD supports the Kubernetes Container Network Interface (CNI) as the interface between OKD and Kubernetes. Software-defined network (SDN) plug-ins match network capabilities to your networking needs. Additional plug-ins that support the CNI interface can be added as needed.
OpenShift SDN
OpenShift SDN is installed and configured by default as part of the Ansible-based installation procedure. See the OpenShift SDN section for more information.
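Because OpenShift SDN is consumed through CNI, each node carries a small CNI network configuration that points the container runtime at the plug-in. The following is a minimal sketch; the file path and the cniVersion value are assumptions that can differ between OKD releases.
# Assumed location and contents of the OpenShift SDN CNI configuration on a node.
cat /etc/cni/net.d/80-openshift-network.conf
{
  "cniVersion": "0.2.0",
  "name": "openshift-sdn",
  "type": "openshift-sdn"
}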
Third-Party SDN plug-ins
Cisco ACI SDN
The Cisco ACI CNI plug-in for OKD provides integration between the Cisco Application Policy Infrastructure Controller (Cisco APIC) controller and one or more OKD clusters connected to a Cisco ACI fabric.
This integration is implemented across two main functional areas:
The Cisco ACI CNI plug-in extends the ACI fabric capabilities to OKD clusters in order to provide IP address management, networking, load balancing, and security functions for OKD workloads. The Cisco ACI CNI plug-in connects all OKD Pods to the integrated VXLAN overlay provided by Cisco ACI.
The Cisco ACI CNI plug-in models the entire OKD cluster as a VMM domain on the Cisco APIC. This provides APIC with access to the inventory of resources of the OKD cluster, including the number of OKD nodes, OKD namespaces, services, deployments, Pods, their IP and MAC addresses, interfaces they are using, and so on. APIC uses this information to automatically correlate physical and virtual resources in order to simplify operations.
The Cisco ACI CNI plug-in is designed to be transparent for OKD developers and administrators and to integrate seamlessly from an operational standpoint.
For more information, see Cisco ACI CNI Plugin for Red Hat OpenShift Container Platform Architecture and Design Guide.
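The integration is typically provisioned with Cisco's acc-provision tool, which pushes the fabric configuration to APIC and renders the manifests for the CNI plug-in. The sketch below is illustrative only; the flavor name, file names, and credentials are assumptions, so follow the Cisco guide referenced above for the actual procedure.
# Illustrative sketch: render the ACI CNI plug-in manifests from a cluster
# configuration file and apply the corresponding configuration to APIC.
# The flavor, file names, and credentials shown here are assumptions.
acc-provision -f openshift-3.11 -c aci-containers-config.yaml \
  -o aci-containers.yaml -a -u admin -p 'password'
oc apply -f aci-containers.yaml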
Flannel SDN
flannel is a virtual networking layer designed specifically for containers. OKD can use it for networking containers instead of the default software-defined networking (SDN) components. This is useful if running OKD within a cloud provider platform that also relies on SDN, such as OpenStack, and you want to avoid encapsulating packets twice through both platforms.
Architecture
OKD runs flannel in host-gw mode, which maps routes from container to container. Each host within the network runs an agent called flanneld, which is responsible for:
Managing a unique subnet on each host
Distributing IP addresses to each container on its host
Mapping routes from one container to another, even if on different hosts
Each flanneld agent provides this information to a centralized etcd store so other agents on hosts can route packets to other containers within the flannel network.
The following diagram illustrates the architecture and data flow from one container to another using a flannel network:
Node 1 would contain the following routes:
default via 192.168.0.100 dev eth0 proto static metric 100
10.1.15.0/24 dev docker0 proto kernel scope link src 10.1.15.1
10.1.20.0/24 via 192.168.0.200 dev eth0
Node 2 would contain the following routes:
default via 192.168.0.200 dev eth0 proto static metric 100
10.1.20.0/24 dev docker0 proto kernel scope link src 10.1.20.1
10.1.15.0/24 via 192.168.0.100 dev eth0
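The host-gw behavior shown in these routes is selected in flannel's network configuration, which flanneld reads from etcd. The following is a minimal sketch, assuming the conventional /coreos.com/network etcd prefix and the 10.1.0.0/16 cluster network used in the example above.
# Minimal flannel network configuration stored in etcd (etcd v2 API); with
# SubnetLen 24, each host leases a /24 such as 10.1.15.0/24 or 10.1.20.0/24.
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.1.0.0/16", "SubnetLen": 24, "Backend": { "Type": "host-gw" } }'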
Contiv SDN
Contiv is an open-source networking plug-in module for container infrastructure. Contiv provides an infrastructure for application-oriented network policies and supports a range of networking modes. These include:
A configurable set of overlay networking modes.
Physical networking modes.
Support for industry-leading hardware.
OKD can use Contiv for networking containers instead of the default OpenShift SDN.
Contiv configuration instructions are forthcoming.
Architecture
Each node within the cluster runs a Contiv agent called netplugin, while the master hosts run the Contiv controller (called netmaster) along with supporting control plane components (such as etcd).
Together the components of Contiv (netmaster and netplugin) handle key networking functions for OKD including:
Assigning IP addresses to each container pod on each cluster node.
Creating and managing multiple separate container network instances for different groups of containers.
Configuring the network forwarding layer components for layer two or layer three forwarding.
Configuring and enforcing a range of network policies.
Providing management interfaces (including both CLI and GUI) to configure and manage Contiv-specific features and configurations.
Providing an infrastructure for role-based controls that allow for multiple role-based network operations workflows.
Contiv uses the Container Network Interface (CNI) to interface with OKD and Kubernetes. A key-value store based on etcd is used to store Contiv-specific state information. This is in addition to and separate from the instance of etcd used by other components in the system, including OKD itself.
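The following netctl sketch illustrates the kind of network and policy objects Contiv manages; the exact subcommands and flags are assumptions that vary between Contiv releases, so consult the Contiv documentation for authoritative syntax.
# Illustrative only: create a Contiv network and a simple ingress policy.
# Subcommand and flag names here are assumptions.
netctl net create --subnet=10.1.1.0/24 --gateway=10.1.1.1 --encap=vxlan contiv-net
netctl policy create web-policy
netctl policy rule-add web-policy 1 --direction=in --protocol=tcp --port=80 --action=allow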
NSX-T SDN
VMware NSX-T™ Data Center provides a policy-based overlay network that reproduces the complete set of Layer 2 through Layer 7 networking services (such as switching, routing, access control, firewalling, and QoS) in software, giving OKD native networking capabilities.
The NSX-T components can be installed and configured as part of the Ansible installation procedure, which integrates an OKD SDN into a data-center-wide NSX-T virtualized network connecting bare metal, virtual machines, and OKD pods. See the Installation section for information on how to install and deploy OKD with VMware NSX-T.
The NSX-T Container Plug-In (NCP) integrates OKD into an NSX-T Manager, which is typically configured for the entire data center.
For information on the NSX-T Data Center architecture and administration, see the VMware NSX-T Data Center v2.4 documentation and the NSX-T NCP configuration guides.
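As a rough sketch of the NCP side of the integration, NCP reads a configuration file that names the cluster and the NSX-T Managers it communicates with. The path, sections, and option names below are assumptions; the NSX-T NCP configuration guides are authoritative.
# Illustrative fragment of an NCP configuration file; path and option names
# are assumptions and may differ between NCP releases.
cat /etc/nsx-ujo/ncp.ini
[coe]
cluster = okd-cluster-1
[k8s]
apiserver_host_ip = 192.0.2.5
[nsx_v3]
nsx_api_managers = 192.0.2.10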
Nuage SDN
Nuage Networks’ SDN solution delivers highly scalable, policy-based overlay networking for pods in an OKD cluster. Nuage SDN can be installed and configured as a part of the Ansible-based installation procedure. See the Advanced Installation section for information on how to install and deploy OKD with Nuage SDN.
Nuage Networks provides a highly scalable, policy-based SDN platform called Virtualized Services Platform (VSP). Nuage VSP uses an SDN Controller, along with the open source Open vSwitch for the data plane.
Nuage uses overlays to provide policy-based networking between OKD and other environments consisting of VMs and bare metal servers. The platform’s real-time analytics engine enables visibility and security monitoring for OKD applications.
Nuage VSP integrates with OKD to allow business applications to be quickly turned up and updated by removing the network lag faced by DevOps teams.
Figure 1. Nuage VSP Integration with OKD
There are two specific components responsible for the integration.
The nuage-openshift-monitor service, which runs as a separate service on the OKD master node.
The vsp-openshift plug-in, which is invoked by the OKD runtime on each of the nodes of the cluster.
Nuage Virtual Routing and Switching software (VRS) is based on open source Open vSwitch and is responsible for the datapath forwarding. The VRS runs on each node and gets policy configuration from the controller.
Nuage VSP Terminology
Figure 2. Nuage VSP Building Blocks
Domains: An organization contains one or more domains. A domain is a single “Layer 3” space. In standard networking terminology, a domain maps to a VRF instance.
Zones: Zones are defined under a domain. A zone does not map to anything on the network directly, but instead acts as an object with which policies are associated such that all endpoints in the zone adhere to the same set of policies.
Subnets: Subnets are defined under a zone. A subnet is a specific Layer 2 subnet within the domain instance. A subnet is unique and distinct within a domain, that is, subnets within a Domain are not allowed to overlap or to contain other subnets in accordance with the standard IP subnet definitions.
VPorts: A VPort is a new level in the domain hierarchy, intended to provide more granular configuration. In addition to containers and VMs, VPorts are also used to attach Host and Bridge Interfaces, which provide connectivity to Bare Metal servers, Appliances, and Legacy VLANs.
Policy Group: Policy Groups are collections of VPorts.
Mapping of Constructs
Many OKD concepts have a direct mapping to Nuage VSP constructs:
Figure 3. Nuage VSP and OKD mapping
A Nuage subnet is not mapped to an OKD node, but a subnet for a particular project can span multiple nodes in OKD.
A pod spawning in OKD translates to a virtual port being created in VSP. The vsp-openshift plug-in interacts with the VRS and gets a policy for that virtual port from the VSD via the VSC. Policy Groups are supported to group multiple pods together that must have the same set of policies applied to them. Currently, pods can only be assigned to policy groups using the operations workflow, where a policy group is created by the administrative user in VSD. The pod's membership in the policy group is specified by means of the nuage.io/policy-group label in the pod specification.
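For example, a pod can be placed into a pre-created policy group through that label. In the sketch below, the policy group name web-pg and the pod details are assumptions used only for illustration.
# Hypothetical example: the nuage.io/policy-group label in the pod
# specification assigns the pod to the pre-created VSD policy group "web-pg".
oc create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    nuage.io/policy-group: web-pg
spec:
  containers:
  - name: web
    image: nginx
EOF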
Integration Components
Nuage VSP integrates with OKD using two main components:
nuage-openshift-monitor
vsp-openshift plugin
nuage-openshift-monitor
nuage-openshift-monitor is a service that monitors the OKD API server for creation of projects, services, users, user-groups, etc.
In the case of a Highly Available (HA) OKD cluster with multiple masters, the nuage-openshift-monitor process runs on all the masters independently without any change in functionality.
For the developer workflow, nuage-openshift-monitor also auto-creates VSD objects by exercising the VSD REST API to map OKD constructs to VSP constructs. Each cluster instance maps to a single domain in Nuage VSP. This allows a given enterprise to potentially have multiple cluster installations, one per domain instance for that enterprise in Nuage. Each OKD project is mapped to a zone in the domain of the cluster on the Nuage VSP. Whenever nuage-openshift-monitor sees an addition or deletion of a project, it instantiates a zone corresponding to that project using the VSDK APIs and allocates a block of subnets for that zone. Additionally, nuage-openshift-monitor creates a network macro group for the project. Likewise, whenever nuage-openshift-monitor sees an addition or deletion of a service, it creates a network macro corresponding to the service IP and assigns that network macro to the network macro group for that project (user-provided network macro groups using labels are also supported) to enable communication to that service.
For the developer workflow, all pods that are created within the zone get IPs from that subnet pool. The subnet pool allocation and management is done by nuage-openshift-monitor based on a couple of plug-in-specific parameters in the master-config file. However, the actual IP address resolution and vport policy resolution are still done by the VSD based on the domain/zone that gets instantiated when the project is created. If the initial subnet pool is exhausted, nuage-openshift-monitor carves out an additional subnet from the cluster CIDR to assign to the given project.
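For instance, in the developer workflow no extra Nuage configuration is needed when a project is created; the project name below is only an example.
# Creating a project causes nuage-openshift-monitor to instantiate a matching
# zone in the cluster's domain on the VSD and to allocate a subnet block for it.
oc new-project web-app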
For the operations workflow, users specify Nuage-recognized labels on their application or pod specification to resolve the pods into specific user-defined zones and subnets. However, this cannot be used to resolve pods in the zones or subnets created via the developer workflow by nuage-openshift-monitor.
In the operations workflow, the administrator is responsible for pre-creating the VSD constructs to map the pods into a specific zone or subnet as well as to allow communication between OpenShift entities (ACL rules, policy groups, network macros, and network macro groups). A detailed description of how to use Nuage labels is provided in the Nuage VSP OpenShift Integration Guide.
vsp-openshift Plug-in
The vsp-openshift networking plug-in is called by the OKD runtime on each OKD node. It implements the network plug-in init and pod setup, teardown, and status hooks. The vsp-openshift plug-in is also responsible for allocating the IP address for the pods. In particular, it communicates with the VRS (the forwarding engine) and configures the IP information onto the pod.
Kuryr SDN for OKD
Kuryr (or more specifically Kuryr-Kubernetes) is an SDN solution built using CNI and OpenStack Neutron. Its advantages include being able to use a wide range of Neutron SDN backends and providing interconnectivity between Kubernetes pods and OpenStack virtual machines (VMs).
Kuryr-Kubernetes and OKD integration is primarily designed for OKD clusters running on OpenStack VMs.
OpenStack Deployment Requirements
Kuryr SDN has some requirements regarding the configuration of the OpenStack cloud it will use; a verification sketch for some of these follows the list. In particular:
Minimal service set is Keystone and Neutron.
It works with Octavia.
Trunk ports extension must be enabled.
Neutron must use the Open vSwitch firewall driver.
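A quick way to verify two of these requirements from the OpenStack side is sketched below; the second check assumes access to the Neutron Open vSwitch agent configuration on the controller or network node, and the configuration path is an assumption.
# Confirm that the Neutron trunk ports extension is enabled.
openstack extension list --network | grep -i trunk
# Confirm the Open vSwitch firewall driver (configuration path is an assumption).
grep firewall_driver /etc/neutron/plugins/ml2/openvswitch_agent.ini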
kuryr-controller
kuryr-controller is a service responsible for watching the OKD API for new pods being spawned and creating Neutron resources for them. For example, when a pod gets created, kuryr-controller notices that and calls OpenStack Neutron to create a new port. Then, information about that port (or VIF) is saved into the pod's annotations. kuryr-controller is also able to use pre-created port pools for faster pod creation.
Currently, kuryr-controller must run as a single service instance, so it is modeled in OKD as a Deployment with replicas=1. It requires access to the underlying OpenStack service APIs.
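A hedged way to confirm this on a running cluster follows; the kuryr namespace name is an assumption that depends on how the services were deployed.
# The controller should report a single desired and available replica.
oc -n kuryr get deployment kuryr-controller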
kuryr-cni
The kuryr-cni container serves two roles in a Kuryr-Kubernetes deployment. It is responsible for installing and configuring the Kuryr CNI script on OKD nodes and for running the kuryr-daemon service that networks the Pods on the host. Because the kuryr-cni container must run on every OKD node, it is modeled as a DaemonSet.
OKD CNI calls the Kuryr CNI script every time a new pod is spawned on or deleted from an OKD host. The script fetches the container ID of the local kuryr-cni container from the Docker API and executes the Kuryr CNI plug-in binary through docker exec, passing all the CNI call arguments. The plug-in then calls kuryr-daemon over a local HTTP socket, again passing all the parameters.
The kuryr-daemon service is responsible for watching Pod annotations about the Neutron VIFs created for them. When a CNI request for a given Pod is received, the daemon either already has the VIF information in memory or waits for the annotation to appear on the Pod definition. Once the VIF information is known, all the networking operations are performed.
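To see this mechanism at work, the VIF data can be inspected on the pod itself. The annotation key openstack.org/kuryr-vif shown below is an assumption and may differ between Kuryr releases; the pod name is an example.
# Inspect the Neutron VIF annotation that kuryr-controller stored on a pod.
oc get pod example-pod -o yaml | grep -A 1 'openstack.org/kuryr'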