- Postinstallation network configuration
- Cluster Network Operator configuration
- Enabling the cluster-wide proxy
- Setting DNS to private
- Configuring ingress cluster traffic
- Configuring the node port service range
- Configuring IPsec encryption
- Configuring network policy
- Optimizing routing
- Postinstallation OpenStack network configuration
- Configuring application access with floating IP addresses
- Kuryr ports pools
- Adjusting Kuryr ports pool settings in active deployments on OpenStack
- Enabling OVS hardware offloading
- Attaching an OVS hardware offloading network
- Enabling IPv6 connectivity to pods on OpenStack
- Adding IPv6 connectivity to pods on OpenStack
- Create pods that have IPv6 connectivity on OpenStack
Postinstallation network configuration
After installing OKD, you can further expand and customize your network to your requirements.
Cluster Network Operator configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group, and these fields cannot be changed:
clusterNetwork
IP address pools from which pod IP addresses are allocated.
serviceNetwork
IP address pool for services.
defaultNetwork.type
Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes.
After cluster installation, you cannot modify the fields listed in the previous section.
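To review these inherited values on a running cluster, you can inspect the CNO configuration object. For example:
$ oc get networks.operator.openshift.io cluster -o yaml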
Enabling the cluster-wide proxy
The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec. For example:
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
name: cluster
spec:
trustedCA:
name: ""
status:
A cluster administrator can configure the proxy for OKD by modifying this cluster Proxy object.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
Prerequisites
Cluster administrator permissions
OKD oc CLI tool installed
Procedure
Create a config map that contains any additional CA certificates required for proxying HTTPS connections.
You can skip this step if the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates:
apiVersion: v1
data:
  ca-bundle.crt: | (1)
    <MY_PEM_ENCODED_CERTS> (2)
kind: ConfigMap
metadata:
  name: user-ca-bundle (3)
  namespace: openshift-config (4)
1 This data key must be named ca-bundle.crt.
2 One or more PEM-encoded X.509 certificates used to sign the proxy’s identity certificate.
3 The config map name that will be referenced from the Proxy object.
4 The config map must be in the openshift-config namespace.
Create the config map from this file:
$ oc create -f user-ca-bundle.yaml
Use the oc edit command to modify the Proxy object:
$ oc edit proxy/cluster
Configure the necessary fields for the proxy:
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
name: cluster
spec:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
readinessEndpoints:
- http://www.google.com (4)
- https://www.google.com
trustedCA:
name: user-ca-bundle (5)
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https. Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http. This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor httpsProxy fields are set.
4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status.
5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
Save the file to apply the changes.
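To confirm that the cluster-wide proxy settings were applied, you can review the Proxy object again. For example:
$ oc get proxy/cluster -o yaml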
Setting DNS to private
After you deploy a cluster, you can modify its DNS to use only a private zone.
Procedure
Review the DNS custom resource for your cluster:
$ oc get dnses.config.openshift.io/cluster -o yaml
Example output
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
creationTimestamp: "2019-10-25T18:27:09Z"
generation: 2
name: cluster
resourceVersion: "37966"
selfLink: /apis/config.openshift.io/v1/dnses/cluster
uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
baseDomain: <base_domain>
privateZone:
tags:
Name: <infrastructure_id>-int
kubernetes.io/cluster/<infrastructure_id>: owned
publicZone:
id: Z2XXXXXXXXXXA4
status: {}
Note that the spec section contains both a private and a public zone.
Patch the DNS custom resource to remove the public zone:
$ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}'
dns.config.openshift.io/cluster patched
Because the Ingress Controller consults the DNS definition when it creates Ingress objects, when you create or modify Ingress objects, only private records are created.
DNS records for the existing Ingress objects are not modified when you remove the public zone.
Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed:
$ oc get dnses.config.openshift.io/cluster -o yaml
Example output
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
creationTimestamp: "2019-10-25T18:27:09Z"
generation: 2
name: cluster
resourceVersion: "37966"
selfLink: /apis/config.openshift.io/v1/dnses/cluster
uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
baseDomain: <base_domain>
privateZone:
tags:
Name: <infrastructure_id>-int
kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned
status: {}
Configuring ingress cluster traffic
OKD provides the following methods for communicating from outside the cluster with services running in the cluster:
If you have HTTP/HTTPS, use an Ingress Controller.
If you have a TLS-encrypted protocol other than HTTPS, such as TLS with the SNI header, use an Ingress Controller.
Otherwise, use a load balancer, an external IP, or a node port.
Method | Purpose |
---|---|
Use an Ingress Controller | Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS, such as TLS with the SNI header. |
Automatically assign an external IP by using a load balancer service | Allows traffic to non-standard ports through an IP address assigned from a pool. |
Manually assign an external IP to a service | Allows traffic to non-standard ports through a specific IP address. |
Configure a NodePort | Exposes a service on all nodes in the cluster. |
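For HTTP or HTTPS traffic, for example, you can expose an existing service through the default Ingress Controller by creating a route. In the following sketch, the service name my-app is a placeholder for one of your own services:
$ oc expose service my-app
$ oc get route my-app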
Configuring the node port service range
As a cluster administrator, you can expand the available node port range. If your cluster uses a large number of node ports, you might need to increase the number of available ports.
The default port range is 30000-32767. You can never reduce the port range, even if you first expand it beyond the default range.
Prerequisites
- Your cluster infrastructure must allow access to the ports that you specify within the expanded range. For example, if you expand the node port range to 30000-32900, the inclusive port range of 32768-32900 must be allowed by your firewall or packet filtering configuration.
Expanding the node port range
You can expand the node port range for the cluster.
Prerequisites
Install the OpenShift CLI (oc).
Log in to the cluster with a user with cluster-admin privileges.
Procedure
To expand the node port range, enter the following command. Replace <port> with the largest port number in the new range.
$ oc patch network.config.openshift.io cluster --type=merge -p \
'{
"spec":
{ "serviceNodePortRange": "30000-<port>" }
}'
You can alternatively apply the following YAML to update the node port range:
apiVersion: config.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
  serviceNodePortRange: "30000-<port>"
Example output
network.config.openshift.io/cluster patched
To confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply.
$ oc get configmaps -n openshift-kube-apiserver config \
-o jsonpath="{.data['config\.yaml']}" | \
grep -Eo '"service-node-port-range":["[[:digit:]]+-[[:digit:]]+"]'
Example output
"service-node-port-range":["30000-33000"]
Configuring IPsec encryption
With IPsec enabled, all network traffic between nodes on the OVN-Kubernetes network plugin travels through an encrypted tunnel.
IPsec is disabled by default.
Prerequisites
- Your cluster must use the OVN-Kubernetes network plugin.
Enabling pod-to-pod IPsec encryption
As a cluster administrator, you can enable pod-to-pod IPsec encryption after cluster installation.
Prerequisites
Install the OpenShift CLI (oc).
You are logged in to the cluster as a user with cluster-admin privileges.
You have reduced the size of your cluster MTU by 46 bytes to allow for the overhead of the IPsec ESP header. A quick way to check the current MTU is sketched after this list.
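For example, assuming your cluster reports the effective MTU in the Network configuration status, you can read it with the following command before deciding how to adjust it:
$ oc get network.config.openshift.io cluster -o jsonpath='{.status.clusterNetworkMTU}'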
Procedure
To enable IPsec encryption, enter the following command:
$ oc patch networks.operator.openshift.io cluster --type=merge \
-p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{ }}}}}'
Verifying that IPsec is enabled
As a cluster administrator, you can verify that IPsec is enabled.
Verification
To find the names of the OVN-Kubernetes data plane pods, enter the following command:
$ oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node
Example output
ovnkube-node-5xqbf 8/8 Running 0 28m
ovnkube-node-6mwcx 8/8 Running 0 29m
ovnkube-node-ck5fr 8/8 Running 0 31m
ovnkube-node-fr4ld 8/8 Running 0 26m
ovnkube-node-wgs4l 8/8 Running 0 33m
ovnkube-node-zfvcl 8/8 Running 0 34m
Verify that IPsec is enabled on your cluster:
$ oc -n openshift-ovn-kubernetes -c nbdb rsh ovnkube-node-<XXXXX> ovn-nbctl --no-leader-only get nb_global . ipsec
where:
<XXXXX>
Specifies the random sequence of letters for a pod from the previous step.
Example output
true
Configuring network policy
As a cluster administrator or project administrator, you can configure network policies for a project.
About network policy
In a cluster using a network plugin that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy
objects. In OKD 4.14, OpenShift SDN supports using network policy in its default network isolation mode.
Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. However, pods connecting to the host-networked pods might be affected by the network policy rules. Network policies cannot block traffic from localhost or from their resident nodes.
By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy
objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy
objects within their own project.
If a pod is matched by selectors in one or more NetworkPolicy
objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy
objects. A pod that is not selected by any NetworkPolicy
objects is fully accessible.
A network policy applies to only the TCP, UDP, and SCTP protocols. Other protocols are not affected.
The following example NetworkPolicy
objects demonstrate supporting different scenarios:
Deny all traffic:
To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-by-default
spec:
podSelector: {}
ingress: []
Only allow connections from the OKD Ingress Controller:
To make a project allow only connections from the OKD Ingress Controller, add the following NetworkPolicy object:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-openshift-ingress
spec:
ingress:
- from:
- namespaceSelector:
matchLabels:
network.openshift.io/policy-group: ingress
podSelector: {}
policyTypes:
- Ingress
Only accept connections from pods within a project:
To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-same-namespace
spec:
podSelector: {}
ingress:
- from:
- podSelector: {}
Only allow HTTP and HTTPS traffic based on pod labels:
To enable only HTTP and HTTPS access to the pods with a specific label (role=frontend in the following example), add a NetworkPolicy object similar to the following:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-http-and-https
spec:
podSelector:
matchLabels:
role: frontend
ingress:
- ports:
- protocol: TCP
port: 80
- protocol: TCP
port: 443
Accept connections by using both namespace and pod selectors:
To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-pod-and-namespace-both
spec:
podSelector:
matchLabels:
name: test-pods
ingress:
- from:
- namespaceSelector:
matchLabels:
project: project_name
podSelector:
matchLabels:
name: test-pods
NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements.
For example, for the NetworkPolicy objects defined in previous samples, you can define both the allow-same-namespace and allow-http-and-https policies within the same project. This allows the pods with the label role=frontend to accept any connection allowed by either policy: connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace.
Using the allow-from-router network policy
Use the following NetworkPolicy
to allow external traffic regardless of the router configuration:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-router
spec:
ingress:
- from:
- namespaceSelector:
matchLabels:
policy-group.network.openshift.io/ingress: ""(1)
podSelector: {}
policyTypes:
- Ingress
1 The policy-group.network.openshift.io/ingress: "" label supports both OpenShift SDN and OVN-Kubernetes.
Using the allow-from-hostnetwork network policy
Add the following allow-from-hostnetwork
NetworkPolicy
object to direct traffic from the host network pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-hostnetwork
spec:
ingress:
- from:
- namespaceSelector:
matchLabels:
policy-group.network.openshift.io/host-network: ""
podSelector: {}
policyTypes:
- Ingress
Example NetworkPolicy object
The following is an annotated example NetworkPolicy object:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-27107 (1)
spec:
podSelector: (2)
matchLabels:
app: mongodb
ingress:
- from:
- podSelector: (3)
matchLabels:
app: app
ports: (4)
- protocol: TCP
port: 27017
1 The name of the NetworkPolicy object.
2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object.
3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy.
4 A list of one or more destination ports on which to accept traffic.
Creating a network policy using the CLI
To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy.
If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.
Prerequisites
Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
You installed the OpenShift CLI (oc).
You are logged in to the cluster with a user with admin privileges.
You are working in the namespace that the network policy applies to.
Procedure
Create a policy rule:
Create a <policy_name>.yaml file:
$ touch <policy_name>.yaml
where:
<policy_name>
Specifies the network policy file name.
Define a network policy in the file that you just created, such as in the following examples:
Deny ingress from all pods in all namespaces
This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-by-default
spec:
podSelector:
ingress: []
Allow ingress from all pods in the same namespace
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-same-namespace
spec:
podSelector:
ingress:
- from:
- podSelector: {}
Allow ingress traffic to one pod from a particular namespace
This policy allows traffic to pods labeled pod: pod-a from pods running in namespace-y.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-traffic-pod
spec:
podSelector:
matchLabels:
pod: pod-a
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: namespace-y
To create the network policy object, enter the following command:
$ oc apply -f <policy_name>.yaml -n <namespace>
where:
<policy_name>
Specifies the network policy file name.
<namespace>
Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Example output
networkpolicy.networking.k8s.io/deny-by-default created
If you log in to the web console with cluster-admin privileges, you can also create a network policy in any namespace in the cluster directly in YAML or from a form in the web console.
Configuring multitenant isolation by using network policy
You can configure your project to isolate it from pods and services in other project namespaces.
Prerequisites
Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
You installed the OpenShift CLI (oc).
You are logged in to the cluster with a user with admin privileges.
Procedure
Create the following NetworkPolicy objects:
A policy named allow-from-openshift-ingress:
$ cat << EOF| oc create -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-openshift-ingress
spec:
ingress:
- from:
- namespaceSelector:
matchLabels:
policy-group.network.openshift.io/ingress: ""
podSelector: {}
policyTypes:
- Ingress
EOF
policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OpenShift SDN. You can use the network.openshift.io/policy-group: ingress namespace selector label, but this is a legacy label.
A policy named allow-from-openshift-monitoring:
$ cat << EOF| oc create -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-openshift-monitoring
spec:
ingress:
- from:
- namespaceSelector:
matchLabels:
network.openshift.io/policy-group: monitoring
podSelector: {}
policyTypes:
- Ingress
EOF
A policy named allow-same-namespace:
$ cat << EOF| oc create -f -
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-same-namespace
spec:
podSelector:
ingress:
- from:
- podSelector: {}
EOF
A policy named allow-from-kube-apiserver-operator:
$ cat << EOF| oc create -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-kube-apiserver-operator
spec:
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: openshift-kube-apiserver-operator
podSelector:
matchLabels:
app: kube-apiserver-operator
policyTypes:
- Ingress
EOF
For more details, see New kube-apiserver-operator webhook controller validating health of webhook.
Optional: To confirm that the network policies exist in your current project, enter the following command:
$ oc describe networkpolicy
Example output
```
Name:         allow-from-openshift-ingress
Namespace:    example1
Created on:   2020-06-09 00:28:17 -0400 EDT
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    To Port: <any> (traffic allowed to all ports)
    From:
      NamespaceSelector: network.openshift.io/policy-group: ingress
  Not affecting egress traffic
  Policy Types: Ingress
Name: allow-from-openshift-monitoring
Namespace: example1
Created on: 2020-06-09 00:29:57 -0400 EDT
Labels: <none>
Annotations: <none>
Spec:
PodSelector: <none> (Allowing the specific traffic to all pods in this namespace)
Allowing ingress traffic:
To Port: <any> (traffic allowed to all ports)
From:
NamespaceSelector: network.openshift.io/policy-group: monitoring
Not affecting egress traffic
Policy Types: Ingress
```
Creating default network policies for a new project
As a cluster administrator, you can modify the new project template to automatically include NetworkPolicy
objects when you create a new project.
Modifying the template for new projects
As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements.
To create your own custom project template:
Procedure
Log in as a user with cluster-admin privileges.
Generate the default project template:
$ oc adm create-bootstrap-project-template -o yaml > template.yaml
Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects.
The project template must be created in the openshift-config namespace. Load your modified template:
$ oc create -f template.yaml -n openshift-config
Edit the project configuration resource using the web console or CLI.
Using the web console:
Navigate to the Administration → Cluster Settings page.
Click Configuration to view all configuration resources.
Find the entry for Project and click Edit YAML.
Using the CLI:
Edit the project.config.openshift.io/cluster resource:
$ oc edit project.config.openshift.io/cluster
Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request.
Project configuration resource with custom project template
apiVersion: config.openshift.io/v1
kind: Project
metadata:
...
spec:
projectRequestTemplate:
name: <template_name>
After you save your changes, create a new project to verify that your changes were successfully applied.
Adding network policies to the new project template
As a cluster administrator, you can add network policies to the default template for new projects. OKD will automatically create all the NetworkPolicy
objects specified in the template in the project.
Prerequisites
Your cluster uses a default CNI network plugin that supports NetworkPolicy objects, such as the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
You installed the OpenShift CLI (oc).
You must log in to the cluster with a user with cluster-admin privileges.
You must have created a custom default project template for new projects.
Procedure
Edit the default template for a new project by running the following command:
$ oc edit template <project_template> -n openshift-config
Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request.
In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects.
In the following example, the objects parameter collection includes several NetworkPolicy objects.
objects:
- apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-same-namespace
spec:
podSelector: {}
ingress:
- from:
- podSelector: {}
- apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-openshift-ingress
spec:
ingress:
- from:
- namespaceSelector:
matchLabels:
network.openshift.io/policy-group: ingress
podSelector: {}
policyTypes:
- Ingress
- apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-kube-apiserver-operator
spec:
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: openshift-kube-apiserver-operator
podSelector:
matchLabels:
app: kube-apiserver-operator
policyTypes:
- Ingress
...
Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands:
Create a new project:
$ oc new-project <project> (1)
1 Replace <project> with the name for the project you are creating.
Confirm that the network policy objects in the new project template exist in the new project:
$ oc get networkpolicy
NAME POD-SELECTOR AGE
allow-from-openshift-ingress <none> 7s
allow-from-same-namespace <none> 7s
Optimizing routing
The OKD HAProxy router can be scaled or configured to optimize performance.
Baseline Ingress Controller (router) performance
The OKD Ingress Controller, or router, is the ingress point for ingress traffic for applications and services that are configured using routes and ingresses.
When evaluating a single HAProxy router performance in terms of HTTP requests handled per second, the performance varies depending on many factors. In particular:
HTTP keep-alive/close mode
Route type
TLS session resumption client support
Number of concurrent connections per target route
Number of target routes
Back end server page size
Underlying infrastructure (network/SDN solution, CPU, and so on)
While performance in your specific environment will vary, Red Hat lab tests used a public cloud instance of size 4 vCPU/16GB RAM. A single HAProxy router handling 100 routes terminated by backends serving 1 kB static pages is able to handle the following number of transactions per second.
In HTTP keep-alive mode scenarios:
Encryption | LoadBalancerService | HostNetwork |
---|---|---|
none | 21515 | 29622 |
edge | 16743 | 22913 |
passthrough | 36786 | 53295 |
re-encrypt | 21583 | 25198 |
In HTTP close (no keep-alive) scenarios:
Encryption | LoadBalancerService | HostNetwork |
---|---|---|
none | 5719 | 8273 |
edge | 2729 | 4069 |
passthrough | 4121 | 5344 |
re-encrypt | 2320 | 2941 |
The default Ingress Controller configuration was used with the spec.tuningOptions.threadCount field set to 4. Two different endpoint publishing strategies were tested: Load Balancer Service and Host Network. TLS session resumption was used for encrypted routes. With HTTP keep-alive, a single HAProxy router is capable of saturating a 1 Gbit NIC at page sizes as small as 8 kB.
When running on bare metal with modern processors, you can expect roughly twice the performance of the public cloud instance above. This overhead is introduced by the virtualization layer in place on public clouds and holds mostly true for private cloud-based virtualization as well. The following table is a guide to how many applications to use behind the router:
Number of applications | Application type |
---|---|
5-10 | static file/web server or caching proxy |
100-1000 | applications generating dynamic content |
In general, HAProxy can support routes for up to 1000 applications, depending on the technology in use. Ingress Controller performance might be limited by the capabilities and performance of the applications behind it, such as language or static versus dynamic content.
Ingress, or router, sharding should be used to serve more routes towards applications and help horizontally scale the routing tier.
Configuring Ingress Controller liveness, readiness, and startup probes
Cluster administrators can configure the timeout values for the kubelet’s liveness, readiness, and startup probes for router deployments that are managed by the OKD Ingress Controller (router). The liveness and readiness probes of the router use the default timeout value of 1 second, which is too brief when networking or runtime performance is severely degraded. Probe timeouts can cause unwanted router restarts that interrupt application connections. The ability to set larger timeout values can reduce the risk of unnecessary and unwanted restarts.
You can update the timeoutSeconds value on the livenessProbe, readinessProbe, and startupProbe parameters of the router container.
Parameter | Description |
---|---|
livenessProbe.timeoutSeconds | The timeout value, in seconds, for the liveness probe of the router container. |
readinessProbe.timeoutSeconds | The timeout value, in seconds, for the readiness probe of the router container. |
startupProbe.timeoutSeconds | The timeout value, in seconds, for the startup probe of the router container. |
The timeout configuration option is an advanced tuning technique that can be used to work around issues. However, these issues should eventually be diagnosed and possibly a support case or Jira issue opened for any issue that causes probes to time out.
The following example demonstrates how you can directly patch the default router deployment to set a 5-second timeout for the liveness and readiness probes:
$ oc -n openshift-ingress patch deploy/router-default --type=strategic --patch='{"spec":{"template":{"spec":{"containers":[{"name":"router","livenessProbe":{"timeoutSeconds":5},"readinessProbe":{"timeoutSeconds":5}}]}}}}'
Verification
$ oc -n openshift-ingress describe deploy/router-default | grep -e Liveness: -e Readiness:
Liveness: http-get http://:1936/healthz delay=0s timeout=5s period=10s #success=1 #failure=3
Readiness: http-get http://:1936/healthz/ready delay=0s timeout=5s period=10s #success=1 #failure=3
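The same approach covers the startup probe. The following strategic-merge patch is a sketch that sets only the startup probe timeout to 5 seconds:
$ oc -n openshift-ingress patch deploy/router-default --type=strategic --patch='{"spec":{"template":{"spec":{"containers":[{"name":"router","startupProbe":{"timeoutSeconds":5}}]}}}}'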
Configuring HAProxy reload interval
When you update a route or an endpoint associated with a route, the OKD router updates the configuration for HAProxy. Then, HAProxy reloads the updated configuration for those changes to take effect. When HAProxy reloads, it generates a new process that handles new connections using the updated configuration.
HAProxy keeps the old process running to handle existing connections until those connections are all closed. When old processes have long-lived connections, these processes can accumulate and consume resources.
The default minimum HAProxy reload interval is five seconds. You can configure an Ingress Controller using its spec.tuningOptions.reloadInterval
field to set a longer minimum reload interval.
Setting a large value for the minimum HAProxy reload interval can cause latency in observing updates to routes and their endpoints. To lessen the risk, avoid setting a value larger than the tolerable latency for updates.
Procedure
Change the minimum HAProxy reload interval of the default Ingress Controller to 15 seconds by running the following command:
$ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"tuningOptions":{"reloadInterval":"15s"}}}'
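To confirm the new value, you can read it back from the Ingress Controller; for example:
$ oc -n openshift-ingress-operator get ingresscontrollers/default -o jsonpath='{.spec.tuningOptions.reloadInterval}'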
Postinstallation OpenStack network configuration
You can configure some aspects of an OKD on OpenStack cluster after installation.
Configuring application access with floating IP addresses
After you install OKD, configure OpenStack to allow application network traffic.
You do not need to perform this procedure if you provided values for the floating IP address and DNS parameters during installation.
Prerequisites
The OKD cluster is installed.
Floating IP addresses are enabled as described in the OKD on OpenStack installation documentation.
Procedure
After you install the OKD cluster, attach a floating IP address to the ingress port:
Show the port:
$ openstack port show <cluster_name>-<cluster_ID>-ingress-port
Attach the port to the IP address:
$ openstack floating ip set --port <ingress_port_ID> <apps_FIP>
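Optionally, confirm that the floating IP address is now associated with the ingress port. This check assumes that your OpenStack CLI supports filtering floating IPs by port:
$ openstack floating ip list --port <ingress_port_ID>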
Add a wildcard A record for *.apps. to your DNS file:
*.apps.<cluster_name>.<base_domain> IN A <apps_FIP>
If you do not control the DNS server but want to enable application access for non-production purposes, you can add these hostnames to /etc/hosts.
Kuryr ports pools
A Kuryr ports pool maintains a number of ports on standby for pod creation.
Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted.
The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OKD cluster nodes.
Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair.
Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior. An example manifest is sketched after this list.
The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false.
The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1.
The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted.
The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3.
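For reference, a minimal sketch of a cluster-network-03-config.yml manifest that sets these parameters follows. The values shown here are illustrative only, not recommendations:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: Kuryr
    kuryrConfig:
      enablePortPoolsPrepopulation: true
      poolMinPorts: 5
      poolBatchPorts: 10
      poolMaxPorts: 50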
Adjusting Kuryr ports pool settings in active deployments on OpenStack
You can use a custom resource (CR) to configure how Kuryr manages OpenStack Neutron ports to control the speed and efficiency of pod creation on a deployed cluster.
Procedure
From a command line, open the Cluster Network Operator (CNO) CR for editing:
$ oc edit networks.operator.openshift.io cluster
Edit the settings to meet your requirements. The following file is provided as an example:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
serviceNetwork:
- 172.30.0.0/16
defaultNetwork:
type: Kuryr
kuryrConfig:
enablePortPoolsPrepopulation: false (1)
poolMinPorts: 1 (2)
poolBatchPorts: 3 (3)
poolMaxPorts: 5 (4)
1 Set enablePortPoolsPrepopulation to true to make Kuryr create Neutron ports when the first pod that is configured to use the dedicated network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false.
2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts. The default value is 1.
3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts. The default value is 3.
4 If the number of free ports in a pool is higher than the value of poolMaxPorts, Kuryr deletes them until the number matches that value. Setting the value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0.
Save your changes and quit the text editor to commit your changes.
Modifying these options on a running cluster forces the kuryr-controller and kuryr-cni pods to restart. As a result, the creation of new pods and services will be delayed.
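After you save the changes, you can watch the Kuryr components restart and return to the Running state. This check assumes the default openshift-kuryr namespace:
$ oc get pods -n openshift-kuryr -w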
Enabling OVS hardware offloading
For clusters that run on OpenStack, you can enable Open vSwitch (OVS) hardware offloading.
OVS is a multi-layer virtual switch that enables large-scale, multi-server network virtualization.
Prerequisites
You installed a cluster on OpenStack that is configured for single-root input/output virtualization (SR-IOV).
You installed the SR-IOV Network Operator on your cluster.
You created two
hw-offload
type virtual function (VF) interfaces on your cluster.
Procedure
Create an SriovNetworkNodePolicy policy for the two hw-offload type VF interfaces that are on your cluster:
The first virtual function interface
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy (1)
metadata:
name: "hwoffload9"
namespace: openshift-sriov-network-operator
spec:
deviceType: netdevice
isRdma: true
nicSelector:
pfNames: (2)
- ens6
nodeSelector:
feature.node.kubernetes.io/network-sriov.capable: 'true'
numVfs: 1
priority: 99
resourceName: "hwoffload9"
1 Insert the SriovNetworkNodePolicy value here.
2 Both interfaces must include physical function (PF) names.
The second virtual function interface
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy (1)
metadata:
name: "hwoffload10"
namespace: openshift-sriov-network-operator
spec:
deviceType: netdevice
isRdma: true
nicSelector:
pfNames: (2)
- ens5
nodeSelector:
feature.node.kubernetes.io/network-sriov.capable: 'true'
numVfs: 1
priority: 99
resourceName: "hwoffload10"
1 Insert the SriovNetworkNodePolicy value here.
2 Both interfaces must include physical function (PF) names.
Create NetworkAttachmentDefinition resources for the two interfaces:
A NetworkAttachmentDefinition resource for the first interface
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
annotations:
k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9
name: hwoffload9
namespace: default
spec:
config: '{ "cniVersion":"0.3.1", "name":"hwoffload9","type":"host-device","device":"ens6"
}'
A NetworkAttachmentDefinition resource for the second interface
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
annotations:
k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10
name: hwoffload10
namespace: default
spec:
config: '{ "cniVersion":"0.3.1", "name":"hwoffload10","type":"host-device","device":"ens5"
}'
Use the interfaces that you created with a pod. For example:
A pod that uses the two OVS offload interfaces
apiVersion: v1
kind: Pod
metadata:
name: dpdk-testpmd
namespace: default
annotations:
irq-load-balancing.crio.io: disable
cpu-quota.crio.io: disable
    k8s.v1.cni.cncf.io/networks: hwoffload9,hwoffload10
spec:
restartPolicy: Never
containers:
- name: dpdk-testpmd
image: quay.io/krister/centos8_nfv-container-dpdk-testpmd:latest
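To check that both offload interfaces were attached, you can inspect the annotations that Multus adds to the running pod, which include the network status for each attachment; for example:
$ oc -n default get pod dpdk-testpmd -o jsonpath='{.metadata.annotations}'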
Attaching an OVS hardware offloading network
You can attach an Open vSwitch (OVS) hardware offloading network to your cluster.
Prerequisites
Your cluster is installed and running.
You provisioned an OVS hardware offloading network on OpenStack to use with your cluster.
Procedure
Create a file named network.yaml from the following template:
spec:
additionalNetworks:
- name: hwoffload1
namespace: cnf
rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "hwoffload1", "type": "host-device","pciBusId": "0000:00:05.0", "ipam": {}}' (1)
type: Raw
where:
pciBusId
Specifies the device that is connected to the offloading network. If you do not know this value, you can find it by running the following command:
$ oc describe SriovNetworkNodeState -n openshift-sriov-network-operator
From a command line, enter the following command to patch your cluster with the file:
$ oc apply -f network.yaml
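To confirm that the additional network was created, you can list the network attachment definitions in the target namespace; for example:
$ oc get network-attachment-definitions -n cnf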
Enabling IPv6 connectivity to pods on OpenStack
To enable IPv6 connectivity between pods that have additional networks that are on different nodes, disable port security for the IPv6 port of the server. Disabling port security obviates the need to create allowed address pairs for each IPv6 address that is assigned to pods and enables traffic on the security group.
Only certain IPv6 additional network configurations are supported, such as the SLAAC address mode with MACVLAN that is shown in the following sections.
Procedure
On a command line, enter the following command:
$ openstack port set --no-security-group --disable-port-security <compute_ipv6_port>
This command removes security groups from the port and disables port security. Traffic restrictions are removed entirely from the port.
where:
<compute_ipv6_port>
Specifies the IPv6 port of the compute server.
Adding IPv6 connectivity to pods on OpenStack
After you enable IPv6 connectivity in pods, add connectivity to them by using a Container Network Interface (CNI) configuration.
Procedure
To edit the Cluster Network Operator (CNO), enter the following command:
$ oc edit networks.operator.openshift.io cluster
Specify your CNI configuration under the spec field. For example, the following configuration uses a SLAAC address mode with MACVLAN:
...
spec:
additionalNetworks:
- name: ipv6
namespace: ipv6 (1)
rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "ipv6", "type": "macvlan", "master": "ens4"}' (2)
type: Raw
1 Be sure to create pods in the same namespace.
2 The interface in the network attachment "master" field can differ from "ens4" when more networks are configured or when a different kernel driver is used.
If you are using stateful address mode, include the IP Address Management (IPAM) in the CNI configuration.
DHCPv6 is not supported by Multus.
Save your changes and quit the text editor to commit your changes.
Verification
On a command line, enter the following command:
$ oc get network-attachment-definitions -A
Example output
NAMESPACE NAME AGE
ipv6 ipv6 21h
You can now create pods that have secondary IPv6 connections.
Create pods that have IPv6 connectivity on OpenStack
After you enable IPv6 connectivity for pods and add it to them, create pods that have secondary IPv6 connections.
Procedure
Define pods that use your IPv6 namespace and the annotation k8s.v1.cni.cncf.io/networks: <additional_network_name>, where <additional_network_name> is the name of the additional network. For example, as part of a Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-openshift
namespace: ipv6
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
      annotations:
        k8s.v1.cni.cncf.io/networks: ipv6
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - hello-openshift
            topologyKey: kubernetes.io/hostname
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
containers:
- name: hello-openshift
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
image: quay.io/openshift/origin-hello-openshift
ports:
- containerPort: 8080
Create the pod. For example, on a command line, enter the following command:
$ oc create -f <ipv6_enabled_resource>
where:
<ipv6_enabled_resource>
Specifies the file that contains your resource definition.
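After the pods are running, you can confirm that the secondary IPv6 network was attached by inspecting the annotations that Multus adds to a pod, which include the network status and assigned addresses for each attachment. A sketch, assuming the Deployment above in the ipv6 namespace:
$ oc -n ipv6 get pods -o jsonpath='{.items[0].metadata.annotations}'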