Encrypting traffic between nodes with IPsec
Overview
IPsec protects traffic in an OKD cluster by encrypting the communication between all master and node hosts that communicate using the Internet Protocol (IP).
This topic shows how to secure communication of an entire IP subnet from which the OKD hosts receive their IP addresses, including all cluster management and pod data traffic.
Because OKD management traffic uses HTTPS, enabling IPsec encrypts management traffic a second time.
This procedure must be repeated on each master host, then each node host, in your cluster. Hosts that do not have IPsec enabled will not be able to communicate with a host that does.
Encrypting hosts
Prerequisites
Ensure that libreswan 3.15 or later is installed on cluster hosts. If opportunistic group functionality is required, libreswan version 3.19 or later is needed.
See the Configure the pod network on nodes section for information on how to configure the MTU to allow space for the IPsec header. The IPsec configuration described in this topic requires 62 bytes of overhead. If the cluster is operating on an Ethernet network with an MTU of 1500, then the SDN MTU must be 1388 to allow for the overhead of IPsec and the SDN encapsulation.
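As a point of reference, the following is a minimal sketch of the MTU setting, assuming the default node configuration file /etc/origin/node/node-config.yaml and the ovs-subnet plug-in (your file path and plug-in name may differ):
networkConfig:
   mtu: 1388
   networkPluginName: redhat/openshift-ovs-subnet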
After modifying the MTU in the OKD configuration, the SDN must be made aware of the change by removing the SDN interface and restarting the SDN and OVS pods on all nodes.
Remove the SDN interface:
# oc exec <ovs_pod_name> -- ovs-vsctl del-br br0
Restart the SDN and OVS pods:
# oc delete pod -n openshift-sdn -l=app=ovs
# oc delete pod -n openshift-sdn -l=app=sdn
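The deleted pods are recreated automatically. To confirm that new SDN and OVS pods are running on every node before continuing, you can list them, for example:
# oc get pods -n openshift-sdn -o wide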
Configuring certificates for IPsec
By default, OKD secures cluster management communication with mutually authenticated HTTPS communication. This means that both the client (for example, an OKD node) and the server (for example, an OKD api-server) send each other their certificates, which are checked against a known certificate authority (CA). These certificates are generated at cluster setup time and typically live on each host, and they can also be used to secure pod communications with IPsec.
This procedure assumes you have the following on each host:
Cluster CA file
Host client certificate file
Host private key file
Determine what the certificate’s nickname will be after it has been imported into the libreswan certificate database. The nickname is taken directly from the certificate’s subject’s Common Name (CN):
# openssl x509 \
-in /path/to/client-certificate -subject -noout | \
sed -n 's/.*CN=\(.*\)/\1/p'
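For example, for a hypothetical node certificate whose subject is O=system:nodes, CN=openshift-node-45.example.com (the same example host name used later in this topic), the command above would print:
openshift-node-45.example.com
That value is the <cert_nickname> used in the rest of this procedure.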
Use openssl to combine the client certificate, CA certificate, and private key files into a PKCS#12 file, which is a common file format for multiple certificates and keys:
# openssl pkcs12 -export \
-in /path/to/client-certificate \
-inkey /path/to/private-key \
-certfile /path/to/certificate-authority \
-passout pass: \
-out certs.p12
Import the PKCS#12 file into the libreswan certificate database. The -W option is left empty because no password is assigned to the PKCS#12 file, as it is only temporary:
# ipsec initnss
# pk12util -i certs.p12 -d sql:/etc/ipsec.d -W ""
# rm certs.p12
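You can optionally verify that the certificate was imported under the expected nickname by listing the contents of the libreswan NSS database:
# certutil -L -d sql:/etc/ipsec.d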
Creating the libreswan IPsec policy
After ensuring that the necessary certificates are imported into the libreswan certificate database, create a policy that uses them to secure communication between hosts in your cluster.
If you are using libreswan 3.19 or later, then opportunistic group configuration is recommended. Otherwise, explicit connections are required.
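To check which libreswan version is installed on a host, you can query the package, for example:
# rpm -q libreswan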
Configuring the opportunistic group
The following configuration creates two libreswan connections. The first encrypts traffic using the OKD certificates, while the second creates exceptions to the encryption for cluster-external traffic.
Place the following into the /etc/ipsec.d/openshift-cluster.conf file:
conn private
left=%defaultroute
leftid=%fromcert
# our certificate
leftcert="NSS Certificate DB:<cert_nickname>" (1)
right=%opportunisticgroup
rightid=%fromcert
# their certificate transmitted via IKE
rightca=%same
ikev2=insist
authby=rsasig
failureshunt=drop
negotiationshunt=hold
auto=ondemand
conn clear
left=%defaultroute
right=%group
authby=never
type=passthrough
auto=route
priority=100
1 Replace <cert_nickname> with the certificate nickname from step one.
Tell libreswan which IP subnets and hosts to apply each policy to by using policy files in /etc/ipsec.d/policies/, where each configured connection has a corresponding policy file. So, in the example above, the two connections, private and clear, each have a file in /etc/ipsec.d/policies/.
/etc/ipsec.d/policies/private must contain the IP subnet of your cluster, from which your hosts receive their IP addresses. By default, this causes all communication between hosts in the cluster subnet to be encrypted if the remote host's client certificate authenticates against the local host's certificate authority (CA) certificate. If the remote host's certificate does not authenticate, all traffic between the two hosts is blocked.
For example, if all hosts are configured to use addresses in the 172.16.0.0/16 address space, your private policy file would contain 172.16.0.0/16. Any number of additional subnets to encrypt may be added to this file, which results in all traffic to those subnets using IPsec as well.
Leave communication between all hosts and the subnet gateway unencrypted to ensure that traffic can enter and exit the cluster. Add the gateway to the /etc/ipsec.d/policies/clear file:
172.16.0.1/32
Additional hosts and subnets may be added to this file, which results in all traffic to these hosts and subnets being unencrypted.
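As a sketch, assuming the example 172.16.0.0/16 cluster subnet and 172.16.0.1 gateway used above, the two policy files could be populated as follows:
# echo "172.16.0.0/16" >> /etc/ipsec.d/policies/private
# echo "172.16.0.1/32" >> /etc/ipsec.d/policies/clear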
Configuring the explicit connection
In this configuration, each IPsec node configuration must explicitly list the configuration of every other node in the cluster. Using a configuration management tool such as Ansible to generate this file on each host is recommended.
Do not manually edit the |
This configuration also requires the full certificate subject of each node to be placed into the configuration for every other node.
Use openssl to read this subject from the node’s certificate:
# openssl x509 \
-in /path/to/client-certificate -text | \
grep "Subject:" | \
sed 's/[[:blank:]]*Subject: //'
Place the following lines into the /etc/ipsec.d/openshift-cluster.conf file on each node for every other node in the cluster:
conn <other_node_hostname>
left=<this_node_ip> (1)
leftid="CN=<this_node_cert_nickname>" (2)
leftrsasigkey=%cert
leftcert=<this_node_cert_nickname> (2)
right=<other_node_ip> (3)
rightid="<other_node_cert_full_subject>" (4)
rightrsasigkey=%cert
auto=start
keyingtries=%forever
1 Replace <this_node_ip> with the cluster IP address of this node.
2 Replace <this_node_cert_nickname> with the node certificate nickname from step one.
3 Replace <other_node_ip> with the cluster IP address of the other node.
4 Replace <other_node_cert_full_subject> with the other node's certificate subject from just above. For example: "O=system:nodes,CN=openshift-node-45.example.com".
Place the following in the /etc/ipsec.d/openshift-cluster.secrets file on each node:
: RSA "<this_node_cert_nickname>" (1)
1 Replace <this_node_cert_nickname> with the node certificate nickname from step one.
Configuring the IPsec firewall
All nodes within the cluster need to allow IPsec-related network traffic. This includes IP protocol number 50 (ESP), IP protocol number 51 (AH), and UDP port 500 (IKE).
For example, if the cluster nodes communicate over interface eth0:
-A OS_FIREWALL_ALLOW -i eth0 -p 50 -j ACCEPT
-A OS_FIREWALL_ALLOW -i eth0 -p 51 -j ACCEPT
-A OS_FIREWALL_ALLOW -i eth0 -p udp --dport 500 -j ACCEPT
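One way to make these rules persistent, assuming the host firewall is managed by the iptables service as on a default OKD host, is to add them to the OS_FIREWALL_ALLOW section of /etc/sysconfig/iptables and then reload the rules:
# systemctl restart iptables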
IPsec also uses UDP port 4500 for NAT traversal, though this should not apply to normal cluster deployments.
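If any of your hosts do sit behind NAT, a corresponding rule for the eth0 interface assumed above would be:
-A OS_FIREWALL_ALLOW -i eth0 -p udp --dport 4500 -j ACCEPT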
Starting and enabling IPsec
Start the ipsec service to load the new configuration and policies, and begin encrypting:
# systemctl start ipsec
Enable the ipsec service to start on boot:
# systemctl enable ipsec
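You can confirm that the service started without errors by checking its status:
# systemctl status ipsec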
Optimizing IPsec
See the Scaling and Performance Guide for performance suggestions when encrypting with IPsec.
Troubleshooting
When authentication cannot be completed between two hosts, you will not be able to ping between them, because all IP traffic will be rejected. If the clear policy is not configured correctly, you will also not be able to SSH to the host from another host in the cluster.
You can use the ipsec status command to check that the clear and private policies have been loaded.
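To see which tunnels are established and passing traffic, libreswan can also report per-connection traffic counters, for example:
# ipsec whack --trafficstatus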