- Installing a cluster on AWS with network customizations
- Prerequisites
- Generating a key pair for cluster node SSH access
- Obtaining the installation program
- Network configuration phases
- Creating the installation configuration file
- Cluster Network Operator configuration
- Specifying advanced network configuration
- Configuring an Ingress Controller Network Load Balancer on a new AWS cluster
- Configuring hybrid networking with OVN-Kubernetes
- Deploying the cluster
- Installing the OpenShift CLI by downloading the binary
- Logging in to the cluster by using the CLI
- Logging in to the cluster by using the web console
- Next steps
Installing a cluster on AWS with network customizations
In OKD version 4.12, you can install a cluster on Amazon Web Services (AWS) with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.
You must set most of the network configuration parameters during installation, and you can modify only kubeProxy
configuration parameters in a running cluster.
Prerequisites
You reviewed details about the OKD installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.
If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the
kube-system
namespace, you can manually create and maintain IAM credentials.
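If you choose to manage the credentials manually, a minimal sketch of the relevant install-config.yaml setting looks like the following (the surrounding fields are omitted here; the credentialsMode field and the full file are described later in this document):
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual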
Generating a key pair for cluster node SSH access
During an OKD installation, you can provide an SSH public key to the installation program. The key is passed to the Fedora CoreOS (FCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the FCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
If you plan to install an OKD cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> (1)
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OKD, provide the SSH public key to the installation program.
Obtaining the installation program
Before you install OKD, download the installation file on the host you are using for installation.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
Download the installation program from https://github.com/openshift/okd/releases.
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OKD uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.
Using a pull secret from the Red Hat OpenShift Cluster Manager is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}} as the pull secret when prompted during the installation.
If you do not use the pull secret from the Red Hat OpenShift Cluster Manager:
Red Hat Operators are not available.
The Telemetry and Insights operators do not send data to Red Hat.
Content from the Red Hat Container Catalog registry, such as image streams and Operators, is not available.
Network configuration phases
There are two phases prior to OKD installation where you can customize the network configuration.
Phase 1
You can customize the following network-related fields in the install-config.yaml
file before you create the manifest files:
networking.networkType
networking.clusterNetwork
networking.serviceNetwork
networking.machineNetwork
For more information on these fields, refer to Installation configuration parameters.
Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
The CIDR range 172.17.0.0/16 is reserved by libvirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster.
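For example, a phase 1 networking customization in the install-config.yaml file might look like the following sketch (the address ranges shown are illustrative only):
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16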
Phase 2
After creating the manifest files by running openshift-install create manifests
, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration.
You cannot override the values specified in phase 1 in the install-config.yaml
file during phase 2. However, you can further customize the network plugin during phase 2.
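For example, a phase 2 manifest that overrides only the overlay MTU might look like the following sketch (the MTU value is illustrative only; the full procedure appears in “Specifying advanced network configuration”):
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      mtu: 1400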
Creating the installation configuration file
You can customize the OKD cluster you install on Amazon Web Services (AWS).
Prerequisites
Obtain the OKD installation program and the pull secret for your cluster.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> (1)
1 For <installation_directory>, specify the directory name to store the files that the installation program creates.
When specifying the directory:
Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your
ssh-agent
process uses.Select AWS as the platform to target.
If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
Select the AWS region to deploy the cluster to.
Select the base domain for the Route 53 service that you configured for your cluster.
Enter a descriptive name for your cluster.
Paste the pull secret from the Red Hat OpenShift Cluster Manager. This field is optional.
Modify the install-config.yaml file. You can find more information about the available parameters in the “Installation configuration parameters” section.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
Installation configuration parameters
Before you deploy an OKD cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml
installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml
file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
Required configuration parameters
Required installation configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OKD cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values. | A fully-qualified domain or subdomain name, such as example.com. |
metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
platform | The configuration for the specific platform upon which to perform the installation, for example aws. | Object |
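Taken together, a minimal install-config.yaml sketch that supplies only these required parameters might look like the following (all values are illustrative):
apiVersion: v1
baseDomain: example.com
metadata:
  name: dev
platform:
  aws:
    region: us-west-2
pullSecret: '{"auths": ...}'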
Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster.
Parameter | Description | Values | ||
---|---|---|---|---|
| The configuration for the cluster network. | Object
| ||
| The Red Hat OpenShift Networking network plugin to install. | Either | ||
| The IP address blocks for pods. The default value is If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example:
| ||
| Required if you use An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between | ||
| The subnet prefix length to assign to each individual node. For example, if | A subnet prefix. The default value is | ||
| The IP address block for services. The default value is The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example:
| ||
| The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example:
| ||
| Required if you use | An IP network block in CIDR notation. For example,
|
Optional configuration parameters
Optional installation configuration parameters are described in the following table:
Parameter | Description | Values | ||
---|---|---|---|---|
| A PEM-encoded X.509 certificate bundle that is added to the nodes’ trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String | ||
| Controls the installation of optional core cluster components. You can reduce the footprint of your OKD cluster by disabling optional components. For more information, see the “Cluster capabilities” page in Installing. | String array | ||
| Selects an initial set of optional capabilities to enable. Valid values are | String | ||
| Extends the set of optional capabilities beyond what you specify in | String array | ||
| The configuration for the machines that comprise the compute nodes. | Array of | ||
| Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are | String | ||
| Whether to enable or disable simultaneous multithreading, or
|
| ||
| Required if you use |
| ||
| Required if you use |
| ||
| The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to | ||
| Enables the cluster for a feature set. A feature set is a collection of OKD features that are not enabled by default. For more information about enabling a feature set during installation, see “Enabling features using feature gates”. | String. The name of the feature set to enable, such as | ||
| The configuration for the machines that comprise the control plane. | Array of | ||
| Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are | String | ||
| Whether to enable or disable simultaneous multithreading, or
|
| ||
| Required if you use |
| ||
| Required if you use |
| ||
| The number of control plane machines to provision. | The only supported value is | ||
| The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.
|
| ||
| Sources and repositories for the release-image content. | Array of objects. Includes a | ||
| Required if you use | String | ||
| Specify one or more repositories that may also contain the same images. | Array of strings | ||
| How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. |
| ||
| The SSH key or keys to authenticate access your cluster machines.
| One or more keys. For example:
|
Optional AWS configuration parameters
Optional AWS configuration parameters are described in the following table:
Parameter | Description | Values | ||
---|---|---|---|---|
| The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom FCOS AMI. | Any published or custom FCOS AMI that belongs to the set AWS region. See FCOS AMIs for AWS infrastructure for available AMI IDs. | ||
| A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. | ||
| The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. | Integer, for example | ||
| The size in GiB of the root volume. | Integer, for example | ||
| The type of the root volume. | Valid AWS EBS volume type, such as | ||
| The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of worker nodes with a specific KMS key. | Valid key ID or the key ARN. | ||
| The EC2 instance type for the compute machines. | Valid AWS instance type, such as | ||
| The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as | ||
| The AWS region that the installation program creates compute resources in. | Any valid AWS region, such as
| ||
| The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom FCOS AMI. | Any published or custom FCOS AMI that belongs to the set AWS region. See FCOS AMIs for AWS infrastructure for available AMI IDs. | ||
| A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. | ||
| The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of control plane nodes with a specific KMS key. | Valid key ID and the key ARN. | ||
| The EC2 instance type for the control plane machines. | Valid AWS instance type, such as | ||
| The availability zones where the installation program creates machines for the control plane machine pool. | A list of valid AWS availability zones, such as | ||
| The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as | ||
| The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom FCOS AMI. | Any published or custom FCOS AMI that belongs to the set AWS region. See FCOS AMIs for AWS infrastructure for available AMI IDs. | ||
| An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. | String, for example | ||
| The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name. | ||
| The AWS service endpoint URL. The URL must use the | Valid AWS service endpoint URL. | ||
| A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the
| ||
| A flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. | Boolean values, for example | ||
| If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same | Valid subnet IDs. |
Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS [2] |
---|---|---|---|---|---|
Bootstrap | FCOS | 4 | 16 GB | 100 GB | 300 |
Control plane | FCOS | 4 | 16 GB | 100 GB | 300 |
Compute | FCOS | 2 | 8 GB | 100 GB | 300 |
One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
OKD and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which requires a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
As with all user-provisioned installations, if you choose to use Fedora compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of Fedora 7 compute machines is deprecated and has been removed in OKD 4.10 and later.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OKD.
Tested instance types for AWS
The following Amazon Web Services (AWS) instance types have been tested with OKD.
Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in “Minimum resource requirements for cluster installation”.
Machine types based on x86_64 architecture
c4.*
c5.*
c5a.*
i3.*
m4.*
m5.*
m5a.*
m6i.*
r4.*
r5.*
r5a.*
r6i.*
t3.*
t3a.*
Tested instance types for AWS ARM
The following Amazon Web Services (AWS) ARM instance types have been tested with OKD.
Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in “Minimum resource requirements for cluster installation”.
Machine types based on arm64 architecture
c6g.*
m6g.*
Sample customized install-config.yaml file for AWS
You can customize the installation configuration file (install-config.yaml
) to specify more details about your OKD cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com (1)
credentialsMode: Mint (2)
controlPlane: (3) (4)
  hyperthreading: Enabled (5)
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 (6)
      metadataService:
        authentication: Optional (7)
      type: m6i.xlarge
  replicas: 3
compute: (3)
- hyperthreading: Enabled (5)
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 (6)
      metadataService:
        authentication: Optional (7)
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster (1)
networking: (3)
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes (8)
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 (1)
    propagateUserTags: true (3)
    userTags:
      adminContact: jdoe
      costCenter: 7536
    amiID: ami-96c6f8f7 (9)
    serviceEndpoints: (10)
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
sshKey: ssh-ed25519 AAAA... (11)
pullSecret: '{"auths": ...}' (1)
1 Required. The installation program prompts you for this value.
2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
3 If you do not provide these parameters and values, the installation program provides the default value.
4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
5 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines’ cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
6 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
7 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required. To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional. If no value is specified, both IMDSv1 and IMDSv2 are allowed.
8 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
9 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
10 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.
11 You can optionally provide the sshKey value that you use to access the machines in your cluster.
Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OKD cluster to use a proxy by configuring the proxy settings in the install-config.yaml
file.
Prerequisites
You have an existing install-config.yaml file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and OpenStack, the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
  httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
  noProxy: ec2.<region>.amazonaws.com,elasticloadbalancing.<region>.amazonaws.com,s3.<region>.amazonaws.com (3)
additionalTrustBundle: | (4)
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> (5)
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field.
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Fedora CoreOS (FCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the FCOS trust bundle.
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
The installation program does not support the proxy readinessEndpoints field.
Save the file and reference it when installing OKD.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy settings in the provided install-config.yaml
file. If no proxy settings are provided, a cluster
Proxy
object is still created, but it will have a nil spec
.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
Cluster Network Operator configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster
. The CR specifies the fields for the Network
API in the operator.openshift.io
API group.
The CNO configuration inherits the following fields during cluster installation from the Network
API in the Network.config.openshift.io
API group and these fields cannot be changed:
clusterNetwork
IP address pools from which pod IP addresses are allocated.
serviceNetwork
IP address pool for services.
defaultNetwork.type
Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes.
You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork
object in the CNO object named cluster
.
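For example, a CNO configuration object that sets only the inherited fields and the default network plugin might look like the following sketch (the address ranges shown are the defaults used elsewhere in this document):
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes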
Cluster Network Operator configuration object
The fields for the Cluster Network Operator (CNO) are described in the following table:
Field | Type | Description |
---|---|---|
|
| The name of the CNO object. This name is always |
|
| A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example:
You can customize this field only in the |
|
| A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example:
You can customize this field only in the |
|
| Configures the network plugin for the cluster network. |
|
| The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. |
defaultNetwork object configuration
The values for the defaultNetwork
object are defined in the following table:
Field | Type | Description | ||
---|---|---|---|---|
|
| Either
| ||
|
| This object is only valid for the OpenShift SDN network plugin. | ||
|
| This object is only valid for the OVN-Kubernetes network plugin. |
Configuration for the OpenShift SDN network plugin
The following table describes the configuration fields for the OpenShift SDN network plugin:
Field | Type | Description |
---|---|---|
|
| Configures the network isolation mode for OpenShift SDN. The default value is The values |
|
| The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to This value cannot be changed after cluster installation. |
|
| The port to use for all VXLAN packets. The default value is If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port |
Example OpenShift SDN configuration
defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789
Configuration for the OVN-Kubernetes network plugin
The following table describes the configuration fields for the OVN-Kubernetes network plugin:
Field | Type | Description | ||
---|---|---|---|---|
|
| The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to | ||
|
| The port to use for all Geneve packets. The default value is | ||
|
| Specify an empty object to enable IPsec encryption. | ||
|
| Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. | ||
|
| Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway.
| ||
| If your existing network infrastructure overlaps with the For example, if the This field cannot be changed after installation. | The default value is | ||
| If your existing network infrastructure overlaps with the This field cannot be changed after installation. | The default value is |
Field | Type | Description |
---|---|---|
| integer | The maximum number of messages to generate every second per node. The default value is |
| integer | The maximum size for the audit log in bytes. The default value is |
| string | One of the following additional audit log targets:
|
| string | The syslog facility, such as |
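For example, a sketch of a policy audit logging customization that uses the fields described in the preceding table (the values shown are illustrative defaults):
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    policyAuditConfig:
      destination: "null"
      maxFileSize: 50
      rateLimit: 20
      syslogFacility: local0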
Field | Type | Description |
---|---|---|
|
| Set this field to This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to |
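For example, a sketch that sends egress traffic through the host networking stack by using the routingViaHost field described in the preceding table:
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    gatewayConfig:
      routingViaHost: true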
Example OVN-Kubernetes configuration with IPsec enabled
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig: {}
kubeProxyConfig object configuration
The values for the kubeProxyConfig
object are defined in the following table:
Field | Type | Description | ||
---|---|---|---|---|
|
| The refresh period for
| ||
|
| The minimum duration before refreshing
|
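For example, a sketch of a kube-proxy customization, assuming the iptablesSyncPeriod and proxyArguments fields that correspond to the settings in the preceding table (the durations shown are illustrative only):
kubeProxyConfig:
  iptablesSyncPeriod: 30s
  proxyArguments:
    iptables-min-sync-period:
    - 0s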
Specifying advanced network configuration
You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster.
Customizing your network configuration by modifying the OKD manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.
Prerequisites
- You have created the
install-config.yaml
file and completed any modifications to it.
Procedure
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory> (1)
1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples:
Specify a different VXLAN port for the OpenShift SDN network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    openshiftSDNConfig:
      vxlanPort: 4800
Enable IPsec for the OVN-Kubernetes network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig: {}
Optional: Back up the
manifests/cluster-network-03-config.yml
file. The installation program consumes themanifests/
directory when you create the Ignition config files.
For more information on using a Network Load Balancer (NLB) on AWS, see Configuring Ingress cluster traffic on AWS using a Network Load Balancer.
Configuring an Ingress Controller Network Load Balancer on a new AWS cluster
You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster.
Prerequisites
- Create the
install-config.yaml
file and complete any modifications to it.
Procedure
Create an Ingress Controller backed by an AWS NLB on a new cluster.
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory> (1)
1 For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.
Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory:
$ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml (1)
1 For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.
After creating the file, several network configuration files are in the manifests/ directory, as shown:
$ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml
Example output
cluster-ingress-default-ingresscontroller.yaml
Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: null
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: NLB
    type: LoadBalancerService
Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor.
Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster.
Configuring hybrid networking with OVN-Kubernetes
You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster.
You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process.
Prerequisites
- You defined
OVNKubernetes
for thenetworking.networkType
parameter in theinstall-config.yaml
file. See the installation documentation for configuring OKD network customizations on your chosen cloud provider for more information.
Procedure
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory>
where:
<installation_directory>
Specifies the name of the directory that contains the install-config.yaml file for your cluster.
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
$ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
EOF
where:
<installation_directory>
Specifies the directory name that contains the manifests/ directory for your cluster.
Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example:
Specify a hybrid networking configuration
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork: (1)
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        hybridOverlayVXLANPort: 9898 (2)
1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR.
2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken.
Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port.
Save the cluster-network-03-config.yml file and quit the text editor.
Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.
Deploying the cluster
You can install OKD on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
Configure an account with the cloud platform that hosts your cluster.
Obtain the OKD installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn, debug, or error instead of info.
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.
The elevated permissions provided by the AdministratorAccess policy are required only during installation.
Verification
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
Credential information also outputs to <installation_directory>/.openshift_install.log.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc
) to interact with OKD from a command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OKD 4.12. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz.
Unpack the archive:
$ tar xvzf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.zip.
Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc
command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz.
Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file. The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OKD installation.
Prerequisites
You deployed an OKD cluster.
You installed the
oc
CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
Logging in to the cluster by using the web console
The kubeadmin
user exists by default after an OKD installation. You can log in to your cluster as the kubeadmin
user by using the OKD web console.
Prerequisites
You have access to the installation host.
You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the
kubeadmin
user from thekubeadmin-password
file on the installation host:$ cat <installation_directory>/auth/kubeadmin-password
Alternatively, you can obtain the
kubeadmin
password from the<installation_directory>/.openshift_install.log
log file on the installation host.List the OKD web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Alternatively, you can obtain the OKD route from the
<installation_directory>/.openshift_install.log
log file on the installation host.Example output
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the
kubeadmin
user.
Additional resources
- See Accessing the web console for more details about accessing and understanding the OKD web console.
Additional resources
- See About remote health monitoring for more information about the Telemetry service.
Next steps
If necessary, you can opt out of remote health reporting.
If necessary, you can remove cloud provider credentials.