- Installing a cluster on VMC with network customizations
- Setting up VMC for vSphere
- vSphere prerequisites
- VMware vSphere infrastructure requirements
- vCenter requirements
- Generating an SSH private key and adding it to the agent
- Obtaining the installation program
- Adding vCenter root CA certificates to your system trust
- Creating the installation configuration file
- Network configuration phases
- Specifying advanced network configuration
- Cluster Network Operator configuration
- Deploying the cluster
- Installing the OpenShift CLI by downloading the binary
- Logging in to the cluster by using the CLI
- Creating registry storage
- Backing up VMware vSphere volumes
- Next steps
Installing a cluster on VMC with network customizations
In OKD version 4.7, you can install a cluster on your VMware vSphere instance using installer-provisioned infrastructure with customized network configuration options by deploying it to VMware Cloud (VMC) on AWS.
Once you configure your VMC environment for OKD deployment, you use the OKD installation program from the bastion management host that is co-located in the VMC environment. The installation program and control plane automate the process of deploying and managing the resources needed for the OKD cluster.
By customizing your OKD network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.
Setting up VMC for vSphere
You can install OKD on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud.
You must configure several options in your VMC environment prior to installing OKD on VMware vSphere. Ensure your VMC environment has the following prerequisites:
Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OKD deployment.
Allocate two IP addresses, outside the DHCP range, and configure them with reverse DNS records.
A DNS record for api.<cluster_name>.<base_domain> pointing to the allocated IP address.
A DNS record for *.apps.<cluster_name>.<base_domain> pointing to the allocated IP address.
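For example, using the hypothetical cluster name vmc-prod-1 and base domain companyname.com mentioned later in this section, and placeholder IP addresses, the records might resemble the following BIND-style zone entries:
api.vmc-prod-1.companyname.com.    IN A 192.168.1.10
*.apps.vmc-prod-1.companyname.com. IN A 192.168.1.11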
Configure the following firewall rules:
An ANY:ANY firewall rule between the OKD compute network and the Internet. This is used by nodes and applications to download container images.
An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Fedora CoreOS (FCOS) OVA during deployment.
An HTTPS firewall rule between the OKD compute network and vCenter. This connection allows OKD to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources.
You must have the following information to deploy OKD:
The OKD cluster name, such as vmc-prod-1.
The base DNS name, such as companyname.com.
If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16, respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization.
The following vCenter information:
vCenter hostname, username, and password
Datacenter name, such as
SDDC-Datacenter
Cluster name, such as
Cluster-1
Network name
Datastore name, such as
WorkloadDatastore
It is recommended to move your vSphere cluster to the VMC
Compute-ResourcePool
resource pool after your cluster installation is finished.
A Linux-based host deployed to VMC as a bastion.
The bastion host can be Fedora or any other Linux-based host; it must have Internet connectivity and the ability to upload an OVA to the ESXi hosts.
Download and install the OpenShift CLI tools to the bastion host.
The openshift-install installation program
The OpenShift CLI (oc) tool
You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OKD. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OKD deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access to the OKD cluster and between the bastion host and the VMC vSphere hosts.
VMC Sizer tool
VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure that runs AWS native services. When a VMware Cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure.
To determine this, VMware provides the VMC on AWS Sizer. With this tool, you can define the resources you intend to host on VMC:
Types of workloads
Total number of virtual machines
Specification information such as:
Storage requirements
vCPUs
vRAM
Overcommit ratios
With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need.
vSphere prerequisites
Provision block registry storage. For more information on persistent storage, see Understanding persistent storage.
Review details about the OKD installation and update processes.
If you use a firewall, you must configure it to allow the sites that your cluster requires access to.
Be sure to also review this site list if you are configuring a proxy.
VMware vSphere infrastructure requirements
You must install the OKD cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use.
Component | Minimum supported versions | Description |
---|---|---|
Hypervisor | vSphere 6.5 and later with HW version 13 | This version is the minimum version that Fedora CoreOS (FCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list. |
Networking (NSX-T) | vSphere 6.5U3 or vSphere 6.7U2 and later | vSphere 6.5U3 or vSphere 6.7U2+ are required for OKD. VMware’s NSX Container Plug-in (NCP) is certified with OKD 4.6 and NSX-T 3.x+. |
Storage with in-tree drivers | vSphere 6.5 and later | This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OKD. |
If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OKD.
You must ensure that the time on your ESXi hosts is synchronized before you install OKD. See Edit Time Configuration for a Host in the VMware documentation.
Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539.
vCenter requirements
Before you install an OKD cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment.
Required vCenter account privileges
To install an OKD cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions.
If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OKD cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OKD cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges.
An additional role is required if the installation program is to create a vSphere virtual machine folder.
Roles and privileges required for installation
vSphere object for role | When required | Required privileges |
---|---|---|
vSphere vCenter | Always |
|
vSphere vCenter Cluster | Always |
|
vSphere Datastore | Always |
|
vSphere Port Group | Always |
|
Virtual Machine Folder | Always |
|
vSphere vCenter Datacenter | If the installation program creates the virtual machine folder |
|
Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propagate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder.
Required permissions and propagation settings
vSphere object | Folder type | Propagate to children | Permissions required |
---|---|---|---|
vSphere vCenter | Always | False | Listed required privileges |
vSphere vCenter Datacenter | Existing folder | False |
|
Installation program creates the folder | True | Listed required privileges | |
vSphere vCenter Cluster | Always | True | Listed required privileges |
vSphere vCenter Datastore | Always | False | Listed required privileges |
vSphere Switch | Always | False |
|
vSphere Port Group | Always | False | Listed required privileges |
vSphere vCenter Virtual Machine Folder | Existing folder | True | Listed required privileges |
For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation.
Using OKD with vMotion
OKD generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported.
If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes invalid references within OKD persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss.
Similarly, OKD does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs.
Cluster resources
When you deploy an OKD cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance.
A standard OKD installation creates the following vCenter resources:
1 Folder
1 Tag category
1 Tag
Virtual machines:
1 template
1 temporary bootstrap node
3 control plane nodes
3 compute machines
Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster.
If you deploy more compute machines, the OKD cluster will use more storage.
Cluster limits
Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks.
Networking requirements
You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. Additionally, you must create the following networking resources before you install the OKD cluster:
It is recommended that each OKD node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server, but asynchronous server clocks cause errors, which an NTP server prevents.
Required IP Addresses
An installer-provisioned vSphere installation requires two static IP addresses:
The API address is used to access the cluster API.
The Ingress address is used for cluster ingress traffic.
You must provide these IP addresses to the installation program when you install the OKD cluster.
DNS records
You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OKD cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>.
Component | Record | Description |
---|---|---|
API VIP | api.<cluster_name>.<base_domain>. | This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
Ingress VIP | *.apps.<cluster_name>.<base_domain>. | A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
Generating an SSH private key and adding it to the agent
If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent
and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.
In a production environment, you require disaster recovery and debugging.
You can use this key to SSH into the master nodes as the user core
. When you deploy the cluster, the key is added to the core
user’s ~/.ssh/authorized_keys
list.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the ~/.ssh/authorized_keys list of the core user.
Procedure
If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' \
-f <path>/<file_name> (1)
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Running this command generates an SSH key that does not require a password in the location that you specified.
If you plan to install an OKD cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
Start the ssh-agent process as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> (1)
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.
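If you need an rsa or ecdsa key instead, for example for FIPS, the same command pattern applies with a different algorithm. This is a minimal sketch; the path and file name are placeholders:
$ ssh-keygen -t ecdsa -N '' -f <path>/<file_name>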
Next steps
- When you install OKD, provide the SSH public key to the installation program.
Obtaining the installation program
Before you install OKD, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
Download the installer from https://github.com/openshift/okd/releases.
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OKD uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz
From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.
Using a pull secret from the Red Hat OpenShift Cluster Manager site is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}} as the pull secret when prompted during the installation.
If you do not use the pull secret from the Red Hat OpenShift Cluster Manager site:
Red Hat Operators are not available.
The Telemetry and Insights operators do not send data to Red Hat.
Content from the Red Hat Container Catalog registry, such as image streams and Operators, is not available.
Adding vCenter root CA certificates to your system trust
Because the installation program requires access to your vCenter’s API, you must add your vCenter’s trusted root CA certificates to your system trust before you install an OKD cluster.
Procedure
From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads.
Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure:
certs
├── lin
│ ├── 108f4d17.0
│ ├── 108f4d17.r1
│ ├── 7e757f6a.0
│ ├── 8e4f8471.0
│ └── 8e4f8471.r0
├── mac
│ ├── 108f4d17.0
│ ├── 108f4d17.r1
│ ├── 7e757f6a.0
│ ├── 8e4f8471.0
│ └── 8e4f8471.r0
└── win
├── 108f4d17.0.crt
├── 108f4d17.r1.crl
├── 7e757f6a.0.crt
├── 8e4f8471.0.crt
└── 8e4f8471.r0.crl
3 directories, 15 files
Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command:
# cp certs/lin/* /etc/pki/ca-trust/source/anchors
Update your system trust. For example, on a Fedora operating system, run the following command:
# update-ca-trust extract
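Optionally, you can confirm that the new anchors are part of the system trust, for example by listing the trust store with the p11-kit trust tool on Fedora; the grep pattern is illustrative and depends on how your vCenter CA is labeled:
$ trust list | grep -i "<vcenter_ca_label>"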
Creating the installation configuration file
You can customize the OKD cluster you install on VMware vSphere.
Prerequisites
Obtain the OKD installation program and the pull secret for your cluster.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir=<installation_directory> (1)
1 For <installation_directory>, specify the directory name to store the files that the installation program creates.
Specify an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Select vsphere as the platform to target.
Specify the name of your vCenter instance.
Specify the user name and password for the vCenter account that has the required permissions to create the cluster.
The installation program connects to your vCenter instance.
Select the datacenter in your vCenter instance to connect to.
Select the default vCenter datastore to use.
Select the vCenter cluster to install the OKD cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool.
Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured.
Enter the virtual IP address that you configured for control plane API access.
Enter the virtual IP address that you configured for cluster ingress.
Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured.
Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured.
Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site. This field is optional.
Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
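For example, a simple copy outside the installation directory keeps a reusable version of the file; the destination path is illustrative:
$ cp <installation_directory>/install-config.yaml ~/install-config-vmc.yaml.bak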
Installation configuration parameters
Before you deploy an OKD cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml
installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml
file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
Required configuration parameters
Required installation configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
apiVersion | The API version for the install-config.yaml content. The current version is v1. | String |
baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OKD cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values. | A fully-qualified domain or subdomain name, such as example.com. |
metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of <metadata.name>.<baseDomain>. | String of lowercase letters and hyphens (-), such as dev. |
platform | The configuration for the specific platform upon which to perform the installation: vsphere in this document. | Object |
Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
Parameter | Description | Values | ||
---|---|---|---|---|
| The configuration for the cluster network. | Object
| ||
| The cluster network provider Container Network Interface (CNI) plug-in to install. | Either | ||
| The IP address blocks for pods. The default value is If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example:
| ||
| Required if you use An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between | ||
| The subnet prefix length to assign to each individual node. For example, if | A subnet prefix. The default value is | ||
| The IP address block for services. The default value is The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example:
| ||
| The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example:
| ||
| Required if you use | An IP network block in CIDR notation. For example,
|
Optional configuration parameters
Optional installation configuration parameters are described in the following table:
Parameter | Description | Values | ||
---|---|---|---|---|
| A PEM-encoded X.509 certificate bundle that is added to the nodes’ trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String | ||
| The configuration for the machines that comprise the compute nodes. | Array of | ||
| Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are | String |
| Whether to enable or disable simultaneous multithreading, or
|
| ||
| Required if you use |
| ||
| Required if you use |
| ||
| The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to | ||
| The configuration for the machines that comprise the control plane. | Array of | ||
| Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are | String | ||
| Whether to enable or disable simultaneous multithreading, or
|
| ||
| Required if you use |
| ||
| Required if you use |
| ||
| The number of control plane machines to provision. | The only supported value is | ||
| The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.
|
| ||
| Sources and repositories for the release-image content. | Array of objects. Includes a | ||
| Required if you use | String | ||
| Specify one or more repositories that may also contain the same images. | Array of strings | ||
| How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. |
Setting this field to
| ||
| The SSH key or keys to authenticate access your cluster machines.
| One or more keys. For example:
|
Additional VMware vSphere configuration parameters
Additional VMware vSphere configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
platform.vsphere.vcenter | The fully-qualified hostname or IP address of the vCenter server. | String |
platform.vsphere.username | The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. | String |
platform.vsphere.password | The password for the vCenter user name. | String |
platform.vsphere.datacenter | The name of the datacenter to use in the vCenter instance. | String |
platform.vsphere.defaultDatastore | The name of the default datastore to use for provisioning volumes. | String |
platform.vsphere.folder | Optional. The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder. | String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name>. |
platform.vsphere.network | The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. | String |
platform.vsphere.cluster | The vCenter cluster to install the OKD cluster in. | String |
platform.vsphere.apiVIP | The virtual IP (VIP) address that you configured for control plane API access. | An IP address. |
platform.vsphere.ingressVIP | The virtual IP (VIP) address that you configured for cluster ingress. | An IP address. |
Optional VMware vSphere machine pool configuration parameters
Optional VMware vSphere machine pool configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
platform.vsphere.clusterOSImage | The location from which the installer downloads the FCOS image. You must set this parameter to perform an installation in a restricted network. | An HTTP or HTTPS URL, optionally with a SHA-256 checksum. |
platform.vsphere.osDisk.diskSizeGB | The size of the disk in gigabytes. | Integer |
platform.vsphere.cpus | The total number of virtual processor cores to assign a virtual machine. | Integer |
platform.vsphere.coresPerSocket | The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus divided by platform.vsphere.coresPerSocket. | Integer |
platform.vsphere.memoryMB | The size of a virtual machine's memory in megabytes. | Integer |
Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster
You can customize the install-config.yaml
file to specify more details about your OKD cluster’s platform or modify the values of the required parameters.
apiVersion: v1
baseDomain: example.com (1)
compute: (2)
- hyperthreading: Enabled (3)
name: worker
replicas: 3
platform:
vsphere: (4)
cpus: 2
coresPerSocket: 2
    memoryMB: 8192
osDisk:
diskSizeGB: 120
controlPlane: (2)
hyperthreading: Enabled (3)
name: master
replicas: 3
platform:
vsphere: (4)
cpus: 4
coresPerSocket: 2
memoryMB: 16384
osDisk:
diskSizeGB: 120
metadata:
name: cluster (5)
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OVNKubernetes
serviceNetwork:
- 172.30.0.0/16
platform:
vsphere:
vcenter: your.vcenter.server
username: username
password: password
datacenter: datacenter
defaultDatastore: datastore
folder: folder
network: VM_Network
cluster: vsphere_cluster_name (6)
apiVIP: api_vip
ingressVIP: ingress_vip
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'
1 | The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. | ||
2 | The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OKD will support defining multiple compute pools during installation. Only one control plane pool is used. | ||
3 | Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines’ cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
| ||
4 | Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. | ||
5 | The cluster name that you specified in your DNS records. | ||
6 | The vSphere cluster to install the OKD cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. |
Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OKD cluster to use a proxy by configuring the proxy settings in the install-config.yaml
file.
Prerequisites
You have an existing install-config.yaml file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
If your cluster is on AWS, you added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
additionalTrustBundle: | (4)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
...
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.
2 A proxy URL to use for creating HTTPS connections outside the cluster. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.
3 A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines.
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Fedora CoreOS (FCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the FCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.
The installation program does not support the proxy readinessEndpoints field.
Save the file and reference it when installing OKD.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy settings in the provided install-config.yaml
file. If no proxy settings are provided, a cluster
Proxy
object is still created, but it will have a nil spec
.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
Network configuration phases
When specifying a cluster configuration prior to installation, there are several phases in the installation procedures when you can modify the network configuration:
Phase 1
After entering the openshift-install create install-config
command. In the install-config.yaml
file, you can customize the following network-related fields:
networking.networkType
networking.clusterNetwork
networking.serviceNetwork
networking.machineNetwork
For more information on these fields, refer to “Installation configuration parameters”.
Set the
networking.machineNetwork
to match the CIDR that the preferred NIC resides in.
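For example, a networking stanza along the following lines, based on the sample install-config.yaml shown earlier, pins machineNetwork to the subnet of the preferred NIC; the machineNetwork CIDR is an illustrative value:
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 192.168.25.0/24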
Phase 2
After entering the openshift-install create manifests
command. If you must specify advanced network configuration, during this phase you can define a customized Cluster Network Operator manifest with only the fields you want to modify.
You cannot override the values specified in phase 1 in the install-config.yaml
file during phase 2. However, you can further customize the cluster network provider during phase 2.
Specifying advanced network configuration
You can use advanced configuration customization to integrate your cluster into your existing network environment by specifying additional configuration for your cluster network provider. You can specify advanced network configuration only before you install the cluster.
Modifying the OKD manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. |
Prerequisites
- Create the
install-config.yaml
file and complete any modifications to it.
Procedure
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir=<installation_directory>
where:
<installation_directory>
Specifies the name of the directory that contains the
install-config.yaml
file for your cluster.Create a stub manifest file for the advanced network configuration that is named
cluster-network-03-config.yml
in the<installation_directory>/manifests/
directory:$ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
EOF
where:
<installation_directory>
Specifies the directory name that contains the
manifests/
directory for your cluster.Open the
cluster-network-03-config.yml
file in an editor and specify the advanced network configuration for your cluster, such as in the following examples:Specify a different VXLAN port for the OpenShift SDN network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
defaultNetwork:
openshiftSDNConfig:
vxlanPort: 4800
Enable IPsec for the OVN-Kubernetes network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
defaultNetwork:
ovnKubernetesConfig:
ipsecConfig: {}
Save the cluster-network-03-config.yml file and quit the text editor.
Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
Cluster Network Operator configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster
. The CR specifies the fields for the Network
API in the operator.openshift.io
API group.
The CNO configuration inherits the following fields during cluster installation from the Network
API in the Network.config.openshift.io
API group and these fields cannot be changed:
clusterNetwork
IP address pools from which pod IP addresses are allocated.
serviceNetwork
IP address pool for services.
defaultNetwork.type
Cluster network provider, such as OpenShift SDN or OVN-Kubernetes.
You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork
object in the CNO object named cluster
.
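After the cluster is running, you can inspect the resulting CNO configuration, for example with the following command; the output includes the defaultNetwork and kubeProxyConfig fields described below:
$ oc get networks.operator.openshift.io cluster -o yaml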
Cluster Network Operator configuration object
The fields for the Cluster Network Operator (CNO) are described in the following table:
Field | Type | Description |
---|---|---|
metadata.name | string | The name of the CNO object. This name is always cluster. |
spec.clusterNetwork | array | A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. This value is read-only and specified in the install-config.yaml file during cluster installation. |
spec.serviceNetwork | array | A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. This value is read-only and specified in the install-config.yaml file during cluster installation. |
spec.defaultNetwork | object | Configures the Container Network Interface (CNI) cluster network provider for the cluster network. |
spec.kubeProxyConfig | object | The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. |
defaultNetwork object configuration
The values for the defaultNetwork
object are defined in the following table:
Field | Type | Description |
---|---|---|
type | string | Either OpenShiftSDN or OVNKubernetes. |
openshiftSDNConfig | object | This object is only valid for the OpenShift SDN cluster network provider. |
ovnKubernetesConfig | object | This object is only valid for the OVN-Kubernetes cluster network provider. |
Configuration for the OpenShift SDN CNI cluster network provider
The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider.
Field | Type | Description |
---|---|---|
mode | string | Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy. This value cannot be changed after cluster installation. |
mtu | integer | The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. This value cannot be changed after cluster installation. |
vxlanPort | integer | The port to use for all VXLAN packets. The default value is 4789. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999. |
Example OpenShift SDN configuration
defaultNetwork:
type: OpenShiftSDN
openshiftSDNConfig:
mode: NetworkPolicy
mtu: 1450
vxlanPort: 4789
Configuration for the OVN-Kubernetes CNI cluster network provider
The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider.
Field | Type | Description |
---|---|---|
mtu | integer | The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. This value cannot be changed after cluster installation. |
genevePort | integer | The port to use for all Geneve packets. The default value is 6081. |
ipsecConfig | object | Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. |
Example OVN-Kubernetes configuration
defaultNetwork:
type: OVNKubernetes
ovnKubernetesConfig:
mtu: 1400
genevePort: 6081
ipsecConfig: {}
kubeProxyConfig object configuration
The values for the kubeProxyConfig
object are defined in the following table:
Field | Type | Description |
---|---|---|
iptablesSyncPeriod | string | The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h. |
proxyArguments.iptables-min-sync-period | array | The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h. |
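For reference, a kubeProxyConfig stanza in the CNO configuration might look like the following minimal sketch, which assumes the field names described above and uses illustrative values; it only has an effect with the OpenShift SDN provider:
kubeProxyConfig:
  iptablesSyncPeriod: 30s
  proxyArguments:
    iptables-min-sync-period:
    - 0s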
Deploying the cluster
You can install OKD on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
Configure an account with the cloud platform that hosts your cluster.
Obtain the OKD installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir=<installation_directory> \ (1)
--log-level=info (2)
1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn, debug, or error instead of info.
Use the openshift-install command from the bastion hosted in the VMC environment.
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the
kubeadmin
user, display in your terminal.Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
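If you do need to approve pending node-bootstrapper CSRs manually, commands along the following lines are typically used; the CSR name is a placeholder taken from the output of the first command:
$ oc get csr
$ oc adm certificate approve <csr_name>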
Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc
) to interact with OKD from a command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OKD 4.7. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download
oc.tar.gz
.Unpack the archive:
$ tar xvzf <file>
Place the
oc
binary in a directory that is on yourPATH
.To check your
PATH
, execute the following command:$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download
oc.zip
.Unzip the archive with a ZIP program.
Move the
oc
binary to a directory that is on yourPATH
.To check your
PATH
, open the command prompt and execute the following command:C:\> path
After you install the OpenShift CLI, it is available using the oc
command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download
oc.tar.gz
.Unpack and unzip the archive.
Move the
oc
binary to a directory on your PATH.To check your
PATH
, open a terminal and execute the following command:$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file. The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OKD installation.
Prerequisites
You deployed an OKD cluster.
You installed the
oc
CLI.
Procedure
Export the
kubeadmin
credentials:$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 For <installation_directory>
, specify the path to the directory that you stored the installation files in.Verify you can run
oc
commands successfully using the exported configuration:$ oc whoami
Example output
system:admin
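You can also confirm that the cluster nodes are reachable and in the Ready state with a basic check, for example:
$ oc get nodes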
Creating registry storage
After you install the cluster, you must create storage for the registry Operator.
Image registry removed during installation
On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed
. This allows openshift-installer
to complete installations on these platform types.
After installation, you must edit the Image Registry Operator configuration to switch the managementState
from Removed
to Managed
.
After installation, the Prometheus console displays an "Image Registry has been removed" warning until you configure storage and switch the managementState to Managed.
Image registry storage configuration
The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available.
Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.
Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate
rollout strategy during upgrades.
Configuring registry storage for VMware vSphere
As a cluster administrator, following installation you must configure your registry to use storage.
Prerequisites
Cluster administrator permissions.
A cluster on VMware vSphere.
Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage.
OKD supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required.
Must have "100Gi" capacity.
Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OKD core components.
Procedure
To configure your registry to use storage, change the
spec.storage.pvc
in theconfigs.imageregistry/cluster
resource.When using shared storage, review your security settings to prevent outside access.
Verify that you do not have a registry pod:
$ oc get pod -n openshift-image-registry
If the storage type is
emptyDIR
, the replica number cannot be greater than1
.Check the registry configuration:
$ oc edit configs.imageregistry.operator.openshift.io
Example output
storage:
pvc:
claim: (1)
1 Leave the claim
field blank to allow the automatic creation of animage-registry-storage
PVC.Check the
clusteroperator
status:$ oc get clusteroperator image-registry
Configuring block registry storage for VMware vSphere
To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate
rollout strategy.
Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica.
Procedure
To set the image registry storage as a block storage type, patch the registry so that it uses the
Recreate
rollout strategy and runs with only1
replica:$ oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}'
Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.
Create a
pvc.yaml
file with the following contents to define a VMware vSpherePersistentVolumeClaim
object:kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: image-registry-storage (1)
namespace: openshift-image-registry (2)
spec:
accessModes:
- ReadWriteOnce (3)
resources:
requests:
storage: 100Gi (4)
1 A unique name that represents the PersistentVolumeClaim
object.2 The namespace for the PersistentVolumeClaim
object, which isopenshift-image-registry
.3 The access mode of the persistent volume claim. With ReadWriteOnce
, the volume can be mounted with read and write permissions by a single node.4 The size of the persistent volume claim. Create the
PersistentVolumeClaim
object from the file:$ oc create -f pvc.yaml -n openshift-image-registry
Edit the registry configuration so that it references the correct PVC:
$ oc edit config.imageregistry.operator.openshift.io -o yaml
Example output
storage:
pvc:
claim: (1)
1 Creating a custom PVC allows you to leave the claim
field blank for the default automatic creation of animage-registry-storage
PVC.
For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere.
Backing up VMware vSphere volumes
OKD provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information.
Procedure
To create a backup of persistent volumes:
Stop the application that is using the persistent volume.
Clone the persistent volume.
Restart the application.
Create a backup of the cloned volume.
Delete the cloned volume.
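For the first and third steps above, one common approach is to scale the consuming workload down and back up; this sketch assumes the application is managed by a Deployment and uses a placeholder name and namespace:
$ oc scale deployment/<app_name> -n <app_namespace> --replicas=0
$ oc scale deployment/<app_name> -n <app_namespace> --replicas=1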
Additional resources
- See About remote health monitoring for more information about the Telemetry service
Next steps
If necessary, you can opt out of remote health reporting.
Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues.