- Installing a cluster on GCP with network customizations
- Prerequisites
- Generating a key pair for cluster node SSH access
- Obtaining the installation program
- Creating the installation configuration file
- Additional resources
- Installing the OpenShift CLI by downloading the binary
- Alternatives to storing administrator-level secrets in the kube-system project
- Network configuration phases
- Specifying advanced network configuration
- Cluster Network Operator configuration
- Deploying the cluster
- Logging in to the cluster by using the CLI
- Next steps
Installing a cluster on GCP with network customizations
In OKD version 4, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Google Cloud Platform (GCP). By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml
file before you install the cluster.
You must set most of the network configuration parameters during installation, and you can modify only kubeProxy
configuration parameters in a running cluster.
Prerequisites
You reviewed details about the OKD installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You configured a GCP project to host the cluster.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
Generating a key pair for cluster node SSH access
During an OKD installation, you can provide an SSH public key to the installation program. The key is passed to the Fedora CoreOS (FCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the FCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the |
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
If you plan to install an OKD cluster that uses the Fedora cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
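For example, a FIPS-compliant key could be generated with the ECDSA algorithm instead; the path placeholder below is the same illustrative one used earlier in this procedure:
$ ssh-keygen -t ecdsa -b 521 -N '' -f <path>/<file_name>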
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> (1)
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OKD, provide the SSH public key to the installation program.
Obtaining the installation program
Before you install OKD, download the installation file on the host you are using for installation.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
Download the installation program from https://github.com/openshift/okd/releases.
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OKD uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.
Using a pull secret from the Red Hat OpenShift Cluster Manager is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}} as the pull secret when prompted during the installation.
If you do not use the pull secret from the Red Hat OpenShift Cluster Manager:
Red Hat Operators are not available.
The Telemetry and Insights operators do not send data to Red Hat.
Content from the Red Hat Ecosystem Catalog Container images registry, such as image streams and Operators, is not available.
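For reference, a pull secret for a private registry follows the same auths structure. In the sketch below, the registry host name and email are placeholders, and the auth value is simply the base64 encoding of user:password; substitute your own credentials:
{"auths":{"registry.example.com:5000":{"auth":"dXNlcjpwYXNzd29yZA==","email":"you@example.com"}}}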
Creating the installation configuration file
You can customize the OKD cluster you install on Google Cloud Platform (GCP).
Prerequisites
- You have the OKD installation program and the pull secret for your cluster.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> (1)
1 For <installation_directory>, specify the directory name to store the files that the installation program creates.
When specifying the directory:
Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.
Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command:
$ rm -rf ~/.powervs
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Select gcp as the platform to target.
If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file.
Select the project ID to provision the cluster in. The default value is specified by the service account that you configured.
Select the region to deploy the cluster to.
Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster.
Enter a descriptive name for your cluster.
Modify the install-config.yaml file. You can find more information about the available parameters in the “Installation configuration parameters” section.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
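For example, one simple way to keep a reusable copy before you continue (the backup file name is arbitrary):
$ cp install-config.yaml install-config.yaml.backup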
Additional resources
Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS)[2] |
---|---|---|---|---|---|
Bootstrap | FCOS | 4 | 16 GB | 100 GB | 300 |
Control plane | FCOS | 4 | 16 GB | 100 GB | 300 |
Compute | FCOS | 2 | 8 GB | 100 GB | 300 |
One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs. For example, a single-socket machine with 2 cores and 2 threads per core provides (2 × 2) × 1 = 4 vCPUs.
OKD and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
As with all user-provisioned installations, if you choose to use Fedora compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of Fedora 7 compute machines is deprecated and has been removed in OKD 4.10 and later.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OKD.
Additional resources
Tested instance types for GCP
The following Google Cloud Platform instance types have been tested with OKD.
Machine series
C2
C2D
C3
E2
M1
N1
N2
N2D
Tau T2D
Tested instance types for GCP on 64-bit ARM infrastructures
The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OKD.
Machine series for 64-bit ARM machines
Tau T2A
Using custom machine types
Using a custom machine type to install an OKD cluster is supported.
Consider the following when using a custom machine type:
Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see “Minimum resource requirements for cluster installation”.
The name of the custom machine type must adhere to the following syntax:
custom-<number_of_cpus>-<amount_of_memory_in_mb>
For example,
custom-6-20480
.
As part of the installation process, you specify the custom machine type in the install-config.yaml
file.
Sample install-config.yaml
file with a custom machine type
compute:
- architecture: amd64
hyperthreading: Enabled
name: worker
platform:
gcp:
type: custom-6-20480
replicas: 2
controlPlane:
architecture: amd64
hyperthreading: Enabled
name: master
platform:
gcp:
type: custom-6-20480
replicas: 3
Enabling Shielded VMs
You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google’s documentation on Shielded VMs.
Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures.
Prerequisites
- You have created an
install-config.yaml
file.
Procedure
Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas:
To use shielded VMs for only control plane machines:
controlPlane:
platform:
gcp:
secureBoot: Enabled
To use shielded VMs for only compute machines:
compute:
- platform:
gcp:
secureBoot: Enabled
To use shielded VMs for all machines:
platform:
gcp:
defaultMachinePlatform:
secureBoot: Enabled
Enabling Confidential VMs
You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google’s documentation on Confidential Computing. You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other.
Confidential VMs are currently not supported on 64-bit ARM architectures.
Prerequisites
- You have created an
install-config.yaml
file.
Procedure
Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas:
To use confidential VMs for only control plane machines:
controlPlane:
platform:
gcp:
confidentialCompute: Enabled (1)
type: n2d-standard-8 (2)
onHostMaintenance: Terminate (3)
1 Enable confidential VMs.
2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types.
3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VMs, this value must be set to Terminate, which stops the VM. Confidential VMs do not support live VM migration.
To use confidential VMs for only compute machines:
compute:
- platform:
gcp:
confidentialCompute: Enabled
type: n2d-standard-8
onHostMaintenance: Terminate
To use confidential VMs for all machines:
platform:
gcp:
defaultMachinePlatform:
confidentialCompute: Enabled
type: n2d-standard-8
onHostMaintenance: Terminate
Sample customized install-config.yaml file for GCP
You can customize the install-config.yaml
file to specify more details about your OKD cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it.
apiVersion: v1
baseDomain: example.com (1)
credentialsMode: Mint (2)
controlPlane: (3) (4)
hyperthreading: Enabled (5)
name: master
platform:
gcp:
type: n2-standard-4
zones:
- us-central1-a
- us-central1-c
osDisk:
diskType: pd-ssd
diskSizeGB: 1024
encryptionKey: (6)
kmsKey:
name: worker-key
keyRing: test-machine-keys
location: global
projectID: project-id
tags: (7)
- control-plane-tag1
- control-plane-tag2
osImage: (8)
project: example-project-name
name: example-image-name
replicas: 3
compute: (3) (4)
- hyperthreading: Enabled (5)
name: worker
platform:
gcp:
type: n2-standard-4
zones:
- us-central1-a
- us-central1-c
osDisk:
diskType: pd-standard
diskSizeGB: 128
encryptionKey: (6)
kmsKey:
name: worker-key
keyRing: test-machine-keys
location: global
projectID: project-id
tags: (7)
- compute-tag1
- compute-tag2
osImage: (8)
project: example-project-name
name: example-image-name
replicas: 3
metadata:
name: test-cluster (1)
networking: (3)
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OVNKubernetes (9)
serviceNetwork:
- 172.30.0.0/16
platform:
gcp:
projectID: openshift-production (1)
region: us-central1 (1)
defaultMachinePlatform:
tags: (7)
- global-tag1
- global-tag2
osImage: (8)
project: example-project-name
name: example-image-name
pullSecret: '{"auths": ...}' (1)
sshKey: ssh-ed25519 AAAA... (10)
1 Required. The installation program prompts you for this value.
2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the “About the Cloud Credential Operator” section in the Authentication and authorization guide.
3 If you do not provide these parameters and values, the installation program provides the default value.
4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
5 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
6 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see “Machine management” → “Creating compute machine sets” → “Creating a compute machine set on GCP”.
7 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter applies to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter.
8 Optional: A custom Fedora CoreOS (FCOS) image that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters.
9 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes.
10 You can optionally provide the sshKey value that you use to access the machines in your cluster.
Additional resources
Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OKD cluster to use a proxy by configuring the proxy settings in the install-config.yaml
file.
Prerequisites
You have an existing install-config.yaml file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and OpenStack, the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
additionalTrustBundle: | (4)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> (5)
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the FCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the FCOS trust bundle.
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when an http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
The installation program does not support the proxy readinessEndpoints field.
If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
Save the file and reference it when installing OKD.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy settings in the provided install-config.yaml
file. If no proxy settings are provided, a cluster
Proxy
object is still created, but it will have a nil spec
.
Only the Proxy object named cluster is supported; no additional proxies can be created.
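As an optional check after installation, you can review the proxy configuration that was created by querying the cluster Proxy object; this sketch assumes the OpenShift CLI is installed and you are logged in to the cluster, as described later in this document:
$ oc get proxy/cluster -o yaml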
Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc
) to interact with OKD from a command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in this version of OKD. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
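For example, on many Linux systems /usr/local/bin is already on the PATH, so one common (but not required) choice is:
$ sudo mv oc /usr/local/bin/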
Verification
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.zip.
Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
Verification
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz.
Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
Verification
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Alternatives to storing administrator-level secrets in the kube-system project
By default, administrator secrets are stored in the kube-system
project. If you configured the credentialsMode
parameter in the install-config.yaml
file to Manual
, you must use one of the following alternatives:
To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials.
To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials.
Manually creating long-term credentials
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system
namespace.
Procedure
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest custom resources (CRs) from the OKD release image by running the following command:
$ oc adm release extract \
--from=$RELEASE_IMAGE \
--credentials-requests \
--included \(1)
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \(2)
--to=<path_to_directory_for_credentials_requests> (3)
1 The --included parameter includes only the manifests that your specific cluster configuration requires.
2 Specify the location of the install-config.yaml file.
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
name: <component_credentials_request>
namespace: openshift-cloud-credential-operator
...
spec:
providerSpec:
apiVersion: cloudcredential.openshift.io/v1
kind: GCPProviderSpec
predefinedRoles:
- roles/storage.admin
- roles/iam.serviceAccountUser
skipServiceCheck: true
...
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.
Sample CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
name: <component_credentials_request>
namespace: openshift-cloud-credential-operator
...
spec:
providerSpec:
apiVersion: cloudcredential.openshift.io/v1
...
secretRef:
name: <component_secret>
namespace: <component_namespace>
...
Sample Secret object
apiVersion: v1
kind: Secret
metadata:
name: <component_secret>
namespace: <component_namespace>
data:
service_account.json: <base64_encoded_gcp_service_account_file>
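To produce the base64-encoded value for the service_account.json key, you can encode the GCP service account key file that you downloaded; the file name below is illustrative:
$ base64 -w0 <path_to_service_account_key>.json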
Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.
Configuring a GCP cluster to use short-term credentials
To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster.
Configuring the Cloud Credential Operator utility
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl
) binary.
The ccoctl utility is a Linux binary that must run in a Linux environment.
Prerequisites
You have access to an OKD account with cluster administrator access.
You have installed the OpenShift CLI (
oc
).
Procedure
Obtain the OKD release image by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Obtain the CCO container image from the OKD release image by running the following command:
$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.
Extract the ccoctl binary from the CCO container image within the OKD release image by running the following command:
$ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret
Change the permissions to make ccoctl executable by running the following command:
$ chmod 775 ccoctl
Verification
To verify that ccoctl is ready to use, display the help file by running the following command:
$ ccoctl --help
Output of ccoctl --help
OpenShift credentials provisioning tool
Usage:
ccoctl [command]
Available Commands:
alibabacloud Manage credentials objects for alibaba cloud
aws Manage credentials objects for AWS cloud
azure Manage credentials objects for Azure
gcp Manage credentials objects for Google cloud
help Help about any command
ibmcloud Manage credentials objects for IBM Cloud
nutanix Manage credentials objects for Nutanix
Flags:
-h, --help help for ccoctl
Use "ccoctl [command] --help" for more information about a command.
Creating GCP resources with the Cloud Credential Operator utility
You can use the ccoctl gcp create-all
command to automate the creation of GCP resources.
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.
Prerequisites
You must have:
- Extracted and prepared the
ccoctl
binary.
Procedure
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest objects from the OKD release image by running the following command:
$ oc adm release extract \
--from=$RELEASE_IMAGE \
--credentials-requests \
--included \(1)
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \(2)
--to=<path_to_directory_for_credentials_requests> (3)
1 The --included parameter includes only the manifests that your specific cluster configuration requires.
2 Specify the location of the install-config.yaml file.
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
This command might take a few moments to run.
Use the ccoctl tool to process all CredentialsRequest objects by running the following command:
$ ccoctl gcp create-all \
--name=<name> \(1)
--region=<gcp_region> \(2)
--project=<gcp_project_id> \(3)
--credentials-requests-dir=<path_to_credentials_requests_directory> (4)
1 Specify the user-defined name for all created GCP resources used for tracking.
2 Specify the GCP region in which cloud resources will be created.
3 Specify the GCP project ID in which cloud resources will be created.
4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts.
If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
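For illustration only, a complete invocation with placeholder values might look like the following; the name, region, project ID, and directory are examples that you replace with your own values:
$ ccoctl gcp create-all \
  --name=mycluster \
  --region=us-central1 \
  --project=example-project-id \
  --credentials-requests-dir=<path_to_directory_for_credentials_requests>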
Verification
To verify that the OKD secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:
$ ls <path_to_ccoctl_output_dir>/manifests
Example output
cluster-authentication-02-config.yaml
openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml
openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-gcp-cloud-credentials-credentials.yaml
You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts.
Incorporating the Cloud Credential Operator utility manifests
To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility (ccoctl
) created to the correct directories for the installation program.
Prerequisites
You have configured an account with the cloud platform that hosts your cluster.
You have configured the Cloud Credential Operator utility (ccoctl).
You have created the cloud provider resources that are required for your cluster with the ccoctl utility.
Procedure
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command:
$ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
Copy the private key that the ccoctl utility generated in the tls directory to the installation directory by running the following command:
$ cp -a /<path_to_ccoctl_output_dir>/tls .
Network configuration phases
There are two phases prior to OKD installation where you can customize the network configuration.
Phase 1
You can customize the following network-related fields in the install-config.yaml
file before you create the manifest files:
networking.networkType
networking.clusterNetwork
networking.serviceNetwork
networking.machineNetwork
For more information on these fields, refer to Installation configuration parameters.
Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range, or any range that overlaps with it, for any networks in your cluster.
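For example, a phase 1 customization of the networking section in install-config.yaml might look like the following sketch; the CIDR values simply mirror the sample file earlier in this document and must be adjusted to your environment:
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16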
Phase 2
After creating the manifest files by running openshift-install create manifests
, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration.
You cannot override the values specified in phase 1 in the install-config.yaml
file during phase 2. However, you can further customize the network plugin during phase 2.
Specifying advanced network configuration
You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster.
Customizing your network configuration by modifying the OKD manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. |
Prerequisites
- You have created the
install-config.yaml
file and completed any modifications to it.
Procedure
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory> (1)
1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples:
Specify a different VXLAN port for the OpenShift SDN network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
defaultNetwork:
openshiftSDNConfig:
vxlanPort: 4800
Enable IPsec for the OVN-Kubernetes network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
defaultNetwork:
ovnKubernetesConfig:
ipsecConfig: {}
Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files.
Cluster Network Operator configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster
. The CR specifies the fields for the Network
API in the operator.openshift.io
API group.
The CNO configuration inherits the following fields during cluster installation from the Network
API in the Network.config.openshift.io
API group and these fields cannot be changed:
clusterNetwork
IP address pools from which pod IP addresses are allocated.
serviceNetwork
IP address pool for services.
defaultNetwork.type
Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes.
You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.
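As a sketch, a cluster CR that sets these fields might look like the following; the address ranges shown only mirror the sample install-config.yaml earlier in this document:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes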
Cluster Network Operator configuration object
The fields for the Cluster Network Operator (CNO) are described in the following table:
Field | Type | Description
---|---|---
metadata.name | string | The name of the CNO object. This name is always cluster.
spec.clusterNetwork | array | A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. You can customize this field only in the install-config.yaml file before you create the manifests.
spec.serviceNetwork | array | A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. You can customize this field only in the install-config.yaml file before you create the manifests.
spec.defaultNetwork | object | Configures the network plugin for the cluster network.
spec.kubeProxyConfig | object | The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect.
defaultNetwork object configuration
The values for the defaultNetwork
object are defined in the following table:
Field | Type | Description
---|---|---
type | string | Either OpenShiftSDN or OVNKubernetes. The default value is OVNKubernetes.
openshiftSDNConfig | object | This object is only valid for the OpenShift SDN network plugin.
ovnKubernetesConfig | object | This object is only valid for the OVN-Kubernetes network plugin.
Configuration for the OpenShift SDN network plugin
The following table describes the configuration fields for the OpenShift SDN network plugin:
Field | Type | Description
---|---|---
mode | string | Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy. The values Multitenant and Subnet are available for backwards compatibility with earlier releases, but are not recommended.
mtu | integer | The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. This value cannot be changed after cluster installation.
vxlanPort | integer | The port to use for all VXLAN packets. The default value is 4789. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999.
Example OpenShift SDN configuration
defaultNetwork:
type: OpenShiftSDN
openshiftSDNConfig:
mode: NetworkPolicy
mtu: 1450
vxlanPort: 4789
Configuration for the OVN-Kubernetes network plugin
The following table describes the configuration fields for the OVN-Kubernetes network plugin:
Field | Type | Description
---|---|---
mtu | integer | The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster.
genevePort | integer | The port to use for all Geneve packets. The default value is 6081.
ipsecConfig | object | Specify an empty object to enable IPsec encryption.
policyAuditConfig | object | Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used.
gatewayConfig | object | Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway.
v4InternalSubnet | string | If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different internal IP address range for use by OVN-Kubernetes. The default value is 100.64.0.0/16. This field cannot be changed after installation.
v6InternalSubnet | string | If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different internal IP address range for use by OVN-Kubernetes. The default value is fd98::/48. This field cannot be changed after installation.
Field | Type | Description
---|---|---
rateLimit | integer | The maximum number of messages to generate every second per node. The default value is 20 messages per second.
maxFileSize | integer | The maximum size for the audit log in bytes. The default value is 50000000, or 50 MB.
destination | string | One of the following additional audit log targets: libc (the syslog of the journald process on the host), udp:<host>:<port> (a syslog server), unix:<file> (a Unix Domain Socket file), or null (do not send the audit logs to an additional target).
syslogFacility | string | The syslog facility, such as kern, as defined by RFC5424.
Field | Type | Description
---|---|---
routingViaHost | boolean | Set this field to true to send egress traffic from pods to the host networking stack. The default value is false. This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true, you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack.
ipForwarding | object | You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes-related traffic, or Global to allow forwarding of all IP traffic.
Example OVN-Kubernetes configuration with IPSec enabled
defaultNetwork:
type: OVNKubernetes
ovnKubernetesConfig:
mtu: 1400
genevePort: 6081
ipsecConfig: {}
kubeProxyConfig object configuration
The values for the kubeProxyConfig object are defined in the following table:
Field | Type | Description
---|---|---
iptablesSyncPeriod | string | The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h.
proxyArguments.iptables-min-sync-period | array | The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h. The default value is 0s.
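For reference, a kubeProxyConfig stanza using the default values described above might look like the following sketch; it only has an effect when the OpenShift SDN network plugin is in use:
kubeProxyConfig:
  iptablesSyncPeriod: 30s
  proxyArguments:
    iptables-min-sync-period:
    - 0s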
Deploying the cluster
You can install OKD on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
You have configured an account with the cloud platform that hosts your cluster.
You have the OKD installation program and the pull secret for your cluster.
You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Procedure
Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations:
The
GOOGLE_CREDENTIALS
,GOOGLE_CLOUD_KEYFILE_JSON
, orGCLOUD_KEYFILE_JSON
environment variablesThe
~/.gcp/osServiceAccount.json
fileThe
gcloud cli
default credentials
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 For <installation_directory>
, specify the location of your customized./install-config.yaml
file.2 To view different installation details, specify warn
,debug
, orerror
instead ofinfo
.Optional: You can reduce the number of permissions for the service account that you used to install the cluster.
If you assigned the
Owner
role to your service account, you can remove that role and replace it with theViewer
role.If you included the
Service Account Key Admin
role, you can remove it.
Verification
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the
kubeadmin
user.Credential information also outputs to
<installation_directory>/.openshift_install.log
.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file. The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OKD installation.
Prerequisites
You deployed an OKD cluster.
You installed the
oc
CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
Additional resources
- See Accessing the web console for more details about accessing and understanding the OKD web console.
Additional resources
- See About remote health monitoring for more information about the Telemetry service
Next steps
If necessary, you can opt out of remote health reporting.