- Installing a cluster on GCP with customizations
- Prerequisites
- Generating a key pair for cluster node SSH access
- Obtaining the installation program
- Managing user-defined labels and tags for GCP
- Creating the installation configuration file
- Minimum resource requirements for cluster installation
- Tested instance types for GCP
- Tested instance types for GCP on 64-bit ARM infrastructures
- Using custom machine types
- Enabling Shielded VMs
- Enabling Confidential VMs
- Sample customized install-config.yaml file for GCP
- Configuring the cluster-wide proxy during installation
- Installing the OpenShift CLI by downloading the binary
- Alternatives to storing administrator-level secrets in the kube-system project
- Using the GCP Marketplace offering
- Deploying the cluster
- Logging in to the cluster by using the CLI
- Next steps
Installing a cluster on GCP with customizations
In OKD version 4.14, you can install a customized cluster on infrastructure that the installation program provisions on Google Cloud Platform (GCP). To customize the installation, you modify parameters in the install-config.yaml
file before you install the cluster.
Prerequisites
You reviewed details about the OKD installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You configured a GCP project to host the cluster.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
Generating a key pair for cluster node SSH access
During an OKD installation, you can provide an SSH public key to the installation program. The key is passed to the Fedora CoreOS (FCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the FCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
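After the installation, you can use the key pair to connect to a node, for example as follows; this is a sketch, and the node address placeholder is illustrative:
$ ssh -i <path>/<file_name> core@<node_address>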
Do not skip this procedure in production environments, where disaster recovery and debugging are required. |
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. |
On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the ~/.ssh/authorized_keys file for the core user. |
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
If you plan to install an OKD cluster that uses the Fedora cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following command to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> (1)
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OKD, provide the SSH public key to the installation program.
Obtaining the installation program
Before you install OKD, download the installation file on the host you are using for installation.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
Download the installation program from https://github.com/openshift/okd/releases.
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OKD uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.
Using a pull secret from the Red Hat OpenShift Cluster Manager is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use
{"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}
as the pull secret when prompted during the installation. If you do not use the pull secret from the Red Hat OpenShift Cluster Manager:
Red Hat Operators are not available.
The Telemetry and Insights operators do not send data to Red Hat.
Content from the Red Hat Container Catalog registry, such as image streams and Operators, is not available.
Managing user-defined labels and tags for GCP
Support for user-defined labels and tags for GCP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
Google Cloud Platform (GCP) provides labels and tags that help to identify and organize the resources created for a specific OKD cluster, making them easier to manage.
You can define labels and tags for each GCP resource only during OKD cluster installation.
User-defined labels and tags are not supported for OKD clusters upgraded to OKD 4.14. |
User-defined labels
User-defined labels and OKD-specific labels are applied only to resources created by the OKD installation program and its core components, such as:
GCP filestore CSI Driver Operator
GCP PD CSI Driver Operator
Image Registry Operator
Machine API provider for GCP
User-defined labels and OKD-specific labels are not applied to resources created by any other Operators or by the Kubernetes in-tree components that create resources, for example, the Ingress load balancers.
User-defined labels and OKD labels are available on the following GCP resources:
Compute disk
Compute instance
Compute image
Compute forwarding rule
DNS managed zone
Filestore instance
Storage bucket
Limitations to user-defined labels
- Labels for ComputeAddress are supported only in the GCP beta API version, so OKD does not add labels to this resource.
User-defined tags
User-defined tags are attached only to resources created by the OKD Image Registry Operator, and not to resources created by any other Operators or by the Kubernetes in-tree components.
User-defined tags are available on the following GCP resources:
Storage bucket
Limitations to the user-defined tags
Tags will not be attached to the following items:
Control plane instances and storage buckets created by the installation program
Compute instances created by the Machine API provider for GCP
Filestore instance resources created by the GCP filestore CSI driver Operator
Compute disk and compute image resources created by the GCP PD CSI driver Operator
Tags are not supported for buckets located in the following regions:
us-east2
us-east3
The Image Registry Operator does not report an error, but skips processing tags, when buckets are created in an unsupported region.
Tags must not be restricted to particular service accounts, because Operators create and use service accounts with minimal roles.
OKD does not create any tag key or tag value resources.
OKD-specific tags are not added to any resource.
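Because OKD does not create tag keys or values, one option is to create them in advance with the gcloud CLI, as sketched below; the short names are placeholders, and the tag value is created under the key ID returned by the first command:
$ gcloud resource-manager tags keys create <tag_key_short_name> --parent=organizations/<OrganizationID>
$ gcloud resource-manager tags values create <tag_value_short_name> --parent=<tag_key_id>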
Additional resources
For more information about identifying the OrganizationID, see: OrganizationID
For more information about identifying the ProjectID, see: ProjectID
For more information about labels, see Labels Overview.
For more information about tags, see Tags Overview.
Configuring user-defined labels and tags for GCP
Prerequisites
- The installation program requires that a service account includes a TagUser role, so that the program can create the OKD cluster with defined tags at both the organization and project levels.
Procedure
Update the install-config.yaml file to define the list of desired labels and tags.
Labels and tags are defined during the install-config.yaml creation phase, and cannot be modified or updated with new labels and tags after cluster creation.
Sample install-config.yaml file
apiVersion: v1
featureSet: TechPreviewNoUpgrade
platform:
gcp:
userLabels: (1)
- key: <label_key> (2)
value: <label_value> (3)
userTags: (4)
- parentID: <OrganizationID/ProjectID> (5)
key: <tag_key_short_name>
value: <tag_value_short_name>
1 Adds keys and values as labels to the resources created on GCP.
2 Defines the label name.
3 Defines the label content.
4 Adds keys and values as tags to the resources created on GCP.
5 The ID of the hierarchical resource where the tags are defined, at the organization or the project level.
The following are the requirements for user-defined labels:
A label key and value must have a minimum of 1 character and can have a maximum of 63 characters.
A label key and value must contain only lowercase letters, numeric characters, underscore (_), and dash (-).
A label key must start with a lowercase letter.
You can configure a maximum of 32 labels per resource. Each resource can have a maximum of 64 labels, and 32 labels are reserved for internal use by OKD.
The following are the requirements for user-defined tags:
Tag key and tag value must already exist. OKD does not create the key and the value.
A tag parentID can be either OrganizationID or ProjectID:
OrganizationID must consist of decimal numbers without leading zeros.
ProjectID must be 6 to 30 characters in length, and can include only lowercase letters, numbers, and hyphens.
ProjectID must start with a letter, and cannot end with a hyphen.
A tag key must contain only uppercase and lowercase alphanumeric characters, hyphen (-), underscore (_), and period (.).
A tag value must contain only uppercase and lowercase alphanumeric characters, hyphen (-), underscore (_), period (.), at sign (@), percent sign (%), equals sign (=), plus (+), colon (:), comma (,), asterisk (*), pound sign (#), ampersand (&), parentheses (()), square braces ([]), curly braces ({}), and space.
A tag key and value must begin and end with an alphanumeric character.
Tag value must be one of the pre-defined values for the key.
You can configure a maximum of 50 tags.
Do not define a tag key that duplicates any existing tag key that will be inherited from the parent resource.
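As an illustration, assuming a tag key environment with a pre-defined value production already exists under your organization, a userTags entry might look like the following; the values are hypothetical:
userTags:
- parentID: <OrganizationID>
  key: environment
  value: production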
Querying user-defined labels and tags for GCP
After creating the OKD cluster, you can access the list of the labels and tags defined for the GCP resources in the infrastructures.config.openshift.io/cluster
object as shown in the following sample infrastructure.yaml
file.
Sample infrastructure.yaml
file
apiVersion: config.openshift.io/v1
kind: Infrastructure
metadata:
name: cluster
spec:
platformSpec:
type: GCP
status:
infrastructureName: <cluster_id> (1)
platform: GCP
platformStatus:
gcp:
resourceLabels:
- key: <label_key>
value: <label_value>
resourceTags:
- key: <tag_key_short_name>
parentID: <OrganizationID/ProjectID>
value: <tag_value_short_name>
type: GCP
1 | The cluster ID that is generated during cluster installation. |
Along with the user-defined labels, resources have a label defined by OKD. The format of the OKD labels is kubernetes-io-cluster-<cluster_id>:owned.
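For example, one way to display the applied labels and tags after installation is to query the Infrastructure object with the OpenShift CLI; the jsonpath expressions mirror the status fields shown above:
$ oc get infrastructure cluster -o jsonpath='{.status.platformStatus.gcp.resourceLabels}'
$ oc get infrastructure cluster -o jsonpath='{.status.platformStatus.gcp.resourceTags}'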
Creating the installation configuration file
You can customize the OKD cluster you install on Google Cloud Platform (GCP).
Prerequisites
- You have the OKD installation program and the pull secret for your cluster.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> (1)
1 For <installation_directory>, specify the directory name to store the files that the installation program creates.
When specifying the directory:
Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Select gcp as the platform to target.
If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file.
Select the project ID to provision the cluster in. The default value is specified by the service account that you configured.
Select the region to deploy the cluster to.
Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster.
Enter a descriptive name for your cluster.
Modify the install-config.yaml file. You can find more information about the available parameters in the “Installation configuration parameters” section.
If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 (see the sketch after this procedure). This ensures that the cluster’s control plane nodes are schedulable. For more information, see “Installing a three-node cluster on GCP”.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
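For reference, a minimal sketch of the compute stanza for a three-node cluster sets zero compute replicas:
compute:
- name: worker
  platform: {}
  replicas: 0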
Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS)[2] |
---|---|---|---|---|---|
Bootstrap | FCOS | 4 | 16 GB | 100 GB | 300 |
Control plane | FCOS | 4 | 16 GB | 100 GB | 300 |
Compute | FCOS | 2 | 8 GB | 100 GB | 300 |
One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
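For example, a machine with one socket, four cores, and two threads per core provides (2 threads × 4 cores) × 1 socket = 8 vCPUs.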
OKD and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
As with all user-provisioned installations, if you choose to use Fedora compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of Fedora 7 compute machines is deprecated and has been removed in OKD 4.10 and later.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use with OKD.
Tested instance types for GCP
The following Google Cloud Platform instance types have been tested with OKD.
Machine series
C2
C2D
C3
E2
M1
N1
N2
N2D
Tau T2D
Tested instance types for GCP on 64-bit ARM infrastructures
The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OKD.
Machine series for 64-bit ARM machines
Tau T2A
Using custom machine types
Using a custom machine type to install an OKD cluster is supported.
Consider the following when using a custom machine type:
Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see “Minimum resource requirements for cluster installation”.
The name of the custom machine type must adhere to the following syntax:
custom-<number_of_cpus>-<amount_of_memory_in_mb>
For example, custom-6-20480 specifies a machine type with 6 vCPUs and 20480 MB (20 GB) of memory.
As part of the installation process, you specify the custom machine type in the install-config.yaml
file.
Sample install-config.yaml
file with a custom machine type
compute:
- architecture: amd64
hyperthreading: Enabled
name: worker
platform:
gcp:
type: custom-6-20480
replicas: 2
controlPlane:
architecture: amd64
hyperthreading: Enabled
name: master
platform:
gcp:
type: custom-6-20480
replicas: 3
Enabling Shielded VMs
You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google’s documentation on Shielded VMs.
Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. |
Prerequisites
- You have created an
install-config.yaml
file.
Procedure
Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas:
To use Shielded VMs for only control plane machines:
controlPlane:
platform:
gcp:
secureBoot: Enabled
To use Shielded VMs for only compute machines:
compute:
- platform:
gcp:
secureBoot: Enabled
To use Shielded VMs for all machines:
platform:
gcp:
defaultMachinePlatform:
secureBoot: Enabled
Enabling Confidential VMs
You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google’s documentation on Confidential Computing. You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other.
Confidential VMs are currently not supported on 64-bit ARM architectures. |
Prerequisites
- You have created an
install-config.yaml
file.
Procedure
Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas:
To use Confidential VMs for only control plane machines:
controlPlane:
platform:
gcp:
confidentialCompute: Enabled (1)
type: n2d-standard-8 (2)
onHostMaintenance: Terminate (3)
1 Enable Confidential VMs.
2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types.
3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VMs, this value must be set to Terminate, which stops the VM. Confidential VMs do not support live VM migration.
To use Confidential VMs for only compute machines:
compute:
- platform:
gcp:
confidentialCompute: Enabled
type: n2d-standard-8
onHostMaintenance: Terminate
To use Confidential VMs for all machines:
platform:
gcp:
defaultMachinePlatform:
confidentialCompute: Enabled
type: n2d-standard-8
onHostMaintenance: Terminate
Sample customized install-config.yaml file for GCP
You can customize the install-config.yaml
file to specify more details about your OKD cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. |
apiVersion: v1
baseDomain: example.com (1)
credentialsMode: Mint (2)
controlPlane: (3) (4)
hyperthreading: Enabled (5)
name: master
platform:
gcp:
type: n2-standard-4
zones:
- us-central1-a
- us-central1-c
osDisk:
diskType: pd-ssd
diskSizeGB: 1024
encryptionKey: (6)
kmsKey:
name: worker-key
keyRing: test-machine-keys
location: global
projectID: project-id
tags: (7)
- control-plane-tag1
- control-plane-tag2
osImage: (8)
project: example-project-name
name: example-image-name
replicas: 3
compute: (3) (4)
- hyperthreading: Enabled (5)
name: worker
platform:
gcp:
type: n2-standard-4
zones:
- us-central1-a
- us-central1-c
osDisk:
diskType: pd-standard
diskSizeGB: 128
encryptionKey: (6)
kmsKey:
name: worker-key
keyRing: test-machine-keys
location: global
projectID: project-id
tags: (7)
- compute-tag1
- compute-tag2
osImage: (8)
project: example-project-name
name: example-image-name
replicas: 3
metadata:
name: test-cluster (1)
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OVNKubernetes (9)
serviceNetwork:
- 172.30.0.0/16
platform:
gcp:
projectID: openshift-production (1)
region: us-central1 (1)
defaultMachinePlatform:
tags: (7)
- global-tag1
- global-tag2
osImage: (8)
project: example-project-name
name: example-image-name
pullSecret: '{"auths": ...}' (1)
sshKey: ssh-ed25519 AAAA... (10)
1 | Required. The installation program prompts you for this value. |
2 | Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the “About the Cloud Credential Operator” section in the Authentication and authorization guide. |
3 | If you do not provide these parameters and values, the installation program provides the default value. |
4 | The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used. |
5 | Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines’ cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. |
6 | Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see “Machine management” → “Creating compute machine sets” → “Creating a compute machine set on GCP”. |
7 | Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter applies to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. |
8 | Optional: A custom Fedora CoreOS (FCOS) image that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. |
9 | The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN. The default value is OVNKubernetes. |
10 | Optional: You can provide the sshKey value that you use to access the machines in your cluster. |
Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OKD cluster to use a proxy by configuring the proxy settings in the install-config.yaml
file.
Prerequisites
You have an existing install-config.yaml file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and OpenStack, the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
additionalTrustBundle: | (4)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> (5)
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Fedora CoreOS (FCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the FCOS trust bundle.
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when an http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.
The installation program does not support the proxy readinessEndpoints field.
If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
Save the file and reference it when installing OKD.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy settings in the provided install-config.yaml
file. If no proxy settings are provided, a cluster
Proxy
object is still created, but it will have a nil spec
.
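After installation, one way to inspect the resulting configuration is to query the cluster Proxy object with the OpenShift CLI:
$ oc get proxy cluster -o yaml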
Only the Proxy object named cluster is supported, and no additional proxies can be created. |
Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc
) to interact with OKD from a command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OKD 4.14. Download and install the new version of oc. |
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.zip.
Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc
command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz.
Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
Alternatives to storing administrator-level secrets in the kube-system project
By default, administrator secrets are stored in the kube-system
project. If you configured the credentialsMode
parameter in the install-config.yaml
file to Manual
, you must use one of the following alternatives:
To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials.
To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials.
Manually creating long-term credentials
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system
namespace.
Procedure
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest custom resources (CRs) from the OKD release image by running the following command:
$ oc adm release extract \
--from=$RELEASE_IMAGE \
--credentials-requests \
--included \(1)
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \(2)
--to=<path_to_directory_for_credentials_requests> (3)
1 The --included parameter includes only the manifests that your specific cluster configuration requires.
2 Specify the location of the install-config.yaml file.
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
name: <component_credentials_request>
namespace: openshift-cloud-credential-operator
...
spec:
providerSpec:
apiVersion: cloudcredential.openshift.io/v1
kind: GCPProviderSpec
predefinedRoles:
- roles/storage.admin
- roles/iam.serviceAccountUser
skipServiceCheck: true
...
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.
Sample CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
name: <component_credentials_request>
namespace: openshift-cloud-credential-operator
...
spec:
providerSpec:
apiVersion: cloudcredential.openshift.io/v1
...
secretRef:
name: <component_secret>
namespace: <component_namespace>
...
Sample Secret object
apiVersion: v1
kind: Secret
metadata:
name: <component_secret>
namespace: <component_namespace>
data:
service_account.json: <base64_encoded_gcp_service_account_file>
Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. |
Configuring a GCP cluster to use short-term credentials
To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster.
Configuring the Cloud Credential Operator utility
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl
) binary.
The ccoctl utility is a Linux binary that must run in a Linux environment. |
Prerequisites
You have access to an OKD account with cluster administrator access.
You have installed the OpenShift CLI (
oc
).
Procedure
Obtain the OKD release image by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Obtain the CCO container image from the OKD release image by running the following command:
$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.
Extract the ccoctl binary from the CCO container image within the OKD release image by running the following command:
$ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret
Change the permissions to make ccoctl executable by running the following command:
$ chmod 775 ccoctl
Verification
To verify that ccoctl is ready to use, display the help file by running the following command:
$ ccoctl --help
Output of ccoctl --help
OpenShift credentials provisioning tool
Usage:
ccoctl [command]
Available Commands:
alibabacloud Manage credentials objects for alibaba cloud
aws Manage credentials objects for AWS cloud
azure Manage credentials objects for Azure
gcp Manage credentials objects for Google cloud
help Help about any command
ibmcloud Manage credentials objects for IBM Cloud
nutanix Manage credentials objects for Nutanix
Flags:
-h, --help help for ccoctl
Use "ccoctl [command] --help" for more information about a command.
Creating GCP resources with the Cloud Credential Operator utility
You can use the ccoctl gcp create-all
command to automate the creation of GCP resources.
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. |
Prerequisites
- You have extracted and prepared the ccoctl binary.
Procedure
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest objects from the OKD release image by running the following command:
$ oc adm release extract \
--from=$RELEASE_IMAGE \
--credentials-requests \
--included \(1)
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \(2)
--to=<path_to_directory_for_credentials_requests> (3)
1 The --included parameter includes only the manifests that your specific cluster configuration requires.
2 Specify the location of the install-config.yaml file.
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
This command might take a few moments to run.
Use the ccoctl tool to process all CredentialsRequest objects by running the following command:
$ ccoctl gcp create-all \
--name=<name> \(1)
--region=<gcp_region> \(2)
--project=<gcp_project_id> \(3)
--credentials-requests-dir=<path_to_credentials_requests_directory> (4)
1 Specify the user-defined name for all created GCP resources used for tracking.
2 Specify the GCP region in which cloud resources will be created.
3 Specify the GCP project ID in which cloud resources will be created.
4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts.
If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
Verification
To verify that the OKD secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:
$ ls <path_to_ccoctl_output_dir>/manifests
Example output
cluster-authentication-02-config.yaml
openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml
openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-gcp-cloud-credentials-credentials.yaml
You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts.
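For example, the following gcloud command lists the IAM service accounts in the project; the resources that ccoctl creates include the <name> value you specified, which helps to identify them:
$ gcloud iam service-accounts list --project=<gcp_project_id>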
Incorporating the Cloud Credential Operator utility manifests
To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility (ccoctl
) created to the correct directories for the installation program.
Prerequisites
You have configured an account with the cloud platform that hosts your cluster.
You have configured the Cloud Credential Operator utility (ccoctl).
You have created the cloud provider resources that are required for your cluster with the ccoctl utility.
Procedure
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests
Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command:
$ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
Copy the private key that the ccoctl utility generated in the tls directory to the installation directory by running the following command:
$ cp -a /<path_to_ccoctl_output_dir>/tls .
Using the GCP Marketplace offering
Using the GCP Marketplace offering lets you deploy an OKD cluster that is billed on a pay-per-use basis (hourly, per core) through GCP, while still being supported directly by Red Hat.
By default, the installation program downloads and installs the Fedora CoreOS (FCOS) image that is used to deploy compute machines. To deploy an OKD cluster using an FCOS image from the GCP Marketplace, override the default behavior by modifying the install-config.yaml file to reference the location of the GCP Marketplace offer.
Prerequisites
- You have an existing
install-config.yaml
file.
Procedure
Edit the compute.platform.gcp.osImage parameters to specify the location of the GCP Marketplace image:
Set the project parameter to redhat-marketplace-public.
Set the name parameter to one of the following offers:
OKD
redhat-coreos-ocp-413-x86-64-202305021736
OpenShift Platform Plus
redhat-coreos-opp-413-x86-64-202305021736
OpenShift Kubernetes Engine
redhat-coreos-oke-413-x86-64-202305021736
Save the file and reference it when deploying the cluster.
Sample install-config.yaml
file that specifies a GCP Marketplace image for compute machines
apiVersion: v1
baseDomain: example.com
controlPlane:
# ...
compute:
platform:
gcp:
osImage:
project: redhat-marketplace-public
name: redhat-coreos-ocp-413-x86-64-202305021736
# ...
Deploying the cluster
You can install OKD on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation. |
Prerequisites
You have configured an account with the cloud platform that hosts your cluster.
You have the OKD installation program and the pull secret for your cluster.
You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Procedure
Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations:
The GOOGLE_CREDENTIALS, GOOGLE_CLOUD_KEYFILE_JSON, or GCLOUD_KEYFILE_JSON environment variables
The ~/.gcp/osServiceAccount.json file
The gcloud cli default credentials
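For example, one way to clear the environment variables in the current shell is with unset; adjust as needed for your shell:
$ unset GOOGLE_CREDENTIALS GOOGLE_CLOUD_KEYFILE_JSON GCLOUD_KEYFILE_JSON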
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn, debug, or error instead of info.
Optional: You can reduce the number of permissions for the service account that you used to install the cluster.
If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role.
If you included the Service Account Key Admin role, you can remove it.
Verification
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
Credential information also outputs to <installation_directory>/.openshift_install.log.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. |
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file. The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OKD installation.
Prerequisites
You deployed an OKD cluster.
You installed the
oc
CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify that you can run oc commands successfully by using the exported configuration:
$ oc whoami
Example output
system:admin
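As an additional check, you can list the cluster nodes; the output varies by cluster:
$ oc get nodes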
Additional resources
- See Accessing the web console for more details about accessing and understanding the OKD web console.
Additional resources
- See About remote health monitoring for more information about the Telemetry service.
Next steps
If necessary, you can opt out of remote health reporting.