- Installing a cluster on GCP in a restricted network
- Prerequisites
- About installations in restricted networks
- Generating a key pair for cluster node SSH access
- Creating the installation configuration file
- Installation configuration parameters
- Minimum resource requirements for cluster installation
- Tested instance types for GCP
- Using custom machine types
- Sample customized install-config.yaml file for GCP
- Create an Ingress Controller with global access on GCP
- Configuring the cluster-wide proxy during installation
- Deploying the cluster
- Installing the OpenShift CLI by downloading the binary
- Logging in to the cluster by using the CLI
- Disabling the default OperatorHub catalog sources
- Next steps
Installing a cluster on GCP in a restricted network
In OKD 4.12, you can install a cluster on Google Cloud Platform (GCP) in a restricted network by creating an internal mirror of the installation release content on an existing Google Virtual Private Cloud (VPC).
You can install an OKD cluster by using mirrored installation release content, but your cluster will require internet access to use the GCP APIs.
Prerequisites
You reviewed details about the OKD installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You mirrored the images for a disconnected installation to your registry and obtained the `imageContentSources` data for your version of OKD. Because the installation media is on the mirror host, you can use that computer to complete all installation steps.
You have an existing VPC in GCP. While installing a cluster in a restricted network that uses installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements:
Contains the mirror registry
Has firewall rules or a peering connection to access the mirror registry hosted elsewhere
If you use a firewall, you configured it to allow the sites that your cluster requires access to. While you might need to grant access to more sites, you must grant access to `*.googleapis.com` and `accounts.google.com`.
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the `kube-system` namespace, you can manually create and maintain IAM credentials.
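For example, if you plan to manage credentials manually, you can tell the Cloud Credential Operator not to store an administrator-level secret by adding the `credentialsMode` parameter to the `install-config.yaml` file before you create the cluster. A minimal sketch, assuming manual credential management is acceptable for your environment:
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual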
About installations in restricted networks
In OKD 4.12, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.
If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the OKD registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
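For example, you can populate such a registry with the `oc adm release mirror` command. The following form is an illustrative sketch in which the pull secret file, release tag, and registry host are placeholders; the command prints the `imageContentSources` snippet that you later add to the `install-config.yaml` file:
$ oc adm release mirror -a <pull_secret_file> \
  --from=quay.io/openshift/okd:<release_tag> \
  --to=<mirror_host_name>:5000/<repo_name>/release \
  --to-release-image=<mirror_host_name>:5000/<repo_name>/release:<release_tag>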
Additional limits
Clusters in restricted networks have the following additional limitations and restrictions:
The `ClusterVersion` status includes an `Unable to retrieve available updates` error.
By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.
Generating a key pair for cluster node SSH access
During an OKD installation, you can provide an SSH public key to the installation program. The key is passed to the Fedora CoreOS (FCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the `~/.ssh/authorized_keys` list for the `core` user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the FCOS nodes as the user `core`. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The `./openshift-install gather` command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the `/home/core/.ssh/authorized_keys.d/core` file.
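For example, if you later need to collect debugging data from a failed installation, the gather command takes the same installation directory that you pass to the installation program; a sketch:
$ ./openshift-install gather bootstrap --dir <installation_directory>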
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 Specify the path and file name, such as `~/.ssh/id_ed25519`, of the new SSH key. If you have an existing key pair, ensure your public key is in your `~/.ssh` directory.
If you plan to install an OKD cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the `x86_64` architecture, do not create a key that uses the `ed25519` algorithm. Instead, create a key that uses the `rsa` or `ecdsa` algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the `~/.ssh/id_ed25519.pub` public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the `./openshift-install gather` command.
On some distributions, default SSH private key identities such as `~/.ssh/id_rsa` and `~/.ssh/id_dsa` are managed automatically.
If the `ssh-agent` process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
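For example, a minimal sketch of generating an RSA key instead of an Ed25519 key:
$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name>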
Add your SSH private key to the `ssh-agent`:
$ ssh-add <path>/<file_name> (1)
1 Specify the path and file name for your SSH private key, such as `~/.ssh/id_ed25519`.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OKD, provide the SSH public key to the installation program.
Creating the installation configuration file
You can customize the OKD cluster you install on Google Cloud Platform (GCP).
Prerequisites
Obtain the OKD installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.
Have the `imageContentSources` values that were generated during mirror registry creation.
Obtain the contents of the certificate for your mirror registry.
Procedure
Create the `install-config.yaml` file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> (1)
1 For `<installation_directory>`, specify the directory name to store the files that the installation program creates.
When specifying the directory:
Verify that the directory has the `execute` permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.
Select gcp as the platform to target.
If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file.
Select the project ID to provision the cluster in. The default value is specified by the service account that you configured.
Select the region to deploy the cluster to.
Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster.
Enter a descriptive name for your cluster.
Paste the pull secret from the Red Hat OpenShift Cluster Manager. This field is optional.
Edit the `install-config.yaml` file to give the additional information that is required for an installation in a restricted network.
Update the `pullSecret` value to contain the authentication information for your registry:
pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "you@example.com"}}}'
For `<mirror_host_name>`, specify the registry domain name that you specified in the certificate for your mirror registry, and for `<credentials>`, specify the base64-encoded user name and password for your mirror registry.
Add the `additionalTrustBundle` parameter and value.
parameter and value.additionalTrustBundle: |
-----BEGIN CERTIFICATE-----
ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
-----END CERTIFICATE-----
The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry.
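Because `additionalTrustBundle` is a YAML block scalar, every line of the certificate must be indented. As a sketch, assuming your certificate is stored in a hypothetical registry-ca.pem file, you can produce correctly indented lines to paste into the file:
$ sed 's/^/  /' registry-ca.pem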
Define the network and subnets for the VPC to install the cluster in under the parent `platform.gcp` field:
network: <existing_vpc>
controlPlaneSubnet: <control_plane_subnet>
computeSubnet: <compute_subnet>
For `platform.gcp.network`, specify the name for the existing Google VPC. For `platform.gcp.controlPlaneSubnet` and `platform.gcp.computeSubnet`, specify the existing subnets to deploy the control plane machines and compute machines, respectively.
Add the image content resources, which resemble the following YAML excerpt:
imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: registry.example.com/ocp/release
For these values, use the `imageContentSources` that you recorded during mirror registry creation.
Make any other modifications to the `install-config.yaml` file that you require. You can find more information about the available parameters in the Installation configuration parameters section.
Back up the `install-config.yaml` file so that you can use it to install multiple clusters.
The `install-config.yaml` file is consumed during the installation process. If you want to reuse the file, you must back it up now.
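For example, a simple copy keeps a reusable version alongside the consumed one (the backup file name is illustrative):
$ cp install-config.yaml install-config.yaml.backup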
Installation configuration parameters
Before you deploy an OKD cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the `install-config.yaml` installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the `install-config.yaml` file to provide more details about the platform.
After installation, you cannot modify these parameters in the `install-config.yaml` file.
Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| `apiVersion` | The API version for the `install-config.yaml` content. The current version is `v1`. The installation program may also support older API versions. | String |
| `baseDomain` | The base domain of your cloud provider. The base domain is used to create routes to your OKD cluster components. The full DNS name for your cluster is a combination of the `baseDomain` and `metadata.name` parameter values that uses the `<metadata.name>.<baseDomain>` format. | A fully-qualified domain or subdomain name, such as `example.com`. |
| `metadata` | Kubernetes resource `ObjectMeta`, from which only the `name` parameter is consumed. | Object |
| `metadata.name` | The name of the cluster. DNS records for the cluster are all subdomains of `{{.metadata.name}}.{{.baseDomain}}`. | String of lowercase letters, hyphens (`-`), and periods (`.`), such as `dev`. |
| `platform` | The configuration for the specific platform upon which to perform the installation: `alibabacloud`, `aws`, `baremetal`, `azure`, `gcp`, `ibmcloud`, `nutanix`, `openstack`, `ovirt`, `vsphere`, or `{}`. | Object |
Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster.
| Parameter | Description | Values |
|---|---|---|
| `networking` | The configuration for the cluster network. You cannot change parameters specified by the `networking` object after installation. | Object |
| `networking.networkType` | The Red Hat OpenShift Networking network plugin to install. | Either `OpenShiftSDN` or `OVNKubernetes`. The default value is `OVNKubernetes`. |
| `networking.clusterNetwork` | The IP address blocks for pods. The default value is `10.128.0.0/14` with a host prefix of `/23`. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: `clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23` |
| `networking.clusterNetwork.cidr` | Required if you use `networking.clusterNetwork`. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between `0` and `32`. |
| `networking.clusterNetwork.hostPrefix` | The subnet prefix length to assign to each individual node. For example, if `hostPrefix` is set to `23` then each node is assigned a `/23` subnet out of the given `cidr`. A `hostPrefix` value of `23` provides 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is `23`. |
| `networking.serviceNetwork` | The IP address block for services. The default value is `172.30.0.0/16`. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: `serviceNetwork: - 172.30.0.0/16` |
| `networking.machineNetwork` | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: `machineNetwork: - cidr: 10.0.0.0/16` |
| `networking.machineNetwork.cidr` | Required if you use `networking.machineNetwork`. An IP address block. | An IP network block in CIDR notation. For example, `10.0.0.0/16`. |
Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| `additionalTrustBundle` | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| `capabilities` | Controls the installation of optional core cluster components. You can reduce the footprint of your OKD cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing. | String array |
| `capabilities.baselineCapabilitySet` | Selects an initial set of optional capabilities to enable. Valid values are `None`, `v4.11`, `v4.12`, and `vCurrent`. The default value is `vCurrent`. | String |
| `capabilities.additionalEnabledCapabilities` | Extends the set of optional capabilities beyond what you specify in `baselineCapabilitySet`. You can specify multiple capabilities in this parameter. | String array |
| `compute` | The configuration for the machines that comprise the compute nodes. | Array of `MachinePool` objects |
| `compute.architecture` | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are `amd64` (the default). | String |
| `compute.hyperthreading` | Whether to enable or disable simultaneous multithreading, or `hyperthreading`, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | `Enabled` or `Disabled` |
| `compute.name` | Required if you use `compute`. The name of the machine pool. | `worker` |
| `compute.platform` | Required if you use `compute`. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the `controlPlane.platform` parameter value. | `alibabacloud`, `aws`, `azure`, `gcp`, `ibmcloud`, `nutanix`, `openstack`, `ovirt`, `vsphere`, or `{}` |
| `compute.replicas` | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to `2`. The default value is `3`. |
| `featureSet` | Enables the cluster for a feature set. A feature set is a collection of OKD features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". | String. The name of the feature set to enable, such as `TechPreviewNoUpgrade`. |
| `controlPlane` | The configuration for the machines that comprise the control plane. | Array of `MachinePool` objects |
| `controlPlane.architecture` | Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are `amd64` (the default). | String |
| `controlPlane.hyperthreading` | Whether to enable or disable simultaneous multithreading, or `hyperthreading`, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | `Enabled` or `Disabled` |
| `controlPlane.name` | Required if you use `controlPlane`. The name of the machine pool. | `master` |
| `controlPlane.platform` | Required if you use `controlPlane`. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the `compute.platform` parameter value. | `alibabacloud`, `aws`, `azure`, `gcp`, `ibmcloud`, `nutanix`, `openstack`, `ovirt`, `vsphere`, or `{}` |
| `controlPlane.replicas` | The number of control plane machines to provision. | The only supported value is `3`, which is the default value. |
| `credentialsMode` | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. If you are installing on GCP into a shared virtual private cloud (VPC), `credentialsMode` must be set to `Passthrough` or `Manual`. | `Mint`, `Passthrough`, `Manual`, or an empty string (`""`) |
| `imageContentSources` | Sources and repositories for the release-image content. | Array of objects. Includes a `source` and, optionally, `mirrors`, as described in the following rows of this table. |
| `imageContentSources.source` | Required if you use `imageContentSources`. Specify the repository that users refer to, for example, in image pull specifications. | String |
| `imageContentSources.mirrors` | Specify one or more repositories that may also contain the same images. | Array of strings |
| `publish` | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | `Internal` or `External`. The default value is `External`. |
| `sshKey` | The SSH key or keys to authenticate access to your cluster machines. For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses. | One or more keys. |
Additional Google Cloud Platform (GCP) configuration parameters
Additional GCP configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| `platform.gcp.network` | The name of the existing Virtual Private Cloud (VPC) where you want to deploy your cluster. If you want to deploy your cluster into a shared VPC, you must set `platform.gcp.networkProjectID` with the name of the GCP project that contains the shared VPC. | String. |
| `platform.gcp.networkProjectID` | Optional. The name of the GCP project that contains the shared VPC where you want to deploy your cluster. | String. |
| `platform.gcp.projectID` | The name of the GCP project where the installation program installs the cluster. | String. |
| `platform.gcp.region` | The name of the GCP region that hosts your cluster. | Any valid region name, such as `us-central1`. |
| `platform.gcp.controlPlaneSubnet` | The name of the existing subnet where you want to deploy your control plane machines. | The subnet name. |
| `platform.gcp.computeSubnet` | The name of the existing subnet where you want to deploy your compute machines. | The subnet name. |
| `platform.gcp.createFirewallRules` | Optional. Set this value to `Disabled` if you want to create and manage your firewall rules by using network tags instead of having the installation program create them. | `Enabled` or `Disabled`. The default value is `Enabled`. |
| `platform.gcp.publicDNSZone.project` | Optional. The name of the project that contains the public DNS zone. If you set this value, your service account must have the `dns.admin` privileges in the specified project. | The name of the project that contains the public DNS zone. |
| `platform.gcp.publicDNSZone.id` | Optional. The ID or name of an existing public DNS zone. The public DNS zone domain must match the `baseDomain` parameter. | The public DNS zone name. |
| `platform.gcp.privateDNSZone.project` | Optional. The name of the project that contains the private DNS zone. If you set this value, your service account must have the `dns.admin` privileges in the specified project. | The name of the project that contains the private DNS zone. |
| `platform.gcp.privateDNSZone.id` | Optional. The ID or name of an existing private DNS zone. If you do not set this value, the installation program will create a private DNS zone in the service project. | The private DNS zone name. |
| `platform.gcp.licenses` | A list of license URLs that must be applied to the compute images. | Any license available with the license API, such as the license to enable nested virtualization. You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installation program to copy the source image before use. |
| `platform.gcp.defaultMachinePlatform.zones` | The availability zones where the installation program creates machines. | A list of valid GCP availability zones, such as `us-central1-a` and `us-central1-c`. |
| `platform.gcp.defaultMachinePlatform.osDisk.diskSizeGB` | The size of the disk in gigabytes (GB). | Any size between 16 GB and 65536 GB. |
| `platform.gcp.defaultMachinePlatform.osDisk.diskType` | The GCP disk type. | Either the default `pd-ssd` or the `pd-standard` disk type. |
| `platform.gcp.defaultMachinePlatform.tags` | Optional. Additional network tags to add to the control plane and compute machines. | One or more strings, for example `network-tag1`. |
| `platform.gcp.defaultMachinePlatform.type` | The GCP machine type for control plane and compute machines. | The GCP machine type, for example `n1-standard-4`. |
| `platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.name` | The name of the customer managed encryption key to be used for machine disk encryption. | The encryption key name. |
| `platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.keyRing.name` | The name of the Key Management Service (KMS) key ring to which the KMS key belongs. | The KMS key ring name. |
| `platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.keyRing.location` | The GCP location in which the KMS key ring exists. | The GCP location. |
| `platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.keyRing.projectID` | The ID of the project in which the KMS key ring exists. This value defaults to the value of the `platform.gcp.projectID` parameter if it is not set. | The GCP project ID. |
| `platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKeyServiceAccount` | The GCP service account used for the encryption request for control plane and compute machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts. | The GCP service account email, for example `<service_account_name>@<project_id>.iam.gserviceaccount.com`. |
| `controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.name` | The name of the customer managed encryption key to be used for control plane machine disk encryption. | The encryption key name. |
| `controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing.name` | For control plane machines, the name of the KMS key ring to which the KMS key belongs. | The KMS key ring name. |
| `controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing.location` | For control plane machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations. | The GCP location for the key ring. |
| `controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing.projectID` | For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. | The GCP project ID. |
| `controlPlane.platform.gcp.osDisk.encryptionKey.kmsKeyServiceAccount` | The GCP service account used for the encryption request for control plane machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts. | The GCP service account email, for example `<service_account_name>@<project_id>.iam.gserviceaccount.com`. |
| `controlPlane.platform.gcp.osDisk.diskSizeGB` | The size of the disk in gigabytes (GB). This value applies to control plane machines. | Any integer between 16 and 65536. |
| `controlPlane.platform.gcp.osDisk.diskType` | The GCP disk type for control plane machines. | Control plane machines must use the `pd-ssd` disk type, which is the default. |
| `controlPlane.platform.gcp.tags` | Optional. Additional network tags to add to the control plane machines. If set, this parameter overrides the `platform.gcp.defaultMachinePlatform.tags` parameter for control plane machines. | One or more strings, for example `control-plane-tag1`. |
| `controlPlane.platform.gcp.type` | The GCP machine type for control plane machines. If set, this parameter overrides the `platform.gcp.defaultMachinePlatform.type` parameter. | The GCP machine type, for example `n1-standard-4`. |
| `controlPlane.platform.gcp.zones` | The availability zones where the installation program creates control plane machines. | A list of valid GCP availability zones, such as `us-central1-a` and `us-central1-c`. |
| `compute.platform.gcp.osDisk.encryptionKey.kmsKey.name` | The name of the customer managed encryption key to be used for compute machine disk encryption. | The encryption key name. |
| `compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing.name` | For compute machines, the name of the KMS key ring to which the KMS key belongs. | The KMS key ring name. |
| `compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing.location` | For compute machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations. | The GCP location for the key ring. |
| `compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing.projectID` | For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. | The GCP project ID. |
| `compute.platform.gcp.osDisk.encryptionKey.kmsKeyServiceAccount` | The GCP service account used for the encryption request for compute machines. If this value is not set, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts. | The GCP service account email, for example `<service_account_name>@<project_id>.iam.gserviceaccount.com`. |
| `compute.platform.gcp.osDisk.diskSizeGB` | The size of the disk in gigabytes (GB). This value applies to compute machines. | Any integer between 16 and 65536. |
| `compute.platform.gcp.osDisk.diskType` | The GCP disk type for compute machines. | Either the default `pd-ssd` or the `pd-standard` disk type. |
| `compute.platform.gcp.tags` | Optional. Additional network tags to add to the compute machines. If set, this parameter overrides the `platform.gcp.defaultMachinePlatform.tags` parameter for compute machines. | One or more strings, for example `compute-tag1`. |
| `compute.platform.gcp.type` | The GCP machine type for compute machines. If set, this parameter overrides the `platform.gcp.defaultMachinePlatform.type` parameter. | The GCP machine type, for example `n1-standard-4`. |
| `compute.platform.gcp.zones` | The availability zones where the installation program creates compute machines. | A list of valid GCP availability zones, such as `us-central1-a` and `us-central1-c`. |
Minimum resource requirements for cluster installation
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | IOPS [2] |
---|---|---|---|---|---|
Bootstrap | FCOS | 4 | 16 GB | 100 GB | 300 |
Control plane | FCOS | 4 | 16 GB | 100 GB | 300 |
Compute | FCOS | 2 | 8 GB | 100 GB | 300 |
1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs. For example, a single-socket machine with 2 cores and 2 threads per core provides (2 × 2) × 1 = 4 vCPUs.
2. OKD and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
As with all user-provisioned installations, if you choose to use Fedora compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OKD 4.10 and later.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OKD.
Tested instance types for GCP
The following Google Cloud Platform instance types have been tested with OKD.
Machine series
C2
E2
M1
N1
N2
N2D
Tau T2D
Using custom machine types
Using a custom machine type to install an OKD cluster is supported.
Consider the following when using a custom machine type:
Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see “Minimum resource requirements for cluster installation”.
The name of the custom machine type must adhere to the following syntax:
custom-<number_of_cpus>-<amount_of_memory_in_mb>
For example, `custom-6-20480` specifies a machine type with 6 vCPUs and 20,480 MB (20 GB) of memory.
As part of the installation process, you specify the custom machine type in the `install-config.yaml` file.
Sample `install-config.yaml` file with a custom machine type
compute:
- architecture: amd64
hyperthreading: Enabled
name: worker
platform:
gcp:
type: custom-6-20480
replicas: 2
controlPlane:
architecture: amd64
hyperthreading: Enabled
name: master
platform:
gcp:
type: custom-6-20480
replicas: 3
Sample customized install-config.yaml file for GCP
You can customize the `install-config.yaml` file to specify more details about your OKD cluster's platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your `install-config.yaml` file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com (1)
controlPlane: (2) (3)
hyperthreading: Enabled (4)
name: master
platform:
gcp:
type: n2-standard-4
zones:
- us-central1-a
- us-central1-c
osDisk:
diskType: pd-ssd
diskSizeGB: 1024
encryptionKey: (5)
kmsKey:
name: worker-key
keyRing: test-machine-keys
location: global
projectID: project-id
tags: (6)
- control-plane-tag1
- control-plane-tag2
replicas: 3
compute: (2) (3)
- hyperthreading: Enabled (4)
name: worker
platform:
gcp:
type: n2-standard-4
zones:
- us-central1-a
- us-central1-c
osDisk:
diskType: pd-standard
diskSizeGB: 128
encryptionKey: (5)
kmsKey:
name: worker-key
keyRing: test-machine-keys
location: global
projectID: project-id
tags: (6)
- compute-tag1
- compute-tag2
replicas: 3
metadata:
name: test-cluster (1)
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OVNKubernetes (7)
serviceNetwork:
- 172.30.0.0/16
platform:
gcp:
projectID: openshift-production (1)
region: us-central1 (1)
defaultMachinePlatform:
tags: (6)
- global-tag1
- global-tag2
network: existing_vpc (8)
controlPlaneSubnet: control_plane_subnet (9)
computeSubnet: compute_subnet (10)
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}' (11)
sshKey: ssh-ed25519 AAAA... (12)
additionalTrustBundle: | (13)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
imageContentSources: (14)
- mirrors:
- <local_registry>/<local_repository_name>/release
source: quay.io/openshift-release-dev/ocp-release
- mirrors:
- <local_registry>/<local_repository_name>/release
source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
1 Required. The installation program prompts you for this value.
2 If you do not provide these parameters and values, the installation program provides the default value.
3 The `controlPlane` section is a single mapping, but the `compute` section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the `compute` section must begin with a hyphen, `-`, and the first line of the `controlPlane` section must not. Only one control plane pool is used.
4 Whether to enable or disable simultaneous multithreading, or `hyperthreading`. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to `Disabled`. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
5 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the `service-<project_number>@compute-system.iam.gserviceaccount.com` pattern. For more information about granting the correct permissions for your service account, see "Machine management" → "Creating compute machine sets" → "Creating a compute machine set on GCP".
6 Optional: A set of network tags to apply to the control plane or compute machine sets. The `platform.gcp.defaultMachinePlatform.tags` parameter applies to both control plane and compute machines. If the `compute.platform.gcp.tags` or `controlPlane.platform.gcp.tags` parameters are set, they override the `platform.gcp.defaultMachinePlatform.tags` parameter.
7 The cluster network plugin to install. The supported values are `OVNKubernetes` and `OpenShiftSDN`. The default value is `OVNKubernetes`.
8 Specify the name of an existing VPC.
9 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified.
10 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified.
11 For `<local_registry>`, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, `registry.example.com` or `registry.example.com:5000`. For `<credentials>`, specify the base64-encoded user name and password for your mirror registry.
12 You can optionally provide the `sshKey` value that you use to access the machines in your cluster. For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.
13 Provide the contents of the certificate file that you used for your mirror registry.
14 Provide the `imageContentSources` section from the output of the command to mirror the repository.
Create an Ingress Controller with global access on GCP
You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers.
Prerequisites
- You created the `install-config.yaml` file and completed any modifications to it.
Procedure
Create an Ingress Controller with global access on a new GCP cluster.
Change to the directory that contains the installation program and create a manifest file:
$ ./openshift-install create manifests --dir <installation_directory> (1)
1 For `<installation_directory>`, specify the name of the directory that contains the `install-config.yaml` file for your cluster.
After creating the file, several network configuration files are in the `manifests/` directory, as shown:
$ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml
Example output
cluster-ingress-default-ingresscontroller.yaml
Open the `cluster-ingress-default-ingresscontroller.yaml` file in an editor and enter a custom resource (CR) that describes the Operator configuration you want:
Sample `clientAccess` configuration to `Global`
spec:
  endpointPublishingStrategy:
    loadBalancer:
      providerParameters:
        gcp:
          clientAccess: Global (1)
        type: GCP
      scope: Internal (2)
    type: LoadBalancerService
1 Set `gcp.clientAccess` to `Global`.
2 Global access is only available to Ingress Controllers using internal load balancers.
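After the cluster is deployed, you can confirm the setting on the resulting Ingress Controller. For example, the following sketch assumes the default Ingress Controller:
$ oc -n openshift-ingress-operator get ingresscontroller/default -o yaml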
Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OKD cluster to use a proxy by configuring the proxy settings in the `install-config.yaml` file.
Prerequisites
You have an existing `install-config.yaml` file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the `Proxy` object's `spec.noProxy` field to bypass the proxy if necessary.
The `Proxy` object `status.noProxy` field is populated with the values of the `networking.machineNetwork[].cidr`, `networking.clusterNetwork[].cidr`, and `networking.serviceNetwork[]` fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and OpenStack, the `Proxy` object `status.noProxy` field is also populated with the instance metadata endpoint (`169.254.169.254`).
Procedure
Edit your `install-config.yaml` file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
additionalTrustBundle: | (4)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> (5)
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be `http`.
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with `.` to match subdomains only. For example, `.y.com` matches `x.y.com`, but not `y.com`. Use `*` to bypass the proxy for all destinations.
4 If provided, the installation program generates a config map that is named `user-ca-bundle` in the `openshift-config` namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a `trusted-ca-bundle` config map that merges these contents with the Fedora CoreOS (FCOS) trust bundle, and this config map is referenced in the `trustedCA` field of the `Proxy` object. The `additionalTrustBundle` field is required unless the proxy's identity certificate is signed by an authority from the FCOS trust bundle.
5 Optional: The policy to determine the configuration of the `Proxy` object to reference the `user-ca-bundle` config map in the `trustedCA` field. The allowed values are `Proxyonly` and `Always`. Use `Proxyonly` to reference the `user-ca-bundle` config map only when `http/https` proxy is configured. Use `Always` to always reference the `user-ca-bundle` config map. The default value is `Proxyonly`.
The installation program does not support the proxy `readinessEndpoints` field.
Save the file and reference it when installing OKD.
The installation program creates a cluster-wide proxy that is named `cluster` that uses the proxy settings in the provided `install-config.yaml` file. If no proxy settings are provided, a `cluster` `Proxy` object is still created, but it will have a nil `spec`.
Only the `Proxy` object named `cluster` is supported, and no additional proxies can be created.
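After the cluster is deployed, you can inspect the resulting proxy configuration; for example:
$ oc get proxy/cluster -o yaml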
Deploying the cluster
You can install OKD on a compatible cloud platform.
You can run the `create cluster` command of the installation program only once, during initial installation.
Prerequisites
Configure an account with the cloud platform that hosts your cluster.
Obtain the OKD installation program and the pull secret for your cluster.
Procedure
Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations:
The `GOOGLE_CREDENTIALS`, `GOOGLE_CLOUD_KEYFILE_JSON`, or `GCLOUD_KEYFILE_JSON` environment variables
The `~/.gcp/osServiceAccount.json` file
The `gcloud` CLI default credentials
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 For `<installation_directory>`, specify the location of your customized `./install-config.yaml` file.
2 To view different installation details, specify `warn`, `debug`, or `error` instead of `info`.
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
Optional: You can reduce the number of permissions for the service account that you used to install the cluster.
If you assigned the `Owner` role to your service account, you can remove that role and replace it with the `Viewer` role.
If you included the `Service Account Key Admin` role, you can remove it.
Verification
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the `kubeadmin` user.
Credential information also outputs to `<installation_directory>/.openshift_install.log`.
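The `kubeadmin` password is also written to a file in the installation directory, which you can read directly; for example:
$ cat <installation_directory>/auth/kubeadmin-password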
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (`oc`) to interact with OKD from a command-line interface. You can install `oc` on Linux, Windows, or macOS.
If you installed an earlier version of `oc`, you cannot use it to complete all of the commands in OKD 4.12. Download and install the new version of `oc`.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (`oc`) binary on Linux by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download `oc.tar.gz`.
Unpack the archive:
$ tar xvzf <file>
Place the `oc` binary in a directory that is on your `PATH`.
To check your `PATH`, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the `oc` command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (`oc`) binary on Windows by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download `oc.zip`.
Unzip the archive with a ZIP program.
Move the `oc` binary to a directory that is on your `PATH`.
To check your `PATH`, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the `oc` command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (`oc`) binary on macOS by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download `oc.tar.gz`.
Unpack and unzip the archive.
Move the `oc` binary to a directory on your `PATH`.
To check your `PATH`, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the `oc` command:
$ oc <command>
Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster `kubeconfig` file. The `kubeconfig` file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OKD installation.
Prerequisites
You deployed an OKD cluster.
You installed the `oc` CLI.
Procedure
Export the `kubeadmin` credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 For `<installation_directory>`, specify the path to the directory that you stored the installation files in.
Verify you can run `oc` commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
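As an additional check, you can list the cluster nodes with the same exported configuration; for example:
$ oc get nodes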
Disabling the default OperatorHub catalog sources
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OKD installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.
Procedure
Disable the sources for the default catalogs by adding `disableAllDefaultSources: true` to the `OperatorHub` object:
object:$ oc patch OperatorHub cluster --type json \
-p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
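You can verify the change by listing the catalog sources in the `openshift-marketplace` namespace; as a sketch, the default sources should no longer appear in the output:
$ oc get catalogsources -n openshift-marketplace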
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources.
Additional resources
- See About remote health monitoring for more information about the Telemetry service
Next steps
Configure image streams for the Cluster Samples Operator and the `must-gather` tool.
Learn how to use Operator Lifecycle Manager (OLM) on restricted networks.
If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores.
If necessary, you can opt out of remote health reporting.