- Installing a private cluster on Azure
- Prerequisites
- Private clusters
- About reusing a VNet for your OKD cluster
- Generating an SSH private key and adding it to the agent
- Obtaining the installation program
- Manually creating the installation configuration file
- Deploying the cluster
- Installing the OpenShift CLI by downloading the binary
- Logging in to the cluster by using the CLI
- Next steps
Installing a private cluster on Azure
In OKD version 4.7, you can install a private cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml
file before you install the cluster.
Prerequisites
Review details about the OKD installation and update processes.
Configure an Azure account to host the cluster and determine the tested and validated region to deploy the cluster to.
If you use a firewall, you must configure it to allow the sites that your cluster requires access to.
If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.
Private clusters
You can deploy a private OKD cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the Internet.
By default, OKD is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.
To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.
Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud that you provision to, the hosts on the network that you provision, and to the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company’s guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.
Private clusters in Azure
To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic.
Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster’s private DNS records. The cluster’s machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation.
The cluster still requires access to the internet to access the Azure APIs.
The following items are not required or created when you install a private cluster:
A BaseDomainResourceGroup, since the cluster does not create public records
Public IP addresses
Public DNS records
Public endpoints
The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
Limitations
Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet.
User-defined outbound routing
In OKD, you can choose your own outbound routing for a cluster to connect to the Internet. This allows you to skip the creation of public IP addresses and the public load balancer.
You can configure user-defined routing by modifying parameters in the install-config.yaml
file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this.
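For example, the outbound routing strategy is selected through the outboundType parameter in the platform.azure section of install-config.yaml, as in the following excerpt. The values shown here are placeholders; the full sample file appears later in this document:
platform:
  azure:
    region: centralus
    outboundType: UserDefinedRouting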
When configuring a cluster to use user-defined routing, the installation program does not create the following resources:
Outbound rules for access to the Internet.
Public IPs for the public load balancer.
Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests.
You must ensure the following items are available before setting user-defined routing:
Egress to the Internet is possible to pull container images, unless using an internal registry mirror.
The cluster can access Azure APIs.
Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section.
There are several pre-existing networking setups that are supported for Internet access using user-defined routing.
Private cluster with network address translation
You can use Azure VNET network address translation (NAT) to provide outbound Internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions.
When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints.
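The exact commands depend on your environment. As a rough sketch of the approach described in that Azure documentation, you create a NAT gateway with a public IP address and attach it to the existing cluster subnets; all resource names below are placeholders:
$ az network public-ip create --resource-group <vnet_resource_group> --name <nat_ip> --sku Standard
$ az network nat gateway create --resource-group <vnet_resource_group> --name <nat_gateway> --public-ip-addresses <nat_ip> --idle-timeout 10
$ az network vnet subnet update --resource-group <vnet_resource_group> --vnet-name <vnet> --name <compute_subnet> --nat-gateway <nat_gateway>
Repeat the subnet update for each subnet, such as the control plane subnet, that needs outbound access.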
Private cluster with Azure Firewall
You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation.
When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints.
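A common pattern, sketched here with placeholder names and assuming the firewall already exists, is to point a route table’s default route at the firewall’s private IP address and associate that route table with the cluster subnets:
$ az network route-table create --resource-group <vnet_resource_group> --name <cluster_route_table>
$ az network route-table route create --resource-group <vnet_resource_group> --route-table-name <cluster_route_table> --name default-to-firewall --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address <firewall_private_ip>
$ az network vnet subnet update --resource-group <vnet_resource_group> --vnet-name <vnet> --name <control_plane_subnet> --route-table <cluster_route_table>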
Private cluster with a proxy configuration
You can use a proxy with user-defined routing to allow egress to the Internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy.
When using the default route table for subnets, with 0.0.0.0/0
populated automatically by Azure, all Azure API requests are routed over Azure’s internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints.
Private cluster with no Internet access
You can have VNets with no access to the Internet if your cluster has access to the following:
An internal registry mirror that allows for pulling container images
Access to Azure APIs
With these requirements available, you can use user-defined routing to create private clusters with no public endpoints.
About reusing a VNet for your OKD cluster
In OKD 4.7, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules.
By deploying OKD into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.
The use of an existing VNet requires the use of the updated Azure Private DNS (preview) feature. See Announcing Preview Refresh for Azure DNS Private Zones for more information about the limitations of this feature.
Requirements for using your VNet
When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet:
Subnets
Route tables
VNets
Network Security Groups
If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster.
The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group.
Your VNet must meet the following characteristics:
The VNet’s CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.
The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.
You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
All the specified subnets exist.
There are two private subnets, one for the control plane machines and one for the compute machines.
The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VNet, the VNet is not deleted.
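When you reuse a VNet, you point the installation program at it through the platform.azure section of install-config.yaml, as in this excerpt from the sample file shown later in this document (all names are placeholders):
platform:
  azure:
    networkResourceGroupName: vnet_resource_group
    virtualNetwork: vnet
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet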
Network security group requirements
The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports.
The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.
Port | Description | Control plane | Compute |
---|---|---|---|
80 | Allows HTTP traffic | | x |
443 | Allows HTTPS traffic | | x |
6443 | Allows communication to the control plane machines | x | |
22623 | Allows communication to the machine config server | x | |
Cluster components do not modify the user-provided network security groups. Instead, a pseudo network security group is created for the Kubernetes controllers to modify, so that the rest of the environment is not affected.
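If you manage the network security group rules yourself, they can be created with the Azure CLI. The following is only a sketch with placeholder names and priorities; adjust the source ranges and priorities to match your security policy:
$ az network nsg rule create --resource-group <vnet_resource_group> --nsg-name <cluster_nsg> --name allow-api --priority 100 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 6443 22623
$ az network nsg rule create --resource-group <vnet_resource_group> --nsg-name <cluster_nsg> --name allow-ingress --priority 110 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 80 443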
Division of permissions
Starting with OKD 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules.
The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.
Isolation between clusters
Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet.
Generating an SSH private key and adding it to the agent
If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent
and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.
In a production environment, you require disaster recovery and debugging.
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the ~/.ssh/authorized_keys file for the core user.
Procedure
If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' \
-f <path>/<file_name> (1)
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Running this command generates an SSH key that does not require a password in the location that you specified.
If you plan to install an OKD cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
Start the ssh-agent process as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> (1)
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.
Next steps
- When you install OKD, provide the SSH public key to the installation program.
Obtaining the installation program
Before you install OKD, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
Download the installation program from https://github.com/openshift/okd/releases.
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OKD uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz
From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.
Using a pull secret from the Red Hat OpenShift Cluster Manager site is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}} as the pull secret when prompted during the installation.
If you do not use the pull secret from the Red Hat OpenShift Cluster Manager site:
Red Hat Operators are not available.
The Telemetry and Insights operators do not send data to Red Hat.
Content from the Red Hat Container Catalog registry, such as image streams and Operators, is not available.
Manually creating the installation configuration file
For installations of a private OKD cluster that are only accessible from an internal network and are not visible to the Internet, you must manually generate your installation configuration file.
Prerequisites
- Obtain the OKD installation program and the access token for your cluster.
Procedure
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>
You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.
Customize the following install-config.yaml file template and save it in the <installation_directory>.
You must name this configuration file install-config.yaml.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
Installation configuration parameters
Before you deploy an OKD cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml
installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml
file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
Required configuration parameters
Required installation configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
apiVersion | The API version for the install-config.yaml content. The current version is v1. | String |
baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OKD cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values in the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
platform | The configuration for the specific platform upon which to perform the installation: aws, azure, baremetal, gcp, openstack, ovirt, or vsphere. | Object |
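Putting the required parameters together, the skeleton of the file matches the sample shown later in this document:
apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
platform:
  azure:
    region: centralus
pullSecret: '{"auths": ...}'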
Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
Parameter | Description | Values |
---|---|---|
networking | The configuration for the cluster network. | Object |
networking.networkType | The cluster network provider Container Network Interface (CNI) plug-in to install. | Either OpenShiftSDN or OVNKubernetes. |
networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects, each with a cidr and a hostPrefix. |
networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr. | A subnet prefix. The default value is 23. |
networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example, 172.30.0.0/16. |
networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects, each with a cidr. |
networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IPv4 network. | An IP network block in CIDR notation. For example, 10.0.0.0/16. |
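These parameters map to the networking stanza of install-config.yaml. The defaults used in the sample file later in this document look like the following:
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16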
Optional configuration parameters
Optional installation configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes’ trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
compute | The configuration for the machines that comprise the compute nodes. | Array of machine-pool objects. |
compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines’ cores. | Enabled or Disabled |
compute.name | Required if you use compute. The name of the machine pool. | worker |
compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
controlPlane | The configuration for the machines that comprise the control plane. | Array of machine-pool objects. |
controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines’ cores. | Enabled or Disabled |
controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider to host the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. | Mint, Passthrough, Manual, or an empty string (""). |
imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors. |
imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
sshKey | The SSH key or keys to authenticate access to your cluster machines. | One or more keys. |
Additional Azure configuration parameters
Additional Azure configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
compute.platform.azure.osDisk.diskSizeGB | The Azure disk size for the VM. | Integer that represents the size of the disk in GB. The minimum supported disk size is 120. |
platform.azure.baseDomainResourceGroupName | The name of the resource group that contains the DNS zone for your base domain. | String, for example production_cluster. |
platform.azure.outboundType | The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. | LoadBalancer or UserDefinedRouting. The default is LoadBalancer. |
platform.azure.region | The name of the Azure region that hosts your cluster. | Any valid region name, such as centralus. |
compute.platform.azure.zones | List of availability zones to place machines in. For high availability, specify at least two zones. | List of zones, for example ["1", "2", "3"]. |
platform.azure.networkResourceGroupName | The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName. | String. |
platform.azure.virtualNetwork | The name of the existing VNet that you want to deploy your cluster to. | String. |
platform.azure.controlPlaneSubnet | The name of the existing subnet in your VNet that you want to deploy your control plane machines to. | Valid CIDR, for example 10.0.0.0/16. |
platform.azure.computeSubnet | The name of the existing subnet in your VNet that you want to deploy your compute machines to. | Valid CIDR, for example 10.0.0.0/16. |
platform.azure.cloudName | The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. | Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud. |
You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster.
Sample customized install-config.yaml file for Azure
You can customize the install-config.yaml
file to specify more details about your OKD cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com (1)
controlPlane: (2)
hyperthreading: Enabled (3) (4)
name: master
platform:
azure:
osDisk:
diskSizeGB: 1024 (5)
type: Standard_D8s_v3
replicas: 3
compute: (2)
- hyperthreading: Enabled (3)
name: worker
platform:
azure:
type: Standard_D2s_v3
osDisk:
diskSizeGB: 512 (5)
zones: (6)
- "1"
- "2"
- "3"
replicas: 5
metadata:
name: test-cluster (1)
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OVNKubernetes
serviceNetwork:
- 172.30.0.0/16
platform:
azure:
region: centralus (1)
baseDomainResourceGroupName: resource_group (7)
networkResourceGroupName: vnet_resource_group (8)
virtualNetwork: vnet (9)
controlPlaneSubnet: control_plane_subnet (10)
computeSubnet: compute_subnet (11)
outboundType: UserDefinedRouting (12)
cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' (1)
sshKey: ssh-ed25519 AAAA... (13)
publish: Internal (14)
1 Required. The installation program prompts you for this value.
2 If you do not provide these parameters and values, the installation program provides the default value.
3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OKD will support defining multiple compute pools during installation. Only one control plane pool is used.
4 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines’ cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
5 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB.
6 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
7 Specify the name of the resource group that contains the DNS zone for your base domain.
8 If you use an existing VNet, specify the name of the resource group that contains it.
9 If you use an existing VNet, specify its name.
10 If you use an existing VNet, specify the name of the subnet to host the control plane machines.
11 If you use an existing VNet, specify the name of the subnet to host the compute machines.
12 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet.
13 You can optionally provide the sshKey value that you use to access the machines in your cluster.
14 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External.
Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OKD cluster to use a proxy by configuring the proxy settings in the install-config.yaml
file.
Prerequisites
You have an existing install-config.yaml file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
If your cluster is on AWS, you added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
additionalTrustBundle: | (4)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
...
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.
2 A proxy URL to use for creating HTTPS connections outside the cluster. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Fedora CoreOS (FCOS) trust bundle, and this config map is referenced in the Proxy object’s trustedCA field. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the FCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.
The installation program does not support the proxy readinessEndpoints field.
Save the file and reference it when installing OKD.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
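After the cluster is installed, you can review the resulting proxy configuration with the OpenShift CLI, for example:
$ oc get proxy/cluster -o yaml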
Deploying the cluster
You can install OKD on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
Configure an account with the cloud platform that hosts your cluster.
Obtain the OKD installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir=<installation_directory> \ (1)
--log-level=info (2)
1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn, debug, or error instead of info.
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
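If you do need to approve pending certificate signing requests, you can list and approve them with the OpenShift CLI after it is installed, for example:
$ oc get csr
$ oc adm certificate approve <csr_name>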
Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc
) to interact with OKD from a command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OKD 4.7. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz.
Unpack the archive:
$ tar xvzf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.zip.
Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Procedure
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz.
Unpack and unzip the archive.
Move the
oc
binary to a directory on your PATH.To check your
PATH
, open a terminal and execute the following command:$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file. The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OKD installation.
Prerequisites
You deployed an OKD cluster.
You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
Additional resources
- See Accessing the web console for more details about accessing and understanding the OKD web console.
Additional resources
- See About remote health monitoring for more information about the Telemetry service
Next steps
If necessary, you can opt out of remote health reporting.