The Cluster resource

The Cluster resource contains the specification of the cluster itself.

The complete list of keys can be found at the Cluster reference page.

On this page, we will expand on the more important configuration keys.

The documentation for the optional addons can be found on the addons page.

api

This object configures how we expose the API:

  • dns will allow direct access to master instances, and configure DNS to point directly to the master nodes.
  • loadBalancer will configure a load balancer in front of the master nodes and configure DNS to point to it.

DNS example:

```yaml
spec:
  api:
    dns: {}
```

When configuring a LoadBalancer, you can also choose to have a public load balancer or an internal (VPC only) load balancer. The type field should be Public or Internal.

Also, you can add pre-created additional security groups to the load balancer by setting additionalSecurityGroups.

```yaml
spec:
  api:
    loadBalancer:
      type: Public
      additionalSecurityGroups:
      - sg-xxxxxxxx
      - sg-xxxxxxxx
```

Additionally, you can increase the idle timeout of the load balancer by setting its idleTimeoutSeconds. The default idle timeout is 5 minutes, with a maximum of 3600 seconds (60 minutes) allowed by AWS. Note this value is ignored for load balancers of class Network. For more information see configuring idle timeouts.

```yaml
spec:
  api:
    loadBalancer:
      type: Public
      idleTimeoutSeconds: 300
```

You can use a valid SSL Certificate for your API Server Load Balancer. Currently, only AWS is supported.

You can also change the listener's security policy by setting sslPolicy. Currently, only AWS Network Load Balancer is supported.

Note that when using sslCertificate, client certificate authentication, such as with the credentials generated via kops export kubecfg, will not work through the load balancer. As of kOps 1.19, a kubecfg that bypasses the load balancer may be created with the --internal flag to kops update cluster or kops export kubecfg. Security groups may need to be opened to allow access from the clients to the master instances’ port TCP/443, for example by using the additionalSecurityGroups field on the master instance groups.

```yaml
spec:
  api:
    loadBalancer:
      type: Public
      sslCertificate: arn:aws:acm:<region>:<accountId>:certificate/<uuid>
      sslPolicy: ELBSecurityPolicy-TLS-1-2-2017-01
```

OpenStack only: As of kOps 1.12.0, it is possible to use the load balancer internally by setting useForInternalApi: true. This will point masterPublicName to the load balancer.

```yaml
spec:
  api:
    loadBalancer:
      type: Internal
      useForInternalApi: true
```

You can also set the API load balancer to be cross-zone:

```yaml
spec:
  api:
    loadBalancer:
      crossZoneLoadBalancing: true
```

Load Balancer Class

AWS only

Introduced
kOps 1.19

You can choose to have either a Network Load Balancer (NLB) or a Classic Load Balancer (CLB). The class field should be either Network (default) or Classic (deprecated).

Note: Changing the class of load balancer in an existing cluster is a disruptive operation for the control plane and the old load balancer must be manually removed. Until the masters have gone through a rolling update, new connections to the apiserver will fail due to the old masters’ TLS certificates containing the old load balancer’s IP addresses.

```yaml
spec:
  api:
    loadBalancer:
      class: Network
      type: Public
```

Load Balancer Subnet configuration

AWS only

By default, kOps will try to choose one suitable subnet per availability zone and use these for the API load balancer. Depending on the type, kOps will choose from either Private or Public subnets. If this default logic is not suitable for you (e.g. because you have a more granular separation between subnets), you can explicitly configure the subnets to use:

```yaml
spec:
  api:
    loadBalancer:
      type: Public
      subnets:
      - name: subnet-a
      - name: subnet-b
      - name: subnet-c
```

It is only allowed to add more subnets and forbidden to remove existing ones. This is due to limitations on AWS ELBs and NLBs.

If the `type` is `Internal` and the `class` is `Network`, you can also specify a static private IPv4 address per subnet:

```yaml
spec:
  api:
    loadBalancer:
      type: Internal
      subnets:
      - name: subnet-a
        privateIPv4Address: 172.16.1.10
```

The specified IPv4 addresses must be part of the subnet's CIDR. They cannot be changed after the initial deployment.

If the type is Public and the class is Network, you can also specify an Elastic IP allocationID to bind a fixed public IP address per subnet. Please note that only IPv4 addresses have been tested:

```yaml
spec:
  api:
    loadBalancer:
      type: Public
      subnets:
      - name: utility-subnet-a
        allocationId: eipalloc-222ghi789
```

The specified allocation IDs must already exist, created either manually or by external infrastructure as code, e.g. Terraform. You will need to place the load balancer in the utility subnets for external connectivity.

If you made a mistake or need to change subnets for any other reason, you’re currently forced to manually delete the underlying ELB/NLB and re-run kops update.

etcdClusters

The default etcd configuration

kOps defaults to etcd v3 secured with TLS. etcd provisioning and upgrades are handled by etcd-manager. By default, the spec looks like this:

```yaml
etcdClusters:
- etcdMembers:
  - instanceGroup: master0-az0
    name: a-1
  - instanceGroup: master1-az0
    name: a-2
  - instanceGroup: master0-az1
    name: b-1
  name: main
- etcdMembers:
  - instanceGroup: master0-az0
    name: a-1
  - instanceGroup: master1-az0
    name: a-2
  - instanceGroup: master0-az1
    name: b-1
  name: events
```

The etcd version used by kOps follows the recommended etcd version for the given Kubernetes version. It is possible to override this by adding the version key to each of the etcd clusters, as in the sketch below.
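
For example, a minimal sketch pinning the version on the main cluster (the version shown is illustrative, not a recommendation; check the kOps release notes for supported versions):

```yaml
etcdClusters:
- etcdMembers:
  - instanceGroup: master-us-east-1a
    name: a
  name: main
  version: 3.5.9
```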

By default, the volumes created for the etcd clusters are gp3 and 20GB each. The volume size, type (gp2, gp3, io1, io2), iops (for io1, io2, gp3) and throughput (gp3) can be configured via their corresponding parameters.

As of kOps 1.12.0 it is also possible to modify the requests for your etcd cluster members using the cpuRequest and memoryRequest parameters.

```yaml
etcdClusters:
- etcdMembers:
  - instanceGroup: master-us-east-1a
    name: a
    volumeType: gp3
    volumeSize: 20
  name: main
- etcdMembers:
  - instanceGroup: master-us-east-1a
    name: a
    volumeType: io1
    # WARNING: bear in mind that the Iops to volume size ratio has a maximum of 50 on AWS!
    volumeIops: 100
    volumeSize: 21
  name: events
  cpuRequest: 150m
  memoryRequest: 512Mi
```

etcd metrics

Introduced
kOps 1.18

You can expose the /metrics endpoint for the etcd instances and control its type (basic or extensive) by defining env vars:

```yaml
etcdClusters:
- etcdMembers:
  - instanceGroup: master-us-east-1a
    name: a
  name: main
  manager:
    env:
    - name: ETCD_LISTEN_METRICS_URLS
      value: http://0.0.0.0:8081
    - name: ETCD_METRICS
      value: basic
```

Note: If you are running multiple etcd clusters you need to expose the metrics on different ports for each cluster as etcd is running as a service on the master nodes.
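
For instance, the events cluster could expose its metrics on a different port than the main cluster shown above (port 8082 here is an arbitrary choice):

```yaml
etcdClusters:
- etcdMembers:
  - instanceGroup: master-us-east-1a
    name: a
  name: events
  manager:
    env:
    - name: ETCD_LISTEN_METRICS_URLS
      value: http://0.0.0.0:8082
    - name: ETCD_METRICS
      value: basic
```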

etcd backups interval

Introduced
kOps 1.24.1

You can set the interval between backups using the backupInterval parameter:

```yaml
etcdClusters:
- etcdMembers:
  - instanceGroup: master-us-east-1a
    name: a
  name: main
  manager:
    backupInterval: 1h
```

etcd backups retention

Introduced
kOps 1.18

As of kOps 1.27, the default etcd backup retention duration is 90 days. You can adjust the retention duration using the backupRetentionDays parameter:

```yaml
etcdClusters:
- etcdMembers:
  - instanceGroup: master-us-east-1a
    name: a
  name: main
  manager:
    backupRetentionDays: 30
```

For older kOps versions, you set the retention duration for the hourly and daily backups by defining env vars:

```yaml
etcdClusters:
- etcdMembers:
  - instanceGroup: master-us-east-1a
    name: a
  name: main
  manager:
    env:
    - name: ETCD_MANAGER_HOURLY_BACKUPS_RETENTION
      value: 7d
    - name: ETCD_MANAGER_DAILY_BACKUPS_RETENTION
      value: 1y
```

sshAccess

This array configures the CIDRs that are able to ssh into nodes. On AWS this is manifested as inbound security group rules on the nodes and master security groups.

Use this key to restrict cluster access to an office IP address range, for example.

```yaml
spec:
  sshAccess:
  - 12.34.56.78/32
```
Introduced
kOps 1.23

In AWS, instead of listing all CIDRs, it is possible to specify a pre-existing AWS Prefix List ID.
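
For example (the prefix list ID below is a placeholder for one that already exists in your account):

```yaml
spec:
  sshAccess:
  - pl-0123456789abcdef0
```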

kubernetesApiAccess

This array configures the CIDRs that are able to access the kubernetes API. On AWS this is manifested as inbound security group rules on the ELB or master security groups.

Use this key to restrict cluster access to an office IP address range, for example.

```yaml
spec:
  kubernetesApiAccess:
  - 12.34.56.78/32
```
Introduced
kOps 1.23

In AWS, instead of listing all CIDRs, it is possible to specify a pre-existing AWS Prefix List ID.
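
As with sshAccess, a sketch using a placeholder prefix list ID:

```yaml
spec:
  kubernetesApiAccess:
  - pl-0123456789abcdef0
```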

cluster.spec Subnet Keys

id

ID of a subnet to share in an existing VPC.
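
For example (subnet ID illustrative, mirroring the egress example below):

```yaml
spec:
  subnets:
  - cidr: 10.20.32.0/21
    name: utility-us-east-1a
    id: subnet-12345
    type: Utility
    zone: us-east-1a
```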

egress

The resource identifier (ID) of something in your existing VPC that you would like to use as “egress” to the outside world.

This feature was originally envisioned to allow re-use of NAT gateways. In this case, the usage is as follows. Although NAT gateways are “public”-facing resources, in the Cluster spec, you must specify them in the private subnet section. One way to think about this is that you are specifying “egress”, which is the default route out from this private subnet.

```yaml
spec:
  subnets:
  - cidr: 10.20.64.0/21
    name: us-east-1a
    egress: nat-987654321
    type: Private
    zone: us-east-1a
  - cidr: 10.20.32.0/21
    name: utility-us-east-1a
    id: subnet-12345
    type: Utility
    zone: us-east-1a
```

In the case that you don’t want to use an existing NAT gateway, but still want to use a pre-allocated elastic IP, kOps 1.19.0 introduced the possibility to specify an elastic IP as egress and kOps will create a NAT gateway that uses it.

```yaml
spec:
  subnets:
  - cidr: 10.20.64.0/21
    name: us-east-1a
    egress: eipalloc-0123456789abcdef0
    type: Private
    zone: us-east-1a
```

Specifying an existing AWS Transit Gateway is also supported as of kOps 1.20.0:

```yaml
spec:
  subnets:
  - cidr: 10.20.64.0/21
    name: us-east-1a
    egress: tgw-0123456789abcdef0
    type: Private
    zone: us-east-1a
```

In the case that you don’t use NAT gateways or internet gateways, kOps 1.12.0 introduced the “External” flag for egress to force kOps to ignore egress for the subnet. This can be useful when other tools are used to manage egress for the subnet such as virtual private gateways. Please note that your cluster may need to have access to the internet upon creation, so egress must be available upon initializing a cluster. This is intended for use when egress is managed external to kOps, typically with an existing cluster.

```yaml
spec:
  subnets:
  - cidr: 10.20.64.0/21
    name: us-east-1a
    egress: External
    type: Private
    zone: us-east-1a
```

publicIP

The IP of an existing EIP that you would like to attach to the NAT gateway.

```yaml
spec:
  subnets:
  - cidr: 10.20.64.0/21
    name: us-east-1a
    publicIP: 203.93.148.142
    type: Private
    zone: us-east-1a
```

additionalRoutes

Introduced
kOps 1.24

Adds routes to the subnet's route table. The target of a route can be an instance, a peering connection, a NAT gateway, a transit gateway, an internet gateway or an egress-only internet gateway. Currently, only AWS is supported.

```yaml
spec:
  subnets:
  - cidr: 10.20.64.0/21
    name: us-east-1a
    type: Private
    zone: us-east-1a
    additionalRoutes:
    - cidr: 10.21.0.0/16
      target: vpc-abcdef
```

kubeAPIServer

This block contains configuration for the kube-apiserver.

oidc flags for Open ID Connect Tokens

Read more about this here: https://kubernetes.io/docs/admin/authentication/#openid-connect-tokens

```yaml
spec:
  kubeAPIServer:
    oidcIssuerURL: https://your-oidc-provider.svc.cluster.local
    oidcClientID: kubernetes
    oidcUsernameClaim: sub
    oidcUsernamePrefix: "oidc:"
    oidcGroupsClaim: user_roles
    oidcGroupsPrefix: "oidc:"
    oidcCAFile: /etc/kubernetes/ssl/kc-ca.pem
    oidcRequiredClaim:
    - "key=value"
```

Audit Logging

Read more about this here: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/

Note: As of kOps 1.26, ControlPlane is being used as a role for the master instances. Previously, the master role was Master.

```yaml
spec:
  kubeAPIServer:
    auditLogMaxAge: 10
    auditLogMaxBackups: 1
    auditLogMaxSize: 100
    auditLogPath: /var/log/kube-apiserver-audit.log
    auditPolicyFile: /etc/kubernetes/audit/policy-config.yaml
  fileAssets:
  - name: audit-policy-config
    path: /etc/kubernetes/audit/policy-config.yaml
    roles:
    - ControlPlane
    content: |
      apiVersion: audit.k8s.io/v1
      kind: Policy
      rules:
      - level: Metadata
```

Note: The auditPolicyFile is needed. If the flag is omitted, no events are logged.

Note: For kOps 1.22-1.24 please use auditPolicyFile: /srv/kubernetes/kube-apiserver/audit/policy-config.yaml due to change in mounted paths.

You could use the fileAssets feature to push an advanced audit policy file on the master nodes.

An example policy file can be found here.

Audit Webhook Backend

Webhook backend sends audit events to a remote API, which is assumed to be the same API as kube-apiserver exposes.

Note: As of kOps 1.26, ControlPlane is being used as a role for the master instances. Previously, the master role was Master.

```yaml
spec:
  kubeAPIServer:
    auditWebhookBatchMaxWait: 5s
    auditWebhookConfigFile: /etc/kubernetes/audit/webhook-config.yaml
  fileAssets:
  - name: audit-webhook-config
    path: /etc/kubernetes/audit/webhook-config.yaml
    roles:
    - ControlPlane
    content: |
      apiVersion: v1
      kind: Config
      clusters:
      - name: server
        cluster:
          server: https://my-webhook-receiver
      contexts:
      - context:
          cluster: server
          user: ""
        name: default-context
      current-context: default-context
      preferences: {}
      users: []
```

Note: The audit logging config is also needed. If it is omitted, no events are shipped.

Max Requests Inflight

The maximum number of non-mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 400)

```yaml
spec:
  kubeAPIServer:
    maxRequestsInflight: 1000
```

The maximum number of mutating requests in flight at a given time. When the server exceeds this, it rejects requests. Zero for no limit. (default 200)

```yaml
spec:
  kubeAPIServer:
    maxMutatingRequestsInflight: 450
```

Request Timeout

Introduced
kOps 1.19

The duration a handler must keep a request open before timing it out. It can be overridden by other flags for specific types of requests. Note that you must fill empty units of time with zeros. (default 1m0s)

```yaml
spec:
  kubeAPIServer:
    requestTimeout: 3m0s
```

Profiling

Introduced
kOps 1.18

Enables profiling via the web interface host:port/debug/pprof/. (default: true)

```yaml
spec:
  kubeAPIServer:
    enableProfiling: false
```

runtimeConfig

Keys and values here are translated into --runtime-config values for kube-apiserver, separated by commas.

Use this to enable alpha features, for example:

```yaml
spec:
  kubeAPIServer:
    runtimeConfig:
      batch/v2alpha1: "true"
      apps/v1alpha1: "true"
```

Will result in the flag --runtime-config=batch/v2alpha1=true,apps/v1alpha1=true. Note that kube-apiserver accepts true as a value for switch-like flags.

serviceNodePortRange

This value is passed as --service-node-port-range for kube-apiserver.

```yaml
spec:
  kubeAPIServer:
    serviceNodePortRange: 30000-33000
```

Customize client-ca file

This value is passed as --client-ca-file for kube-apiserver. (default: /srv/kubernetes/ca.crt)

```yaml
spec:
  kubeAPIServer:
    clientCAFile: /srv/kubernetes/client-ca.crt
```

There are certain cases where you may want to use a customized client CA file other than the default one generated for Kubernetes. In that case, you can use this flag to specify the client-ca file to use.

To prepare the customized client-ca file on master nodes, you can either use the fileAssets feature to push a client-ca file, or embed the customized client-ca file in the master AMI.

When using a customized client-ca file, it is common that the Kubernetes CA (/srv/kubernetes/ca.crt) needs to be appended to the end of the client-ca file. One way to append the ca.crt to the end of the customized client-ca file is to write a kOps hook to do the append logic, as sketched below.
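
As a sketch (the unit name and client-ca path are illustrative, and this assumes the hooks mechanism described later on this page), such a hook could look like:

```yaml
spec:
  hooks:
  - name: append-kubernetes-ca.service
    before:
    - kubelet.service
    manifest: |
      Type=oneshot
      # append the Kubernetes CA to the customized client-ca file (paths illustrative)
      ExecStart=/bin/sh -c "cat /srv/kubernetes/ca.crt >> /srv/kubernetes/client-ca.crt"
```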

kOps has a CA rotation feature, which refreshes the Kubernetes certificate files, including the ca.crt. If a customized client-ca file is used, you are responsible for updating the ca.crt in the customized client-ca file when kOps certificate rotation happens. The ca.crt refresh logic can also be achieved by writing a kOps hook.

See also Kubernetes certificates

Disable Basic Auth

Support for basic authentication was removed in Kubernetes 1.19. For previous versions of Kubernetes, the following will disable the passing of the --basic-auth-file flag:

```yaml
spec:
  kubeAPIServer:
    disableBasicAuth: true
```

targetRamMb

Memory limit for apiserver in MB (used to configure sizes of caches, etc.)

```yaml
spec:
  kubeAPIServer:
    targetRamMb: 4096
```

eventTTL

How long API server retains events. Note that you must fill empty units of time with zeros.

```yaml
spec:
  kubeAPIServer:
    eventTTL: 03h0m0s
```

Taint based Evictions

There are two parameters related to taint-based evictions. These parameters set the default tolerationSeconds for the notReady:NoExecute and unreachable:NoExecute taints.

```yaml
spec:
  kubeAPIServer:
    defaultNotReadyTolerationSeconds: 600
    defaultUnreachableTolerationSeconds: 600
```

LogFormat

Choose between log formats. Permitted formats: “json”, “text”. Default: “text”.

```yaml
spec:
  kubeAPIServer:
    logFormat: json
```

externalDns

This block contains configuration options for your external-DNS provider.

```yaml
spec:
  externalDns:
    watchIngress: true
```

The default kOps behavior is false. watchIngress: true uses the default dns-controller behavior, which is to watch the ingress controller for changes. Setting this option risks interrupting Service updates in some cases.

The default external-DNS provider is the kOps dns-controller.

You can use external-dns as provider instead by adding the following:

```yaml
spec:
  externalDns:
    provider: external-dns
```

Note that if you have dns-controller installed, you need to remove its deployment before updating the cluster with the new configuration.

kubelet

This block contains configurations for kubelet. See https://kubernetes.io/docs/admin/kubelet/

NOTE: Where the corresponding configuration value can be empty, fields can be set to empty in the spec, and an empty string will be passed as the configuration value.

```yaml
spec:
  kubelet:
    resolvConf: ""
```

Will result in the flag --resolv-conf= being built.

Disable CPU CFS Quota

To disable CPU CFS quota enforcement for containers that specify CPU limits (default true) we have to set the flag --cpu-cfs-quota to false on all the kubelets. We can specify that in the kubelet spec in our cluster.yml.

```yaml
spec:
  kubelet:
    cpuCFSQuota: false
```

Configure CPU CFS Period

Configure CPU CFS quota period value (cpu.cfs_period_us). Example:

```yaml
spec:
  kubelet:
    cpuCFSQuotaPeriod: "100ms"
```

This change requires the CustomCPUCFSQuotaPeriod feature gate, as sketched below.
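
A sketch of enabling that feature gate alongside the setting, using the featureGates mechanism described later on this page:

```yaml
spec:
  kubelet:
    featureGates:
      CustomCPUCFSQuotaPeriod: "true"
    cpuCFSQuotaPeriod: "100ms"
```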

Enable Custom metrics support

To use custom metrics in kubernetes as per custom metrics doc we have to set the flag --enable-custom-metrics to true on all the kubelets. We can specify that in the kubelet spec in our cluster.yml.

```yaml
spec:
  kubelet:
    enableCustomMetrics: true
```

Setting kubelet CPU management policies

kOps 1.12.0 added support for enabling CPU management policies in Kubernetes as per the CPU management doc. To use them, set the flag --cpu-manager-policy to the appropriate value on all the kubelets. This must be specified in the kubelet spec in our cluster.yml.

```yaml
spec:
  kubelet:
    cpuManagerPolicy: static
```

Setting kubelet configurations together with the Amazon VPC backend

Setting kubelet configurations together with the Amazon VPC networking backend requires also setting cloudProvider: aws in this block. Example:

```yaml
spec:
  kubelet:
    enableCustomMetrics: true
    cloudProvider: aws
  ...
  ...
  cloudProvider: aws
  ...
  ...
  networking:
    amazonvpc: {}
```

Configure a Flex Volume plugin directory

An optional flag can be provided within the KubeletSpec to set a volume plugin directory (must be accessible for read/write operations), which is additionally provided to the Controller Manager and mounted accordingly.

kOps will set this for you based on the operating system in use:

  • ContainerOS: /home/kubernetes/flexvolume/
  • Flatcar: /var/lib/kubelet/volumeplugins/
  • Default (in line with upstream k8s): /usr/libexec/kubernetes/kubelet-plugins/volume/exec/

If you wish to override this value, it can be done so with the following addition to the kubelet spec:

```yaml
spec:
  kubelet:
    volumePluginDirectory: /provide/a/writable/path/here
```

Protect Kernel Defaults

| Introduced | Minimum K8s Version |
| ---------- | ------------------- |
| kOps 1.18  | k8s 1.4             |

Sets the default kubelet behaviour for kernel tuning. If enabled, the kubelet errors if any of the kernel tunables differ from the kubelet defaults.

```yaml
spec:
  kubelet:
    protectKernelDefaults: true
```

Housekeeping Interval

| Introduced | Minimum K8s Version |
| ---------- | ------------------- |
| kOps 1.19  | k8s 1.2             |

The interval between container housekeeping runs defaults to 10s. This can be too low or too high for some use cases and can be modified with the following addition to the kubelet spec.

```yaml
spec:
  kubelet:
    housekeepingInterval: 30s
```

Pod PIDs Limit

| Introduced | Minimum K8s Version |
| ---------- | ------------------- |
| kOps 1.22  | k8s 1.20            |

podPidsLimit allows configuring the maximum number of PIDs (process IDs) in any pod. Read more in the Kubernetes documentation.

```yaml
spec:
  kubelet:
    podPidsLimit: 1024
```

Event QPS

Introduced
kOps 1.19

Limits event creations per second in the kubelet. The default value is 0, which means unlimited event creation.

```yaml
spec:
  kubelet:
    eventQPS: 0
```

Event Burst

Introduced
kOps 1.19

The maximum size of a burst of event records; temporarily allows event records to burst to this number while still not exceeding eventQPS. Only used if eventQPS > 0.

```yaml
spec:
  kubelet:
    eventBurst: 10
```

LogFormat

Choose between log formats. Permitted formats: “json”, “text”. Default: “text”.

```yaml
spec:
  kubelet:
    logFormat: json
```

Graceful Node Shutdown

| Introduced | Minimum K8s Version |
| ---------- | ------------------- |
| kOps 1.23  | k8s 1.21            |

Graceful node shutdown allows kubelet to prevent instance shutdown until Pods have been safely terminated or a timeout has been reached.

For all CNIs except amazonaws, kOps will try to add a 30 second timeout, where the first 20 seconds are reserved for normal Pods and the last 10 seconds for critical Pods. When using amazonaws this feature is disabled, as it leads to leaking ENIs.

This configuration can be changed as follows:

```yaml
spec:
  kubelet:
    shutdownGracePeriod: 60s
    shutdownGracePeriodCriticalPods: 20s
```

Note that the kubelet will fail to install the shutdown inhibitor on systems where logind is configured with an InhibitDelayMaxSeconds lower than shutdownGracePeriod. On Ubuntu, this setting is 30 seconds.

SeccompDefault

SeccompDefault enables the use of RuntimeDefault as the default seccomp profile for all workloads. (Default: false)

Note that a feature gate is required to enable the feature, and the feature itself is turned on via the kubelet config:

```yaml
spec:
  kubelet:
    featureGates:
      SeccompDefault: "true"
    seccompDefault: true
```

kubeScheduler

This block contains configurations for kube-scheduler. See https://kubernetes.io/docs/admin/kube-scheduler/

```yaml
spec:
  kubeScheduler:
    usePolicyConfigMap: true
    enableProfiling: false
```

Will make kube-scheduler use the scheduler policy from configmap “scheduler-policy” in namespace kube-system.

LogFormat

Choose between log formats. Permitted formats: “json”, “text”. Default: “text”.

```yaml
spec:
  kubeScheduler:
    logFormat: json
```

kubeDNS

This block contains configurations for CoreDNS.

For Kubernetes version >= 1.20, CoreDNS will be installed as the default DNS server.

```yaml
spec:
  kubeDNS:
    provider: CoreDNS
```

OR

```yaml
spec:
  kubeDNS:
```

Specifying KubeDNS will install kube-dns as the default service discovery instead of CoreDNS.

```yaml
spec:
  kubeDNS:
    provider: KubeDNS
```

If you are using CoreDNS and want to use an entirely custom CoreFile, you can do this by specifying the file. This will not work with any other options which interact with the default CoreFile. You can also override the registry or version of the CoreDNS image by specifying coreDNSImage.

Note: If you are using this functionality, you will need to be extra vigilant about version changes of CoreDNS, as the functionality of the plugins being used may change.

```yaml
spec:
  kubeDNS:
    provider: CoreDNS
    coreDNSImage: mirror.registry.local/mirrors/coredns:1.3.1
    externalCoreFile: |
      amazonaws.com:53 {
        errors
        log . {
          class denial error
        }
        health :8084
        prometheus :9153
        proxy . 169.254.169.253 {
        }
        cache 30
      }
      .:53 {
        errors
        health :8080
        autopath @kubernetes
        kubernetes cluster.local {
          pods verified
          upstream 169.254.169.253
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . 169.254.169.253
        cache 300
      }
```

Note: If you are upgrading to CoreDNS, kube-dns will be left in place and must be removed manually. You can scale the kube-dns and kube-dns-autoscaler deployments in the kube-system namespace to 0 as a starting point, and then remove both deployments. The kube-dns Service itself should be left in place, as this retains the ClusterIP and eliminates the possibility of DNS outages in your cluster.

For larger clusters you may need to set custom resource requests and limits. For the CoreDNS provider you can set

  • memoryLimit
  • cpuRequest
  • memoryRequest

These will override the default memory limit of 170Mi and the default memory and CPU requests of 70Mi and 100m.

Example:

```yaml
kubeDNS:
  memoryLimit: 2Gi
  cpuRequest: 300m
  memoryRequest: 700Mi
```

kubeControllerManager

This block contains configurations for the controller-manager.

```yaml
spec:
  kubeControllerManager:
    horizontalPodAutoscalerSyncPeriod: 15s
    horizontalPodAutoscalerDownscaleDelay: 5m0s
    horizontalPodAutoscalerDownscaleStabilization: 5m
    horizontalPodAutoscalerUpscaleDelay: 3m0s
    horizontalPodAutoscalerInitialReadinessDelay: 30s
    horizontalPodAutoscalerCpuInitializationPeriod: 5m
    horizontalPodAutoscalerTolerance: 0.1
    experimentalClusterSigningDuration: 8760h0m0s
    enableProfiling: false
```

For more details on horizontalPodAutoscaler flags see the official HPA docs and the kOps guides on how to set it up.

LogFormat

Choose between log formats. Permitted formats: “json”, “text”. Default: “text”.

```yaml
spec:
  kubeControllerManager:
    logFormat: json
```

Feature Gates

Feature gates can be configured on the kubelet.

```yaml
spec:
  kubelet:
    featureGates:
      Accelerators: "true"
      AllowExtTrafficLocalEndpoints: "false"
```

The above will result in the flag --feature-gates=Accelerators=true,AllowExtTrafficLocalEndpoints=false being added to the kubelet.

Some feature gates also require the featureGates setting on other components. For example, PodShareProcessNamespace requires the feature gate to be enabled also on the API server:

```yaml
spec:
  kubelet:
    featureGates:
      PodShareProcessNamespace: "true"
  kubeAPIServer:
    featureGates:
      PodShareProcessNamespace: "true"
```

For more information, see the feature gate documentation

Compute Resources Reservation

In a scenario where a node has 32Gi of memory, 16 CPUs and 100Gi of ephemeral storage, resource reservations could be set as in the following example:

```yaml
spec:
  kubelet:
    kubeReserved:
      cpu: "1"
      memory: "2Gi"
      ephemeral-storage: "1Gi"
    kubeReservedCgroup: "/kube-reserved"
    kubeletCgroups: "/kube-reserved"
    runtimeCgroups: "/kube-reserved"
    systemReserved:
      cpu: "500m"
      memory: "1Gi"
      ephemeral-storage: "1Gi"
    systemReservedCgroup: "/system-reserved"
    enforceNodeAllocatable: "pods,system-reserved,kube-reserved"
```

The above will result in the flags --kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi --kube-reserved-cgroup=/kube-reserved --kubelet-cgroups=/kube-reserved --runtime-cgroups=/kube-reserved --system-reserved=cpu=500m,memory=1Gi,ephemeral-storage=1Gi --system-reserved-cgroup=/system-reserved --enforce-node-allocatable=pods,system-reserved,kube-reserved being added to the kubelet.

Learn more about reserving compute resources here and here.

networkID

On AWS, this is the id of the VPC the cluster is created in. If creating a cluster from scratch, this field does not need to be specified at create time; kops will create a VPC for you.

```yaml
spec:
  networkID: vpc-abcdefg1
```

More information about running in an existing VPC is here.

hooks

Hooks allow for the execution of an action before the installation of Kubernetes on every node in a cluster. For instance, you can install Nvidia drivers for using GPUs. These hooks can be in the form of container images or manifest files (systemd units). Hooks can be placed either in the cluster spec, meaning they will be globally deployed, or in the instanceGroup specification. Note: service names on the instanceGroup which overlap with the cluster spec take precedence and ignore the cluster spec definition, i.e. if you have a unit file ‘myunit.service’ in the cluster spec and then one in the instanceGroup, only the instanceGroup one is applied.

When creating a systemd unit hook using the manifest field, the hook system will construct a systemd unit file for you. It creates the [Unit] section, adding an automated description and setting Before and Requires values based on the before and requires fields. The value of the manifest field is used as the [Service] section of the unit file. To override this behavior, and instead specify the entire unit file yourself, you may specify useRawManifest: true. In this case, the contents of the manifest field will be used as a systemd unit, unmodified. The before and requires fields may not be used together with useRawManifest.

```yaml
spec:
  # many sections removed

  # run a container as a hook
  hooks:
  - before:
    - some_service.service
    requires:
    - docker.service
    execContainer:
      image: kopeio/nvidia-bootstrap:1.6
      # these are added as -e to the docker environment
      environment:
        AWS_REGION: eu-west-1
        SOME_VAR: SOME_VALUE

  # or construct a systemd unit
  hooks:
  - name: iptable-restore.service
    roles:
    - Node
    - Master
    before:
    - kubelet.service
    manifest: |
      EnvironmentFile=/etc/environment
      # do some stuff

  # or use a raw systemd unit
  hooks:
  - name: iptable-restore.service
    roles:
    - Node
    - Master
    useRawManifest: true
    manifest: |
      [Unit]
      Description=Restore iptables rules
      Before=kubelet.service
      [Service]
      EnvironmentFile=/etc/environment
      # do some stuff

  # or disable a systemd unit
  hooks:
  - name: update-engine.service
    disabled: true

  # or you could wrap this into a full unit
  hooks:
  - name: disable-update-engine.service
    before:
    - update-engine.service
    manifest: |
      Type=oneshot
      ExecStart=/usr/bin/systemctl stop update-engine.service
```

Install Ceph

```yaml
spec:
  # many sections removed
  hooks:
  - execContainer:
      command:
      - sh
      - -c
      - chroot /rootfs apt-get update && chroot /rootfs apt-get install -y ceph-common
      image: busybox
```

Install cachefilesd

```yaml
spec:
  # many sections removed
  hooks:
  - before:
    - kubelet.service
    manifest: |
      Type=oneshot
      ExecStart=/sbin/modprobe cachefiles
    name: cachefiles.service
  - execContainer:
      command:
      - sh
      - -c
      - chroot /rootfs apt-get update && chroot /rootfs apt-get install -y cachefilesd
        && chroot /rootfs sed -i s/#RUN/RUN/ /etc/default/cachefilesd && chroot /rootfs
        service cachefilesd restart
      image: busybox
```

fileAssets

FileAssets permit you to place inline file content into the Cluster and Instance Group specifications. This is useful for deploying additional files that Kubernetes components require, such as audit logging or admission controller configurations.

```yaml
spec:
  fileAssets:
  - name: iptable-restore
    # Note if path is not specified, the default is /srv/kubernetes/assets/<name>
    path: /var/lib/iptables/rules-save
    # Note if roles are not specified, the default is all roles. As of kOps 1.26,
    # ControlPlane is used as the role for the master instances. Previously, the
    # master role was Master.
    roles: [ControlPlane,Node,Bastion] # a list of roles to apply the asset to
    content: |
      some file content
```

mode

Introduced
kOps 1.24

Optionally, mode allows you to specify a file’s mode and permission bits.

NOTE: If not specified, the default is "0440", which matches the behaviour of older versions of kOps.

```yaml
spec:
  fileAssets:
  - name: my-script
    path: /usr/local/bin/my-script
    mode: "0550"
    content: |
      #! /usr/bin/env bash
      ...
```

cloudConfig

disableSecurityGroupIngress

If you are using aws as the cloudProvider, you can disable the authorization of the ELB security group to the Kubernetes nodes' security group. In other words, it will not add a security group rule. This can be useful to avoid the AWS limit of 50 rules per security group.

```yaml
spec:
  cloudConfig:
    disableSecurityGroupIngress: true
```

elbSecurityGroup

To avoid creating a security group per ELB, you can specify a security group ID that will be assigned to your load balancers. It must be a security group ID, not a name. api.loadBalancer.additionalSecurityGroups must be empty, because Kubernetes will add rules per port specified in the Service file. This can be useful to avoid AWS limits: 500 security groups per region and 50 rules per security group.

```yaml
spec:
  cloudConfig:
    elbSecurityGroup: sg-123445678
```

manageStorageClasses

Introduced
kOps 1.20

By default, kOps will create StorageClass resources with some opinionated settings specific to the cloud provider on which the cluster is installed. One of those storage classes will be marked as the default via the annotation storageclass.kubernetes.io/is-default-class: "true". This may not always be desirable, and some cluster admins prefer to have more control over storage classes and manage them outside of kOps. When set to false, kOps will no longer create any StorageClass objects. Any such objects that kOps created in the past are left as-is, and kOps will no longer reconcile them against future changes.

The existing spec.cloudConfig.openstack.blockStorage.createStorageClass field remains in place. However, if both that and the new spec.cloudConfig.manageStorageClasses field are populated, they must agree: It is invalid both to disable management of StorageClass objects globally but to enable them for OpenStack and, conversely, to enable management globally but disable it for OpenStack.

```yaml
spec:
  cloudConfig:
    manageStorageClasses: false
```
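
For an OpenStack cluster that also sets the block storage field, a consistent combination could look like this (a sketch using only the two fields named above):

```yaml
spec:
  cloudConfig:
    manageStorageClasses: false
    openstack:
      blockStorage:
        createStorageClass: false
```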

containerRuntime

| Introduced | Minimum K8s Version |
| ---------- | ------------------- |
| kOps 1.18  | k8s 1.11            |

As of Kubernetes 1.20, the default container runtime is containerd. Previously, the default container runtime was Docker.

Docker can still be used as the container runtime with Kubernetes 1.20+, but be aware that Kubernetes has deprecated support for it, and it will be removed in Kubernetes 1.22.

```yaml
spec:
  containerRuntime: containerd
```

containerd

Configuration

It is possible to override the containerd daemon options for all the nodes in the cluster. See the API docs for the full list of options. Overriding the configuration of containerd has to be done with care as the default config may change with new releases and can lead to incompatibilities.

```yaml
spec:
  containerd:
    version: 1.4.4
    logLevel: info
    configOverride: ""
```

Custom Packages

kOps uses the .tar.gz packages for installing containerd on any supported OS. This makes it easy to use a custom build or pre-release packages, by specifying its URL and sha256:

```yaml
spec:
  containerd:
    packages:
      urlAmd64: https://github.com/containerd/containerd/releases/download/v1.4.4/cri-containerd-cni-1.4.4-linux-amd64.tar.gz
      hashAmd64: 96641849cb78a0a119223a427dfdc1ade88412ef791a14193212c8c8e29d447b
```

The format of the custom package must be identical to the official packages:

```
tar tf cri-containerd-cni-1.4.4-linux-amd64.tar.gz
usr/local/bin/containerd
usr/local/bin/containerd-shim
usr/local/bin/containerd-shim-runc-v1
usr/local/bin/containerd-shim-runc-v2
usr/local/bin/crictl
usr/local/bin/critest
usr/local/bin/ctr
usr/local/sbin/runc
```

Runc Version and Packages

Introduced
kOps 1.24.2

kOps uses the binaries from https://github.com/opencontainers/runc for installing runc on any supported OS. This makes it easy to specify the desired release version:

```yaml
spec:
  containerd:
    runc:
      version: 1.1.2
```

It also makes it possible to use a newer version than the kOps binary, pre-release packages, or even a custom build, by specifying its URL and sha256:

```yaml
spec:
  containerd:
    runc:
      version: 1.100.0
      packages:
        urlAmd64: https://cdn.example.com/k8s/runc/releases/download/v1.100.0/runc.amd64
        hashAmd64: ab1c67fbcbdddbe481e48a55cf0ef9a86b38b166b5079e0010737fd87d7454bb
```

Registry Mirrors

Introduced
kOps 1.19

If you have many instances running, each time one of them pulls an image that is not present on the host, it will fetch it from the internet. By caching these images, you can keep the traffic within your local network and avoid egress bandwidth usage.

See Image Registry docs for more info.

```yaml
spec:
  containerd:
    registryMirrors:
      docker.io:
      - https://registry-1.docker.io
      "*":
      - http://HostIP2:Port2
```

NRI configuration

Using kOps, you can activate the Node Resource Interface (NRI) feature in containerd. It's important to use containerd version 1.7.0 or later. The available NRI parameters for containerd in kOps include: enabled, pluginRegistrationTimeout and pluginRequestTimeout. By default, NRI options are unset in kOps, which means we rely on containerd's default behavior (i.e., disabled).

```yaml
spec:
  containerd:
    version: 1.7.0
    nri:
      # Enable NRI support in containerd.
      enabled: true
      # pluginRegistrationTimeout is the timeout for a plugin to register after connection.
      pluginRegistrationTimeout: "5s"
      # pluginRequestTimeout is the timeout for a plugin to handle an event/request.
      pluginRequestTimeout: "2s"
```

If you have NRI disabled (i.e., nri.enabled = false), please note that settings for pluginRegistrationTimeout and pluginRequestTimeout won't take effect; these settings are only applicable when NRI is enabled. It is a valid configuration to enable NRI without specifying custom values for pluginRegistrationTimeout and pluginRequestTimeout, as these fields will inherit their default values from containerd. If you need to configure additional NRI parameters, you can do so by providing your complete containerd configuration using configOverride.

sshKeyName

In some cases, it may be desirable to use an existing AWS SSH key instead of allowing kOps to create a new one. Providing the name of a key already in AWS is an alternative to --ssh-public-key.

```yaml
spec:
  sshKeyName: myexistingkey
```

If you want to create your instance without any SSH keys you can set this to an empty string:

```yaml
spec:
  sshKeyName: ""
```

useHostCertificates

Trust self-signed certificates from cloud APIs. In some cases, cloud APIs do have self-signed certificates.

```yaml
spec:
  useHostCertificates: true
```

Optional step: add root certificates to the instance groups' root CA bundle:

```yaml
additionalUserData:
- name: cacert.sh
  type: text/x-shellscript
  content: |
    #!/bin/sh
    cat > /usr/local/share/ca-certificates/mycert.crt <<EOF
    -----BEGIN CERTIFICATE-----
    snip
    -----END CERTIFICATE-----
    EOF
    update-ca-certificates
```

NOTE: update-ca-certificates is the command for Debian/Ubuntu. The command differs depending on your OS.

target

In some use cases you may wish to augment the target output with extra options. target supports a minimal set of options for this. Currently only the terraform target supports this, but if other use cases present themselves, kOps may eventually support more.

```yaml
spec:
  target:
    terraform:
      providerExtraConfig:
        alias: foo
```

assets

Assets define alternative locations from which to retrieve static files and containers.

containerRegistry

The container registry enables kOps / Kubernetes to pull containers from a managed registry. This is useful when pulling containers from the internet is not an option, e.g. because the deployment is offline / internet restricted, or because of special requirements that apply to deployed artifacts, e.g. auditing of containers.

For a use case example, see How to use kOps in AWS China Region

```yaml
spec:
  assets:
    containerRegistry: example.com/registry
```

containerProxy

The container proxy is designed to act as a pull-through cache for Docker container assets. Basically, it remaps the Kubernetes image URL to point to your cache so that the Docker daemon will pull the image from that location. If, for example, the containerProxy is set to proxy.example.com, the image k8s.gcr.io/kube-apiserver will be pulled from proxy.example.com/kube-apiserver instead. Note that the proxy you use has to support this feature for private registries.

```yaml
spec:
  assets:
    containerProxy: proxy.example.com
```

sysctlParameters

Introduced
kOps 1.17

To add custom kernel runtime parameters to all instance groups in the cluster, specify the sysctlParameters field as an array of strings. Each string must take the form of variable=value, the way it would appear in sysctl.conf (see also the sysctl(8) manpage).

You could also use the sysctlParameters field on the instance group to specify different parameters for each instance group.

Unlike a simple file asset, specifying kernel runtime parameters in this manner would correctly invoke sysctl --system automatically for you to apply said parameters.

For example:

```yaml
spec:
  sysctlParameters:
  - fs.pipe-user-pages-soft=524288
  - net.ipv4.tcp_keepalive_time=200
```

which would end up in a drop-in file on all masters and nodes of the cluster.
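
The instance-group variant mentioned above uses the same field on the InstanceGroup resource; a sketch (the parameter and its value are arbitrary, and other InstanceGroup fields are omitted):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  sysctlParameters:
  - net.core.somaxconn=4096
```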

cgroupDriver

As of Kubernetes 1.20, kOps defaults the cgroup driver of the kubelet and the container runtime to systemd, as opposed to cgroupfs.

It is important to ensure that the kubelet and the container runtime are using the same cgroup driver. Below are examples showing how to set the cgroup driver for kubelet and the container runtime.

Setting kubelet to use cgroupfs

```yaml
spec:
  kubelet:
    cgroupDriver: cgroupfs
```

Setting Docker to use cgroupfs

```yaml
spec:
  docker:
    execOpt:
    - native.cgroupdriver=cgroupfs
```

In the case of containerd, the cgroup-driver is dependent on the cgroup driver of kubelet. To use cgroupfs, just update the cgroupDriver of kubelet to use cgroupfs.

NTP

The installation and the configuration of NTP can be skipped by setting managed to false.

```yaml
spec:
  ntp:
    managed: false
```

Service Account Issuer Discovery and AWS IAM Roles for Service Accounts (IRSA)

Introduced
kOps 1.21

Warning: Enabling the following configuration on an existing cluster can be disruptive due to the control plane provisioning tokens with different issuers. The symptom is that Pods are unable to authenticate to the Kubernetes API. To resolve this, delete Service Account token secrets that exist in the cluster and kill all pods unable to authenticate.

Note: You can follow a variation of the procedure documented here to enable IRSA on an existing cluster without disruption.

kOps can publish the Kubernetes service account token issuer and configure AWS to trust it to authenticate Kubernetes service accounts:

```yaml
spec:
  serviceAccountIssuerDiscovery:
    discoveryStore: s3://publicly-readable-store
    enableAWSOIDCProvider: true
```

The discoveryStore option causes kOps to publish an OIDC-compatible discovery document to a path in an object storage bucket (such as S3 or GCS). This would ordinarily be a different bucket than the state store. kOps will automatically configure spec.kubeAPIServer.serviceAccountIssuer and default spec.kubeAPIServer.serviceAccountJWKSURI to the corresponding HTTPS URL.

The enableAWSOIDCProvider configures AWS to trust the service account issuer to authenticate service accounts for IAM Roles for Service Accounts (IRSA). In order for this to work, the service account issuer discovery URL must be publicly readable.

IAM roles for addons

Most kOps addons that interact with the AWS API can use dedicated IAM roles. To enable this, add the following:

```yaml
spec:
  iam:
    useServiceAccountExternalPermissions: true
```

IAM roles for user-managed ServiceAccounts

kOps can provision AWS permissions for use by arbitrary service accounts:

```yaml
spec:
  iam:
    serviceAccountExternalPermissions:
    - name: someServiceAccount
      namespace: someNamespace
      aws:
        policyARNs:
        - arn:aws:iam::000000000000:policy/somePolicy
    - name: anotherServiceAccount
      namespace: anotherNamespace
      aws:
        inlinePolicy: |-
          [
            {
              "Effect": "Allow",
              "Action": "s3:ListAllMyBuckets",
              "Resource": "*"
            }
          ]
```

To configure Pods to assume the given IAM roles, enable the Pod Identity Webhook. Without this webhook, you need to modify your Pod specs yourself for your Pod to assume the defined roles.
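
Without the webhook, such a modification means mounting a projected service account token and pointing the AWS SDK at it via environment variables. A minimal sketch, assuming the role ARN that kOps provisioned, an illustrative mount path, and an audience that must match the client ID configured on the IAM OIDC provider:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: aws-client
  namespace: someNamespace
spec:
  serviceAccountName: someServiceAccount
  containers:
  - name: main
    image: amazon/aws-cli
    env:
    # Role ARN provisioned for the service account (illustrative)
    - name: AWS_ROLE_ARN
      value: arn:aws:iam::000000000000:role/someRole
    - name: AWS_WEB_IDENTITY_TOKEN_FILE
      value: /var/run/secrets/aws/token
    volumeMounts:
    - name: aws-token
      mountPath: /var/run/secrets/aws/
  volumes:
  - name: aws-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          # must match the client ID (audience) on the IAM OIDC provider
          audience: amazonaws.com
          expirationSeconds: 86400
```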

API Changes

kOps is working on updating the v1alpha2 API to a newer version. That new API is still under development, but the internal form of the API and validation error messages use the new field names. The following table tracks the changes, excepting the removal of fields no longer in use.

| v1alpha2 Field | New Field |
| -------------- | --------- |
| `additionalNetworkCIDRs` | `networking.additionalNetworkCIDRs` |
| `additionalSans` | `api.additionalSANs` |
| `api.loadBalancer.subnets.allocationId` | `api.loadBalancer.subnets.allocationID` |
| `api.loadBalancer.useForInternalApi` | `api.loadBalancer.useForInternalAPI` |
| `awsLoadBalancerController` | `cloudProvider.aws.loadBalancerController` |
| `cloudConfig.awsEBSCSIDriver` | `cloudProvider.aws.ebsCSIDriver` |
| `cloudConfig.azure` | `cloudProvider.azure` |
| `cloudConfig.azure.subscriptionId` | `cloudProvider.azure.subscriptionID` |
| `cloudConfig.azure.tenantId` | `cloudProvider.azure.tenantID` |
| `cloudConfig.gcpPDCSIDriver` | `cloudProvider.gce.pdCSIDriver` |
| `cloudConfig.disableSecurityGroupIngress` | `cloudProvider.aws.disableSecurityGroupIngress` |
| `cloudConfig.elbSecurityGroup` | `cloudProvider.aws.elbSecurityGroup` |
| `cloudConfig.gceServiceAccount` | `cloudProvider.gce.serviceAccount` |
| `cloudConfig.nodeIPFamilies` | `cloudProvider.aws.nodeIPFamilies` |
| `cloudConfig.openstack` | `cloudProvider.openstack` |
| `cloudConfig.spotinstOrientation` | `cloudProvider.aws.spotinstOrientation` |
| `cloudConfig.spotinstProduct` | `cloudProvider.aws.spotinstProduct` |
| `cloudProvider` (string) | `cloudProvider` (map) |
| `configBase` | `configStore.base` |
| `DisableSubnetTags` | `tagSubnets` (value inverted) |
| `egressProxy` | `networking.egressProxy` |
| `etcdClusters[].etcdMembers[].kmsKeyId` | `etcdClusters[].etcdMembers[].kmsKeyID` |
| `etcdClusters[].etcdMembers[].volumeIops` | `etcdClusters[].etcdMembers[].volumeIOPS` |
| `externalDns` | `externalDNS` |
| `externalDns.disable: true` | `externalDNS.provider: none` |
| `hooks[].disabled` | `hooks[].enabled` (value inverted) |
| `isolateMasters` | `networking.isolateControlPlane` |
| `keyStore` | `configStore.keypairs` |
| `kubeAPIServer.authorizationRbacSuperUser` | `kubeAPIServer.authorizationRBACSuperUser` |
| `kubeAPIServer.authorizationWebhookCacheAuthorizedTtl` | `kubeAPIServer.authorizationWebhookCacheAuthorizedTTL` |
| `kubeAPIServer.authorizationWebhookCacheUnauthorizedTtl` | `kubeAPIServer.authorizationWebhookCacheUnauthorizedTTL` |
| `kubeAPIServer.etcdCaFile` | `kubeAPIServer.etcdCAFile` |
| `kubeAPIServer.oidcClientID` | `authentication.oidc.clientID` |
| `kubeAPIServer.oidcGroupsPrefix` | `authentication.oidc.groupsPrefix` |
| `kubeAPIServer.oidcIssuerURL` | `authentication.oidc.issuerURL` |
| `kubeAPIServer.oidcRequiredClaim` (list) | `authentication.oidc.oidcRequiredClaims` (map) |
| `kubeAPIServer.oidcUsernameClaim` | `authentication.oidc.usernameClaim` |
| `kubeAPIServer.oidcUsernamePrefix` | `authentication.oidc.usernamePrefix` |
| `kubeAPIServer.targetRamMb` | `kubeAPIServer.targetRamMB` |
| `kubeControllerManager.concurrentRcSyncs` | `kubeControllerManager.concurrentRCSyncs` |
| `kubelet.authenticationTokenWebhookCacheTtl` | `kubelet.authenticationTokenWebhookCacheTTL` |
| `kubelet.clientCaFile` | `kubelet.clientCAFile` |
| `kubeProxy.ipvsExcludeCidrs` | `kubeProxy.ipvsExcludeCIDRs` |
| `kubernetesApiAccess` | `api.access` |
| `masterKubelet` | `controlPlaneKubelet` |
| `masterKubelet.authenticationTokenWebhookCacheTtl` | `controlPlaneKubelet.authenticationTokenWebhookCacheTTL` |
| `masterKubelet.clientCaFile` | `controlPlaneKubelet.clientCAFile` |
| `masterPublicName` | `api.publicName` |
| `networkCIDR` | `networking.networkCIDR` |
| `networkID` | `networking.networkID` |
| `networking.amazonvpc` | `networking.amazonVPC` |
| `networking.amazonvpc.imageName` | `networking.amazonVPC.image` |
| `networking.amazonvpc.initImageName` | `networking.amazonVPC.initImage` |
| `networking.canal.disableFlannelForwardRules` | `networking.canal.flanneldIptablesForwardRules` (value inverted) |
| `networking.cilium.disableMasquerade` | `networking.cilium.masquerade` (value inverted) |
| `networking.cilium.IPTablesRulesNoinstall` | `networking.cilium.installIptablesRules` (value inverted) |
| `networking.cilium.toFqdnsDnsRejectResponseCode` | `networking.cilium.toFQDNsDNSRejectResponseCode` |
| `networking.cilium.toFqdnsEnablePoller` | `networking.cilium.toFQDNsEnablePoller` |
| `networking.gce` | `networking.gcp` |
| `networking.kuberouter` | `networking.kubeRouter` |
| `nodeTerminationHandler` | `cloudProvider.aws.nodeTerminationHandler` |
| `nonMasqueradeCIDR` | `networking.nonMasqueradeCIDR` |
| `podCIDR` | `networking.podCIDR` |
| `podIdentityWebhook` | `cloudProvider.aws.podIdentityWebhook` |
| `project` | `cloudProvider.gce.project` |
| `secretStore` | `configStore.secrets` |
| `serviceClusterIPRange` | `networking.serviceClusterIPRange` |
| `subnets` | `networking.subnets` |
| `tagSubnets` | `networking.tagSubnets` |
| `topology` | `networking.topology` |
| `topology.bastion.bastionPublicName` | `networking.topology.bastion.publicName` |
| `topology.dns.type` | `networking.topology.dns` |
| `warmPool` | `cloudProvider.aws.warmPool` |