Working with Kyverno

Kyverno, a Cloud Native Computing Foundation project, is a policy engine designed for Kubernetes. It can validate, mutate, and generate configurations using admission controls and background scans. Kyverno policies are Kubernetes resources and do not require learning a new language. Kyverno is designed to work nicely with tools you already use like kubectl, kustomize, and Git.

This document uses an example to demonstrate how to manage policies across multiple clusters with Kyverno.

Setup Karmada

To start up Karmada, you can refer to the installation guide. If you just want to try Karmada, we recommend building a development environment with hack/local-up-karmada.sh.

git clone https://github.com/karmada-io/karmada
cd karmada
hack/local-up-karmada.sh
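
After the script finishes, you should have two kubectl contexts that the rest of this guide switches between: karmada-apiserver (the Karmada control plane) and karmada-host (the cluster hosting the control plane). Assuming your KUBECONFIG points at the config generated by the script, you can list them with:

kubectl config get-contexts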

Kyverno Installations

In this example, we will use Kyverno v1.6.3. The related deployment files are taken from the Kyverno release-1.6 install.yaml referenced below.

Note:

You can choose the Kyverno version based on the Kubernetes version of the cluster where Karmada is installed; see the Kyverno compatibility matrix for details. However, Kyverno 1.7.x removes the kubeconfig parameter and no longer supports out-of-cluster installation, so Kyverno 1.7.x cannot run with Karmada.

Install Kyverno APIs on Karmada

1. Switch to the Karmada control plane.

   kubectl config use-context karmada-apiserver

2. Create the Kyverno resource objects in the Karmada control plane. The content is as follows and does not need to be modified.

Deploy namespace: https://github.com/kyverno/kyverno/blob/release-1.6/config/install.yaml#L1-L12

Deploy configmap: https://github.com/kyverno/kyverno/blob/release-1.6/config/install.yaml#L7751-L7783

Deploy Kyverno CRDs: https://github.com/kyverno/kyverno/blob/release-1.6/config/install.yaml#L12-L7291
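
The line ranges above point into Kyverno's release-1.6 install.yaml. One way to apply just those sections is to download the file and slice it by line number, as in the sketch below (a convenience under assumptions, not the only approach; double-check that the line ranges still match your downloaded copy):

# download the release-1.6 install manifest once
curl -sLo install.yaml https://raw.githubusercontent.com/kyverno/kyverno/release-1.6/config/install.yaml
# namespace
sed -n '1,12p' install.yaml | kubectl create -f -
# configmap
sed -n '7751,7783p' install.yaml | kubectl create -f -
# Kyverno CRDs
sed -n '12,7291p' install.yaml | kubectl create -f -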

Install Kyverno components on host cluster

1. Switch to the karmada-host context.

   kubectl config use-context karmada-host

2. Create the Kyverno resource objects in the karmada-host context. The content is as follows.

Deploy namespace: https://github.com/kyverno/kyverno/blob/release-1.6/config/install.yaml#L1-L12

Deploy RBAC resources: https://github.com/kyverno/kyverno/blob/release-1.6/config/install.yaml#L7292-L7750
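
The same slicing approach as above works here (again assuming install.yaml has been downloaded locally and the line anchors still match):

# namespace
sed -n '1,12p' install.yaml | kubectl create -f -
# RBAC resources
sed -n '7292,7750p' install.yaml | kubectl create -f -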

Deploy the Karmada kubeconfig in the Kyverno namespace. Fill in the placeholders in the secret below ({{ca_crt}}, {{client_cer}} and {{client_key}}) with the credentials of a kubeconfig pointing to karmada-apiserver; a sketch for extracting them follows the manifest.

apiVersion: v1
stringData:
  kubeconfig: |-
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: {{ca_crt}}
        server: https://karmada-apiserver.karmada-system.svc.cluster.local:5443
      name: kind-karmada
    contexts:
    - context:
        cluster: kind-karmada
        user: kind-karmada
      name: karmada
    current-context: karmada
    kind: Config
    preferences: {}
    users:
    - name: kind-karmada
      user:
        client-certificate-data: {{client_cer}}
        client-key-data: {{client_key}}
kind: Secret
metadata:
  name: kubeconfig
  namespace: kyverno
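
One possible way to fill in the placeholders is to read them from the kubeconfig generated by hack/local-up-karmada.sh (a sketch; the file path and the karmada-apiserver cluster/user names are assumptions based on the default local setup):

# certificate data in a kubeconfig is already base64-encoded, so it can be copied as-is
ca_crt=$(kubectl config view --kubeconfig "$HOME/.kube/karmada.config" --raw \
  -o jsonpath='{.clusters[?(@.name=="karmada-apiserver")].cluster.certificate-authority-data}')
client_cer=$(kubectl config view --kubeconfig "$HOME/.kube/karmada.config" --raw \
  -o jsonpath='{.users[?(@.name=="karmada-apiserver")].user.client-certificate-data}')
client_key=$(kubectl config view --kubeconfig "$HOME/.kube/karmada.config" --raw \
  -o jsonpath='{.users[?(@.name=="karmada-apiserver")].user.client-key-data}')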

Deploy Kyverno controllers and services:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: kyverno
    app.kubernetes.io/component: kyverno
    app.kubernetes.io/instance: kyverno
    app.kubernetes.io/name: kyverno
    app.kubernetes.io/part-of: kyverno
    app.kubernetes.io/version: v1.6.3
  name: kyverno-svc
  namespace: kyverno
spec:
  type: NodePort
  ports:
  - name: https
    port: 443
    targetPort: https
    nodePort: {{nodePort}}
  selector:
    app: kyverno
    app.kubernetes.io/name: kyverno
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kyverno
    app.kubernetes.io/component: kyverno
    app.kubernetes.io/instance: kyverno
    app.kubernetes.io/name: kyverno
    app.kubernetes.io/part-of: kyverno
    app.kubernetes.io/version: v1.6.3
  name: kyverno-svc-metrics
  namespace: kyverno
spec:
  ports:
  - name: metrics-port
    port: 8000
    targetPort: metrics-port
  selector:
    app: kyverno
    app.kubernetes.io/name: kyverno
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kyverno
    app.kubernetes.io/component: kyverno
    app.kubernetes.io/instance: kyverno
    app.kubernetes.io/name: kyverno
    app.kubernetes.io/part-of: kyverno
    app.kubernetes.io/version: v1.6.3
  name: kyverno
  namespace: kyverno
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kyverno
      app.kubernetes.io/name: kyverno
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 40%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: kyverno
        app.kubernetes.io/component: kyverno
        app.kubernetes.io/instance: kyverno
        app.kubernetes.io/name: kyverno
        app.kubernetes.io/part-of: kyverno
        app.kubernetes.io/version: v1.6.3
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/name
                  operator: In
                  values:
                  - kyverno
              topologyKey: kubernetes.io/hostname
            weight: 1
      containers:
      - args:
        - --filterK8sResources=[Event,*,*][*,kube-system,*][*,kube-public,*][*,kube-node-lease,*][Node,*,*][APIService,*,*][TokenReview,*,*][SubjectAccessReview,*,*][*,kyverno,kyverno*][Binding,*,*][ReplicaSet,*,*][ReportChangeRequest,*,*][ClusterReportChangeRequest,*,*][PolicyReport,*,*][ClusterPolicyReport,*,*]
        - -v=2
        - --kubeconfig=/etc/kubeconfig
        - --serverIP={{nodeIP}}:{{nodePort}}
        env:
        - name: INIT_CONFIG
          value: kyverno
        - name: METRICS_CONFIG
          value: kyverno-metrics
        - name: KYVERNO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KYVERNO_SVC
          value: kyverno-svc
        - name: TUF_ROOT
          value: /.sigstore
        image: ghcr.io/kyverno/kyverno:v1.6.3
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 2
          httpGet:
            path: /health/liveness
            port: 9443
            scheme: HTTPS
          initialDelaySeconds: 15
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        name: kyverno
        ports:
        - containerPort: 9443
          name: https
          protocol: TCP
        - containerPort: 8000
          name: metrics-port
          protocol: TCP
        readinessProbe:
          failureThreshold: 4
          httpGet:
            path: /health/readiness
            port: 9443
            scheme: HTTPS
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          limits:
            memory: 384Mi
          requests:
            cpu: 100m
            memory: 128Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
        volumeMounts:
        - mountPath: /.sigstore
          name: sigstore
        - mountPath: /etc/kubeconfig
          name: kubeconfig
          subPath: kubeconfig
      initContainers:
      - env:
        - name: METRICS_CONFIG
          value: kyverno-metrics
        - name: KYVERNO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: ghcr.io/kyverno/kyvernopre:v1.6.3
        imagePullPolicy: Always
        name: kyverno-pre
        resources:
          limits:
            cpu: 100m
            memory: 256Mi
          requests:
            cpu: 10m
            memory: 64Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
      securityContext:
        runAsNonRoot: true
      serviceAccountName: kyverno-service-account
      volumes:
      - emptyDir: {}
        name: sigstore
      - name: kubeconfig
        secret:
          defaultMode: 420
          secretName: kubeconfig
---

For a multi-cluster deployment, we need to set --serverIP, which is the address of the webhook server. You therefore need to ensure that nodes in the Karmada control plane can reach nodes in the karmada-host cluster, and expose the Kyverno controller pods to the control plane, for example via the NodePort service above.
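
A minimal sketch for filling in the {{nodeIP}} and {{nodePort}} placeholders, assuming the karmada-host cluster was created by hack/local-up-karmada.sh (the node lookup and the chosen port are assumptions; any free port in the default NodePort range 30000-32767 will do):

kubectl config use-context karmada-host
# use the InternalIP of a node in the karmada-host cluster
nodeIP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
nodePort=30443
echo "--serverIP=${nodeIP}:${nodePort}"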

Run demo

Create require-labels ClusterPolicy

ClusterPolicy is a CRD that Kyverno offers to support different kinds of rules. The example ClusterPolicy below requires every Pod to carry the app.kubernetes.io/name label. You can use the following commands to create it in the Karmada control plane.

kubectl config use-context karmada-apiserver

kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: enforce
  rules:
  - name: check-for-labels
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "label 'app.kubernetes.io/name' is required"
      pattern:
        metadata:
          labels:
            app.kubernetes.io/name: "?*"
EOF

The output is similar to:

clusterpolicy.kyverno.io/require-labels created
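
To double-check that the policy object exists in the Karmada control plane, you can query it directly:

kubectl get clusterpolicy require-labels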

Create a bad deployment without labels

kubectl create deployment nginx --image=nginx

The output is similar to:

error: failed to create deployment: admission webhook "validate.kyverno.svc-fail" denied the request:
policy Deployment/default/nginx for resource violation:
require-labels:
  autogen-check-for-labels: 'validation error: label ''app.kubernetes.io/name'' is
    required. rule autogen-check-for-labels failed at path /spec/template/metadata/labels/app.kubernetes.io/name/'
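
Conversely, a workload that carries the required label should be admitted. As a quick check (not part of the original steps), you can create a Pod with the label set explicitly:

kubectl run nginx --image=nginx --labels=app.kubernetes.io/name=nginx

The output is similar to:

pod/nginx created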
