Resource Adoption

There are times when you may want a KubeVela application to adopt existing resources, or resources from other sources such as a Helm release. In this case, you can leverage KubeVela's resource adoption capability.

By default, when a KubeVela application tries to dispatch (create or update) a resource, it first checks whether the resource belongs to itself. This check is done by comparing the values of the app.oam.dev/name and app.oam.dev/namespace labels with the application's name and namespace.

If the resource does not belong to the application itself (it belongs to no one, or to some other application), the application stops the dispatch operation and reports an error. This mechanism is designed to prevent unintended edits to resources managed by other operators or systems.
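
For example, you can inspect these labels to see whether a resource is already claimed by an application (a quick check, assuming a Deployment named nginx in the default namespace):

```bash
# Print the owner labels KubeVela sets on resources it dispatches.
# Empty output means no application currently claims this Deployment.
kubectl get deployment nginx -n default \
  -o jsonpath='{.metadata.labels.app\.oam\.dev/name} {.metadata.labels.app\.oam\.dev/namespace}{"\n"}'
```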

If the resource is currently managed by another application, you can refer to the shared-resource policy to read more about sharing resources across multiple applications.

If the resource is managed by no one, you can use the read-only policy or the take-over policy to let the KubeVela application adopt it.

With the read-only policy, you can select the resources that may be adopted by the current application. For example, in the application below, Deployment-typed resources are treated as read-only and can be adopted by the application.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: read-only
spec:
  components:
    - name: nginx
      type: webservice
      properties:
        image: nginx
  policies:
    - type: read-only
      name: read-only
      properties:
        rules:
          - selector:
              resourceTypes: ["Deployment"]
```

The read-only policy allows the application to read the selected resources but skips all edits to them. An error is reported if the target resource does not exist.

The target resource will NOT be labeled with the application's owner labels, so multiple applications can use the same resource with the read-only policy concurrently. Deleting the application also skips the recycle process for the target resources.

Although the resources selected by the read-only policy are not editable through the application, both health checks and the resource topology graph work normally. You can therefore use a KubeVela application with the read-only policy to build a "monitoring group" for underlying resources and observe them with tools such as vela top or velaux, without any modification.

practice

1. First create the nginx deployment.

```bash
kubectl create deploy nginx --image=nginx
```

2. Deploy the application with the read-only policy.

```bash
cat <<EOF | vela up -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: read-only
spec:
  components:
    - name: nginx
      type: webservice
      properties:
        image: nginx
  policies:
    - type: read-only
      name: read-only
      properties:
        rules:
          - selector:
              resourceTypes: ["Deployment"]
EOF
```

3. Check the running status of the application.

```bash
vela status read-only
```

4. Use vela top to see the resource topology of the application (screenshot: read-only-vela-top).

5. Use velaux to see the resource topology graph of the application (screenshot: read-only-velaux).
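
To confirm the earlier claim that deleting a read-only application skips recycling, you can remove the application and check that the Deployment survives (a sketch based on the practice above):

```bash
# Delete the read-only application; the adopted Deployment should remain.
vela delete read-only -y
kubectl get deployment nginx
```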

If you want the KubeVela application not only to observe the underlying resources but also to be able to edit them, you can use the take-over policy in place of the read-only policy.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: take-over
spec:
  components:
    - name: nginx-take-over
      type: k8s-objects
      properties:
        objects:
          - apiVersion: apps/v1
            kind: Deployment
            metadata:
              name: nginx
      traits:
        - type: scaler
          properties:
            replicas: 3
  policies:
    - type: take-over
      name: take-over
      properties:
        rules:
          - selector:
              resourceTypes: ["Deployment"]
```

In the above application, the nginx deployment will be attached with the owner labels and marked as belonging to the current application. The scaler trait in the application will set the replica number of the target deployment to 3, while keeping all other fields untouched.

After the resource is taken over by the application, the application controls the upgrade and deletion of the target resource. Therefore, unlike the read-only policy, each resource can only be managed by one application with the take-over policy.

The take-over policy is helpful when you want the application to take complete control of the given resources.

practice

1. First create the nginx deployment.

```bash
kubectl create deploy nginx --image=nginx
```

2. Deploy the application with the take-over policy.

```bash
cat <<EOF | vela up -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: take-over
spec:
  components:
    - name: nginx-take-over
      type: k8s-objects
      properties:
        objects:
          - apiVersion: apps/v1
            kind: Deployment
            metadata:
              name: nginx
      traits:
        - type: scaler
          properties:
            replicas: 3
  policies:
    - type: take-over
      name: take-over
      properties:
        rules:
          - selector:
              resourceTypes: ["Deployment"]
EOF
```

3. Check the application running status.

```bash
vela status take-over
```
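
Once the application is running, you can verify the take-over behavior described earlier: the Deployment should now carry the application's owner labels and be scaled to 3 replicas (a verification sketch):

```bash
# The owner labels should now point at the "take-over" application.
kubectl get deployment nginx --show-labels
# The scaler trait should have set the replica count to 3.
kubectl get deployment nginx -o jsonpath='{.spec.replicas}{"\n"}'
```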

The read-only and take-over policies provide ways to adopt resources directly within the KubeVela application API. If you prefer to build a KubeVela application from existing resources from scratch, you can use the vela adopt CLI command.

Given a list of native Kubernetes resources, the vela adopt command can automatically adopt them into an application. You can follow the procedure below to try it out.

1. Create some resources for adoption.

```bash
kubectl create deploy example --image=nginx
kubectl create service clusterip example --tcp=80:80
kubectl create configmap example
kubectl create secret generic example
```

2. Run the vela adopt command to create an application that contains all the resources mentioned above.

```bash
vela adopt deployment/example service/example configmap/example secret/example
```

expected output

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  creationTimestamp: null
  labels:
    app.oam.dev/adopt: native
  name: example
  namespace: default
spec:
  components:
  - name: example.Deployment.example
    properties:
      objects:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: example
          namespace: default
        spec:
          progressDeadlineSeconds: 600
          replicas: 1
          revisionHistoryLimit: 10
          selector:
            matchLabels:
              app: example
          strategy:
            rollingUpdate:
              maxSurge: 25%
              maxUnavailable: 25%
            type: RollingUpdate
          template:
            metadata:
              creationTimestamp: null
              labels:
                app: example
            spec:
              containers:
              - image: nginx
                imagePullPolicy: Always
                name: nginx
                resources: {}
                terminationMessagePath: /dev/termination-log
                terminationMessagePolicy: File
              dnsPolicy: ClusterFirst
              restartPolicy: Always
              schedulerName: default-scheduler
              securityContext: {}
              terminationGracePeriodSeconds: 30
    type: k8s-objects
  - name: example.Service.example
    properties:
      objects:
      - apiVersion: v1
        kind: Service
        metadata:
          name: example
          namespace: default
        spec:
          clusterIP: 10.43.65.46
          clusterIPs:
          - 10.43.65.46
          internalTrafficPolicy: Cluster
          ipFamilies:
          - IPv4
          ipFamilyPolicy: SingleStack
          ports:
          - name: 80-80
            port: 80
            protocol: TCP
            targetPort: 80
          selector:
            app: example
          sessionAffinity: None
          type: ClusterIP
    type: k8s-objects
  - name: example.config
    properties:
      objects:
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: example
          namespace: default
      - apiVersion: v1
        kind: Secret
        metadata:
          name: example
          namespace: default
    type: k8s-objects
  policies:
  - name: read-only
    properties:
      rules:
      - selector:
          componentNames:
          - example.Deployment.example
          - example.Service.example
          - example.config
    type: read-only
status: {}
```

By default, the application embeds all the given resources in its components and attaches the read-only policy. You can edit the returned configuration to build your own adoption application, or you can apply it directly with the --apply flag.

```bash
vela adopt deployment/example service/example configmap/example secret/example --apply
```

You can also set the application name you would like to use.

```bash
vela adopt deployment/example service/example configmap/example secret/example --apply --app-name=adopt-example
```

Now you can use the vela status and vela status -t -d commands to show the status of the applied application.

```bash
vela status adopt-example
```

expected output

```
About:

  Name:         adopt-example
  Namespace:    default
  Created at:   2023-01-11 14:21:21 +0800 CST
  Status:       running

Workflow:

  mode: DAG-DAG
  finished: true
  Suspend: false
  Terminated: false
  Steps
  - id: 8d8capzw7e
    name: adopt-example.Deployment.example
    type: apply-component
    phase: succeeded
  - id: 6u6c6ai1gu
    name: adopt-example.Service.example
    type: apply-component
    phase: succeeded
  - id: r847uymujz
    name: adopt-example.config
    type: apply-component
    phase: succeeded

Services:

  - Name: adopt-example.Deployment.example
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied

  - Name: adopt-example.Service.example
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied

  - Name: adopt-example.config
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied
```

```bash
vela status adopt-example -t -d
```

```
CLUSTER       NAMESPACE     RESOURCE               STATUS    APPLY_TIME          DETAIL
local     ─── default   ─┬─ ConfigMap/example      updated   2023-01-11 14:15:34 Data: 0  Age: 6m1s
                         ├─ Secret/example         updated   2023-01-11 14:15:52 Type: Opaque  Data: 0  Age: 5m43s
                         ├─ Service/example        updated   2023-01-11 14:12:00 Type: ClusterIP  Cluster-IP: 10.43.65.46  External-IP: <none>  Port(s): 80/TCP  Age: 9m35s
                         └─ Deployment/example     updated   2023-01-11 14:11:06 Ready: 1/1  Up-to-date: 1  Available: 1  Age: 10m
```

The read-only policy only allows the application to observe resources and disallows any edits to them. If you want to make modifications, you can pass --mode=take-over to use the take-over policy in the adoption application.
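
For example, the same adoption as above can enforce the take-over policy instead by passing --mode (a sketch reusing the example resources and application name):

```bash
# Adopt the example resources in take-over mode so the application can edit them.
vela adopt deployment/example service/example configmap/example secret/example \
  --apply --app-name=adopt-example --mode=take-over
```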

vela adopt also supports reading native resources directly from an existing Helm release. This is helpful if you previously deployed resources through Helm.

1. For example, first deploy a MySQL instance through Helm.

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install mysql bitnami/mysql
```

2. You can validate the installation through helm ls.

```bash
helm ls
```

```
NAME     NAMESPACE    REVISION    UPDATED                                 STATUS      CHART          APP VERSION
mysql    default      1           2023-01-11 14:34:36.653778 +0800 CST    deployed    mysql-9.4.6    8.0.31
```

3. Run the vela adopt command to adopt resources from the existing release. Similar to native resource adoption, you get a KubeVela application with the read-only policy.

```bash
vela adopt mysql --type helm
```

expected output

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  creationTimestamp: null
  labels:
    app.oam.dev/adopt: helm
  name: mysql
  namespace: default
spec:
  components:
  - name: mysql.StatefulSet.mysql
    properties:
      objects:
      - apiVersion: apps/v1
        kind: StatefulSet
        metadata:
          name: mysql
          namespace: default
        spec:
          podManagementPolicy: ""
          replicas: 1
          selector:
            matchLabels:
              app.kubernetes.io/component: primary
              app.kubernetes.io/instance: mysql
              app.kubernetes.io/name: mysql
          serviceName: mysql
          template:
            metadata:
              annotations:
                checksum/configuration: f8f3ad4a6e3ad93ae6ed28fdb7f7b4ff9585e08fa730e4e5845db5ebe5601e4d
              labels:
                app.kubernetes.io/component: primary
                app.kubernetes.io/instance: mysql
                app.kubernetes.io/managed-by: Helm
                app.kubernetes.io/name: mysql
                helm.sh/chart: mysql-9.4.6
            spec:
              affinity:
                nodeAffinity: null
                podAffinity: null
                podAntiAffinity:
                  preferredDuringSchedulingIgnoredDuringExecution:
                  - podAffinityTerm:
                      labelSelector:
                        matchLabels:
                          app.kubernetes.io/instance: mysql
                          app.kubernetes.io/name: mysql
                      topologyKey: kubernetes.io/hostname
                    weight: 1
              containers:
              - env:
                - name: BITNAMI_DEBUG
                  value: "false"
                - name: MYSQL_ROOT_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      key: mysql-root-password
                      name: mysql
                - name: MYSQL_DATABASE
                  value: my_database
                envFrom: null
                image: docker.io/bitnami/mysql:8.0.31-debian-11-r30
                imagePullPolicy: IfNotPresent
                livenessProbe:
                  exec:
                    command:
                    - /bin/bash
                    - -ec
                    - |
                      password_aux="${MYSQL_ROOT_PASSWORD:-}"
                      if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
                          password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
                      fi
                      mysqladmin status -uroot -p"${password_aux}"
                  failureThreshold: 3
                  initialDelaySeconds: 5
                  periodSeconds: 10
                  successThreshold: 1
                  timeoutSeconds: 1
                name: mysql
                ports:
                - containerPort: 3306
                  name: mysql
                readinessProbe:
                  exec:
                    command:
                    - /bin/bash
                    - -ec
                    - |
                      password_aux="${MYSQL_ROOT_PASSWORD:-}"
                      if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
                          password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
                      fi
                      mysqladmin status -uroot -p"${password_aux}"
                  failureThreshold: 3
                  initialDelaySeconds: 5
                  periodSeconds: 10
                  successThreshold: 1
                  timeoutSeconds: 1
                resources:
                  limits: {}
                  requests: {}
                securityContext:
                  runAsNonRoot: true
                  runAsUser: 1001
                startupProbe:
                  exec:
                    command:
                    - /bin/bash
                    - -ec
                    - |
                      password_aux="${MYSQL_ROOT_PASSWORD:-}"
                      if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
                          password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
                      fi
                      mysqladmin status -uroot -p"${password_aux}"
                  failureThreshold: 10
                  initialDelaySeconds: 15
                  periodSeconds: 10
                  successThreshold: 1
                  timeoutSeconds: 1
                volumeMounts:
                - mountPath: /bitnami/mysql
                  name: data
                - mountPath: /opt/bitnami/mysql/conf/my.cnf
                  name: config
                  subPath: my.cnf
              initContainers: null
              securityContext:
                fsGroup: 1001
              serviceAccountName: mysql
              volumes:
              - configMap:
                  name: mysql
                name: config
          updateStrategy:
            type: RollingUpdate
          volumeClaimTemplates:
          - metadata:
              annotations: null
              labels:
                app.kubernetes.io/component: primary
                app.kubernetes.io/instance: mysql
                app.kubernetes.io/name: mysql
              name: data
            spec:
              accessModes:
              - ReadWriteOnce
              resources:
                requests:
                  storage: 8Gi
    type: k8s-objects
  - name: mysql.Service.mysql
    properties:
      objects:
      - apiVersion: v1
        kind: Service
        metadata:
          name: mysql
          namespace: default
        spec:
          ports:
          - name: mysql
            nodePort: null
            port: 3306
            protocol: TCP
            targetPort: mysql
          selector:
            app.kubernetes.io/component: primary
            app.kubernetes.io/instance: mysql
            app.kubernetes.io/name: mysql
          sessionAffinity: None
          type: ClusterIP
    type: k8s-objects
  - name: mysql.Service.mysql-headless
    properties:
      objects:
      - apiVersion: v1
        kind: Service
        metadata:
          name: mysql-headless
          namespace: default
        spec:
          clusterIP: None
          ports:
          - name: mysql
            port: 3306
            targetPort: mysql
          publishNotReadyAddresses: true
          selector:
            app.kubernetes.io/component: primary
            app.kubernetes.io/instance: mysql
            app.kubernetes.io/name: mysql
          type: ClusterIP
    type: k8s-objects
  - name: mysql.config
    properties:
      objects:
      - apiVersion: v1
        kind: Secret
        metadata:
          name: mysql
          namespace: default
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: mysql
          namespace: default
    type: k8s-objects
  - name: mysql.sa
    properties:
      objects:
      - apiVersion: v1
        kind: Secret
        metadata:
          name: mysql
          namespace: default
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: mysql
          namespace: default
    type: k8s-objects
  policies:
  - name: read-only
    properties:
      rules:
      - selector:
          componentNames:
          - mysql.StatefulSet.mysql
          - mysql.Service.mysql
          - mysql.Service.mysql-headless
          - mysql.config
          - mysql.sa
    type: read-only
status: {}
```

You can similarly use the --apply parameter to apply the application into the cluster, and --mode=take-over to allow modifications by enforcing the take-over policy. In addition, if you want to completely adopt the resources of the Helm chart into the KubeVela application and disable the management of that Helm chart (to prevent multiple sources), you can add the --recycle flag to remove the Helm release secret after the application has entered the running status.

```bash
vela adopt mysql --type helm --mode take-over --apply --recycle
```

```
resources adopted in app default/mysql
successfully clean up old helm release
```

You can check the application status using vela status and vela status -t -d.

```bash
vela status mysql
```

expected output

```
About:

  Name:         mysql
  Namespace:    default
  Created at:   2023-01-11 14:40:16 +0800 CST
  Status:       running

Workflow:

  mode: DAG-DAG
  finished: true
  Suspend: false
  Terminated: false
  Steps
  - id: orq8dnqbyv
    name: mysql.StatefulSet.mysql
    type: apply-component
    phase: succeeded
  - id: k5kwoc49jv
    name: mysql.Service.mysql-headless
    type: apply-component
    phase: succeeded
  - id: p5qe1drkoh
    name: mysql.Service.mysql
    type: apply-component
    phase: succeeded
  - id: odicbhtf9a
    name: mysql.config
    type: apply-component
    phase: succeeded
  - id: o36adyqqal
    name: mysql.sa
    type: apply-component
    phase: succeeded

Services:

  - Name: mysql.StatefulSet.mysql
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied

  - Name: mysql.Service.mysql-headless
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied

  - Name: mysql.Service.mysql
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied

  - Name: mysql.config
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied

  - Name: mysql.sa
    Cluster: local  Namespace: default
    Type: k8s-objects
    Healthy
    No trait applied
```

```bash
vela status mysql -t -d
```

```
CLUSTER       NAMESPACE     RESOURCE                    STATUS    APPLY_TIME          DETAIL
local     ─── default   ─┬─ ConfigMap/mysql             updated   2023-01-11 14:40:16 Data: 1  Age: 7m41s
                         ├─ Secret/mysql                updated   2023-01-11 14:40:16 Type: Opaque  Data: 2  Age: 7m41s
                         ├─ Service/mysql               updated   2023-01-11 14:40:16 Type: ClusterIP  Cluster-IP: 10.43.154.7  External-IP: <none>  Port(s): 3306/TCP  Age: 7m41s
                         ├─ Service/mysql-headless      updated   2023-01-11 14:40:16 Type: ClusterIP  Cluster-IP: None  External-IP: <none>  Port(s): 3306/TCP  Age: 7m41s
                         └─ StatefulSet/mysql           updated   2023-01-11 14:40:16 Ready: 1/1  Age: 7m41s
```

If you run helm ls again, you will not be able to find the original mysql Helm release, since its records have been recycled.

```bash
helm ls
```

```
NAME    NAMESPACE    REVISION    UPDATED    STATUS    CHART    APP VERSION
```

tip

There are multiple ways to use KubeVela together with Helm.

If you want Helm to control the release process of charts and KubeVela to monitor those resources, you can use the default mode (read-only) and not recycle the Helm release secret. In this case, you will be able to monitor resources dispatched by the Helm chart with KubeVela tools or its ecosystem (such as viewing them on Grafana).

If you want to migrate existing resources from a Helm chart to a KubeVela application, you can use the take-over mode and add the --recycle flag to clean up the Helm release records.
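
In command form, the two approaches differ only in the flags passed to vela adopt (a sketch based on the mysql release used above):

```bash
# Monitoring only: Helm keeps controlling the release, KubeVela observes it.
vela adopt mysql --type helm --apply

# Full migration: KubeVela takes over the resources and the Helm release
# records are recycled so there is no second source of truth.
vela adopt mysql --type helm --mode take-over --apply --recycle
```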

By default, vela adopt takes resources from the given source (a native resource list or a Helm chart) and groups them into different components. For resources like Deployments or StatefulSets, the original spec is preserved. For other resources like ConfigMaps or Secrets, the data is not recorded in the adoption application (which means the application does not care about their content). For special resources (CustomResourceDefinitions), the garbage-collect and apply-once policies are additionally attached to the application.

The conversion from resources into an application is driven by a CUE template. You can refer to GitHub to see the default template.

You can also build your own adoption rule in CUE and pass it to the vela adopt command with --adopt-template.

1. For example, let's create an example deployment.

```bash
kubectl create deploy custom-adopt --image=nginx
```

2. Create a file named my-adopt-rule.cue.

```cue
import "list"

#Resource: {
  apiVersion: string
  kind:       string
  metadata: {
    name:       string
    namespace?: string
    ...
  }
  ...
}

#Component: {
  type: string
  name: string
  properties: {...}
  dependsOn?: [...string]
  traits?: [...#Trait]
}

#Trait: {
  type: string
  properties: {...}
}

#Policy: {
  type: string
  name: string
  properties?: {...}
}

#Application: {
  apiVersion: "core.oam.dev/v1beta1"
  kind:       "Application"
  metadata: {
    name:       string
    namespace?: string
    labels?: [string]:      string
    annotations?: [string]: string
  }
  spec: {
    components: [...#Component]
    policies?: [...#Policy]
    workflow?: {...}
  }
}

#AdoptOptions: {
  mode:         *"read-only" | "take-over"
  type:         *"helm" | string
  appName:      string
  appNamespace: string
  resources: [...#Resource]
  ...
}

#Adopt: {
  $args:    #AdoptOptions
  $returns: #Application

  // adopt logics
  $returns: #Application & {
    metadata: {
      name: $args.appName
      labels: "app.oam.dev/adopt": $args.type
    }
    spec: components: [for r in $args.resources if r.kind == "Deployment" {
      type: "webservice"
      name: r.metadata.name
      properties: image: r.spec.template.spec.containers[0].image
      traits: [{
        type: "scaler"
        properties: replicas: r.spec.replicas
      }]
    }]
    spec: policies: [{
      type: $args.mode
      name: $args.mode
      properties: rules: [{
        selector: componentNames: [for comp in spec.components {comp.name}]
      }]
    }]
  }
}
```

This customized adoption rule automatically recognizes Deployment resources and converts them into the KubeVela application's webservice components. It detects the replica count of each deployment and attaches a scaler trait to the component.

3. Run vela adopt with the custom template. You will see the converted application:

```bash
vela adopt deployment/custom-adopt --adopt-template=my-adopt-rule.cue
```

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  creationTimestamp: null
  labels:
    app.oam.dev/adopt: native
  name: custom-adopt
spec:
  components:
  - name: custom-adopt
    properties:
      image: nginx
    traits:
    - properties:
        replicas: 1
      type: scaler
    type: webservice
  policies:
  - name: read-only
    properties:
      rules:
      - selector:
          componentNames:
          - custom-adopt
    type: read-only
status: {}
```

With this capability, you can make your own rules for building applications from existing resources or Helm charts.
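
For example, a custom rule can be combined with the flags shown earlier; the exact combination below is an assumption for illustration rather than a verified recipe:

```bash
# Generate the application from the custom CUE rule, enforce take-over mode,
# and apply it to the cluster directly.
vela adopt deployment/custom-adopt --adopt-template=my-adopt-rule.cue --mode=take-over --apply
```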
