Canary rollout

This section introduces how to perform a canary rollout of a container service.

Before starting

  1. Enable the kruise-rollout addon; our canary rollout capability relies on the rollouts from OpenKruise.

     ```shell
     $ vela addon enable kruise-rollout
     Addon: kruise-rollout enabled Successfully.
     ```

  2. Make sure one of the ingress controllers is available in your cluster. You can enable the ingress-nginx addon if you don't have any:

     ```shell
     vela addon enable ingress-nginx
     ```

     Please refer to the addon doc to get the access address of the gateway.

  3. Some commands, such as rollback, rely on vela CLI >= 1.5.0-alpha.1; please upgrade the command line for convenience. You don't need to upgrade the controller.

First Time Deploy

If you want to use canary rollout for every upgrade, you should ALWAYS have a kruise-rollout trait on your component. A day-2 canary rollout of the component requires this trait to be attached already. Deploy the application with traits like below:

```shell
cat <<EOF | vela up -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: canary-demo
  annotations:
    app.oam.dev/publishVersion: v1
spec:
  components:
    - name: canary-demo
      type: webservice
      properties:
        image: barnett/canarydemo:v1
        ports:
          - port: 8090
      traits:
        - type: scaler
          properties:
            replicas: 5
        - type: gateway
          properties:
            domain: canary-demo.com
            http:
              "/version": 8090
        - type: kruise-rollout
          properties:
            canary:
              steps:
                # The first batch releases 20% of the Pods and routes 20% of the traffic
                # to the new version; manual confirmation is required before the release continues.
                - weight: 20
                # The second batch releases 90% of the Pods and routes 90% of the traffic
                # to the new version.
                - weight: 90
              trafficRoutings:
                - type: nginx
EOF
```

Here's an overview of what happens when you upgrade under this kruise-rollout trait configuration. The whole process is divided into three steps:

  1. When the upgrade starts, a new canary deployment is created with 20% of the total replicas. In our example, there are 5 total replicas; it keeps all the old ones and creates 5 * 20% = 1 pod for the new canary, which serves 20% of the traffic. Once everything is ready, it waits for a manual approval.
     • By default, the percentage of replicas is aligned with the traffic weight; you can also configure the replicas individually according to this doc.
  2. After the manual approval, the second batch starts. It creates 5 * 90% = 4.5, rounded up to 5, replicas of the new version, serving 90% of the traffic. As a result, the system now has 10 replicas in total. It waits for a second manual approval.
  3. After the second approval, it updates the workload, i.e. it leverages the workload's own rolling update mechanism for the upgrade. After the workload finishes upgrading, all the traffic routes to that workload and the canary deployment is destroyed.
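For instance, if you want the canary pod count decoupled from the traffic weight, the trait below is a sketch of what that could look like. Note this is illustrative only: the per-step `replicas` field follows the OpenKruise Rollout step schema, so verify the exact field names against the doc referenced above for your addon version.

```yaml
# Sketch only - canary replicas configured independently of traffic weight.
- type: kruise-rollout
  properties:
    canary:
      steps:
        - weight: 20     # 20% of traffic to the canary...
          replicas: 2    # ...but 2 canary pods instead of the default 1
        - weight: 90
          replicas: 5
      trafficRoutings:
        - type: nginx
```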

Let's continue our demo. The first deployment is no different from a normal deploy; check the status of the application to make sure it's running before our next step.

```shell
$ vela status canary-demo
About:

  Name:         canary-demo
  Namespace:    default
  Created at:   2022-06-09 16:43:10 +0800 CST
  Status:       running

...snip...

Services:

  - Name: canary-demo
    Cluster: local  Namespace: default
    Type: webservice
    Healthy Ready:5/5
    Traits:
      scaler
      gateway: No loadBalancer found, visiting by using 'vela port-forward canary-demo'
      kruise-rollout: rollout is healthy
```

If you have enabled the velaux addon, you can view the application topology graph and see that all v1 pods are ready now.


Access the gateway endpoint with the specific host by:

```shell
$ curl -H "Host: canary-demo.com" <ingress-controller-address>/version
Demo: V1
```

The host canary-demo.com matches the gateway trait in your application; you can also configure it in your /etc/hosts to visit by the host URL.
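For local testing, a hosts entry like the one below lets you use the host URL directly. This assumes the ingress controller is reachable at 127.0.0.1 (for example via a port-forward); substitute your actual ingress address.

```
# /etc/hosts - map the demo host to your ingress controller address (assumed 127.0.0.1 here)
127.0.0.1   canary-demo.com
```

After that, `curl canary-demo.com/version` works without the `-H "Host: ..."` header.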

Day-2 Canary Release

Let's modify the image tag of the component from v1 to v2, as follows:

```shell
cat <<EOF | vela up -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: canary-demo
  annotations:
    app.oam.dev/publishVersion: v2
spec:
  components:
    - name: canary-demo
      type: webservice
      properties:
        image: barnett/canarydemo:v2
        ports:
          - port: 8090
      traits:
        - type: scaler
          properties:
            replicas: 5
        - type: gateway
          properties:
            domain: canary-demo.com
            http:
              "/version": 8090
        - type: kruise-rollout
          properties:
            canary:
              # The first batch releases 20% of the Pods and routes 20% of the traffic
              # to the new version; manual confirmation is required before the release continues.
              steps:
                - weight: 20
                - weight: 90
              trafficRoutings:
                - type: nginx
EOF
```

It will create a canary deployment and wait for manual approval, check the status of the application:

```shell
$ vela status canary-demo
About:

  Name:         canary-demo
  Namespace:    default
  Created at:   2022-06-09 16:43:10 +0800 CST
  Status:       runningWorkflow

...snip...

Services:

  - Name: canary-demo
    Cluster: local  Namespace: default
    Type: webservice
    Unhealthy Ready:5/5
    Traits:
      scaler
      gateway: No loadBalancer found, visiting by using 'vela port-forward canary-demo'
      kruise-rollout: Rollout is in step(1/1), and you need manually confirm to enter the next step
```

The application's status is runningWorkflow, which means the application's rollout process has not finished yet.

View the topology graph again: the kruise-rollout trait has created a v2 pod, and this pod serves the canary traffic. Meanwhile, the v1 pods are still running and serve the non-canary traffic.


Access the gateway endpoint again. There is about a 20% chance of getting the Demo: V2 result.

```shell
$ curl -H "Host: canary-demo.com" <ingress-controller-address>/version
Demo: V2
```

Continue Canary Process

After verifying the success of the canary version through business-related means such as logs and metrics, you can resume the workflow to continue the rollout process.

```shell
vela workflow resume canary-demo
```

Access the gateway endpoint again multiple times. The chance of getting the Demo: V2 result is now much higher (about 90%).

```shell
$ curl -H "Host: canary-demo.com" <ingress-controller-address>/version
Demo: V2
```

Canary validation succeeded, finish the release

In the end, you can resume again to finish the rollout process.

```shell
vela workflow resume canary-demo
```

Access the gateway endpoint again multiple times. The result is now always Demo: V2.

```shell
$ curl -H "Host: canary-demo.com" <ingress-controller-address>/version
Demo: V2
```

Canary verification failed, roll back the release

If, after manual checking, you want to cancel the rollout process and roll the application back to the latest version, you can roll back the rollout workflow. You should suspend the workflow before rolling back:

```shell
$ vela workflow suspend canary-demo
Rollout default/canary-demo in cluster suspended.
Successfully suspend workflow: canary-demo
```

Then roll back:

```shell
$ vela workflow rollback canary-demo
Application spec rollback successfully.
Application status rollback successfully.
Rollout default/canary-demo in cluster rollback.
Successfully rollback rollout
Application outdated revision cleaned up.
```

Access the gateway endpoint again. The result is now always Demo: V1.

```shell
$ curl -H "Host: canary-demo.com" <ingress-controller-address>/version
Demo: V1
```

Any rollback operation in the middle of a runningWorkflow will roll back to the latest succeeded revision of this application. For example, suppose you deploy a successful v1 and then upgrade to v2, but v2 does not succeed and you continue upgrading to v3. Rolling back v3 will land on v1, because v2 is not a succeeded release.
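The revision-selection rule above can be sketched as follows. This is illustrative pseudologic only, not KubeVela's actual implementation; the function and data shapes are invented for the example.

```python
def rollback_target(revisions):
    """Pick the revision a rollback lands on.

    `revisions` is the app's publish history, oldest first, as
    (version, succeeded) pairs; the last entry is the release being
    rolled back. Returns the newest *succeeded* earlier revision.
    """
    for version, succeeded in reversed(revisions[:-1]):
        if succeeded:
            return version
    return None  # nothing to roll back to


# v1 succeeded, v2 failed, v3 is the in-progress release being rolled back:
history = [("v1", True), ("v2", False), ("v3", False)]
print(rollback_target(history))  # -> v1, because v2 never succeeded
```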