How to: Autoscale a Dapr app with KEDA

How to configure your Dapr application to autoscale using KEDA

Dapr, with its building-block API approach and its many pub/sub components, makes it easy to write message-processing applications. Because Dapr can run in many environments (for example, VMs, bare metal, and cloud or edge Kubernetes), autoscaling of Dapr applications is handled by the hosting layer.

For Kubernetes, Dapr integrates with KEDA, an event-driven autoscaler for Kubernetes. Many of Dapr's pub/sub components overlap with the scalers provided by KEDA, so it's easy to configure your Dapr deployment on Kubernetes to autoscale based on back pressure using KEDA.

In this guide, you configure a scalable Dapr application that responds to back pressure on a Kafka topic. However, you can apply this approach to any pub/sub component offered by Dapr.

Note

If you’re working with Azure Container Apps, refer to the official Azure documentation for scaling Dapr applications using KEDA scalers.

Install KEDA

To install KEDA, follow the Deploying KEDA instructions on the KEDA website.
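One common route, assuming Helm is available and your kubectl context points at the target cluster, is the official KEDA Helm chart:

```shell
# Add the KEDA chart repository and install KEDA into its own namespace.
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace

# Verify that the KEDA operator and metrics API server pods are running.
kubectl get pods -n keda
```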

Install and deploy Kafka

If you don’t have access to a Kafka service, you can install it into your Kubernetes cluster for this example by using Helm:

  helm repo add confluentinc https://confluentinc.github.io/cp-helm-charts/
  helm repo update
  kubectl create ns kafka
  helm install kafka confluentinc/cp-helm-charts -n kafka \
    --set cp-schema-registry.enabled=false \
    --set cp-kafka-rest.enabled=false \
    --set cp-kafka-connect.enabled=false

To check on the status of the Kafka deployment:

  kubectl rollout status deployment.apps/kafka-cp-control-center -n kafka
  kubectl rollout status deployment.apps/kafka-cp-ksql-server -n kafka
  kubectl rollout status statefulset.apps/kafka-cp-kafka -n kafka
  kubectl rollout status statefulset.apps/kafka-cp-zookeeper -n kafka

Once installed, deploy the Kafka client and wait until it’s ready:

  kubectl apply -n kafka -f deployment/kafka-client.yaml
  kubectl wait -n kafka --for=condition=ready pod kafka-client --timeout=120s

Create the Kafka topic

Create the topic used in this example (demo-topic):

  kubectl -n kafka exec -it kafka-client -- kafka-topics \
    --zookeeper kafka-cp-zookeeper-headless:2181 \
    --topic demo-topic \
    --create \
    --partitions 10 \
    --replication-factor 3 \
    --if-not-exists

The number of topic partitions caps the maximum number of replicas KEDA can usefully create for your deployment, because each Kafka partition is consumed by at most one member of a consumer group.
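You can confirm the partition count of the topic created above from the Kafka client pod:

```shell
# Describe the topic; the output lists one line per partition.
kubectl -n kafka exec -it kafka-client -- kafka-topics \
  --zookeeper kafka-cp-zookeeper-headless:2181 \
  --topic demo-topic \
  --describe
```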

Deploy a Dapr pub/sub component

Deploy the Dapr Kafka pub/sub component for Kubernetes. Paste the following YAML into a file named kafka-pubsub.yaml:

  apiVersion: dapr.io/v1alpha1
  kind: Component
  metadata:
    name: autoscaling-pubsub
  spec:
    type: pubsub.kafka
    version: v1
    metadata:
      - name: brokers
        value: kafka-cp-kafka.kafka.svc.cluster.local:9092
      - name: authRequired
        value: "false"
      - name: consumerID
        value: autoscaling-subscriber

The above YAML defines the pub/sub component that your application uses to subscribe to the topic you created earlier (demo-topic).

If you used the Kafka Helm install instructions, you can leave the brokers value as-is. Otherwise, change this value to the connection string for your Kafka brokers.

Notice the autoscaling-subscriber value set for consumerID. This value is used later to ensure that KEDA and your deployment use the same Kafka partition offset.
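For reference, a declarative subscription against this component might look like the following sketch. The app ID (autoscaling-app) and route (/messages) are placeholders; substitute the values from your own application:

```yaml
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: demo-topic-subscription
spec:
  pubsubname: autoscaling-pubsub   # matches the Component name above
  topic: demo-topic                # the topic created earlier
  routes:
    default: /messages             # placeholder endpoint on your app
scopes:
- autoscaling-app                  # placeholder Dapr app ID
```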

Now, deploy the component to the cluster:

  kubectl apply -f kafka-pubsub.yaml

Deploy KEDA autoscaler for Kafka

Deploy the KEDA scaling object that:

  • Monitors the lag on the specified Kafka topic
  • Configures the Kubernetes Horizontal Pod Autoscaler (HPA) to scale your Dapr deployment in and out

Paste the following into a file named kafka_scaler.yaml, and configure your Dapr deployment in the required place:

  apiVersion: keda.sh/v1alpha1
  kind: ScaledObject
  metadata:
    name: subscriber-scaler
  spec:
    scaleTargetRef:
      name: <REPLACE-WITH-DAPR-DEPLOYMENT-NAME>
    pollingInterval: 15
    minReplicaCount: 0
    maxReplicaCount: 10
    triggers:
      - type: kafka
        metadata:
          topic: demo-topic
          bootstrapServers: kafka-cp-kafka.kafka.svc.cluster.local:9092
          consumerGroup: autoscaling-subscriber
          lagThreshold: "5"

Let’s review a few metadata values in the file above:

  • scaleTargetRef/name: The name of the Kubernetes Deployment that runs your Dapr app (the deployment KEDA scales).
  • pollingInterval: How often, in seconds, KEDA checks Kafka for the current topic partition offset.
  • minReplicaCount: The minimum number of replicas KEDA keeps for your deployment. If your application takes a long time to start, consider setting this to 1 so at least one replica is always running; otherwise, set it to 0 and KEDA creates the first replica for you.
  • maxReplicaCount: The maximum number of replicas for your deployment. Given how Kafka partition offsets work, don't set this value higher than the total number of topic partitions (10 in this example).
  • triggers/metadata/topic: Set to the same topic your Dapr deployment subscribes to (in this example, demo-topic).
  • triggers/metadata/bootstrapServers: Set to the same broker connection string used in the kafka-pubsub.yaml file.
  • triggers/metadata/consumerGroup: Set to the same value as the consumerID in the kafka-pubsub.yaml file.

Important

Setting the connection string, topic, and consumer group to the same values for both the Dapr service subscription and the KEDA scaler configuration is critical to ensure the autoscaling works correctly.
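To make those relationships concrete, the following sketch shows how the relevant fields of a subscriber Deployment line up with the scaler. The deployment name (my-subscriber), app ID (autoscaling-app), port, and image are placeholder values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-subscriber            # use this name in scaleTargetRef/name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-subscriber
  template:
    metadata:
      labels:
        app: my-subscriber
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "autoscaling-app"   # placeholder Dapr app ID
        dapr.io/app-port: "3000"            # placeholder app port
    spec:
      containers:
      - name: subscriber
        image: <your-subscriber-image>      # placeholder image
```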

Deploy the KEDA scaler to Kubernetes:

  kubectl apply -f kafka_scaler.yaml
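You can check that the ScaledObject was accepted and that KEDA created the backing HPA:

```shell
# The ScaledObject should report READY=True once KEDA can reach Kafka.
kubectl get scaledobject subscriber-scaler

# KEDA manages an HPA on your behalf; its name is prefixed with "keda-hpa-".
kubectl get hpa
```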

All done!

See the KEDA scaler work

Now that the KEDA ScaledObject is configured, your deployment scales based on the lag of the Kafka topic. Learn more about configuring KEDA for Kafka topics.

As defined in the KEDA scaler manifest, you can now start publishing messages to your Kafka topic demo-topic and watch the pods autoscale once the consumer group lag exceeds the threshold of 5 messages. Publish messages to the Kafka Dapr component by using the Dapr publish CLI command.
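For example, assuming a Dapr-enabled app with the placeholder app ID autoscaling-app is running via the Dapr CLI, you could publish a burst of test messages to build up lag:

```shell
# Publish 100 messages to the demo-topic topic through the Dapr component.
for i in $(seq 1 100); do
  dapr publish \
    --publish-app-id autoscaling-app \
    --pubsub autoscaling-pubsub \
    --topic demo-topic \
    --data "{\"msg\": \"message $i\"}"
done

# Watch the subscriber pods scale out as the lag exceeds the threshold.
kubectl get pods -w
```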

Next steps

Learn about scaling your Dapr pub/sub or binding application with KEDA in Azure Container Apps

Last modified June 19, 2023: Merge pull request #3565 from dapr/aacrawfi/skip-secrets-close (b1763bf)