How-To: Schedule and handle triggered jobs

Learn how to use the jobs API to schedule and handle triggered jobs

Now that you’ve learned what the jobs building block provides, let’s look at an example of how to use the API. The code example below describes an application that schedules database backup jobs and handles them at trigger time: the point at which the job is sent back to the application because it has reached its dueTime.

Start the Scheduler service

When you run dapr init in either self-hosted mode or on Kubernetes, the Dapr Scheduler service is started.
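For reference, a typical initialization looks like this (the Scheduler service starts as part of it):

# Self-hosted: initializes Dapr locally, including the Scheduler service
dapr init

# Kubernetes: deploys the Dapr control plane, including the Scheduler service
dapr init -k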

Set up the Jobs API

In your code, set up and schedule jobs within your application.

The following Go SDK code sample schedules a job named prod-db-backup. The job’s data references the backup database ("my-prod-db"), and the job is scheduled with ScheduleJobAlpha1, passing the jobData, which includes:

  • The backup Task name
  • The backup task’s Metadata, including:
    • The database name (DBName)
    • The database location (BackupLocation)
package main

import (
    //...
    daprc "github.com/dapr/go-sdk/client"
    "github.com/dapr/go-sdk/examples/dist-scheduler/api"
    "github.com/dapr/go-sdk/service/common"
    daprs "github.com/dapr/go-sdk/service/grpc"
)

func main() {
    // Initialize the server
    server, err := daprs.NewService(":50070")
    // ...

    if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil {
        log.Fatalf("failed to register job event handler: %v", err)
    }

    log.Println("starting server")
    go func() {
        if err = server.Start(); err != nil {
            log.Fatalf("failed to start server: %v", err)
        }
    }()
    // ...

    // Set up backup location
    jobData, err := json.Marshal(&api.DBBackup{
        Task: "db-backup",
        Metadata: api.Metadata{
            DBName:         "my-prod-db",
            BackupLocation: "/backup-dir",
        },
    },
    )
    // ...
}
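The api.DBBackup and api.Metadata types come from the dist-scheduler example package. As a rough sketch, based only on the fields used above (the JSON tags are illustrative assumptions, not the example’s actual definitions):

// Sketch of the payload types used above; field names come from the sample,
// JSON tags are assumptions for illustration.
type DBBackup struct {
    Task     string   `json:"task"`
    Metadata Metadata `json:"metadata"`
}

type Metadata struct {
    DBName         string `json:"dbName"`
    BackupLocation string `json:"backupLocation"`
}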

The job is scheduled with a Schedule and the desired number of Repeats. Together, these settings determine the maximum number of times the job is triggered and sent back to the app.

In this example, the Schedule is @every 1s, so the job is triggered every second and sent back to the application until it reaches the maximum number of Repeats (10).

// ...
// Set up the job
job := daprc.Job{
    Name:     "prod-db-backup",
    Schedule: "@every 1s",
    Repeats:  10,
    Data: &anypb.Any{
        Value: jobData,
    },
}
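With the job defined, it is scheduled through the Dapr client’s ScheduleJobAlpha1 method. A minimal sketch, assuming a client created with daprc.NewClient() and the job value from the snippet above:

// ...
// Create the Dapr client and schedule the job defined above.
client, err := daprc.NewClient()
if err != nil {
    log.Fatalf("failed to create Dapr client: %v", err)
}
defer client.Close()

// Schedule the job; the job is then triggered according to its Schedule.
if err = client.ScheduleJobAlpha1(context.Background(), &job); err != nil {
    log.Fatalf("failed to schedule job: %v", err)
}
log.Println("scheduled job:", job.Name)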

At trigger time, the prodDBBackupHandler function is called and executes the desired business logic for this job. For example:

HTTP

When you create a job using Dapr’s Jobs API, Dapr will automatically assume there is an endpoint available at /job/<job-name>. For instance, if you schedule a job named test, Dapr expects your application to listen for job events at /job/test. Ensure your application has a handler set up for this endpoint to process the job when it is triggered. For example:

Note: The following example is in Go but applies to any programming language.

func main() {
    ...
    http.HandleFunc("/job/", handleJob)
    http.HandleFunc("/job/<job-name>", specificJob)
    ...
}

func specificJob(w http.ResponseWriter, r *http.Request) {
    // Handle specific triggered job
}

func handleJob(w http.ResponseWriter, r *http.Request) {
    // Handle the triggered jobs
}
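The handler bodies above are stubs. As a minimal sketch of what processing might look like, using only the standard library (assumes io, log, and net/http are imported; the payload shape is whatever you scheduled the job with):

// Sketch of specificJob filled in: read the triggered job's payload and acknowledge it.
func specificJob(w http.ResponseWriter, r *http.Request) {
    defer r.Body.Close()

    body, err := io.ReadAll(r.Body)
    if err != nil {
        http.Error(w, "failed to read job payload", http.StatusBadRequest)
        return
    }

    log.Printf("job triggered at %s with payload: %s", r.URL.Path, body)
    w.WriteHeader(http.StatusOK)
}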

gRPC

When a job reaches its scheduled trigger time, the triggered job is sent back to the application via the following callback function:

Note: The following example is in Go but applies to any programming language with gRPC support.

import rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"

...

func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) {
    // Handle the triggered job, then acknowledge it by returning a response
    return &rtv1.JobEventResponse{}, nil
}

This function processes the triggered jobs within the context of your gRPC server. When you set up the server, ensure that you register the callback server, which will invoke this function when a job is triggered:

...
js := &JobService{}
rtv1.RegisterAppCallbackAlphaServer(server, js)

In this setup, you have full control over how triggered jobs are received and processed, as they are routed directly through this gRPC method.
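The JobService type registered above is your own server type. A minimal sketch, assuming standard protoc-gen-go-grpc output for the AppCallbackAlpha service (the embedded Unimplemented server name is an assumption), so that only OnJobEventAlpha1 needs to be implemented:

// Sketch of the server type registered above. Embedding the generated
// Unimplemented server is an assumption based on standard gRPC code generation.
type JobService struct {
    rtv1.UnimplementedAppCallbackAlphaServer
}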

SDKs

For SDK users, handling triggered jobs is simpler. When a job is triggered, Dapr will automatically route the job to the event handler you set up during the server initialization. For example, in Go, you’d register the event handler like this:

...
if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil {
    log.Fatalf("failed to register job event handler: %v", err)
}

Dapr takes care of the underlying routing. When the job is triggered, your prodDBBackupHandler function is called with the triggered job data. Here’s an example of handling the triggered job:

// ...

// At job trigger time this function is called
func prodDBBackupHandler(ctx context.Context, job *common.JobEvent) error {
    var jobData common.Job
    if err := json.Unmarshal(job.Data, &jobData); err != nil {
        // ...
    }

    var jobPayload api.DBBackup
    if err := json.Unmarshal(job.Data, &jobPayload); err != nil {
        // ...
    }

    // jobCount is assumed to be a package-level counter declared elsewhere in the example
    fmt.Printf("job %d received:\n type: %v \n typeurl: %v\n value: %v\n extracted payload: %v\n", jobCount, job.JobType, jobData.TypeURL, jobData.Value, jobPayload)
    jobCount++

    return nil
}

Run the Dapr sidecar

Once you’ve set up the Jobs API in your application, run the Dapr sidecar in a terminal window with the following command.

dapr run --app-id=distributed-scheduler \
    --metrics-port=9091 \
    --dapr-grpc-port 50001 \
    --app-port 50070 \
    --app-protocol grpc \
    --log-level debug \
    go run ./main.go

Next steps
