Jobs - Run to Completion
A Job creates one or more Pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up the Pods it created.
A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).
You can also use a Job to run multiple Pods in parallel.
Running an example Job
Here is an example Job config. It computes π to 2000 places and prints it out. It takes around 10s to complete.
controllers/job.yaml
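A manifest along the following lines can be used. This is a sketch based on the pi example used throughout this page; the published controllers/job.yaml may differ in small details (for example the exact backoffLimit value, which is assumed here).

# Sketch of controllers/job.yaml, reconstructed from the pi example on this page.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4   # assumed value; see the backoff section below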
You can run the example with this command:
kubectl apply -f https://k8s.io/examples/controllers/job.yaml
job "pi" created
Check on the status of the Job with kubectl:
kubectl describe jobs/pi
Name:           pi
Namespace:      default
Selector:       controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495
Labels:         controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495
                job-name=pi
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Tue, 07 Jun 2016 10:56:16 +0200
Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:       controller-uid=b1db589a-2c8d-11e6-b324-0209dc45a495
                job-name=pi
  Containers:
   pi:
    Image:      perl
    Port:
    Command:
      perl
      -Mbignum=bpi
      -wle
      print bpi(2000)
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  FirstSeen    LastSeen    Count    From               SubobjectPath    Type      Reason             Message
  ---------    --------    -----    ----               -------------    --------  ------             -------
  1m           1m          1        {job-controller }                   Normal    SuccessfulCreate   Created pod: pi-dtn4q
To view completed Pods of a Job, use kubectl get pods.
To list all the Pods that belong to a Job in a machine readable form, you can use a command like this:
pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods
pi-aiw0a
Here, the selector is the same as the selector for the Job. The --output=jsonpath option specifies an expression that just gets the name from each Pod in the returned list.
View the standard output of one of the pods:
kubectl logs $pods
The output is similar to this:
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
Writing a Job Spec
As with all other Kubernetes config, a Job needs apiVersion, kind, and metadata fields.

A Job also needs a .spec section.
Pod Template
The .spec.template is the only required field of the .spec.

The .spec.template is a pod template. It has exactly the same schema as a pod, except it is nested and does not have an apiVersion or kind.
In addition to required fields for a Pod, a pod template in a Job must specify appropriate labels (see pod selector) and an appropriate restart policy.

Only a RestartPolicy equal to Never or OnFailure is allowed.
Pod Selector
The .spec.selector field is optional. In almost all cases you should not specify it. See the section on specifying your own pod selector.
Parallel Jobs
There are three main types of task suitable to run as a Job:
- Non-parallel Jobs
  - normally, only one Pod is started, unless the Pod fails.
  - the Job is complete as soon as its Pod terminates successfully.
- Parallel Jobs with a fixed completion count:
  - specify a non-zero positive value for .spec.completions.
  - the Job represents the overall task, and is complete when there is one successful Pod for each value in the range 1 to .spec.completions.
  - not implemented yet: Each Pod is passed a different index in the range 1 to .spec.completions.
- Parallel Jobs with a work queue:
  - do not specify .spec.completions, default to .spec.parallelism.
  - the Pods must coordinate amongst themselves or an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue.
  - each Pod is independently capable of determining whether or not all its peers are done, and thus that the entire Job is done.
  - when any Pod from the Job terminates with success, no new Pods are created.
  - once at least one Pod has terminated with success and all Pods are terminated, then the Job is completed with success.
  - once any Pod has exited with success, no other Pod should still be doing any work for this task or writing any output. They should all be in the process of exiting.

For a non-parallel Job, you can leave both .spec.completions and .spec.parallelism unset. When both are unset, both are defaulted to 1.
For a fixed completion count Job, you should set .spec.completions to the number of completions needed. You can set .spec.parallelism, or leave it unset and it will default to 1.

For a work queue Job, you must leave .spec.completions unset, and set .spec.parallelism to a non-negative integer.
For more information about how to make use of the different types of job, see the job patterns section.
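As a concrete illustration of a fixed completion count Job (the name, image, and command below are placeholders, not taken from this page), the following spec asks for eight successful completions with at most two Pods running at a time:

# Hypothetical fixed completion count Job: complete once 8 Pods have
# succeeded, running at most 2 Pods in parallel at any moment.
apiVersion: batch/v1
kind: Job
metadata:
  name: fixed-count-example
spec:
  completions: 8
  parallelism: 2
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one work item && sleep 5"]
      restartPolicy: Never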
Controlling Parallelism
The requested parallelism (.spec.parallelism) can be set to any non-negative value. If it is unspecified, it defaults to 1. If it is specified as 0, then the Job is effectively paused until it is increased.
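One way to take advantage of this is to patch a running Job's parallelism; this is a sketch (the Job name pi comes from the example above), not a workflow described on this page:

# Pause the Job by lowering its requested parallelism to 0 (sketch).
kubectl patch job/pi -p '{"spec":{"parallelism":0}}'
# Resume it later by raising the value again.
kubectl patch job/pi -p '{"spec":{"parallelism":1}}'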
Actual parallelism (number of pods running at any instant) may be more or less than requested parallelism, for a variety of reasons:
- For fixed completion count Jobs, the actual number of pods running in parallel will not exceed the number of remaining completions. Higher values of .spec.parallelism are effectively ignored.
- For work queue Jobs, no new Pods are started after any Pod has succeeded – remaining Pods are allowed to complete, however.
- If the Job controller has not had time to react.
- If the Job controller failed to create Pods for any reason (lack of ResourceQuota, lack of permission, etc.), then there may be fewer pods than requested.
- The Job controller may throttle new Pod creation due to excessive previous pod failures in the same Job.
- When a Pod is gracefully shut down, it takes time to stop.
Handling Pod and Container Failures
A container in a Pod may fail for a number of reasons, such as because the process in it exited with a non-zero exit code, or the container was killed for exceeding a memory limit, etc. If this happens, and the .spec.template.spec.restartPolicy = "OnFailure", then the Pod stays on the node, but the container is re-run. Therefore, your program needs to handle the case when it is restarted locally, or else specify .spec.template.spec.restartPolicy = "Never". See pod lifecycle for more information on restartPolicy.
An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node (node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the .spec.template.spec.restartPolicy = "Never". When a Pod fails, then the Job controller starts a new Pod. This means that your application needs to handle the case when it is restarted in a new pod. In particular, it needs to handle temporary files, locks, incomplete output and the like caused by previous runs.
Note that even if you specify .spec.parallelism = 1 and .spec.completions = 1 and .spec.template.spec.restartPolicy = "Never", the same program may sometimes be started twice.
If you do specify .spec.parallelism and .spec.completions both greater than 1, then there may be multiple pods running at once. Therefore, your pods must also be tolerant of concurrency.
Pod backoff failure policy
There are situations where you want to fail a Job after some amount of retries due to a logical error in configuration etc. To do so, set .spec.backoffLimit to specify the number of retries before considering a Job as failed. The back-off limit is set by default to 6. Failed Pods associated with the Job are recreated by the Job controller with an exponential back-off delay (10s, 20s, 40s …) capped at six minutes. The back-off count is reset if no new failed Pods appear before the Job’s next status check.
Note: Issue #54870 still exists for versions of Kubernetes prior to version 1.12
Note: If your job has restartPolicy = "OnFailure", keep in mind that your container running the Job will be terminated once the job backoff limit has been reached. This can make debugging the Job’s executable more difficult. We suggest setting restartPolicy = "Never" when debugging the Job or using a logging system to ensure output from failed Jobs is not lost inadvertently.
Job Termination and Cleanup
When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. The job object also remains after it is completed so that you can view its status. It is up to the user to delete old jobs after noting their status. Delete the job with kubectl (e.g. kubectl delete jobs/pi or kubectl delete -f ./job.yaml). When you delete the job using kubectl, all the pods it created are deleted too.
By default, a Job will run uninterrupted unless a Pod fails (restartPolicy=Never) or a Container exits in error (restartPolicy=OnFailure), at which point the Job defers to the .spec.backoffLimit described above. Once .spec.backoffLimit has been reached the Job will be marked as failed and any running Pods will be terminated.
Another way to terminate a Job is by setting an active deadline. Do this by setting the .spec.activeDeadlineSeconds field of the Job to a number of seconds. The activeDeadlineSeconds applies to the duration of the job, no matter how many Pods are created. Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status will become type: Failed with reason: DeadlineExceeded.
Note that a Job’s .spec.activeDeadlineSeconds takes precedence over its .spec.backoffLimit. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by activeDeadlineSeconds, even if the backoffLimit is not yet reached.
Example:
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-timeout
spec:
  backoffLimit: 5
  activeDeadlineSeconds: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
Note that both the Job spec and the Pod template spec within the Job have an activeDeadlineSeconds field. Ensure that you set this field at the proper level.
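One way to check why a Job stopped is to look at its status conditions; for example, for the pi-with-timeout Job above (a sketch, assuming the Job has already hit its deadline or backoff limit):

# Print the reason of the Job's Failed condition, e.g. DeadlineExceeded.
kubectl get job pi-with-timeout -o jsonpath='{.status.conditions[?(@.type=="Failed")].reason}'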
Clean Up Finished Jobs Automatically
Finished Jobs are usually no longer needed in the system. Keeping them around in the system will put pressure on the API server. If the Jobs are managed directly by a higher level controller, such as CronJobs, the Jobs can be cleaned up by CronJobs based on the specified capacity-based cleanup policy.
TTL Mechanism for Finished Jobs
FEATURE STATE: Kubernetes v1.12 alpha
This feature is currently in an alpha state, meaning:
- The version names contain alpha (e.g. v1alpha1).
- Might be buggy. Enabling the feature may expose bugs. Disabled by default.
- Support for feature may be dropped at any time without notice.
- The API may change in incompatible ways in a later software release without notice.
- Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.
Another way to clean up finished Jobs (either Complete or Failed) automatically is to use a TTL mechanism provided by a TTL controller for finished resources, by specifying the .spec.ttlSecondsAfterFinished field of the Job.
When the TTL controller cleans up the Job, it will delete the Job in a cascading fashion, i.e. delete its dependent objects, such as Pods, together with the Job. Note that when the Job is deleted, its lifecycle guarantees, such as finalizers, will be honored.
For example:
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
The Job pi-with-ttl will be eligible to be automatically deleted 100 seconds after it finishes.
If the field is set to 0, the Job will be eligible to be automatically deleted immediately after it finishes. If the field is unset, this Job won’t be cleaned up by the TTL controller after it finishes.
Note that this TTL mechanism is alpha, with feature gate TTLAfterFinished. For more information, see the documentation for TTL controller for finished resources.
Job Patterns
The Job object can be used to support reliable parallel execution of Pods. The Job object is not designed to support closely-communicating parallel processes, as commonly found in scientific computing. It does support parallel processing of a set of independent but related work items. These might be emails to be sent, frames to be rendered, files to be transcoded, ranges of keys in a NoSQL database to scan, and so on.
In a complex system, there may be multiple different sets of work items. Here we are just considering one set of work items that the user wants to manage together: a batch job.
There are several different patterns for parallel computation, each with strengths and weaknesses. The tradeoffs are:
- One Job object for each work item, vs. a single Job object for all work items. The latter is better for large numbers of work items. The former creates some overhead for the user and for the system to manage large numbers of Job objects.
- Number of pods created equals number of work items, vs. each Pod can process multiple work items. The former typically requires less modification to existing code and containers. The latter is better for large numbers of work items, for similar reasons to the previous bullet.
- Several approaches use a work queue. This requires running a queue service, and modifications to the existing program or container to make it use the work queue. Other approaches are easier to adapt to an existing containerised application.
The tradeoffs are summarized here, with columns 2 to 4 corresponding to the above tradeoffs. The pattern names are also links to examples and more detailed description.
Pattern | Single Job object | Fewer pods than work items? | Use app unmodified? | Works in Kube 1.1?
---|---|---|---|---
Job Template Expansion | | | ✓ | ✓
Queue with Pod Per Work Item | ✓ | | sometimes | ✓
Queue with Variable Pod Count | ✓ | ✓ | | ✓
Single Job with Static Work Assignment | ✓ | | ✓ |
When you specify completions with .spec.completions, each Pod created by the Job controller has an identical spec. This means that all pods for a task will have the same command line and the same image, the same volumes, and (almost) the same environment variables. These patterns are different ways to arrange for pods to work on different things.
This table shows the required settings for .spec.parallelism and .spec.completions for each of the patterns. Here, W is the number of work items.
Pattern | .spec.completions | .spec.parallelism
---|---|---
Job Template Expansion | 1 | should be 1
Queue with Pod Per Work Item | W | any
Queue with Variable Pod Count | 1 | any
Single Job with Static Work Assignment | W | any
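As a rough sketch of the Job Template Expansion pattern (the file name job-tmpl.yaml, the $ITEM placeholder, and the item list below are illustrative only), you keep one Job manifest with a placeholder and generate one Job per work item from it:

# job-tmpl.yaml -- illustrative template; $ITEM is substituted per work item
apiVersion: batch/v1
kind: Job
metadata:
  name: process-item-$ITEM
spec:
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing item $ITEM && sleep 5"]
      restartPolicy: Never

Expanding and submitting the template can then be done with ordinary shell tooling:

# Expand the template once per work item and create the resulting Jobs.
for item in apple banana cherry; do
  sed "s/\$ITEM/$item/g" job-tmpl.yaml | kubectl apply -f -
done

Each expansion produces a separate Job object, matching the Job Template Expansion row in the tables above (one completion per Job, parallelism of 1).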
Advanced Usage
Specifying your own pod selector
Normally, when you create a Job object, you do not specify .spec.selector. The system defaulting logic adds this field when the Job is created. It picks a selector value that will not overlap with any other jobs.
However, in some cases, you might need to override this automatically set selector. To do this, you can specify the .spec.selector of the Job.
Be very careful when doing this. If you specify a label selector which is not unique to the pods of that Job, and which matches unrelated Pods, then pods of the unrelated job may be deleted, or this Job may count other Pods as completing it, or one or both Jobs may refuse to create Pods or run to completion. If a non-unique selector is chosen, then other controllers (e.g. ReplicationController) and their Pods may behave in unpredictable ways too. Kubernetes will not stop you from making a mistake when specifying .spec.selector.
Here is an example of a case when you might want to use this feature.
Say Job old is already running. You want existing Pods to keep running, but you want the rest of the Pods it creates to use a different pod template and for the Job to have a new name. You cannot update the Job because these fields are not updatable. Therefore, you delete Job old but leave its pods running, using kubectl delete jobs/old --cascade=false. Before deleting it, you make a note of what selector it uses:
kubectl get job old -o yaml
kind: Job
metadata:
  name: old
  ...
spec:
  selector:
    matchLabels:
      controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
  ...
Then you create a new Job with name new and you explicitly specify the same selector. Since the existing Pods have label controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002, they are controlled by Job new as well.
You need to specify manualSelector: true in the new Job since you are not using the selector that the system normally generates for you automatically.
kind: Job
metadata:
  name: new
  ...
spec:
  manualSelector: true
  selector:
    matchLabels:
      controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
  ...
The new Job itself will have a different uid from a8f3d00d-c6d2-11e5-9f87-42010af00002. Setting manualSelector: true tells the system that you know what you are doing and to allow this mismatch.
Alternatives
Bare Pods
When the node that a Pod is running on reboots or fails, the pod is terminated and will not be restarted. However, a Job will create new Pods to replace terminated ones. For this reason, we recommend that you use a Job rather than a bare Pod, even if your application requires only a single Pod.
Replication Controller
Jobs are complementary to Replication Controllers. A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job manages Pods that are expected to terminate (e.g. batch tasks).
As discussed in Pod Lifecycle, Job is only appropriate for pods with RestartPolicy equal to OnFailure or Never. (Note: If RestartPolicy is not set, the default value is Always.)
Single Job starts Controller Pod
Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort of custom controller for those Pods. This allows the most flexibility, but may be somewhat complicated to get started with and offers less integration with Kubernetes.
One example of this pattern would be a Job which starts a Pod which runs a script that in turn starts a Spark master controller (see spark example), runs a Spark driver, and then cleans up.
An advantage of this approach is that the overall process gets the completion guarantee of a Job object, but maintains complete control over what Pods are created and how work is assigned to them.
Cron Jobs
You can use a CronJob to create a Job that will run at specified times/dates, similar to the Unix tool cron.
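For reference, a minimal CronJob that runs the pi computation every hour might look like the following sketch; depending on your cluster version the CronJob API may be served as batch/v1beta1 (assumed here) or batch/v1:

apiVersion: batch/v1beta1   # assumed; newer clusters use batch/v1
kind: CronJob
metadata:
  name: pi-hourly           # illustrative name
spec:
  schedule: "0 * * * *"     # at the top of every hour
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: pi
            image: perl
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
          restartPolicy: Never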