API Documentation

JUG: Coarse Level Parallelisation for Python

The main use of jug is from the command line:

  jug status jugfile.py
  jug execute jugfile.py

Where jugfile.py is a Python script using the jug library.

class jug.Task(f, dep0, dep1, ..., kw_arg0=kw_val0, kw_arg1=kw_val1, ...)

Defines a task, which will call:

  f(dep0, dep1, ..., kw_arg0=kw_val0, kw_arg1=kw_val1, ...)


Attributes:
  • result

    Result value

    store
  • Methods

    can_load() - Returns whether the result is available.
    can_run() - Returns true if all the dependencies have their results available.
    dependencies(self) - Iterates over the task's first-level dependencies.
    fail() - Marks the task as failed.
    hash() - Returns the hash for this task.
    invalidate() - Equivalent to t.store.remove(t.hash()).
    is_failed() - Returns whether the task is in a failed state.
    is_loaded() - Returns True if the task is already loaded.
    is_locked() - Returns whether the task appears to be locked.
    load() - Loads the results from the storage backend.
    lock() - Tries to lock the task for the current process.
    run([force, save, debug_mode]) - Performs the task.
    unload() - Unloads results (can be useful for saving memory).
    unload_recursive() - Unloads this task's results and those of its dependencies.
    unlock() - Releases the lock.
    value
    • can_load()

      Returns whether result is available.

    • can_run()

      Returns true if all the dependencies have their results available.

    • dependencies(self)

      for dep in task.dependencies():
          ...

      Iterates over all the first-level dependencies of task t.

      Parameters:
        self : Task
      Returns:
        deps : generator
          A generator over all of self's dependencies

      See also

      recursive_dependencies - retrieve dependencies recursively
    • fail()

      Marks the task as failed

      If the lock was not held, an exception will be raised

    • hash()

      Returns the hash for this task.

      The results are cached, so the first call can be much slower than subsequent calls.

    • invalidate()

      Equivalent to t.store.remove(t.hash()). Useful for interactive use (i.e., in jug shell mode).

    • is_failed()

      is_failed = t.is_failed()

      Returns:
    • is_failed : boolean

      Whether the task is in failed state.


    • is_loaded()

      Returns True if the task is already loaded

    • is_locked()

      Note that only calling lock() and checking the result atomically checks for the lock(). This function can be much faster, though, and, therefore is sometimes useful.

      Returns:
    • is_locked : boolean

      Whether the task appears to be locked.


    • load()

      Loads the results from the storage backend.

      This function always loads from the backend even if the task is already loaded. You can use is_loaded as a check if you want to avoid this behaviour.

      Returns:
    • Nothing
    • lock()

      Tries to lock the task for the current process.

      Returns True if the lock was acquired. The correct usage pattern is:

      locked = task.lock()
      if locked:
          task.run()
      else:
          # someone else is already running this task!

      Note that using can_lock() can lead to race conditions. The pattern above is the only fully correct method.

      Returns:
    • locked : boolean

      Whether the lock was obtained.

    • result

      Result value

    • run(force=False, save=True, debug_mode=False)

      Performs the task.

      Parameters:
    • force : boolean, optional

      if true, always run the task (even if it ran before) (default: False)

      save : boolean, optional

      if true, save the result to the store (default: True)

      debug_mode : boolean, optional

      whether to run in debug mode (adds extra checks)

    • Returns:
    • val : return value from Task
    • unload()

      Unload results (can be useful for saving memory).

    • unload_recursive()

      Equivalent to:

      for tt in recursive_dependencies(t): tt.unload()
    • unlock()

      Releases the lock.

      If the lock was not held, this may remove another thread’s lock!

    class jug.Tasklet(base, f)

    A Tasklet is a light-weight Task.

    It looks like a Task, behaves like a Task, but its results are not saved in the backend.

    It is useful for very simple functions and is automatically generated on subscripting a Task object:

      t = Task(f, 1)
      tlet = t[0]

    tlet will be a Tasklet


    Attributes:
  • base
    f
  • Methods

    can_load 
    dependencies 
    unload 
    unload_recursive 
    value 

    class jug.TaskGenerator(f)

    @TaskGenerator
    def f(arg0, arg1, ...)

    Turns f from a function into a task generator.

    This means that calling f(arg0, arg1) results in: Task(f, arg0, arg1). This can make your jug-based code feel very similar to what you do with traditional Python.
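    The deferral can be sketched in plain Python (a hypothetical stand-in that returns a zero-argument closure instead of a real jug Task object):

      def task_generator_sketch(f):
          # Plain-Python analogue of @TaskGenerator: calling the decorated
          # function builds a deferred computation instead of running it,
          # the way TaskGenerator turns f(arg0, arg1) into Task(f, arg0, arg1).
          def make_task(*args, **kwargs):
              return lambda: f(*args, **kwargs)
          return make_task

      @task_generator_sketch
      def add(a, b):
          return a + b

      deferred = add(2, 3)   # no computation yet, only a deferred "task"
      result = deferred()    # 5, once "executed"

    In real jug code, execution happens when jug execute runs the jugfile, not when the closure is called.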

    Methods

    call 

    class jug.iteratetask(base, n)

    Examples:

      a, b = iteratetask(task, 2)
      for a in iteratetask(task, n):
          ...

    This creates an iterator over the sequence task[0], task[1], ..., task[n-1].

    Parameters:
  • task : Task(let)
    n : integer
  • Returns:
  • iterator
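    The indexing semantics can be illustrated with a plain-Python generator (a hypothetical sketch operating on an already-computed result, not on jug Task objects):

      def iteratetask_sketch(task_result, n):
          # Mirrors jug.iteratetask: yields task_result[0] ... task_result[n-1]
          for i in range(n):
              yield task_result[i]

      a, b = iteratetask_sketch((10, 20), 2)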
  • jug.value(obj)

    Loads a task object recursively. This correctly handles lists, dictionaries, and any other type handled by the tasks themselves.

    Parameters:
  • obj : object

    Anything that can be pickled or a Task

  • Returns:
  • value : object

    The result of the task obj

  • jug.CachedFunction(f, *args, **kwargs)

    is equivalent to:

      task = Task(f, *args, **kwargs)
      if not task.can_load():
          task.run()
      value = task.value()

    That is, it calls the function only if the value is not yet available, and caches the result for the future.

    You can often use bvalue to achieve similar results:

      task = Task(f, *args, **kwargs)
      value = bvalue(task)

    This alternative method is more flexible, but will only execute lazily. In particular, a jug status will not see past the bvalue call until jug execute is called to execute f, while a CachedFunction object will always execute.
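    The compute-only-if-missing behaviour can be sketched in plain Python, with a dict standing in for jug's storage backend (a hypothetical illustration, not jug's implementation):

      def cached_function_sketch(store, f, *args):
          # Compute f(*args) only when no stored result exists; otherwise load it.
          key = (f.__name__, args)
          if key not in store:          # like task.can_load() being False
              store[key] = f(*args)     # like task.run()
          return store[key]             # like task.value()

      store = {}
      calls = []

      def slow_square(x):
          calls.append(x)
          return x * x

      first = cached_function_sketch(store, slow_square, 7)   # computes
      second = cached_function_sketch(store, slow_square, 7)  # loads from the store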

    Parameters:
  • f : function

    Any function except unnamed (lambda) functions

  • Returns:
  • value : result

    Result of calling f(*args, **kwargs)

  • See also

    • bvalue : function

      An alternative way to achieve similar results to CachedFunction(f) is using bvalue.

    jug.CompoundTask(f, *args, **kwargs)

    f should be such that it returns a Task, which can depend on other Tasks (even recursively).

    If this cannot be loaded (i.e., it has not yet been run), then this becomes equivalent to:

      f(*args, **kwargs)

    However, if it can, then we get a pseudo-task which returns the same value without f ever being executed.

    Parameters:
  • f : function returning a jug.Task
  • Returns:
  • task : jug.Task
  • jug.CompoundTaskGenerator(f)

    @CompoundTaskGenerator
    def f(arg0, arg1, ...)

    Turns f from a function into a compound task generator.

    This means that calling f(arg0, arg1) results in: CompoundTask(f, arg0, arg1)


    jug.barrier()

    In a jug file, it ensures that all tasks defined up to that point have been completed. If they have not, parsing will (temporarily) stop at that point.

    This ensures that, after calling barrier() you are free to call value() to get any needed results.

    See also

    • bvalue : function

      Restricted version of this function. Often faster.

    jug.bvalue(t)

    Named after barrier + value, value = bvalue(t) is similar to:

      barrier()
      value = value(t)

    except that it only checks that t is complete (and not all tasks) and thus can be much faster than a full barrier() call.

    Thus, bvalue stops interpreting the Jugfile if its argument has not run yet. When it has run, then it returns its value.

    See also

    • barrier

      Checks that all tasks have results available.

    jug.set_jugdir(jugdir)

    Sets the jugdir. This is the programmatic equivalent of passing --jugdir=... on the command line.

    Parameters:
  • jugdir : str
  • Returns:
  • store : a jug backend
  • jug.init(jugfile='jugfile', jugdir='jugdata', on_error='exit', store=None)

    Initializes jug (creates the backend connection, ...). Imports the jugfile.

    Parameters:
  • jugfile : str, optional

    jugfile to import (default: ‘jugfile’)

    jugdir : str, optional

    jugdir to use (could be a path)

    on_error : str, optional

    What to do if import fails (default: exit)

    store : storage object, optional

    If used, this is returned as store again.

  • Returns:
  • store : storage object
    jugspace : dictionary
  • jug.is_jug_running()

    Returns True if this script is being executed by jug rather than by regular Python

    Task: contains the Task class.

    This is the main class for using jug.

    There are two main alternatives:

    • Use the Task class directly to build up tasks, such as Task(function, arg0, ...).
    • Rely on the TaskGenerator decorator as a shortcut for this.

    class jug.task.Task(f, dep0, dep1, ..., kw_arg0=kw_val0, kw_arg1=kw_val1, ...)

    Defines a task, which will call:

      f(dep0, dep1, ..., kw_arg0=kw_val0, kw_arg1=kw_val1, ...)


    Attributes:
  • result

    Result value

    store
  • Methods

    can_load() - Returns whether the result is available.
    can_run() - Returns true if all the dependencies have their results available.
    dependencies(self) - Iterates over the task's first-level dependencies.
    fail() - Marks the task as failed.
    hash() - Returns the hash for this task.
    invalidate() - Equivalent to t.store.remove(t.hash()).
    is_failed() - Returns whether the task is in a failed state.
    is_loaded() - Returns True if the task is already loaded.
    is_locked() - Returns whether the task appears to be locked.
    load() - Loads the results from the storage backend.
    lock() - Tries to lock the task for the current process.
    run([force, save, debug_mode]) - Performs the task.
    unload() - Unloads results (can be useful for saving memory).
    unload_recursive() - Unloads this task's results and those of its dependencies.
    unlock() - Releases the lock.
    value
    • can_load()

      Returns whether result is available.

    • can_run()

      Returns true if all the dependencies have their results available.

    • dependencies(self)

      for dep in task.dependencies():
          ...

      Iterates over all the first-level dependencies of task t.

      Parameters:
        self : Task
      Returns:
        deps : generator
          A generator over all of self's dependencies

      See also

      recursive_dependencies - retrieve dependencies recursively
    • fail()

      Marks the task as failed

      If the lock was not held, an exception will be raised

    • hash()

      Returns the hash for this task.

      The results are cached, so the first call can be much slower than subsequent calls.

    • invalidate()

      Equivalent to t.store.remove(t.hash()). Useful for interactive use (i.e., in jug shell mode).

    • is_failed()

      is_failed = t.is_failed()

      Returns:
    • is_failed : boolean

      Whether the task is in failed state.


    • is_loaded()

      Returns True if the task is already loaded

    • is_locked()

      Note that only calling lock() and checking the result atomically checks for the lock(). This function can be much faster, though, and, therefore is sometimes useful.

      Returns:
    • is_locked : boolean

      Whether the task appears to be locked.


    • load()

      Loads the results from the storage backend.

      This function always loads from the backend even if the task is already loaded. You can use is_loaded as a check if you want to avoid this behaviour.

      Returns:
    • Nothing
    • lock()

      Tries to lock the task for the current process.

      Returns True if the lock was acquired. The correct usage pattern is:

      locked = task.lock()
      if locked:
          task.run()
      else:
          # someone else is already running this task!

      Note that using can_lock() can lead to race conditions. The pattern above is the only fully correct method.

      Returns:
    • locked : boolean

      Whether the lock was obtained.

    • result

      Result value

    • run(force=False, save=True, debug_mode=False)

      Performs the task.

      Parameters:
    • force : boolean, optional

      if true, always run the task (even if it ran before) (default: False)

      save : boolean, optional

      if true, save the result to the store (default: True)

      debug_mode : boolean, optional

      whether to run in debug mode (adds extra checks)

    • Returns:
    • val : return value from Task
    • unload()

      Unload results (can be useful for saving memory).

    • unload_recursive()

      Equivalent to:

      for tt in recursive_dependencies(t): tt.unload()
    • unlock()

      Releases the lock.

      If the lock was not held, this may remove another thread’s lock!

    class jug.task.Tasklet(base, f)

    A Tasklet is a light-weight Task.

    It looks like a Task, behaves like a Task, but its results are not saved in the backend.

    It is useful for very simple functions and is automatically generated on subscripting a Task object:

      t = Task(f, 1)
      tlet = t[0]

    tlet will be a Tasklet


    Attributes:
  • base
    f
  • Methods

    can_load 
    dependencies 
    unload 
    unload_recursive 
    value 

    jug.task.recursive_dependencies(t, max_level=-1)

    • for dep in recursive_dependencies(t, max_level=-1):

    Returns a generator that lists all recursive dependencies of task t

    Parameters:
  • t : Task

    input task

    max_level : integer, optional

    Maximum recursion depth. Set to -1 or None for no recursion limit.

  • Returns:
  • deps : generator

    A generator over all dependencies

  • class jug.task.TaskGenerator(f)

    @TaskGenerator
    def f(arg0, arg1, ...)

    Turns f from a function into a task generator.

    This means that calling f(arg0, arg1) results in: Task(f, arg0, arg1). This can make your jug-based code feel very similar to what you do with traditional Python.

    Methods

    call 

    class jug.task.iteratetask(base, n)

    Examples:

      a, b = iteratetask(task, 2)
      for a in iteratetask(task, n):
          ...

    This creates an iterator over the sequence task[0], task[1], ..., task[n-1].

    Parameters:
  • task : Task(let)
    n : integer
  • Returns:
  • iterator
  • jug.task.value(obj)

    Loads a task object recursively. This correctly handles lists, dictionaries, and any other type handled by the tasks themselves.

    Parameters:
  • obj : object

    Anything that can be pickled or a Task

  • Returns:
  • value : object

    The result of the task obj

  • mapreduce: Build tasks that follow a map-reduce pattern.

    jug.mapreduce.mapreduce(reducer, mapper, inputs, map_step=4, reduce_step=8)

    Create a task that does roughly the following:

      reduce(reducer, map(mapper, inputs))

    This is only rough because the order of operations might differ. In particular, reducer should be a true reducer function (i.e., commutative and associative).

    Parameters:
  • reducer : associative, commutative function

    This should map Y_0, Y_1 -> Y'

  • mapper : function from X -> Y
    inputs : list of X
    map_step : integer, optional

    Number of mapping operations to do in one go. This is what defines an inner task. (default: 4)

    reduce_step : integer, optional

    Number of reduce operations to do in one go. (default: 8)

  • Returns:
  • task : jug.Task object
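    The intended semantics can be sketched with a sequential plain-Python stand-in (not jug's task-based implementation, which may batch and reorder the operations):

      from functools import reduce

      def mapreduce_sketch(reducer, mapper, inputs):
          # Sequential analogue of jug.mapreduce.mapreduce. Because jug may
          # reorder the reductions, `reducer` must be associative and commutative.
          return reduce(reducer, map(mapper, inputs))

      total = mapreduce_sketch(lambda a, b: a + b, lambda x: x * x, [1, 2, 3])  # 1 + 4 + 9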
  • jug.mapreduce.map(mapper, sequence, map_step=4)

    sequence' = map(mapper, sequence, map_step=4)

    Roughly equivalent to:

      sequence' = [Task(mapper, s) for s in sequence]

    except that the tasks are grouped in groups of map_step.
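    The grouping itself can be illustrated in plain Python (a hypothetical helper showing how inputs are batched; each chunk corresponds to one inner task):

      def grouped(sequence, map_step):
          # Batch the inputs in chunks of map_step, as jug.mapreduce.map does
          # when it builds one inner task per chunk.
          return [sequence[i:i + map_step] for i in range(0, len(sequence), map_step)]

      grouped(list(range(10)), 4)  # three chunks: sizes 4, 4 and 2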

    Parameters:
  • mapper : function

    function from A -> B

    sequence : list of A
    map_step : integer, optional

    number of elements to process per task. This should be set so that each task takes the right amount of time.

  • Returns:
  • sequence' : list of B

    sequence'[i] = mapper(sequence[i])

  • See also

    • mapreduce

    • currymap : function

      Curried version of this function.

    jug.mapreduce.reduce(reducer, inputs, reduce_step=8)

    task = reduce(reducer, inputs, reduce_step=8)

    Parameters:
  • reducer : associative, commutative function

    This should map Y_0, Y_1 -> Y'

  • inputs : list of X
    reduce_step : integer, optional

    Number of reduce operations to do in one go. (default: 8)

  • Returns:
  • task : jug.Task object

    jug.compound.CompoundTask(f, *args, **kwargs)

    f should be such that it returns a Task, which can depend on other Tasks (even recursively).

    If this cannot be loaded (i.e., it has not yet been run), then this becomes equivalent to:

      f(*args, **kwargs)

    However, if it can, then we get a pseudo-task which returns the same value without f ever being executed.

    Parameters:
  • f : function returning a jug.Task
  • Returns:
  • task : jug.Task
  • jug.compound.CompoundTaskGenerator(f)

    @CompoundTaskGenerator
    def f(arg0, arg1, ...)

    Turns f from a function into a compound task generator.

    This means that calling f(arg0, arg1) results in: CompoundTask(f, arg0, arg1)

    See also

    • TaskGenerator

    jug.compound.compound_task_execute(x, h)

    This is an internal function. Do not use directly.

    jug.utils.timed_path(path)

    Returns an object that returns path when passed to a jug Task, with the exception that it uses the path's mtime (modification time) and the file size in the hash. Thus, if the file is touched or changes size, this triggers an invalidation of the results (which propagates to all dependent tasks).

    Parameters:
  • ipath : str

    A filesystem path

  • Returns:
  • opath : str

    A task equivalent to (lambda: ipath).

  • jug.utils.identity(x)

    identity implements the identity function as a Task (i.e., value(identity(x)) == x)

    This seems pointless, but if x is, for example, a very large list, then using this function might speed up some computations. Consider:

      large = list(range(100000))
      large = jug.utils.identity(large)
      for i in range(100):
          Task(process, large, i)

    This way the list large is going to get hashed just once. Without the call to jug.utils.identity, it would get hashed at each loop iteration.


    Parameters:
  • x : any object
  • Returns:
  • x : x
  • class jug.utils.CustomHash(obj, hash_function)

    Set a custom hash function

    This is an advanced feature and you can shoot yourself in the foot with it. Make sure you know what you are doing. In particular, hash_function should be a strong hash: hash_function(obj0) == hash_function(obj1) is taken to imply that obj0 == obj1. The hash function should return a bytes object.

    You can use the helpers in the jug.hash module (in particular hash_one) to help you. The implementation of timed_path is a good example of how to use a CustomHash:

      def hash_with_mtime_size(path):
          from .hash import hash_one
          st = os.stat_result(os.stat(path))
          mtime = st.st_mtime
          size = st.st_size
          return hash_one((path, mtime, size))

      def timed_path(path):
          return CustomHash(path, hash_with_mtime_size)

    The path object (a string or bytes) is wrapped with a hashing function which checks the file value.
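    A stdlib stand-in for such a strong hash (using hashlib and pickle rather than jug.hash.hash_one; a hypothetical sketch, not jug's implementation):

      import hashlib
      import pickle

      def hash_one_sketch(obj):
          # A strong digest of a picklable object, returned as bytes,
          # as CustomHash requires of its hash_function.
          return hashlib.sha256(pickle.dumps(obj)).hexdigest().encode()

      h = hash_one_sketch(("data.txt", 1700000000.0, 4096))

    Changing any component of the tuple (here, the mtime or size) changes the digest, which is exactly what triggers invalidation in timed_path.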

    Parameters:
  • obj : any object
    hash_function : function

    This should take your object and return a bytes object

  • jug.utils.sync_move(src, dst)

    Sync the file and move it

    This ensures that the move is truly atomic

    Parameters:
  • src : filename

    Source file

    dst: filename

    Destination file

  • jug.utils.cached_glob(pat)

    A short-hand for:

      from jug import CachedFunction
      from glob import glob
      CachedFunction(glob, pattern)

    with the extra bonus that the results are returned sorted

    Parameters:
  • pat: Same as glob.glob
  • Returns:
  • files : list of str
  • jug.hooks.exit_checks.exit_after_n_tasks(n)

    Exit after a specific number of tasks have been executed

    Parameters:
  • n : int

    Number of tasks to execute

  • jug.hooks.exit_checks.exit_after_time(hours=0, minutes=0, seconds=0)

    Exit after a given amount of time has elapsed

    Note that this only checks the time after each task has finished executing. Thus if you are using this to limit the amount of time each process takes, make sure to specify a lower limit than what is needed.

    Parameters:
  • hours : number, optional
    minutes : number, optional
    seconds : number, optional
  • jug.hooks.exit_checks.exit_env_vars(environ=os.environ)

    Set exit markers based on the environment.

    The following variables are used if they are set (if they are not set, they are ignored).

    JUG_MAX_TASKS: Maximum nr. of tasks.

    JUG_MAX_HOURS: Maximum hours

    JUG_MAX_MINUTES: Maximum minutes

    JUG_MAX_SECONDS: Maximum seconds

    For the time based limits, see the comment on exit_after_time on how these limits are not strict as they are only checked after each task completion event.

    If any of the variables above is set, its value should be an int or an error will be raised.

    JUG_EXIT_IF_FILE_EXISTS: Set exit file name


    jug.hooks.exit_checks.exit_if_file_exists(fname)

    Before each task executes, check whether the file exists. If so, exit.

    Note that the check is only performed before a task is executed. Thus, jug will not exit immediately if it is currently executing another long-running task.

    Parameters:
  • fname : str

    path to check
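    The check-before-each-task behaviour can be sketched in plain Python (a hypothetical stand-in for the hook; a task already running is never interrupted):

      import os

      def run_unless_file_exists(tasks, fname):
          # The flag file is checked before each task starts, mirroring
          # exit_if_file_exists.
          done = 0
          for task in tasks:
              if os.path.exists(fname):
                  break
              task()
              done += 1
          return done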

  • jug.hooks.exit_checks.exit_when_true(f, function_takes_Task=False)

    Generic exit check.

    After each task, call the function f and exit if it returns true.

    Parameters:
  • f : function

    Function to call

    function_takes_Task : boolean, optional

    Whether to call the function with the task just executed (default: False)

  • jug.io module

    • write_task_out: write out results, possibly with metadata.

    class jug.io.NoLoad(base)

    NoLoad can be used to decorate a Task result such that when it is passed to another Task, then it is passed directly (instead of passing the result).

    This is for advanced usage.

    Attributes:
  • base
    f
    t
  • Methods

    can_load 
    dependencies 
    unload 
    unload_recursive 
    value 

    jug.io.write_task_out(result, oname, metadata_fname=None, metadata_format='yaml')

    Write out the results of a Task to file, possibly including metadata.

    If metadata_fname is not None, it should be the name of a file to which to write metadata on the computation.

    Parameters:
  • result: a Task object
    oname : str

    The target output filename

    metadata_fname : str, optional

    If not None, metadata will be written to this file.

    metadata_format : str, optional

    What format to write data in. Currently, 'yaml' & 'json' are supported.

  • jug.io.write_metadata(result, metadata_fname, metadata_format='yaml')

    Write out the metadata on a Task out.

    Parameters:
  • result: a Task object
    metadata_fname : str

    metadata will be written to this file.

    metadata_format : str, optional

    What format to write data in. Currently, 'yaml' & 'json' are supported.

  • jug.io.print_task_summary_table(options, groups)

    Print a task summary table given task groups.

    groups - [(group_title, {(task_name, count)})] grouped summary of tasks.
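    The shape of the groups argument can be illustrated with a plain-Python formatter (a hypothetical sketch that builds the rows instead of printing them; format_summary is not part of jug's API):

      def format_summary(groups):
          # groups has the shape [(group_title, {(task_name, count)})]
          # that print_task_summary_table expects.
          lines = []
          for title, tasks in groups:
              for name, count in sorted(tasks):
                  lines.append(f"{title:10s} {count:6d}  {name}")
          return lines

      rows = format_summary([("finished", {("process", 10), ("analyze", 2)})])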