Faasm
Faasm is a high-performance stateful serverless runtime.
Faasm provides multi-tenant isolation, but also lets functions share regions of memory. These shared memory regions give low-latency concurrent access to data, and are synchronised globally to support large-scale parallelism.
Faasm combines software fault isolation from WebAssembly with standard Linux tools to provide security and resource isolation at low cost. Faasm runs functions side-by-side as threads of a single runtime process, with low overheads and fast boot times. The underlying WebAssembly execution and code generation is handled by WAVM.
Faasm defines a custom host interface which lets functions perform serverless-specific tasks (e.g. invoking other functions and managing state), as well as interacting with the underlying host (e.g. using the filesystem and networking). The Faasm host interface achieves the same goal as WASI, but in a serverless-specific context.
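For illustration, below is a minimal sketch of a C/C++ function using this host interface to manage shared state. The header path and the `faasmReadState`/`faasmWriteState` calls are assumptions made for illustration; see the C/C++ functions and host interface docs linked below for the actual API.

```c++
// Minimal sketch only: the header and state calls are illustrative assumptions,
// not a definitive listing of the Faasm host interface.
#include "faasm/faasm.h"

#include <cstdint>

int main(int argc, char* argv[])
{
    // Read a value from distributed state into a local buffer
    uint8_t counter = 0;
    faasmReadState("counter", &counter, sizeof(counter));

    // Update it and write it back so other functions can see the change
    counter++;
    faasmWriteState("counter", &counter, sizeof(counter));

    return 0;
}
```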
A preprint of our paper on Faasm can be found here.
Quick start
You can start a Faasm cluster locally using the docker-compose.yml
file in the root of the project:
- docker-compose up --scale worker=2
Then run the Faasm CLI, from which you can build, deploy and invoke functions:
- # Start the CLI
- ./bin/cli.sh
- # Upload the demo "hello" function
- inv upload demo hello
- # Invoke the function
- inv invoke demo hello
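For context, the demo "hello" function uploaded above is just a plain C/C++ program compiled to WebAssembly. A minimal sketch of what such a function might look like is shown below; the exact contents of the bundled demo may differ, and the C/C++ functions docs linked further down cover the real build flow.

```c++
// Sketch of a minimal Faasm C/C++ function, similar in spirit to the demo
// "hello" function (the bundled demo may differ).
#include <cstdio>

int main(int argc, char* argv[])
{
    // Standard output is captured by the Faasm runtime
    printf("Hello Faasm!\n");
    return 0;
}
```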
Note that the first time you run the local set-up it will generate some machine code specific to your host. This is stored in the machine-code directory in the root of the project and reused on subsequent runs.
More information
More detail on some key features and implementations can be found below:
- Usage and set-up - using the CLI and other features.
- C/C++ functions - writing and deploying Faasm functions in C/C++.
- Python functions - isolating and executing functions in Python.
- Rust functions - links and resources for writing Faasm Rust functions.
- Distributed state - sharing state between functions.
- Faasm host interface - the serverless-specific interface between functions and the underlying host.
- Kubernetes and Knative integration - deploying Faasm as part of a full serverless platform.
- Bare metal/VM deployment - deploying Faasm on bare metal or VMs as a stand-alone system.
- Tensorflow Lite - performing inference in Faasm functions with TF Lite.
- API - invoking and managing functions and state through Faasm's HTTP API.
- MPI and OpenMP - executing existing MPI and OpenMP applications in Faasm.
- Local development - developing and modifying Faasm.
- Faasm.js - executing Faasm functions in the browser and on the server.
- Threading - executing multi-threaded applications.
- Proto-Functions - snapshot-and-restore to reduce cold starts.