JetStream
NATS has a built-in distributed persistence system called JetStream which enables new functionalities and higher qualities of service on top of the base ‘Core NATS’ functionalities and qualities of service.
JetStream is built into nats-server. You only need one of your NATS servers (or three or five, if you want fault tolerance against one or two simultaneous server failures) to be JetStream enabled for it to be available to all of the client applications.
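The examples in this section use the Go client (nats.go) purely as illustration; other clients expose equivalent APIs. Here is a minimal sketch that connects to a (hypothetical) local server and verifies that JetStream is available to the application; later sketches reuse a JetStream context obtained in the same way.

```go
package main

import (
	"fmt"
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to a local NATS server (assumed to have JetStream enabled).
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Obtain a JetStream context on top of the Core NATS connection.
	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// AccountInfo returns an error if JetStream is not available to this account.
	info, err := js.AccountInfo()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("JetStream is available: %d stream(s), %d consumer(s)\n", info.Streams, info.Consumers)
}
```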
JetStream was created to solve the problems identified with streaming in technology today - complexity, fragility, and a lack of scalability. Some technologies address these better than others, but no current streaming technology is truly multi-tenant, horizontally scalable, and supports multiple deployment models. No other technology that we are aware of can scale from edge to cloud under the same security context while having complete deployment observability for operations.
Goals
JetStream was developed with the following goals in mind:
- The system must be easy to configure and operate and be observable.
- The system must be secure and operate well with NATS 2.0 security models.
- The system must scale horizontally and be applicable to a high ingestion rate.
- The system must support multiple use cases.
- The system must self-heal and always be available.
- The system must have an API that is closer to core NATS.
- The system must allow NATS messages to be part of a stream as desired.
- The system must display payload agnostic behavior.
- The system must not have third party dependencies.
Functionalities enabled by JetStream
Streaming: temporal decoupling between the publishers and subscribers
One of the tenets of basic publish/subscribe messaging is that there is a required temporal coupling between the publishers and the subscribers: subscribers only receive the messages that are published when they are actively connected to the messaging system (i.e. they do not receive messages that are published while they are not subscribing or not running or disconnected). The traditional way for messaging systems to provide temporal decoupling of the publishers and subscribers is through the ‘durable subscriber’ functionality or sometimes through ‘queues’, but neither one is perfect:
- durable subscribers need to be created before the messages get published
- queues are meant for workload distribution and consumption, not to be used as a mechanism for message replay.
However, nowadays a new way to provide this temporal de-coupling has been devised and has become ‘mainstream’: streaming. Streams capture and store messages published on one (or more) subject and allow client applications to create ‘subscribers’ (i.e. JetStream consumers) at any time to ‘replay’ (or consume) all or some of the messages stored in the stream.
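For example, with the Go client a stream capturing a set of subjects can be created in a few lines; the stream and subject names below are hypothetical, and the JetStream context js is assumed to have been obtained as in the connection sketch above.

```go
// Sketch: create a stream that captures and stores every message published on "ORDERS.*".
func createOrdersStream(js nats.JetStreamContext) error {
	_, err := js.AddStream(&nats.StreamConfig{
		Name:     "ORDERS",
		Subjects: []string{"ORDERS.*"},
	})
	return err
}
```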
Replay policies
JetStream consumers support multiple replay policies, depending on whether the consuming application wants to receive either:
- all of the messages currently stored in the stream, meaning a complete ‘replay’; you can select the ‘replay policy’ (i.e. the speed of the replay) to be either:
- instant (meaning the messages are delivered to the consumer as fast as it can take them).
- original (meaning the messages are delivered to the consumer at the rate they were published into the stream, which can be very useful for example for staging production traffic).
- the last message stored in the stream, or the last message for each subject (as streams can capture more than one subject).
- starting from a specific sequence number.
- starting from a specific start time.
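As a sketch of how these choices can look in client code (again using the Go client and made-up subject names), the delivery start point and replay speed are selected through consumer options at subscription time; each call below creates its own ephemeral consumer, and the standard time package is assumed to be imported.

```go
// Sketch: selecting the delivery start point and replay speed of JetStream consumers.
func replayExamples(js nats.JetStreamContext) error {
	// All messages, delivered as fast as the subscriber can take them.
	if _, err := js.SubscribeSync("ORDERS.*", nats.DeliverAll(), nats.ReplayInstant()); err != nil {
		return err
	}
	// All messages, replayed at the rate they were originally published.
	if _, err := js.SubscribeSync("ORDERS.*", nats.DeliverAll(), nats.ReplayOriginal()); err != nil {
		return err
	}
	// Only the last message stored for each subject captured by the stream.
	if _, err := js.SubscribeSync("ORDERS.*", nats.DeliverLastPerSubject()); err != nil {
		return err
	}
	// Starting from a specific sequence number, or from a specific point in time.
	if _, err := js.SubscribeSync("ORDERS.*", nats.StartSequence(100)); err != nil {
		return err
	}
	_, err := js.SubscribeSync("ORDERS.*", nats.StartTime(time.Now().Add(-time.Hour)))
	return err
}
```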
Retention policies and limits
Practically speaking, streams can’t always just keep growing ‘forever’, so JetStream supports multiple retention policies as well as the ability to impose size limits on streams.
Limits
You can impose the following limits on a stream (a configuration sketch combining these settings appears at the end of this section):
- Maximum message age.
- Maximum total stream size (in bytes).
- Maximum number of messages in the stream.
- Maximum individual message size.
- Maximum number of consumers that can be defined for the stream at any given point in time.
You must also select a discard policy, which specifies what happens when the stream has reached one of its limits and a new message is published:
- discard old means that the stream automatically deletes the oldest message in the stream to make room for the new message.
- discard new means that the new message is discarded (and the JetStream publish call returns an error indicating that a limit was reached).
Retention policy
You can choose what kind of retention you want for each stream:
- limits (the default).
- interest (messages are kept in the stream only as long as there are consumers that have not yet consumed and acknowledged them).
- work queue (the stream is used as a shared queue and messages are removed from it as they are consumed).
Note that regardless of the retention policy selected, the limits (and the discard policy) always apply.
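The sketch below shows how limits, the discard policy and the retention policy can be combined in a single stream configuration with the Go client; every name and value is illustrative only.

```go
// Sketch: a stream configuration combining limits, a discard policy and a retention policy.
func createLimitedStream(js nats.JetStreamContext) error {
	_, err := js.AddStream(&nats.StreamConfig{
		Name:         "EVENTS",
		Subjects:     []string{"events.>"},
		MaxAge:       24 * time.Hour,    // maximum message age
		MaxBytes:     1 << 30,           // maximum total stream size (1 GiB)
		MaxMsgs:      1_000_000,         // maximum number of messages
		MaxMsgSize:   1 << 20,           // maximum individual message size (1 MiB)
		MaxConsumers: 10,                // maximum number of consumers on the stream
		Discard:      nats.DiscardOld,   // or nats.DiscardNew
		Retention:    nats.LimitsPolicy, // or nats.InterestPolicy / nats.WorkQueuePolicy
	})
	return err
}
```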
Persistent distributed storage
You can choose the durability as well as the resilience of the message storage according to your needs:
- Memory storage.
- File storage.
- Replication (from 1, meaning none, up to 5) between NATS servers for fault tolerance.
JetStream uses a NATS-optimized RAFT distributed quorum algorithm to distribute the persistence service between NATS servers in a cluster while maintaining immediate (rather than eventual) consistency, even in the face of server failures.
JetStream can also provide encryption at rest of the messages being stored.
In JetStream the configuration for storing messages is defined separately from how they are consumed. Storage is defined in a Stream and consuming messages is defined by multiple Consumers.
Stream replication factor
A stream’s replication factor (R, often referred to as the number of ‘replicas’) determines how many places it is stored, allowing you to balance risk against resource usage and performance. A stream that is easily rebuilt or is temporary might be memory based with R=1, and a stream that can tolerate some downtime might be file based with R=1.
Typical usage, balancing resilience to routine outages against performance, would be a file-based stream with R=3. A highly resilient but less performant and more expensive configuration is R=5, the maximum replication factor.
Rather than defaulting to the maximum, we suggest selecting the best option based on the use case behind the stream. This optimizes resource usage and creates a more resilient system at scale.
- Replicas=1 - Cannot operate during an outage of the server servicing the stream. Highly performant.
- Replicas=2 - No significant benefit at this time. We recommend using Replicas=3 instead.
- Replicas=3 - Can tolerate loss of one server servicing the stream. An ideal balance between risk and performance.
- Replicas=4 - No significant benefit over Replicas=3 except marginally in a 5 node cluster.
- Replicas=5 - Can tolerate simultaneous loss of two servers servicing the stream. Mitigates risk at the expense of performance.
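Storage type and replication factor are also part of the stream configuration; here is a hedged sketch of a file-based R=3 stream, using made-up names.

```go
// Sketch: a file-based stream replicated on three servers of the cluster.
func createReplicatedStream(js nats.JetStreamContext) error {
	_, err := js.AddStream(&nats.StreamConfig{
		Name:     "PAYMENTS",
		Subjects: []string{"payments.>"},
		Storage:  nats.FileStorage, // or nats.MemoryStorage
		Replicas: 3,                // R=3 tolerates the loss of one server
	})
	return err
}
```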
Mirroring between streams
JetStream also allows server administrators to easily mirror streams, for example between different JetStream domains in order to offer disaster recovery. You can also define a stream as one of the sources for another stream.
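As a sketch (with hypothetical stream names), a mirror and a sourcing stream could be declared like this with the Go client:

```go
// Sketch: mirror an existing "ORDERS" stream, and aggregate two streams as sources.
func createMirrorAndAggregate(js nats.JetStreamContext) error {
	// A mirror stream defines no subjects of its own; it copies its origin stream.
	if _, err := js.AddStream(&nats.StreamConfig{
		Name:   "ORDERS-MIRROR",
		Mirror: &nats.StreamSource{Name: "ORDERS"},
	}); err != nil {
		return err
	}
	// A stream can also pull messages from one or more source streams.
	_, err := js.AddStream(&nats.StreamConfig{
		Name:    "AGGREGATE",
		Sources: []*nats.StreamSource{{Name: "ORDERS"}, {Name: "PAYMENTS"}},
	})
	return err
}
```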
De-coupled flow control
JetStream provides de-coupled flow control over streams. The flow control is not ‘end to end’, where the publishers would be limited to publishing no faster than the slowest of all the consumers (i.e. the lowest common denominator) can receive; instead, it happens individually between each client application (publisher or consumer) and the NATS server.
When using the JetStream publish calls to publish to streams there is an acknowledgement mechanism between the publisher and the nats server, and you have the choice of making synchronous or asynchronous (i.e. ‘batched’) JetStream publish calls.
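For example, with the Go client a publisher can choose between the two styles as sketched below; the subject names are made up, and the standard fmt and time packages are assumed to be imported.

```go
// Sketch: synchronous and asynchronous (batched) JetStream publishes.
func publishExamples(js nats.JetStreamContext) error {
	// Synchronous: blocks until the server acknowledges the message.
	if _, err := js.Publish("ORDERS.new", []byte("order 1")); err != nil {
		return err
	}
	// Asynchronous: publish a batch, then wait for all acknowledgements.
	for i := 0; i < 100; i++ {
		if _, err := js.PublishAsync("ORDERS.new", []byte("order")); err != nil {
			return err
		}
	}
	select {
	case <-js.PublishAsyncComplete():
		return nil
	case <-time.After(5 * time.Second):
		return fmt.Errorf("timed out waiting for publish acknowledgements")
	}
}
```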
On the subscriber side the sending of messages from the nats server to the client applications receiving or consuming messages from streams is also flow controlled.
Exactly once message delivery
Because publications to streams using the JetStream publish calls are acknowledged by the server, the base quality of service offered by streams is ‘at least once’. This means that, while delivery is reliable and normally duplicate free, there are specific failure scenarios that could result in a publishing application (wrongly) believing that a message was not published successfully and therefore publishing it again, and failure scenarios in which a client application’s consumption acknowledgement gets lost, causing the server to re-send the message to the consumer. These failure scenarios, while rare and even difficult to reproduce, do exist and can result in perceived ‘message duplication’ at the application level.
Therefore, JetStream also offers an ‘exactly once‘ quality of service. For the publishing side it relies on the publishing application attaching a unique message or publication id in a message header and on the server keeping track of those ids for a configurable rolling period of time in order to detect the publisher publishing the same message twice. For the subscribers a double acknowledgement mechanism is used to avoid a message being erroneously re-sent to a subscriber by the server after some kinds of failures.
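As a sketch of both halves with the Go client (the message ID scheme and subject names are application-specific assumptions):

```go
// Sketch: de-duplicated publishing with a client-assigned message ID, plus a
// double-acknowledged consume.
func exactlyOnceExamples(js nats.JetStreamContext, msg *nats.Msg) error {
	// The server discards messages whose ID it has already seen within the
	// stream's configurable duplicate-tracking window.
	if _, err := js.Publish("ORDERS.new", []byte("order 42"), nats.MsgId("order-42")); err != nil {
		return err
	}
	// On the consuming side, AckSync waits for the server to confirm that the
	// acknowledgement was received (the "double acknowledgement").
	return msg.AckSync()
}
```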
Consumers
JetStream consumers are ‘views’ on a stream. They are subscribed to (or pulled from) by client applications to receive copies of, or to consume (if the stream is set up as a work queue), the messages stored in the stream.
Fast push consumers
Client applications can choose to use fast, un-acknowledged push (ordered) consumers to receive messages as fast as possible (for the selected replay policy) on a specified delivery subject or to an inbox. These consumers are meant to be used to ‘replay’ rather than ‘consume’ the messages in a stream.
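A hedged sketch of such a replay with the Go client (ordered, un-acknowledged, ephemeral; subject names are illustrative):

```go
// Sketch: replay a stream with a fast, ordered, un-acknowledged push consumer.
func replayWithOrderedConsumer(js nats.JetStreamContext) error {
	sub, err := js.SubscribeSync("ORDERS.*", nats.OrderedConsumer())
	if err != nil {
		return err
	}
	for {
		msg, err := sub.NextMsg(2 * time.Second)
		if err != nil {
			return err // e.g. a timeout once the replay has caught up
		}
		fmt.Printf("replayed: %s\n", msg.Data)
	}
}
```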
Horizontally scalable pull consumers with batching
Client applications can also use and share pull consumers, which are demand-driven, support batching, and must explicitly acknowledge message reception and processing. This means they can be used to consume (i.e. use the stream as a distributed queue) as well as process the messages in a stream.
Pull consumers can be, and are meant to be, shared between applications (just like queue groups) in order to provide easy and transparent horizontal scalability of the processing or consumption of messages in a stream, without having to worry about defining partitions or about fault tolerance.
Note: using pull consumers doesn’t mean that you can’t get updates (new messages published into the stream) ‘pushed’ in real time to your application, as you can pass a (reasonable) timeout to the consumer’s Fetch call and call it in a loop.
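Here is a sketch of such a loop with the Go client; the durable consumer name and batch size are arbitrary assumptions, and several instances of this function could run in parallel against the same consumer.

```go
// Sketch: horizontally scalable processing with a shared, durable pull consumer.
func processOrders(js nats.JetStreamContext) error {
	sub, err := js.PullSubscribe("ORDERS.*", "order-processor")
	if err != nil {
		return err
	}
	for {
		// Fetch a batch, waiting up to 5 seconds for new messages to arrive.
		msgs, err := sub.Fetch(10, nats.MaxWait(5*time.Second))
		if err != nil && err != nats.ErrTimeout {
			return err
		}
		for _, msg := range msgs {
			// ... process the message here ...
			if err := msg.Ack(); err != nil {
				return err
			}
		}
	}
}
```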
Consumer acknowledgements
While you can decide to use un-acknowledged consumers, trading quality of service for the fastest possible delivery of messages, most processing is not idempotent and requires higher qualities of service (such as the ability to automatically recover from various failure scenarios that could result in messages not being processed, or being processed more than once), so you will usually want to use acknowledged consumers. JetStream supports more than one kind of acknowledgement:
- Some consumers support acknowledging all of the messages up to the sequence number of the message being acknowledged.
- Some consumers provide the highest quality of service but require the reception and processing of each message to be acknowledged explicitly; you can also set the maximum amount of time the server will wait for an acknowledgement of a specific message before re-delivering it (to another process attached to the consumer).
- You can also send back negative acknowledgements.
- You can even send in progress acknowledgements (to indicate that you are still processing the message in question and need more time before acking or nacking it).
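The sketch below shows the three kinds of acknowledgement calls on a received message with the Go client; the surrounding decision logic is, of course, application-specific.

```go
// Sketch: positive, negative and "in progress" acknowledgements on a JetStream message.
func acknowledge(msg *nats.Msg, processedOK bool, needMoreTime bool) error {
	if needMoreTime {
		// Still working on the message: reset the server's ack-wait timer.
		return msg.InProgress()
	}
	if !processedOK {
		// Negative acknowledgement: ask the server to redeliver the message.
		return msg.Nak()
	}
	// Positive acknowledgement: the message will not be redelivered.
	return msg.Ack()
}
```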
Key Value Store
JetStream is a persistence layer, and streaming is only one of the functionalities built on top of that layer.
Another functionality (typically not available in, or even associated with, messaging systems) is the JetStream Key Value store: the ability to store, retrieve and delete values associated with a key, to watch (listen) for changes happening to a key, and even to retrieve a history of the values (and deletions) that have happened on a particular key.
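A sketch of the Key Value store with the Go client; the bucket name, key and history depth are arbitrary choices for illustration.

```go
// Sketch: put, get, watch and history on a Key Value bucket.
func kvExample(js nats.JetStreamContext) error {
	kv, err := js.CreateKeyValue(&nats.KeyValueConfig{Bucket: "profiles", History: 5})
	if err != nil {
		return err
	}
	if _, err := kv.Put("alice", []byte(`{"theme":"dark"}`)); err != nil {
		return err
	}
	entry, err := kv.Get("alice")
	if err != nil {
		return err
	}
	fmt.Printf("alice -> %s (revision %d)\n", entry.Value(), entry.Revision())

	// Watch (listen) for subsequent changes to the key.
	watcher, err := kv.Watch("alice")
	if err != nil {
		return err
	}
	defer watcher.Stop()

	// Retrieve the history of values (and deletions) for the key.
	_, err = kv.History("alice")
	return err
}
```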
Object Store
NOTICE: Technology Preview
The Object Store functionality is similar to the Key Value Store but is designed to store arbitrarily large ‘objects’ (e.g. files, even very large ones) rather than ‘values’ that are message-sized (i.e. limited to 1 MB by default).
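A sketch of the Object Store with the Go client; since this is a Technology Preview the API may still evolve, and the bucket name and file paths below are made up.

```go
// Sketch: store and retrieve a large file through an Object Store bucket.
func objectStoreExample(js nats.JetStreamContext) error {
	obj, err := js.CreateObjectStore(&nats.ObjectStoreConfig{Bucket: "backups"})
	if err != nil {
		return err
	}
	// The file is chunked into message-sized pieces under the hood; by default
	// PutFile uses the given path as the object name.
	if _, err := obj.PutFile("/tmp/database.dump"); err != nil {
		return err
	}
	// Reassemble the object into a new local file.
	return obj.GetFile("/tmp/database.dump", "/tmp/database.restored")
}
```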
Legacy
Note that JetStream completely replaces the legacy STAN (NATS Streaming) layer.