Getting started
Watermill up and running.
What is Watermill?
Watermill is a Golang library for working efficiently with message streams. It is intended for building event-driven applications. It can be used for event sourcing, RPC over messages, sagas, and whatever else comes to your mind. You can use conventional pub/sub implementations like Kafka or RabbitMQ, but also HTTP or MySQL binlog, if that fits your use case.
It comes with a set of Pub/Sub implementations and can be easily extended by your own.
Watermill also ships with standard middlewares like instrumentation, poison queue, throttling, correlation, and other tools used by every message-driven application.
Why use Watermill?
With more projects adopting the microservices pattern over recent years, we realized that synchronous communication is not always the right choice. Asynchronous methods have grown into a standard way to communicate.
But while there’s a lot of existing tooling for synchronous integration patterns (e.g. HTTP), correctly setting up a message-oriented project can be a challenge. There are many different message queues and streaming systems, each with different features and client library APIs.
Watermill aims to be the standard messaging library for Go, hiding all that complexity behind an API that is easy to understand. It provides all you might need for building an application based on events or other asynchronous patterns. After looking at the examples, you should be able to quickly integrate Watermill with your project.
Install
go get -u github.com/ThreeDotsLabs/watermill
One Minute Background
The basic idea behind event-driven applications stays always the same: listen for incoming messages and react to them. Watermill supports this behavior for multiple publishers and subscribers.
The core part of Watermill is the Message. It is as important as http.Request is for the http package. Most Watermill features use this struct in some way.
Even though Pub/Sub libraries come with complex features, for Watermill it’s enough to implement two interfaces to start working with them: the Publisher and the Subscriber.
type Publisher interface {
    Publish(topic string, messages ...*Message) error
    Close() error
}

type Subscriber interface {
    Subscribe(ctx context.Context, topic string) (<-chan *Message, error)
    Close() error
}
Subscribing for Messages
Let’s start with subscribing. Subscribe expects a topic name and returns a channel of incoming messages. What exactly a topic means depends on the Pub/Sub implementation.
messages, err := subscriber.Subscribe(ctx, "example.topic")
if err != nil {
    panic(err)
}

for msg := range messages {
    fmt.Printf("received message: %s, payload: %s\n", msg.UUID, string(msg.Payload))
    msg.Ack()
}
See detailed examples below for supported PubSubs.
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/go-channel/main.go
// ...
package main
import (
    "context"
    "log"
    "time"

    "github.com/ThreeDotsLabs/watermill"
    "github.com/ThreeDotsLabs/watermill/message"
    "github.com/ThreeDotsLabs/watermill/pubsub/gochannel"
)

func main() {
    pubSub := gochannel.NewGoChannel(
        gochannel.Config{},
        watermill.NewStdLogger(false, false),
    )

    messages, err := pubSub.Subscribe(context.Background(), "example.topic")
    if err != nil {
        panic(err)
    }

    go process(messages)
// ...
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/go-channel/main.go
// ...
func process(messages <-chan *message.Message) {
    for msg := range messages {
        log.Printf("received message: %s, payload: %s", msg.UUID, string(msg.Payload))

        // we need to Acknowledge that we received and processed the message,
        // otherwise, it will be resent over and over again.
        msg.Ack()
    }
}
Running in Docker
The easiest way to run Watermill locally with Kafka is using Docker.
Full source: _examples/pubsubs/kafka/docker-compose.yml
version: '3'
services:
  server:
    image: golang:1.11
    restart: unless-stopped
    depends_on:
      - kafka
    volumes:
      - .:/app
      - $GOPATH/pkg/mod:/go/pkg/mod
    working_dir: /app
    command: go run main.go

  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    restart: unless-stopped
    logging:
      driver: none
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  kafka:
    image: confluentinc/cp-kafka:latest
    restart: unless-stopped
    depends_on:
      - zookeeper
    logging:
      driver: none
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
The source should go to main.go.
To run the example, execute docker-compose up.
A more detailed explanation of how this works (and how to add live code reload) can be found in the Go Docker dev environment article.
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/kafka/main.go
// ...
package main
import (
    "context"
    "log"
    "time"

    "github.com/Shopify/sarama"
    "github.com/ThreeDotsLabs/watermill"
    "github.com/ThreeDotsLabs/watermill-kafka/v2/pkg/kafka"
    "github.com/ThreeDotsLabs/watermill/message"
)

func main() {
    saramaSubscriberConfig := kafka.DefaultSaramaSubscriberConfig()
    // equivalent of auto.offset.reset: earliest
    saramaSubscriberConfig.Consumer.Offsets.Initial = sarama.OffsetOldest

    subscriber, err := kafka.NewSubscriber(
        kafka.SubscriberConfig{
            Brokers:               []string{"kafka:9092"},
            Unmarshaler:           kafka.DefaultMarshaler{},
            OverwriteSaramaConfig: saramaSubscriberConfig,
            ConsumerGroup:         "test_consumer_group",
        },
        watermill.NewStdLogger(false, false),
    )
    if err != nil {
        panic(err)
    }

    messages, err := subscriber.Subscribe(context.Background(), "example.topic")
    if err != nil {
        panic(err)
    }

    go process(messages)
// ...
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/kafka/main.go
// ...
func process(messages <-chan *message.Message) {
    for msg := range messages {
        log.Printf("received message: %s, payload: %s", msg.UUID, string(msg.Payload))

        // we need to Acknowledge that we received and processed the message,
        // otherwise, it will be resent over and over again.
        msg.Ack()
    }
}
Running in Docker
The easiest way to run Watermill locally with NATS is using Docker.
Full source: _examples/pubsubs/nats-streaming/docker-compose.yml
version: '3'
services:
  server:
    image: golang:1.11
    restart: unless-stopped
    depends_on:
      - nats-streaming
    volumes:
      - .:/app
      - $GOPATH/pkg/mod:/go/pkg/mod
    working_dir: /app
    command: go run main.go

  nats-streaming:
    image: nats-streaming:0.11.2
    restart: unless-stopped
The source should go to main.go.
To run the example, execute docker-compose up.
A more detailed explanation of how this works (and how to add live code reload) can be found in the Go Docker dev environment article.
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/nats-streaming/main.go
// ...
package main
import (
    "context"
    "log"
    "time"

    stan "github.com/nats-io/stan.go"

    "github.com/ThreeDotsLabs/watermill"
    "github.com/ThreeDotsLabs/watermill-nats/pkg/nats"
    "github.com/ThreeDotsLabs/watermill/message"
)

func main() {
    subscriber, err := nats.NewStreamingSubscriber(
        nats.StreamingSubscriberConfig{
            ClusterID:        "test-cluster",
            ClientID:         "example-subscriber",
            QueueGroup:       "example",
            DurableName:      "my-durable",
            SubscribersCount: 4, // how many goroutines should consume messages
            CloseTimeout:     time.Minute,
            AckWaitTimeout:   time.Second * 30,
            StanOptions: []stan.Option{
                stan.NatsURL("nats://nats-streaming:4222"),
            },
            Unmarshaler: nats.GobMarshaler{},
        },
        watermill.NewStdLogger(false, false),
    )
    if err != nil {
        panic(err)
    }

    messages, err := subscriber.Subscribe(context.Background(), "example.topic")
    if err != nil {
        panic(err)
    }

    go process(messages)
// ...
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/nats-streaming/main.go
// ...
func process(messages <-chan *message.Message) {
    for msg := range messages {
        log.Printf("received message: %s, payload: %s", msg.UUID, string(msg.Payload))

        // we need to Acknowledge that we received and processed the message,
        // otherwise, it will be resent over and over again.
        msg.Ack()
    }
}
Running in Docker
You can run Google Cloud Pub/Sub emulator locally for development.
Full source: _examples/pubsubs/googlecloud/docker-compose.yml
version: '3'
services:
  server:
    image: golang:1.11
    restart: unless-stopped
    depends_on:
      - googlecloud
    volumes:
      - .:/app
      - $GOPATH/pkg/mod:/go/pkg/mod
    environment:
      # use local emulator instead of google cloud engine
      PUBSUB_EMULATOR_HOST: "googlecloud:8085"
    working_dir: /app
    command: go run main.go

  googlecloud:
    image: google/cloud-sdk:228.0.0
    entrypoint: gcloud --quiet beta emulators pubsub start --host-port=googlecloud:8085 --verbosity=debug --log-http
    restart: unless-stopped
The source should go to main.go.
To run the example, execute docker-compose up.
A more detailed explanation of how this works (and how to add live code reload) can be found in the Go Docker dev environment article.
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/googlecloud/main.go
// ...
package main
import (
    "context"
    "log"
    "time"

    "github.com/ThreeDotsLabs/watermill"
    "github.com/ThreeDotsLabs/watermill-googlecloud/pkg/googlecloud"
    "github.com/ThreeDotsLabs/watermill/message"
)

func main() {
    logger := watermill.NewStdLogger(false, false)

    subscriber, err := googlecloud.NewSubscriber(
        googlecloud.SubscriberConfig{
            // custom function to generate Subscription Name,
            // there are also predefined TopicSubscriptionName and TopicSubscriptionNameWithSuffix available.
            GenerateSubscriptionName: func(topic string) string {
                return "test-sub_" + topic
            },
            ProjectID: "test-project",
        },
        logger,
    )
    if err != nil {
        panic(err)
    }

    // Subscribe will create the subscription. Only messages that are sent after the subscription is created may be received.
    messages, err := subscriber.Subscribe(context.Background(), "example.topic")
    if err != nil {
        panic(err)
    }

    go process(messages)
// ...
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/googlecloud/main.go
// ...
func process(messages <-chan *message.Message) {
    for msg := range messages {
        log.Printf("received message: %s, payload: %s", msg.UUID, string(msg.Payload))

        // we need to Acknowledge that we received and processed the message,
        // otherwise, it will be resent over and over again.
        msg.Ack()
    }
}
Running in Docker
Full source: _examples/pubsubs/amqp/docker-compose.yml
version: '3'
services:
  server:
    image: golang:1.11
    restart: unless-stopped
    depends_on:
      - rabbitmq
    volumes:
      - .:/app
      - $GOPATH/pkg/mod:/go/pkg/mod
    working_dir: /app
    command: go run main.go

  rabbitmq:
    image: rabbitmq:3.7
    restart: unless-stopped
The source should go to main.go.
To run the example, execute docker-compose up.
A more detailed explanation of how this works (and how to add live code reload) can be found in the Go Docker dev environment article.
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/amqp/main.go
// ...
package main
import (
    "context"
    "log"
    "time"

    "github.com/ThreeDotsLabs/watermill"
    "github.com/ThreeDotsLabs/watermill-amqp/pkg/amqp"
    "github.com/ThreeDotsLabs/watermill/message"
)

var amqpURI = "amqp://guest:guest@rabbitmq:5672/"

func main() {
    amqpConfig := amqp.NewDurableQueueConfig(amqpURI)

    subscriber, err := amqp.NewSubscriber(
        // This config is based on this example: https://www.rabbitmq.com/tutorials/tutorial-two-go.html
        // It works as a simple queue.
        //
        // If you want to implement a Pub/Sub style service instead, check
        // https://watermill.io/docs/pub-sub-implementations/#amqp-consumer-groups
        amqpConfig,
        watermill.NewStdLogger(false, false),
    )
    if err != nil {
        panic(err)
    }

    messages, err := subscriber.Subscribe(context.Background(), "example.topic")
    if err != nil {
        panic(err)
    }

    go process(messages)
// ...
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/amqp/main.go
// ...
func process(messages <-chan *message.Message) {
    for msg := range messages {
        log.Printf("received message: %s, payload: %s", msg.UUID, string(msg.Payload))

        // we need to Acknowledge that we received and processed the message,
        // otherwise, it will be resent over and over again.
        msg.Ack()
    }
}
Running in Docker
Full source: _examples/pubsubs/sql/docker-compose.yml
version: '3'
services:
  server:
    image: golang:1.12
    restart: unless-stopped
    depends_on:
      - mysql
    volumes:
      - .:/app
      - $GOPATH/pkg/mod:/go/pkg/mod
    working_dir: /app
    command: go run main.go

  mysql:
    image: mysql:8.0
    restart: unless-stopped
    ports:
      - 3306:3306
    environment:
      MYSQL_DATABASE: watermill
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
The source should go to main.go.
To run the example, execute docker-compose up.
A more detailed explanation of how this works (and how to add live code reload) can be found in the Go Docker dev environment article.
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/sql/main.go
// ...
package main
import (
    "context"
    stdSQL "database/sql"
    "log"
    "time"

    driver "github.com/go-sql-driver/mysql"

    "github.com/ThreeDotsLabs/watermill"
    "github.com/ThreeDotsLabs/watermill-sql/pkg/sql"
    "github.com/ThreeDotsLabs/watermill/message"
)

func main() {
    db := createDB()
    logger := watermill.NewStdLogger(false, false)

    subscriber, err := sql.NewSubscriber(
        db,
        sql.SubscriberConfig{
            SchemaAdapter:    sql.DefaultSchema{},
            OffsetsAdapter:   sql.DefaultMySQLOffsetsAdapter{},
            InitializeSchema: true,
        },
        logger,
    )
    if err != nil {
        panic(err)
    }

    messages, err := subscriber.Subscribe(context.Background(), "example_topic")
    if err != nil {
        panic(err)
    }

    go process(messages)
// ...
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/sql/main.go
// ...
func process(messages <-chan *message.Message) {
    for msg := range messages {
        log.Printf("received message: %s, payload: %s", msg.UUID, string(msg.Payload))

        // we need to Acknowledge that we received and processed the message,
        // otherwise, it will be resent over and over again.
        msg.Ack()
    }
}
Creating Messages
Watermill doesn’t enforce any message format. NewMessage expects a slice of bytes as the payload. You can use strings, JSON, protobuf, Avro, gob, or anything else that serializes to []byte.
The message UUID is optional, but recommended, as it helps with debugging.
msg := message.NewMessage(watermill.NewUUID(), []byte("Hello, world!"))
Publishing Messages
Publish expects a topic and one or more Messages to be published.
err := publisher.Publish("example.topic", msg)
if err != nil {
    panic(err)
}
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/go-channel/main.go
// ...
    go process(messages)

    publishMessages(pubSub)
}

func publishMessages(publisher message.Publisher) {
    for {
        msg := message.NewMessage(watermill.NewUUID(), []byte("Hello, world!"))

        if err := publisher.Publish("example.topic", msg); err != nil {
            panic(err)
        }

        time.Sleep(time.Second)
// ...
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/kafka/main.go
// ...
    go process(messages)

    publisher, err := kafka.NewPublisher(
        kafka.PublisherConfig{
            Brokers:   []string{"kafka:9092"},
            Marshaler: kafka.DefaultMarshaler{},
        },
        watermill.NewStdLogger(false, false),
    )
    if err != nil {
        panic(err)
    }

    publishMessages(publisher)
}

func publishMessages(publisher message.Publisher) {
    for {
        msg := message.NewMessage(watermill.NewUUID(), []byte("Hello, world!"))

        if err := publisher.Publish("example.topic", msg); err != nil {
            panic(err)
        }

        time.Sleep(time.Second)
// ...
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/nats-streaming/main.go
// ...
    go process(messages)

    publisher, err := nats.NewStreamingPublisher(
        nats.StreamingPublisherConfig{
            ClusterID: "test-cluster",
            ClientID:  "example-publisher",
            StanOptions: []stan.Option{
                stan.NatsURL("nats://nats-streaming:4222"),
            },
            Marshaler: nats.GobMarshaler{},
        },
        watermill.NewStdLogger(false, false),
    )
    if err != nil {
        panic(err)
    }

    publishMessages(publisher)
}

func publishMessages(publisher message.Publisher) {
    for {
        msg := message.NewMessage(watermill.NewUUID(), []byte("Hello, world!"))

        if err := publisher.Publish("example.topic", msg); err != nil {
            panic(err)
        }

        time.Sleep(time.Second)
// ...
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/googlecloud/main.go
// ...
    go process(messages)

    publisher, err := googlecloud.NewPublisher(googlecloud.PublisherConfig{
        ProjectID: "test-project",
    }, logger)
    if err != nil {
        panic(err)
    }

    publishMessages(publisher)
}

func publishMessages(publisher message.Publisher) {
    for {
        msg := message.NewMessage(watermill.NewUUID(), []byte("Hello, world!"))

        if err := publisher.Publish("example.topic", msg); err != nil {
            panic(err)
        }

        time.Sleep(time.Second)
// ...
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/amqp/main.go
// ...
    go process(messages)

    publisher, err := amqp.NewPublisher(amqpConfig, watermill.NewStdLogger(false, false))
    if err != nil {
        panic(err)
    }

    publishMessages(publisher)
}

func publishMessages(publisher message.Publisher) {
    for {
        msg := message.NewMessage(watermill.NewUUID(), []byte("Hello, world!"))

        if err := publisher.Publish("example.topic", msg); err != nil {
            panic(err)
        }

        time.Sleep(time.Second)
// ...
Full source: github.com/ThreeDotsLabs/watermill/_examples/pubsubs/sql/main.go
// ...
    go process(messages)

    publisher, err := sql.NewPublisher(
        db,
        sql.PublisherConfig{
            SchemaAdapter: sql.DefaultSchema{},
        },
        logger,
    )
    if err != nil {
        panic(err)
    }

    publishMessages(publisher)
}

func createDB() *stdSQL.DB {
    conf := driver.NewConfig()
    conf.Net = "tcp"
    conf.User = "root"
    conf.Addr = "mysql"
    conf.DBName = "watermill"

    db, err := stdSQL.Open("mysql", conf.FormatDSN())
    if err != nil {
        panic(err)
    }

    err = db.Ping()
    if err != nil {
        panic(err)
    }

    return db
}

func publishMessages(publisher message.Publisher) {
    for {
        msg := message.NewMessage(watermill.NewUUID(), []byte(`{"message": "Hello, world!"}`))

        if err := publisher.Publish("example_topic", msg); err != nil {
            panic(err)
        }

        time.Sleep(time.Second)
// ...
Using Message Router
Publishers and subscribers are rather low-level parts of Watermill. In most cases, you’ll want to use a high-level interface and features like correlation, metrics, poison queue, retrying, throttling, and so on.
You might want to send an Ack only if the message was processed successfully. In other cases, you’ll Ack immediately and then worry about processing. Sometimes you want to perform some action based on the incoming message and publish another message in response.
To handle these requirements, there is a component named Router.
Example application of Message Router
The flow of the example application looks like this:
- A message is produced on the topic incoming_messages_topic every second.
- The struct_handler handler listens on incoming_messages_topic. When a message is received, its UUID is printed and a new message is produced on outgoing_messages_topic.
- The print_incoming_messages handler listens on incoming_messages_topic and prints the messages’ UUID, payload, and metadata.
- The print_outgoing_messages handler listens on outgoing_messages_topic and prints the messages’ UUID, payload, and metadata. The correlation ID should be the same as in the message on incoming_messages_topic.
Router configuration
Start with configuring the router, adding plugins and middlewares. Then set up handlers that the router will use. Each handler will independently handle messages.
Full source: github.com/ThreeDotsLabs/watermill/_examples/basic/3-router/main.go
// ...
package main
import (
    "context"
    "fmt"
    "log"
    "time"

    "github.com/ThreeDotsLabs/watermill"
    "github.com/ThreeDotsLabs/watermill/message"
    "github.com/ThreeDotsLabs/watermill/message/router/middleware"
    "github.com/ThreeDotsLabs/watermill/message/router/plugin"
    "github.com/ThreeDotsLabs/watermill/pubsub/gochannel"
)

var (
    // For this example, we're using just a simple logger implementation,
    // You probably want to ship your own implementation of `watermill.LoggerAdapter`.
    logger = watermill.NewStdLogger(false, false)
)

func main() {
    router, err := message.NewRouter(message.RouterConfig{}, logger)
    if err != nil {
        panic(err)
    }

    // SignalsHandler will gracefully shutdown Router when SIGTERM is received.
    // You can also close the router by just calling `r.Close()`.
    router.AddPlugin(plugin.SignalsHandler)

    router.AddMiddleware(
        // CorrelationID will copy the correlation id from the incoming message's metadata to the produced messages
        middleware.CorrelationID,

        // The handler function is retried if it returns an error.
        // After MaxRetries, the message is Nacked and it's up to the PubSub to resend it.
        middleware.Retry{
            MaxRetries:      3,
            InitialInterval: time.Millisecond * 100,
            Logger:          logger,
        }.Middleware,

        // Recoverer handles panics from handlers.
        // In this case, it passes them as errors to the Retry middleware.
        middleware.Recoverer,
    )

    // For simplicity, we are using the gochannel Pub/Sub here,
    // You can replace it with any Pub/Sub implementation, it will work the same.
    pubSub := gochannel.NewGoChannel(gochannel.Config{}, logger)

    // Producing some incoming messages in background
    go publishMessages(pubSub)

    router.AddHandler(
        "struct_handler",          // handler name, must be unique
        "incoming_messages_topic", // topic from which we will read events
        pubSub,
        "outgoing_messages_topic", // topic to which we will publish events
        pubSub,
        structHandler{}.Handler,
    )

    // just for debug, we are printing all messages received on `incoming_messages_topic`
    router.AddNoPublisherHandler(
        "print_incoming_messages",
        "incoming_messages_topic",
        pubSub,
        printMessages,
    )

    // just for debug, we are printing all events sent to `outgoing_messages_topic`
    router.AddNoPublisherHandler(
        "print_outgoing_messages",
        "outgoing_messages_topic",
        pubSub,
        printMessages,
    )

    // Now that all handlers are registered, we're running the Router.
    // Run is blocking while the router is running.
    ctx := context.Background()
    if err := router.Run(ctx); err != nil {
        panic(err)
    }
}
// ...
Incoming messages
The struct_handler consumes messages from incoming_messages_topic, so we simulate incoming traffic by calling publishMessages() in the background. Notice the middleware.SetCorrelationID call: together with the CorrelationID middleware, the correlation ID will be propagated to all messages produced by the router (it is stored in the message metadata).
Full source: github.com/ThreeDotsLabs/watermill/_examples/basic/3-router/main.go
// ...
func publishMessages(publisher message.Publisher) {
    for {
        msg := message.NewMessage(watermill.NewUUID(), []byte("Hello, world!"))
        middleware.SetCorrelationID(watermill.NewUUID(), msg)

        log.Printf("sending message %s, correlation id: %s\n", msg.UUID, middleware.MessageCorrelationID(msg))

        if err := publisher.Publish("incoming_messages_topic", msg); err != nil {
            panic(err)
        }

        time.Sleep(time.Second)
    }
}
// ...
Handlers
You may have noticed that there are two types of handler functions:

- a function: func(msg *message.Message) ([]*message.Message, error)
- a struct method: func (c structHandler) Handler(msg *message.Message) ([]*message.Message, error)

If your handler is a function without any dependencies, the first one is fine. The second option is useful when your handler requires dependencies, such as a database handle or a logger.
Full source: github.com/ThreeDotsLabs/watermill/_examples/basic/3-router/main.go
// ...
func printMessages(msg *message.Message) error {
    fmt.Printf(
        "\n> Received message: %s\n> %s\n> metadata: %v\n\n",
        msg.UUID, string(msg.Payload), msg.Metadata,
    )
    return nil
}

type structHandler struct {
    // we can add some dependencies here
}

func (s structHandler) Handler(msg *message.Message) ([]*message.Message, error) {
    log.Println("structHandler received message", msg.UUID)

    msg = message.NewMessage(watermill.NewUUID(), []byte("message produced by structHandler"))
    return message.Messages{msg}, nil
}
Done!
You can run this example with go run main.go.
You’ve just created your first application with Watermill. You can find the full source in /_examples/basic/3-router/main.go.
Logging
To see Watermill’s logs, you have to pass a logger that implements the LoggerAdapter interface. For experimental development, you can use NewStdLogger.
Testing
Watermill provides a set of test scenarios that any Pub/Sub implementation can use. Each test suite needs to declare what features it supports and how to construct a new Pub/Sub. These scenarios check both basic usage and more uncommon use cases. Stress tests are also included.
Deployment
Watermill is not a framework. We don’t enforce any type of deployment and it’s totally up to you.
What’s next?
For more detailed documentation, check the documentation topics.
Examples
Check out the examples that will show you how to start using Watermill.
The recommended entry point is Your first Watermill application. It contains the entire environment in docker-compose.yml, including Golang and Kafka, which you can run with one command.
After that, you can see the Realtime feed example. It uses more middlewares and contains two handlers. There is also a separate application for publishing messages.
For a different subscriber implementation, namely HTTP, refer to the receiving-webhooks example. It is a very simple application that saves webhooks to Kafka.
The full list of examples can be found in the project’s README.
Support
If anything is not clear, feel free to use any of our support channels; we will be glad to help.