In the previous post, we covered some of the basics of go-micro and Docker, and we introduced a second service. In this post, we're going to look at docker-compose, and how we can run our services together locally a little more easily. We're going to introduce some different databases, and finally we'll introduce a third service into the mix.
Prerequisites
Install docker-compose: https://docs.docker.com/compose/install/
But first, let's look at databases.
Choosing a database
So far our data isn't actually persisted anywhere; it's held in memory in our services, and lost whenever our containers are restarted. So of course we need a way of persisting and querying our data.
The beauty of microservices is that you can use a different database per service. Of course you don't have to, and many people don't. In fact, I rarely do for small teams, as maintaining several different databases is a bigger mental leap than maintaining just one. But in some cases, one service's data might not suit the database you've used for your other services, so it makes sense to use something else. Microservices make this trivially easy, as your concerns are completely separate.
Choosing the 'correct' database for your services is an entirely different article in itself, so we won't go into too much detail on the subject here. However, I will say that if you have fairly loose or inconsistent datasets, then a NoSQL document store is a great fit. They're much more flexible in what you can store, and they work well with JSON. We'll be using MongoDB for our NoSQL database; no particular reason other than it performs well, it's widely used and supported, and it has a great online community.
If your data is more strictly defined and relational by nature, then it can make sense to use a traditional relational database (RDBMS). But there really aren't any hard rules; generally any database will do the job. Be sure to look at your data structure, consider whether your service does more reading or more writing and how complex the queries will be, and use those as a starting point when choosing your databases. For our relational database, we'll be using Postgres. Again, no particular reason other than it does the job well and I'm familiar with it. You could use MySQL, MariaDB, or something else.
Amazon and Google both have some fantastic managed offerings for both of these database types, if you want to avoid running your own databases (generally advisable). Another great option is Compose, who will spin up fully managed, scalable instances of various database technologies, using the same cloud provider as your services to avoid connection latency.
Amazon - RDBMS: https://aws.amazon.com/rds/, NoSQL: https://aws.amazon.com/dynamodb/
Google - RDBMS: https://cloud.google.com/spanner/, NoSQL: https://cloud.google.com/datastore/
Now that we've discussed databases a little, let's do some coding!
docker-compose
In the last part of the series we looked at Docker, which lets us run our services in lightweight containers with their own run-times and dependencies. However, it's getting slightly cumbersome to run and manage each service with a separate Makefile, so let's take a look at docker-compose. Docker-compose allows you to define a list of Docker containers in a yaml file, and to specify metadata about their run-time. Docker-compose services map more or less onto the same docker commands we're already using. For example:
$ docker run -p 50052:50051 -e MICRO_SERVER_ADDRESS=:50051 -e MICRO_REGISTRY=mdns vessel-service
Becomes:
version: '3.1'

services:
  shippy-service-vessel:
    build: ./shippy-service-vessel
    ports:
      - 50052:50051
    environment:
      MICRO_SERVER_ADDRESS: ":50051"
Easy!
So let's create a docker-compose file in the root of our directory: $ touch docker-compose.yml. Now add our services:
# docker-compose.yml
version: '3.1'

services:

  shippy-cli-consignment:
    build: ./shippy-cli-consignment

  shippy-service-consignment:
    build: ./shippy-service-consignment
    ports:
      - 50051:50051
    environment:
      MICRO_ADDRESS: ":50051"
      DB_HOST: "mongodb://datastore:27017"

  shippy-service-vessel:
    build: ./shippy-service-vessel
    ports:
      - 50052:50051
    environment:
      MICRO_ADDRESS: ":50051"
First we define the version of docker-compose we want to use, then a list of services. There are other root-level definitions such as networks and volumes, but we'll just focus on services for now.
Each service is defined by its name, then we include a build path, which is a reference to a location that should contain a Dockerfile. This tells docker-compose to use that Dockerfile to build its image. You can also use image here to use a pre-built image instead, which we will be doing later on. Then you define your port mappings, and finally your environment variables.
To build your docker-compose stack, simply run $ docker-compose build, and to run it, $ docker-compose up. To run your stack in the background, use $ docker-compose up -d. You can also view a list of your currently running containers at any point using $ docker ps. Finally, you can stop all of your current containers by running $ docker stop $(docker ps -qa).
So let's run our stack. You should see lots of output and Dockerfiles being built. You may also see an error from our CLI tool, but don't worry about that; it's most likely because it ran before our other services were up. It's simply saying that it can't find them yet.
Let's test it all worked by running our CLI tool. To run it through docker-compose, simply run $ docker-compose run shippy-cli-consignment once all of the other containers are running. You should see it run successfully, just as before.
Entities and protobufs
Throughout this series we've spoken of protobufs being at the very center of our data model. We've used them to define our services' structure and functionality. Because protobuf generates structs with more or less all of the correct data types, we can also re-use these structs as our underlying database models. This is actually pretty mind-blowing, and it keeps in line with protobuf being the single source of truth.
However, this approach does have its downsides. Sometimes it's tricky to marshal the code generated by protobuf into a valid database entity. Sometimes database technologies use custom types which are tricky to translate from the native types generated by protobuf. One problem I spent many hours thinking about was how to convert Id string to and from Id bson.ObjectId for Mongodb entities. It turns out that bson.ObjectId is really just a string anyway, so you can marshal them together. Also, Mongodb's id index is stored internally as _id, so you need a way to tie that to your Id string field, as you can't really name a field _Id string. That means finding a way to define custom tags for your protobuf-generated code. But we'll get to that later.
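To make that mismatch concrete, here's a purely illustrative sketch (these structs are hypothetical, not code we'll use) of how a hand-written Mongodb entity would normally tag its ID field, versus the untagged field protoc generates for us:

package main

// Illustrative only - a hand-written Mongodb entity would typically tag its
// ID field so that it maps onto Mongodb's internal _id index:
type ConsignmentEntity struct {
    ID          string `bson:"_id,omitempty"` // stored as _id in Mongodb
    Description string `bson:"description"`
}

// Whereas protoc generates something closer to this - no bson tags at all -
// so without custom tags the Id field ends up stored as a plain "id" field:
type Consignment struct {
    Id          string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
    Description string `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"`
}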
Also, many people argue against using your protobuf definitions as your database entities, because you're tightly coupling your communication technology to your database code, which is also a valid point.
Generally it's advised to convert between your protobuf definition code and your database entities. However, you then end up with a lot of conversion code for translating between two almost identical types, for example:
func (service *Service) Create(ctx context.Context, req *proto.User, res *proto.Response) error {
    entity := &models.User{
        Name:     req.Name,
        Email:    req.Email,
        Password: req.Password,
    }
    err := service.repo.Create(entity)
    ...
}
Which on the surface doesn't seem all that bad, but when you've got several nested structs and several types, it can be really tedious, and can involve a lot of iteration to convert between nested structs and so on.
This approach is really down to you though; like many things in programming, this doesn't come down to a right or a wrong. So take whichever approach feels most appropriate to you. But my own personal opinion is that converting between two almost identical types, especially given we're treating our protobuf code as the basis of our data, feels like a detraction from the benefits we've gained from using protobuf as our core definition. So I will be using our protobuf code for our database. By the way, I'm not saying I'm right on this, and I'm desperate to hear your opinions.
Let's start hooking up our first service, our consignment service. I feel as though we should do some tidying up first. We've lumped everything into our main.go file. I know these are microservices, but that's no excuse to be messy! So let's create three more files in shippy-service-consignment: handler.go, datastore.go, and repository.go. I'm creating these within the root of our service, rather than creating them as new packages and directories, which is perfectly adequate for a small microservice. It's a common temptation for developers to create a structure like this:
main.go
models/
  user.go
handlers/
  auth.go
  user.go
services/
  auth.go
This harks back to the MVC days, and isn't really advised in Golang, certainly not for smaller projects. If you had a bigger project with multiple concerns, you could organise it as follows:
main.go
users/
  services/
    auth.go
  handlers/
    auth.go
    user.go
  users/
    user.go
containers/
  services/
    manage.go
  models/
    container.go
Here you're grouping your code by domain, rather than arbitrarily grouping your code by what it does.
However, as we're dealing with a microservice, which should only really be dealing with a single concern, we don't need to take either of the above approaches. In fact, Go's ethos is to encourage simplicity. So we'll start simple and house everything in the root of our service, with some clearly defined file names.
As a side note, we'll need to update our Dockerfiles: since we're not importing our newly separated code as packages, we need to tell the Go compiler to pull in these new files. So update the build command to look like this:
RUN CGO_ENABLED=0 GOOS=linux go build -o shippy-service-consignment -a -installsuffix cgo main.go repository.go handler.go datastore.go
This will include the new files we'll be creating.
The MongoDB Golang library is a great example of this simplicity. And finally on this topic, there's a great article on organising Go codebases that's worth reading.
Let's start by removing all of the repository code from our main.go and re-purposing it to use the official MongoDB Go driver. Once again, I've tried to comment the code to explain what each part does, so please read the code and comments thoroughly:
// shippy-service-consignment/repository.go
package main

import (
    "context"

    pb "github.com/EwanValentine/shippy-service-consignment/proto/consignment"
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
)

type repository interface {
    Create(consignment *pb.Consignment) error
    GetAll() ([]*pb.Consignment, error)
}

// MongoRepository implementation
type MongoRepository struct {
    collection *mongo.Collection
}

// Create - inserts a single consignment into our collection
func (repository *MongoRepository) Create(consignment *pb.Consignment) error {
    _, err := repository.collection.InsertOne(context.Background(), consignment)
    return err
}

// GetAll - fetches every consignment in our collection
func (repository *MongoRepository) GetAll() ([]*pb.Consignment, error) {
    ctx := context.Background()

    // An empty bson.M matches every document
    cur, err := repository.collection.Find(ctx, bson.M{})
    if err != nil {
        return nil, err
    }
    defer cur.Close(ctx)

    var consignments []*pb.Consignment
    for cur.Next(ctx) {
        var consignment pb.Consignment
        if err := cur.Decode(&consignment); err != nil {
            return nil, err
        }
        consignments = append(consignments, &consignment)
    }
    return consignments, cur.Err()
}
So there we have our code responsible for interacting with our Mongodb database. Next we need the code that creates the client/connection. Update shippy-service-consignment/datastore.go with the following:
// shippy-service-consignment/datastore.go
package main

import (
    "context"
    "time"

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

// CreateClient - connects to the datastore, with a ten-second timeout
func CreateClient(uri string) (*mongo.Client, error) {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    return mongo.Connect(ctx, options.Client().ApplyURI(uri))
}
That's it, pretty straightforward. It takes a connection URI string as an argument and returns a Mongodb client, along with a potential error so that we can handle it on start-up. Let's modify our main.go file to hook this up to our repository:
// shippy-service-consignment/main.go
package main

import (
    "context"
    "fmt"
    "log"
    "os"

    pb "github.com/EwanValentine/shippy-service-consignment/proto/consignment"
    vesselProto "github.com/EwanValentine/shippy-service-vessel/proto/vessel"
    "github.com/micro/go-micro"
)

const (
    port        = ":50051"
    defaultHost = "mongodb://datastore:27017"
)

func main() {

    // Set-up micro instance
    srv := micro.NewService(
        micro.Name("shippy.service.consignment"),
    )

    srv.Init()

    uri := os.Getenv("DB_HOST")
    if uri == "" {
        uri = defaultHost
    }

    client, err := CreateClient(uri)
    if err != nil {
        log.Panic(err)
    }
    defer client.Disconnect(context.TODO())

    consignmentCollection := client.Database("shippy").Collection("consignments")

    repository := &MongoRepository{consignmentCollection}
    vesselClient := vesselProto.NewVesselServiceClient("shippy.service.vessel", srv.Client())
    h := &handler{repository, vesselClient}

    // Register handlers
    pb.RegisterShippingServiceHandler(srv.Server(), h)

    // Run the server
    if err := srv.Run(); err != nil {
        fmt.Println(err)
    }
}
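One optional extra, and this is a suggestion rather than part of the tutorial code: mongo.Connect doesn't actually verify that the server is reachable, so if you'd like the service to fail fast when the datastore is down, you could ping it once after connecting. A minimal sketch, which needs the additional import go.mongodb.org/mongo-driver/mongo/readpref:

// Optional helper - not part of the code above. Pinging the datastore once
// after connecting lets the service fail fast if Mongodb is unreachable,
// rather than erroring on the first query.
func pingDatastore(client *mongo.Client) error {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    return client.Ping(ctx, readpref.Primary())
}

You could call this in main straight after CreateClient and log.Panic if it returns an error.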
The final bit of tidying up we need to do is to move our gRPC handler code out into our new handler.go file. So let's do that.
// shippy-service-consignment/handler.go
package main

import (
    "context"
    "log"

    pb "github.com/EwanValentine/shippy-service-consignment/proto/consignment"
    vesselProto "github.com/EwanValentine/shippy-service-vessel/proto/vessel"
)

type handler struct {
    repository
    vesselClient vesselProto.VesselServiceClient
}

// CreateConsignment - we created just one method on our service,
// which is a create method, which takes a context and a request as an
// argument, these are handled by the gRPC server.
func (s *handler) CreateConsignment(ctx context.Context, req *pb.Consignment, res *pb.Response) error {

    // Here we call a client instance of our vessel service with our consignment weight,
    // and the amount of containers as the capacity value
    vesselResponse, err := s.vesselClient.FindAvailable(ctx, &vesselProto.Specification{
        MaxWeight: req.Weight,
        Capacity:  int32(len(req.Containers)),
    })
    if err != nil {
        return err
    }
    log.Printf("Found vessel: %s\n", vesselResponse.Vessel.Name)

    // We set the VesselId as the vessel we got back from our
    // vessel service
    req.VesselId = vesselResponse.Vessel.Id

    // Save our consignment
    if err = s.repository.Create(req); err != nil {
        return err
    }

    res.Created = true
    res.Consignment = req
    return nil
}

// GetConsignments - returns all consignments from the datastore
func (s *handler) GetConsignments(ctx context.Context, req *pb.GetRequest, res *pb.Response) error {
    consignments, err := s.repository.GetAll()
    if err != nil {
        return err
    }
    res.Consignments = consignments
    return nil
}
We've changed the return values of our repository interface slightly since the last tutorial.
Old:
type Repository interface {
    Create(*pb.Consignment) (*pb.Consignment, error)
    GetAll() []*pb.Consignment
}
New:
type Repository interface {
    Create(*pb.Consignment) error
    GetAll() ([]*pb.Consignment, error)
}
This is just because I felt we didn't need to return the same consignment after creating it, and we're now returning a proper error from our get query. Otherwise the code is more or less the same.
Now let's do the same to our vessel-service. I'm not going to walk through all of it in this post; you should have a good feel for it yourself at this point. Remember, you can use my repository as a reference.
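For reference, here's a rough sketch of what the vessel repository might look like once it's moved over to the official MongoDB driver. Treat it as a sketch rather than the definitive version; the field names used in the bson query in particular depend on how your structs are marshalled, so check my repository if anything differs:

// shippy-service-vessel/repository.go (a rough sketch)
package main

import (
    "context"

    pb "github.com/EwanValentine/shippy-service-vessel/proto/vessel"
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
)

// VesselRepository is our Mongodb implementation
type VesselRepository struct {
    collection *mongo.Collection
}

// FindAvailable - finds the first vessel with enough capacity and weight
// allowance for the given consignment specification.
func (repository *VesselRepository) FindAvailable(spec *pb.Specification) (*pb.Vessel, error) {
    filter := bson.M{
        "capacity":  bson.M{"$gte": spec.Capacity},
        "maxweight": bson.M{"$gte": spec.MaxWeight},
    }
    vessel := &pb.Vessel{}
    if err := repository.collection.FindOne(context.Background(), filter).Decode(vessel); err != nil {
        return nil, err
    }
    return vessel, nil
}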
We will however add a new method to our vessel-service, which will allow us to create new vessels. As ever, let's start by updating our protobuf definition:
syntax = "proto3";

package vessel;

service VesselService {
  rpc FindAvailable(Specification) returns (Response) {}
  rpc Create(Vessel) returns (Response) {}
}

message Vessel {
  string id = 1;
  int32 capacity = 2;
  int32 max_weight = 3;
  string name = 4;
  bool available = 5;
  string owner_id = 6;
}

message Specification {
  int32 capacity = 1;
  int32 max_weight = 2;
}

message Response {
  Vessel vessel = 1;
  repeated Vessel vessels = 2;
  bool created = 3;
}
We've created a new Create method under our gRPC service, which takes a vessel and returns our generic response. We've also added a new field to our response message: a created bool. Run $ make build to update this service. Now we'll add a new handler in shippy-service-vessel/handler.go and a new repository method:
// shippy-service-vessel/handler.go
func (s *service) Create(ctx context.Context, req *pb.Vessel, res *pb.Response) error {
    if err := s.repository.Create(req); err != nil {
        return err
    }
    res.Vessel = req
    res.Created = true
    return nil
}

// shippy-service-vessel/repository.go
func (repository *VesselRepository) Create(vessel *pb.Vessel) error {
    _, err := repository.collection.InsertOne(context.Background(), vessel)
    return err
}
Now we can create vessels! I've updated the vessel service's main.go to use our new Create method to store our dummy data; see my repository for the full version.
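Broadly, that seeding looks something like the sketch below; the vessel values are illustrative, and it assumes the same pb and log imports as the rest of main.go, so check my repository for the exact version:

// A rough sketch of seeding dummy data in shippy-service-vessel/main.go.
func createDummyData(repo *VesselRepository) {
    vessels := []*pb.Vessel{
        {Id: "vessel001", Name: "Boaty McBoatface", MaxWeight: 200000, Capacity: 500},
    }
    for _, vessel := range vessels {
        if err := repo.Create(vessel); err != nil {
            log.Printf("failed to seed vessel %s: %v", vessel.Name, err)
        }
    }
}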
So after all of that, we have updated our services to use Mongodb. Before we try to run this, we will need to update our docker-compose file to include a Mongodb container:
services:
  ...
  datastore:
    image: mongo
    ports:
      - 27017:27017
And update the environment variables in both of your services to include DB_HOST: "mongodb://datastore:27017". Notice we're using datastore as the host name, rather than localhost for example; this is because docker-compose handles some clever internal DNS for us.
So you should have:
version: '3.1'

services:

  shippy-cli-consignment:
    build: ./shippy-cli-consignment

  shippy-service-consignment:
    build: ./shippy-service-consignment
    ports:
      - 50051:50051
    environment:
      MICRO_ADDRESS: ":50051"
      DB_HOST: "mongodb://datastore:27017"

  shippy-service-vessel:
    build: ./shippy-service-vessel
    ports:
      - 50052:50051
    environment:
      MICRO_ADDRESS: ":50051"
      DB_HOST: "mongodb://datastore:27017"

  datastore:
    image: mongo
    ports:
      - 27017:27017
Re-build your stack with $ docker-compose build, and re-run it with $ docker-compose up. Note, sometimes because of Docker's caching you may need to run a cache-less build to pick up certain changes. To do this in docker-compose, simply use the --no-cache flag when running $ docker-compose build.
User service
Now let's create a third service. We'll start by updating our docker-compose.yml file. Also, to mix things up a bit, we'll add Postgres to our docker stack for our user service:
...
  shippy-service-user:
    build: ./shippy-service-user
    ports:
      - 50053:50051
    environment:
      MICRO_ADDRESS: ":50051"
  ...
  database:
    image: postgres
    ports:
      - 5432:5432
Now create a shippy-service-user directory in your root and, as per the previous services, create the following files: handler.go, main.go, repository.go, database.go, Dockerfile, Makefile, a sub-directory for our proto files, and finally the proto file itself: proto/user/user.proto.
Add the following to user.proto:
syntax = "proto3";

package user;

service UserService {
  rpc Create(User) returns (Response) {}
  rpc Get(User) returns (Response) {}
  rpc GetAll(Request) returns (Response) {}
  rpc Auth(User) returns (Token) {}
  rpc ValidateToken(Token) returns (Token) {}
}

message User {
  string id = 1;
  string name = 2;
  string company = 3;
  string email = 4;
  string password = 5;
}

message Request {}

message Response {
  User user = 1;
  repeated User users = 2;
  repeated Error errors = 3;
}

message Token {
  string token = 1;
  bool valid = 2;
  repeated Error errors = 3;
}

message Error {
  int32 code = 1;
  string description = 2;
}
Now, ensuring you've created a Makefile similar to that of our previous services, you should be able to run $ make build to generate our gRPC code. As per our previous services, we've created some code to interface with our gRPC methods. We're only going to make a few of them work in this part of the series; we just want to be able to create and fetch a user. In the next part of the series we'll be looking at authentication and JWT, so we'll leave anything token-related alone for now. Your handlers should look like this:
// shippy-service-user/handler.go
package main

import (
    "context"

    pb "github.com/EwanValentine/shippy-service-user/proto/user"
)

type service struct {
    repo         Repository
    tokenService Authable // token handling is fleshed out in the next part of the series
}

func (srv *service) Get(ctx context.Context, req *pb.User, res *pb.Response) error {
    user, err := srv.repo.Get(req.Id)
    if err != nil {
        return err
    }
    res.User = user
    return nil
}

func (srv *service) GetAll(ctx context.Context, req *pb.Request, res *pb.Response) error {
    users, err := srv.repo.GetAll()
    if err != nil {
        return err
    }
    res.Users = users
    return nil
}

func (srv *service) Auth(ctx context.Context, req *pb.User, res *pb.Token) error {
    // For now we just check the user exists and return a dummy token;
    // real authentication comes in the next part of the series.
    _, err := srv.repo.GetByEmailAndPassword(req)
    if err != nil {
        return err
    }
    res.Token = "testingabc"
    return nil
}

func (srv *service) Create(ctx context.Context, req *pb.User, res *pb.Response) error {
    if err := srv.repo.Create(req); err != nil {
        return err
    }
    res.User = req
    return nil
}

func (srv *service) ValidateToken(ctx context.Context, req *pb.Token, res *pb.Token) error {
    return nil
}
Now let's add our repository code:
// shippy-service-user/repository.go
package main

import (
    pb "github.com/EwanValentine/shippy-service-user/proto/user"
    "github.com/jinzhu/gorm"
)

type Repository interface {
    GetAll() ([]*pb.User, error)
    Get(id string) (*pb.User, error)
    Create(user *pb.User) error
    GetByEmailAndPassword(user *pb.User) (*pb.User, error)
}

type UserRepository struct {
    db *gorm.DB
}

// GetAll - fetches every user
func (repo *UserRepository) GetAll() ([]*pb.User, error) {
    var users []*pb.User
    if err := repo.db.Find(&users).Error; err != nil {
        return nil, err
    }
    return users, nil
}

// Get - fetches a single user by its id
func (repo *UserRepository) Get(id string) (*pb.User, error) {
    user := &pb.User{}
    if err := repo.db.Where("id = ?", id).First(user).Error; err != nil {
        return nil, err
    }
    return user, nil
}

// GetByEmailAndPassword - fetches a user by email; the password check
// itself comes in the next part of the series
func (repo *UserRepository) GetByEmailAndPassword(user *pb.User) (*pb.User, error) {
    if err := repo.db.Where("email = ?", user.Email).First(user).Error; err != nil {
        return nil, err
    }
    return user, nil
}

// Create - saves a new user
func (repo *UserRepository) Create(user *pb.User) error {
    if err := repo.db.Create(user).Error; err != nil {
        return err
    }
    return nil
}
We also need to change our ORM's behaviour to generate a UUID on creation, instead of trying to generate an auto-incrementing integer ID. In case you didn't know, a UUID is a randomly generated set of hyphenated character strings, used as an ID or primary key. This is more secure than just using auto-incrementing IDs, because it stops people from guessing or traversing through your API endpoints. MongoDB already uses a variation of this, but we need to tell our Postgres models to use UUIDs. So in shippy-service-user/proto/user create a new file called extensions.go, and in that file add:
// shippy-service-user/proto/user/extensions.go
package user

import (
    "github.com/jinzhu/gorm"
    uuid "github.com/satori/go.uuid"
)

// BeforeCreate - generates a UUID for the Id column before the entity is saved
func (model *User) BeforeCreate(scope *gorm.Scope) error {
    id := uuid.NewV4()
    return scope.SetColumn("Id", id.String())
}
This hooks into GORM's event lifecycle so that we generate a UUID for our Id column, before the entity is saved.
You'll notice here that, unlike our Mongodb services, we're not doing any connection handling; the native SQL/Postgres drivers work slightly differently, so we don't need to worry about that this time. We're using a package called 'gorm', so let's touch on that briefly.
Gorm - Go + ORM
Gorm is a reasonably light-weight object relational mapper, which works nicely with Postgres, MySQL, SQLite, etc. It's very easy to set up and use, and it manages your database schema changes automatically.
That being said, with microservices your data structures are much smaller and contain fewer joins and less overall complexity, so don't feel as though you should use an ORM of any kind.
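We won't walk through the user service's main.go and database.go in this post, as they follow the same shape as the consignment service, but as a rough sketch of the connection side (the environment variable names and defaults here are my own assumptions, so your version and the one in my repository may differ), database.go might look something like this:

// A rough sketch of shippy-service-user/database.go - the env var names and
// defaults are assumptions; your final version may differ.
package main

import (
    "fmt"
    "os"

    "github.com/jinzhu/gorm"

    // Registers GORM's postgres dialect
    _ "github.com/jinzhu/gorm/dialects/postgres"
)

// CreateConnection - opens a GORM connection to our Postgres container
func CreateConnection() (*gorm.DB, error) {
    host := os.Getenv("DB_HOST")
    if host == "" {
        host = "database" // the docker-compose service name
    }
    user := os.Getenv("DB_USER")
    if user == "" {
        user = "postgres"
    }
    dbName := os.Getenv("DB_NAME")
    if dbName == "" {
        dbName = "postgres"
    }
    password := os.Getenv("DB_PASSWORD")

    return gorm.Open(
        "postgres",
        fmt.Sprintf(
            "host=%s port=5432 user=%s dbname=%s password=%s sslmode=disable",
            host, user, dbName, password,
        ),
    )
}

In main.go you would then call CreateConnection, defer db.Close(), run db.AutoMigrate(&pb.User{}) so that GORM creates the users table from our generated struct, and register the handler with go-micro in the same way as the consignment service.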
We need to be able to test creating a user, so let's create another CLI tool: user-cli, in our project root. It's similar to our consignment-cli, but this time:
package main

import (
    "log"
    "os"

    pb "github.com/EwanValentine/shippy-service-user/proto/user"
    "github.com/micro/cli"
    "github.com/micro/go-micro"
    microclient "github.com/micro/go-micro/client"
    "github.com/micro/go-micro/cmd"
    "golang.org/x/net/context"
)

func main() {

    cmd.Init()

    // Create a new user service client
    client := pb.NewUserServiceClient("shippy.service.user", microclient.DefaultClient)

    // Define our flags
    service := micro.NewService(
        micro.Flags(
            cli.StringFlag{
                Name:  "name",
                Usage: "Your full name",
            },
            cli.StringFlag{
                Name:  "email",
                Usage: "Your email",
            },
            cli.StringFlag{
                Name:  "password",
                Usage: "Your password",
            },
            cli.StringFlag{
                Name:  "company",
                Usage: "Your company",
            },
        ),
    )

    // Start as service
    service.Init(

        micro.Action(func(c *cli.Context) {

            name := c.String("name")
            email := c.String("email")
            password := c.String("password")
            company := c.String("company")

            // Call our user service
            r, err := client.Create(context.TODO(), &pb.User{
                Name:     name,
                Email:    email,
                Password: password,
                Company:  company,
            })
            if err != nil {
                log.Fatalf("Could not create: %v", err)
            }
            log.Printf("Created: %s", r.User.Id)

            getAll, err := client.GetAll(context.Background(), &pb.Request{})
            if err != nil {
                log.Fatalf("Could not list users: %v", err)
            }
            for _, v := range getAll.Users {
                log.Println(v)
            }

            os.Exit(0)
        }),
    )

    // Run the server
    if err := service.Run(); err != nil {
        log.Println(err)
    }
}
Here we've used go-micro's command line helper, which is really neat.
We can run this and create a user:
$ docker-compose run user-cli command \
--name="Ewan Valentine" \
--email="[email protected]" \
--password="Testing123" \
--company="BBC"
And you should see the created user in a list!
This isn't very secure, as currently we're storing plain-text passwords, but in the next part of the series, we'll be looking at authentication and JWT tokens across our services.
So there we have it: we've created an additional service and an additional command line tool, and we've started to persist our data using two different database technologies. We've covered a lot of ground in this post, and apologies if we went over anything too quickly, covered too much, or assumed too much knowledge. Please refer to the git repository, and as ever, please do send me your feedback!
If you are finding this series useful, and you use an ad-blocker (who can blame you), please consider chucking me a couple of quid for my time and effort. Cheers! https://monzo.me/ewanvalentine
Or, sponsor me on Patreon to support more content like this.