Versioned Migrations
Quick Guide
Here are a few quick steps that explain how to auto-generate and execute migration files against a database. For a more in-depth explanation, continue reading the next section.
Generating migrations
To install the latest release of Atlas, simply run one of the following commands in your terminal, or check out the Atlas website:
- macOS + Linux
- Homebrew
- Docker
- Windows
curl -sSf https://atlasgo.sh | sh
brew install ariga/tap/atlas
docker pull arigaio/atlas
docker run --rm arigaio/atlas --help
If the container needs access to the host network or a local directory, use the --net=host flag and mount the desired directory:
docker run --rm --net=host \
  -v $(pwd)/migrations:/migrations \
  arigaio/atlas migrate apply \
  --url "mysql://root:pass@:3306/test"
Download the latest release and move the atlas binary to a file location on your system PATH.
Then, run the following command to automatically generate migration files for your Ent schema:
- MySQL
- MariaDB
- PostgreSQL
- SQLite
atlas migrate diff migration_name \
--dir "file://ent/migrate/migrations" \
--to "ent://ent/schema" \
--dev-url "docker://mysql/8/ent"
atlas migrate diff migration_name \
--dir "file://ent/migrate/migrations" \
--to "ent://ent/schema" \
--dev-url "docker://mariadb/latest/test"
atlas migrate diff migration_name \
--dir "file://ent/migrate/migrations" \
--to "ent://ent/schema" \
--dev-url "docker://postgres/15/test?search_path=public"
atlas migrate diff migration_name \
--dir "file://ent/migrate/migrations" \
--to "ent://ent/schema" \
--dev-url "sqlite://file?mode=memory&_fk=1"
The role of the dev database: Atlas loads the current state by executing the SQL files stored in the migration directory onto the provided dev database. It then compares this state against the desired state defined by the ent/schema package and writes a migration plan for moving from the current state to the desired state.
Applying migrations
To apply the pending migration files onto the database, run the following command:
- MySQL
- MariaDB
- PostgreSQL
- SQLite
atlas migrate apply \
--dir "file://ent/migrate/migrations" \
--url "mysql://root:pass@localhost:3306/example"
atlas migrate apply \
--dir "file://ent/migrate/migrations" \
--url "maria://root:pass@localhost:3306/example"
atlas migrate apply \
--dir "file://ent/migrate/migrations" \
--url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable"
atlas migrate apply \
--dir "file://ent/migrate/migrations" \
--url "sqlite://file.db?_fk=1"
For more information head over to the Atlas documentation.
Migration status
Use the following command to get detailed information about the migration status of the connected database:
- MySQL
- MariaDB
- PostgreSQL
- SQLite
atlas migrate status \
--dir "file://ent/migrate/migrations" \
--url "mysql://root:pass@localhost:3306/example"
atlas migrate status \
--dir "file://ent/migrate/migrations" \
--url "maria://root:pass@localhost:3306/example"
atlas migrate status \
--dir "file://ent/migrate/migrations" \
--url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable"
atlas migrate status \
--dir "file://ent/migrate/migrations" \
--url "sqlite://file.db?_fk=1"
In Depth Guide
If you are using the Atlas migration engine, you can use the versioned migration workflow. Instead of applying the computed changes directly to the database, Atlas generates a set of migration files containing the necessary SQL statements to migrate the database. These files can then be edited to your needs and applied by many existing migration tools, such as golang-migrate, Flyway, and Liquibase.
Generating Versioned Migration Files
Migration files are generated by computing the difference between two states. We call the state reflected by your Ent schema the desired state, and the current state is the last state of your schema before your most recent changes. There are two ways for Ent to determine the current state:
- Replay the existing migration directory and inspect the schema (default)
- Connect to an existing database and inspect the schema
We recommend the first option, as it does not require connecting to a production database to create a diff. In addition, this approach also works if you have multiple deployments in different migration states.
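When generating migrations programmatically (see Option 2 below), these two strategies correspond to the ModeReplay and ModeInspect migration modes. The following is a minimal sketch, assuming the sql/versioned-migration feature flag is enabled and the MySQL setup used throughout this guide; the migration name "example" is just a placeholder:
//go:build ignore

package main

import (
	"context"
	"log"

	"<project>/ent/migrate"

	atlas "ariga.io/atlas/sql/migrate"
	"entgo.io/ent/dialect"
	"entgo.io/ent/dialect/sql/schema"
	_ "github.com/go-sql-driver/mysql"
)

func main() {
	ctx := context.Background()
	// The local migration directory to replay and append new files to.
	dir, err := atlas.NewLocalDir("ent/migrate/migrations")
	if err != nil {
		log.Fatalf("failed creating atlas migration directory: %v", err)
	}
	opts := []schema.MigrateOption{
		schema.WithDir(dir),
		// First strategy (default): replay the migration directory on a dev
		// database to compute the current state.
		schema.WithMigrationMode(schema.ModeReplay),
		// Second strategy: inspect the connected database instead.
		// schema.WithMigrationMode(schema.ModeInspect),
		schema.WithDialect(dialect.MySQL),
	}
	// In replay mode the URL should point to a dev database; in inspect mode
	// it points to the database whose schema reflects the current state.
	if err := migrate.NamedDiff(ctx, "mysql://root:pass@localhost:3306/test", "example", opts...); err != nil {
		log.Fatalf("failed generating migration file: %v", err)
	}
}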
In order to automatically generate migration files, you can use one of the two approaches:
- Use the Atlas migrate diff command against your ent/schema package.
- Enable the sql/versioned-migration feature flag and write a small migration generation script that uses Atlas as a package to generate the migration files.
Option 1: Use the atlas migrate diff command
- MySQL
- MariaDB
- PostgreSQL
- SQLite
atlas migrate diff migration_name \
--dir "file://ent/migrate/migrations" \
--to "ent://ent/schema" \
--dev-url "docker://mysql/8/ent"
atlas migrate diff migration_name \
--dir "file://ent/migrate/migrations" \
--to "ent://ent/schema" \
--dev-url "docker://mariadb/latest/test"
atlas migrate diff migration_name \
--dir "file://ent/migrate/migrations" \
--to "ent://ent/schema" \
--dev-url "docker://postgres/15/test?search_path=public"
atlas migrate diff migration_name \
--dir "file://ent/migrate/migrations" \
--to "ent://ent/schema" \
--dev-url "sqlite://file?mode=memory&_fk=1"
Run ls ent/migrate/migrations after the command above completes successfully, and you will notice that Atlas created two files:
- 20220811114629_create_users.sql
- atlas.sum
-- create "users" table
CREATE TABLE `users` (`id` bigint NOT NULL AUTO_INCREMENT, PRIMARY KEY (`id`)) CHARSET utf8mb4 COLLATE utf8mb4_bin;
In addition to the migration files, Atlas maintains a file named atlas.sum, which is used to ensure the integrity of the migration directory and force developers to deal with situations where the migration order or contents were modified after the fact.
h1:vj6fBSDiLEwe+jGdHQvM2NU8G70lAfXwmI+zkyrxMnk=
20220811114629_create_users.sql h1:wrm4K8GSucW6uMJX7XfmfoVPhyzz3vN5CnU1mam2Y4c=
Head over to the Applying Migration Files section to learn how to execute the generated migration files onto the database.
Option 2: Create a migration generation script
The first step is to enable the versioned migration feature by passing in the sql/versioned-migration feature flag. Depending on how you execute the Ent code generator, you have to use one of the two options:
- Using Ent CLI
- Using the entc package
If you are using the default go generate configuration, simply add --feature sql/versioned-migration to the ent/generate.go file as follows:
package ent

//go:generate go run -mod=mod entgo.io/ent/cmd/ent generate --feature sql/versioned-migration ./schema
If you are using the code generation package (e.g., if you are using an Ent extension like entgql), add the feature flag as follows:
//go:build ignore

package main
import (
"log"
"entgo.io/ent/entc"
"entgo.io/ent/entc/gen"
)
func main() {
err := entc.Generate("./schema", &gen.Config{
Features: []gen.Feature{gen.FeatureVersionedMigration},
})
if err != nil {
log.Fatalf("running ent codegen: %v", err)
}
}
After running code generation with go generate, new methods for creating migration files are added to your ent/migrate package. The next steps are:
- Provide a URL to an Atlas dev database to replay the migration directory and compute the current state. Let’s use docker to run a local database container:
- MySQL
- MariaDB
- PostgreSQL
docker run --name migration --rm -p 3306:3306 -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=test -d mysql
docker run --name migration --rm -p 3306:3306 -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=test -d mariadb
docker run --name migration --rm -p 5432:5432 -e POSTGRES_PASSWORD=pass -e POSTGRES_DB=test -d postgres
- Create a file named main.go and a directory named migrations under the ent/migrate package and customize the migration generation for your project.
- Atlas
- golang-migrate/migrate
- pressly/goose
- amacneil/dbmate
- Flyway
- Liquibase
ent/migrate/main.go
//go:build ignore

package main
import (
"context"
"log"
"os"
"<project>/ent/migrate"
atlas "ariga.io/atlas/sql/migrate"
"entgo.io/ent/dialect"
"entgo.io/ent/dialect/sql/schema"
_ "github.com/go-sql-driver/mysql"
)
func main() {
ctx := context.Background()
// Create a local migration directory able to understand Atlas migration file format for replay.
dir, err := atlas.NewLocalDir("ent/migrate/migrations")
if err != nil {
log.Fatalf("failed creating atlas migration directory: %v", err)
}
// Migrate diff options.
opts := []schema.MigrateOption{
schema.WithDir(dir), // provide migration directory
schema.WithMigrationMode(schema.ModeReplay), // provide migration mode
schema.WithDialect(dialect.MySQL), // Ent dialect to use
schema.WithFormatter(atlas.DefaultFormatter),
}
if len(os.Args) != 2 {
log.Fatalln("migration name is required. Use: 'go run -mod=mod ent/migrate/main.go <name>'")
}
// Generate migrations using Atlas support for MySQL (note the Ent dialect option passed above).
err = migrate.NamedDiff(ctx, "mysql://root:pass@localhost:3306/test", os.Args[1], opts...)
if err != nil {
log.Fatalf("failed generating migration file: %v", err)
}
}
ent/migrate/main.go
//go:build ignore

package main
import (
"context"
"log"
"os"
"<project>/ent/migrate"
"ariga.io/atlas/sql/sqltool"
"entgo.io/ent/dialect"
"entgo.io/ent/dialect/sql/schema"
_ "github.com/go-sql-driver/mysql"
)
func main() {
ctx := context.Background()
// Create a local migration directory able to understand golang-migrate migration file format for replay.
dir, err := sqltool.NewGolangMigrateDir("ent/migrate/migrations")
if err != nil {
log.Fatalf("failed creating atlas migration directory: %v", err)
}
// Migrate diff options.
opts := []schema.MigrateOption{
schema.WithDir(dir), // provide migration directory
schema.WithMigrationMode(schema.ModeReplay), // provide migration mode
schema.WithDialect(dialect.MySQL), // Ent dialect to use
}
if len(os.Args) != 2 {
log.Fatalln("migration name is required. Use: 'go run -mod=mod ent/migrate/main.go <name>'")
}
// Generate migrations using Atlas support for MySQL (note the Ent dialect option passed above).
err = migrate.NamedDiff(ctx, "mysql://root:pass@localhost:3306/test", os.Args[1], opts...)
if err != nil {
log.Fatalf("failed generating migration file: %v", err)
}
}
ent/migrate/main.go
//go:build ignore

package main
import (
"context"
"log"
"os"
"<project>/ent/migrate"
"ariga.io/atlas/sql/sqltool"
"entgo.io/ent/dialect"
"entgo.io/ent/dialect/sql/schema"
_ "github.com/go-sql-driver/mysql"
)
func main() {
ctx := context.Background()
// Create a local migration directory able to understand goose migration file format for replay.
dir, err := sqltool.NewGooseDir("ent/migrate/migrations")
if err != nil {
log.Fatalf("failed creating atlas migration directory: %v", err)
}
// Migrate diff options.
opts := []schema.MigrateOption{
schema.WithDir(dir), // provide migration directory
schema.WithMigrationMode(schema.ModeReplay), // provide migration mode
schema.WithDialect(dialect.MySQL), // Ent dialect to use
}
if len(os.Args) != 2 {
log.Fatalln("migration name is required. Use: 'go run -mod=mod ent/migrate/main.go <name>'")
}
// Generate migrations using Atlas support for MySQL (note the Ent dialect option passed above).
err = migrate.NamedDiff(ctx, "mysql://root:pass@localhost:3306/test", os.Args[1], opts...)
if err != nil {
log.Fatalf("failed generating migration file: %v", err)
}
}
ent/migrate/main.go
//go:build ignore

package main
import (
"context"
"log"
"os"
"<project>/ent/migrate"
"ariga.io/atlas/sql/sqltool"
"entgo.io/ent/dialect"
"entgo.io/ent/dialect/sql/schema"
_ "github.com/go-sql-driver/mysql"
)
func main() {
ctx := context.Background()
// Create a local migration directory able to understand dbmate migration file format for replay.
dir, err := sqltool.NewDBMateDir("ent/migrate/migrations")
if err != nil {
log.Fatalf("failed creating atlas migration directory: %v", err)
}
// Migrate diff options.
opts := []schema.MigrateOption{
schema.WithDir(dir), // provide migration directory
schema.WithMigrationMode(schema.ModeReplay), // provide migration mode
schema.WithDialect(dialect.MySQL), // Ent dialect to use
}
if len(os.Args) != 2 {
log.Fatalln("migration name is required. Use: 'go run -mod=mod ent/migrate/main.go <name>'")
}
// Generate migrations using Atlas support for MySQL (note the Ent dialect option passed above).
err = migrate.NamedDiff(ctx, "mysql://root:pass@localhost:3306/test", os.Args[1], opts...)
if err != nil {
log.Fatalf("failed generating migration file: %v", err)
}
}
ent/migrate/main.go
//go:build ignore

package main
import (
"context"
"log"
"os"
"<project>/ent/migrate"
"ariga.io/atlas/sql/sqltool"
"entgo.io/ent/dialect"
"entgo.io/ent/dialect/sql/schema"
_ "github.com/go-sql-driver/mysql"
)
func main() {
ctx := context.Background()
// Create a local migration directory able to understand Flyway migration file format for replay.
dir, err := sqltool.NewFlywayDir("ent/migrate/migrations")
if err != nil {
log.Fatalf("failed creating atlas migration directory: %v", err)
}
// Migrate diff options.
opts := []schema.MigrateOption{
schema.WithDir(dir), // provide migration directory
schema.WithMigrationMode(schema.ModeReplay), // provide migration mode
schema.WithDialect(dialect.MySQL), // Ent dialect to use
}
if len(os.Args) != 2 {
log.Fatalln("migration name is required. Use: 'go run -mod=mod ent/migrate/main.go <name>'")
}
// Generate migrations using Atlas support for MySQL (note the Ent dialect option passed above).
err = migrate.NamedDiff(ctx, "mysql://root:pass@localhost:3306/test", os.Args[1], opts...)
if err != nil {
log.Fatalf("failed generating migration file: %v", err)
}
}
ent/migrate/main.go
//go:build ignore

package main
import (
"context"
"log"
"os"
"<project>/ent/migrate"
"ariga.io/atlas/sql/sqltool"
"entgo.io/ent/dialect"
"entgo.io/ent/dialect/sql/schema"
_ "github.com/go-sql-driver/mysql"
)
func main() {
ctx := context.Background()
// Create a local migration directory able to understand Liquibase migration file format for replay.
dir, err := sqltool.NewLiquibaseDir("ent/migrate/migrations")
if err != nil {
log.Fatalf("failed creating atlas migration directory: %v", err)
}
// Migrate diff options.
opts := []schema.MigrateOption{
schema.WithDir(dir), // provide migration directory
schema.WithMigrationMode(schema.ModeReplay), // provide migration mode
schema.WithDialect(dialect.MySQL), // Ent dialect to use
}
if len(os.Args) != 2 {
log.Fatalln("migration name is required. Use: 'go run -mod=mod ent/migrate/main.go <name>'")
}
// Generate migrations using Atlas support for MySQL (note the Ent dialect option passed above).
err = migrate.NamedDiff(ctx, "mysql://root:pass@localhost:3306/test", os.Args[1], opts...)
if err != nil {
log.Fatalf("failed generating migration file: %v", err)
}
}
- Trigger migration generation by executing go run -mod=mod ent/migrate/main.go <name> from the root of the project. For example:
go run -mod=mod ent/migrate/main.go create_users
Run ls ent/migrate/migrations after the command above completes successfully, and you will notice that Atlas created two files:
- 20220811114629_create_users.sql
- atlas.sum
-- create "users" table
CREATE TABLE `users` (`id` bigint NOT NULL AUTO_INCREMENT, PRIMARY KEY (`id`)) CHARSET utf8mb4 COLLATE utf8mb4_bin;
In addition to the migration files, Atlas maintains a file named atlas.sum, which is used to ensure the integrity of the migration directory and force developers to deal with situations where the migration order or contents were modified after the fact.
h1:vj6fBSDiLEwe+jGdHQvM2NU8G70lAfXwmI+zkyrxMnk=
20220811114629_create_users.sql h1:wrm4K8GSucW6uMJX7XfmfoVPhyzz3vN5CnU1mam2Y4c=
The full reference example is available in the GitHub repository.
Verifying and linting migrations
After generating our migration files with Atlas, we can run the atlas migrate lint command, which validates and analyzes the contents of the migration directory and generates insights and diagnostics on the selected changes:
- Ensure the migration history can be replayed from any point in time.
- Protect from unexpected history changes when concurrent migrations are written to the migration directory by multiple team members. Read more about the consistency checks in the section below.
- Detect whether destructive or irreversible changes have been made or whether they are dependent on tables’ contents and can cause a migration failure.
Let’s run atlas migrate lint with the necessary parameters to run migration linting:
- --dev-url a URL to a Dev Database that will be used to replay changes.
- --dir the URL to the migration directory, by default it is file://migrations.
- --dir-format custom directory format, by default it is atlas.
- (optional) --log custom logging using a Go template.
- (optional) --latest run analysis on the latest N migration files.
- (optional) --git-base run analysis against the base Git branch.
Install Atlas:
To install the latest release of Atlas, simply run one of the following commands in your terminal, or check out the Atlas website:
- macOS + Linux
- Homebrew
- Docker
- Windows
curl -sSf https://atlasgo.sh | sh
brew install ariga/tap/atlas
docker pull arigaio/atlas
docker run --rm arigaio/atlas --help
If the container needs access to the host network or a local directory, use the --net=host flag and mount the desired directory:
docker run --rm --net=host \
  -v $(pwd)/migrations:/migrations \
  arigaio/atlas migrate apply \
  --url "mysql://root:pass@:3306/test"
Download the latest release and move the atlas binary to a file location on your system PATH.
Run the atlas migrate lint command:
- MySQL
- MariaDB
- PostgreSQL
- SQLite
atlas migrate lint \
--dev-url="docker://mysql/8/test" \
--dir="file://ent/migrate/migrations" \
--latest=1
atlas migrate lint \
--dev-url="docker://mariadb/latest/test" \
--dir="file://ent/migrate/migrations" \
--latest=1
atlas migrate lint \
--dev-url="docker://postgres/15/test?search_path=public" \
--dir="file://ent/migrate/migrations" \
--latest=1
atlas migrate lint \
--dev-url="sqlite://file?mode=memory" \
--dir="file://ent/migrate/migrations" \
--latest=1
An output of such a run might look as follows:
20221114090322_add_age.sql: data dependent changes detected:
L2: Adding a non-nullable "double" column "age" on table "users" without a default value implicitly sets existing rows with 0
20221114101516_add_name.sql: data dependent changes detected:
L2: Adding a non-nullable "varchar" column "name" on table "users" without a default value implicitly sets existing rows with ""
A Word on Global Unique IDs
This section only applies to MySQL users using the global unique id feature.
When using global unique ids, Ent allocates a range of 1<<32 integer values for each table. This is done by giving the first table an autoincrement starting value of 1, the second one the starting value 4294967296, the third one 8589934592, and so on (the id blocks are illustrated in the small sketch at the end of this section). The order in which the tables receive their starting value is saved in an extra table called ent_types. With MySQL 5.6 and 5.7, the autoincrement starting value is only saved in memory (docs, InnoDB AUTO_INCREMENT Counter Initialization header) and re-calculated on startup by looking at the last inserted id for each table. Therefore, if a table has no rows yet, its autoincrement starting value is reset to 0 after a restart. With the online migration feature this wasn’t an issue, because the migration engine looked at the ent_types table and made sure to update the counter if it wasn’t set correctly. However, with versioned migration, this is no longer the case. In order to ensure that everything is set up correctly after a server restart, make sure to call the VerifyTableRange method on the Atlas struct:
package main
import (
"context"
"log"
"<project>/ent"
"<project>/ent/migrate"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/schema"
_ "github.com/go-sql-driver/mysql"
)
func main() {
drv, err := sql.Open("mysql", "user:pass@tcp(localhost:3306)/ent")
if err != nil {
log.Fatalf("failed opening connection to mysql: %v", err)
}
defer drv.Close()
// Verify the type allocation range.
m, err := schema.NewMigrate(drv, nil)
if err != nil {
log.Fatalf("failed creating migrate: %v", err)
}
if err := m.VerifyTableRange(context.Background(), migrate.Tables); err != nil {
log.Fatalf("failed verifyint range allocations: %v", err)
}
client := ent.NewClient(ent.Driver(drv))
// ... do stuff with the client
}
Important
After an upgrade to MySQL 8 from a previous version, you still have to run the method once to update the starting values. Since MySQL 8, the counter is no longer stored only in memory, so subsequent calls to the method are not needed after the first one.
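To make the id allocation scheme described at the beginning of this section concrete, here is a small standalone sketch (an illustration only, not part of Ent's API) that prints the id block each table receives based on its position in the ent_types table:
package main

import "fmt"

func main() {
	// Each table owns a block of 1<<32 ids, assigned by its position in the
	// ent_types table: block i covers [i<<32, (i+1)<<32). The autoincrement
	// starting value of the very first table is 1 instead of 0.
	for i := int64(0); i < 3; i++ {
		start := i << 32
		if start == 0 {
			start = 1
		}
		fmt.Printf("table #%d: autoincrement starts at %d, ids below %d\n", i, start, (i+1)<<32)
	}
}
Running it prints the starting values 1, 4294967296, and 8589934592 mentioned above.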
Apply Migration Files
Ent recommends using the Atlas CLI to apply the generated migration files to the database. If you want to use another migration management tool, Ent supports generating migrations for several of them out of the box.
- MySQL
- MariaDB
- PostgreSQL
- SQLite
atlas migrate apply \
--dir "file://ent/migrate/migrations" \
--url "mysql://root:pass@localhost:3306/example"
atlas migrate apply \
--dir "file://ent/migrate/migrations" \
--url "maria://root:pass@localhost:3306/example"
atlas migrate apply \
--dir "file://ent/migrate/migrations" \
--url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable"
atlas migrate apply \
--dir "file://ent/migrate/migrations" \
--url "sqlite://file.db?_fk=1"
For more information head over to the Atlas documentation.
info
In previous versions of Ent, golang-migrate/migrate was the default migration execution engine. For an easy transition, Atlas can import migrations in the golang-migrate format for you. You can learn more about it in the Atlas documentation.
Moving from Auto-Migration to Versioned Migrations
If you already have an Ent application in production and want to switch from auto-migration to versioned migrations, you need to take some extra steps.
Create an initial migration file reflecting the currently deployed state
To do this, make sure your schema definition is in sync with your deployed version(s). Then spin up an empty database and run the diff command once as described above. This will create the statements needed to create the current state of your schema graph. If you had universal IDs enabled before, any deployment will have a special database table named ent_types. The above command will create the necessary SQL statements to create that table as well as its contents (similar to the following):
CREATE TABLE `users` (`id` integer NOT NULL PRIMARY KEY AUTOINCREMENT);
CREATE TABLE `groups` (`id` integer NOT NULL PRIMARY KEY AUTOINCREMENT);
INSERT INTO sqlite_sequence (name, seq) VALUES ("groups", 4294967296);
CREATE TABLE `ent_types` (`id` integer NOT NULL PRIMARY KEY AUTOINCREMENT, `type` text NOT NULL);
CREATE UNIQUE INDEX `ent_types_type_key` ON `ent_types` (`type`);
INSERT INTO `ent_types` (`type`) VALUES ('users'), ('groups');
To ensure you do not break existing code, make sure the contents of that file are equal to the contents of the table in the database you created the diff from. For example, consider the migration file from above (users, groups) while your deployed table looks like the one below (groups, users):
| id | type   |
|----|--------|
| 1  | groups |
| 2  | users  |
As you can see, the order differs. In that case, you have to manually swap the entries in the generated migration file so that they match the deployed table.
Use an Atlas Baseline Migration
If you are using Atlas as your migration execution engine, you can simply use the --baseline flag. For other tools, please take a look at their respective documentation.
atlas migrate apply \
  --dir "file://migrations" \
  --url "mysql://root:pass@localhost:3306/ent" \
  --baseline "<version>"
Atlas migration directory integrity file
The Problem
Suppose you have multiple teams developing features in parallel and both of them need a migration. If Team A and Team B do not check in with each other, they might end up with a broken set of migration files (like adding the same table or column twice), since new files do not raise a merge conflict in a version control system like git. The following example demonstrates such behavior:
Assume both Team A and Team B add a new schema called User and generate a versioned migration file on their respective branch.
20220318104614_team_A.sql
-- create "users" table
CREATE TABLE `users` (
`id` bigint NOT NULL AUTO_INCREMENT,
`team_a_col` INTEGER NOT NULL,
PRIMARY KEY (`id`)
) CHARSET utf8mb4 COLLATE utf8mb4_bin;
20220318104615_team_B.sql
-- create "users" table
CREATE TABLE `users` (
`id` bigint NOT NULL AUTO_INCREMENT,
`team_b_col` INTEGER NOT NULL,
PRIMARY KEY (`id`)
) CHARSET utf8mb4 COLLATE utf8mb4_bin;
If they both merge their branches into master, git will not raise a conflict and everything seems fine. But attempting to apply the pending migrations will result in a migration failure:
mysql> CREATE TABLE `users` (`id` bigint NOT NULL AUTO_INCREMENT, `team_a_col` INTEGER NOT NULL, PRIMARY KEY (`id`)) CHARSET utf8mb4 COLLATE utf8mb4_bin;
[2022-04-14 10:00:38] completed in 31 ms
mysql> CREATE TABLE `users` (`id` bigint NOT NULL AUTO_INCREMENT, `team_b_col` INTEGER NOT NULL, PRIMARY KEY (`id`)) CHARSET utf8mb4 COLLATE utf8mb4_bin;
[2022-04-14 10:00:48] [42S01][1050] Table 'users' already exists
Depending on the SQL statements, this can potentially leave your database in a crippled state.
The Solution
Luckily, the Atlas migration engine offers a way to prevent concurrent creation of new migration files and guard against accidental changes in the migration history, called the Migration Directory Integrity File, which is simply another file in your migration directory called atlas.sum. For the migration directory of Team A, it would look similar to this:
h1:KRFsSi68ZOarsQAJZ1mfSiMSkIOZlMq4RzyF//Pwf8A=
20220318104614_team_A.sql h1:EGknG5Y6GQYrc4W8e/r3S61Aqx2p+NmQyVz/2m8ZNwA=
The atlas.sum file contains the checksum of each migration file (implemented by a reverse, one-branch merkle hash tree) and a sum of all files. Adding new files results in a change to the sum file, which will raise merge conflicts in most version control systems. Let’s see how we can use the Migration Directory Integrity File to detect the case from above automatically.
Please note that you need to have the Atlas CLI installed on your system for this to work, so make sure to follow the installation instructions before proceeding.
In previous versions of Ent, the integrity file was opt-in. But we think this is a very important feature that provides great value and safety to migrations. Therefore, generation of the sum file is now the default behavior, and in the future we might even remove the option to disable this feature. For now, if you really want to disable integrity file generation, use the schema.DisableChecksum() option.
In addition to the usual .sql migration files, the migration directory will contain the atlas.sum file. Every time you let Ent generate a new migration file, this file is updated for you. However, any manual change made to the migration directory will render the migration directory and the atlas.sum file out of sync. With the Atlas CLI you can check whether the file and migration directory are in sync, and fix them if not:
# If there is no output, the migration directory is in-sync.
atlas migrate validate --dir file://<path-to-your-migration-directory>
# If the migration directory and sum file are out-of-sync the Atlas CLI will tell you.
atlas migrate validate --dir file://<path-to-your-migration-directory>
Error: checksum mismatch
You have a checksum error in your migration directory.
This happens if you manually create or edit a migration file.
Please check your migration files and run
'atlas migrate hash'
to re-hash the contents and resolve the error.
exit status 1
If you are sure that the contents of your migration files are correct, you can re-compute the hashes in the atlas.sum file:
# Recompute the sum file.
atlas migrate hash --dir file://<path-to-your-migration-directory>
Back to the problem above: if Team A lands their changes on master first and Team B then attempts to land theirs, they’d get a merge conflict on the atlas.sum file.
You can add the atlas migrate validate call to your CI to have the migration directory checked continuously. Even if a team member forgets to update the atlas.sum file after a manual edit, the CI will not go green, indicating a problem.