Using Database Triggers in Ent Schema

Triggers are useful tools in relational databases that allow you to execute custom code when specific events occur on a table. For instance, a trigger can automatically populate an audit log table whenever a mutation is applied to another table. This ensures that all changes, including those made by other applications, are recorded, enforcing the audit policy at the database level and reducing the need for extra code in the applications.

This guide explains how to attach triggers to your Ent types (objects) and configure the schema migration to manage both the triggers and the Ent schema as a single migration unit using Atlas.

Atlas Pro Feature

The trigger support used in this guide is an Atlas Pro feature. To use it, run:

```shell
atlas login
```

Install Atlas

To install the latest release of Atlas, simply run one of the following commands in your terminal, or check out the Atlas website:

  • macOS + Linux

```shell
curl -sSf https://atlasgo.sh | sh
```

  • Homebrew

```shell
brew install ariga/tap/atlas
```

  • Docker

```shell
docker pull arigaio/atlas
docker run --rm arigaio/atlas --help
```

If the container needs access to the host network or a local directory, use the --net=host flag and mount the desired directory:

```shell
docker run --rm --net=host \
  -v $(pwd)/migrations:/migrations \
  arigaio/atlas migrate apply \
  --url "mysql://root:pass@:3306/test"
```

  • Windows

Download the latest release and move the atlas binary to a file location on your system PATH.

Login to Atlas

```shell
$ atlas login a8m
You are now connected to "a8m" on Atlas Cloud.
```

Composite Schema

An ent/schema package is mostly used for defining Ent types (objects), their fields, edges, and logic. Table triggers, like other database-native objects, have no representation in Ent models. However, a trigger function can be defined once and reused by multiple triggers on different tables.

In order to extend our PostgreSQL schema to include both our Ent types and their triggers, we configure Atlas to read the state of the schema from a Composite Schema data source. Follow the steps below to configure this for your project:

  1. Let’s define a simple schema with two types (tables): users and user_audit_logs:

ent/schema/user.go

```go
package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/field"
)

// User holds the schema definition for the User entity.
type User struct {
	ent.Schema
}

// Fields of the User.
func (User) Fields() []ent.Field {
	return []ent.Field{
		field.String("name"),
	}
}

// UserAuditLog holds the schema definition for the UserAuditLog entity.
type UserAuditLog struct {
	ent.Schema
}

// Fields of the UserAuditLog.
func (UserAuditLog) Fields() []ent.Field {
	return []ent.Field{
		field.String("operation_type"),
		field.String("operation_time"),
		field.String("old_value").
			Optional(),
		field.String("new_value").
			Optional(),
	}
}
```

Now, suppose we want to log every change to the users table and save it in the user_audit_logs table. To achieve this, we need to create a trigger function on INSERT, UPDATE and DELETE operations and attach it to the users table.

  2. Next, we define a trigger function (audit_users_changes) and attach it to the users table using the CREATE TRIGGER commands:

schema.sql

```sql
-- Function to audit changes in the users table.
CREATE OR REPLACE FUNCTION audit_users_changes()
RETURNS TRIGGER AS $$
BEGIN
    IF (TG_OP = 'INSERT') THEN
        INSERT INTO user_audit_logs(operation_type, operation_time, new_value)
        VALUES (TG_OP, CURRENT_TIMESTAMP, row_to_json(NEW));
        RETURN NEW;
    ELSIF (TG_OP = 'UPDATE') THEN
        INSERT INTO user_audit_logs(operation_type, operation_time, old_value, new_value)
        VALUES (TG_OP, CURRENT_TIMESTAMP, row_to_json(OLD), row_to_json(NEW));
        RETURN NEW;
    ELSIF (TG_OP = 'DELETE') THEN
        INSERT INTO user_audit_logs(operation_type, operation_time, old_value)
        VALUES (TG_OP, CURRENT_TIMESTAMP, row_to_json(OLD));
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- Trigger for INSERT operations.
CREATE TRIGGER users_insert_audit AFTER INSERT ON users FOR EACH ROW EXECUTE FUNCTION audit_users_changes();

-- Trigger for UPDATE operations.
CREATE TRIGGER users_update_audit AFTER UPDATE ON users FOR EACH ROW EXECUTE FUNCTION audit_users_changes();

-- Trigger for DELETE operations.
CREATE TRIGGER users_delete_audit AFTER DELETE ON users FOR EACH ROW EXECUTE FUNCTION audit_users_changes();
```
  3. Lastly, we create a simple atlas.hcl config file with a composite_schema that includes both our Ent schema and the custom triggers defined in schema.sql:

atlas.hcl

```hcl
data "composite_schema" "app" {
  # Load the ent schema first with all tables.
  schema "public" {
    url = "ent://ent/schema"
  }
  # Then, load the triggers schema.
  schema "public" {
    url = "file://schema.sql"
  }
}

env "local" {
  src = data.composite_schema.app.url
  dev = "docker://postgres/15/dev?search_path=public"
}
```

Usage

After setting up our composite schema, we can get its representation using the atlas schema inspect command, generate schema migrations for it, apply them to a database, and more. Below are a few commands to get you started with Atlas:

Inspect the Schema

The atlas schema inspect command is commonly used to inspect databases. However, we can also use it to inspect our composite_schema and print the SQL representation of it:

```shell
atlas schema inspect \
  --env local \
  --url env://src \
  --format '{{ sql . }}'
```

The command above prints the following SQL. Note that the audit_users_changes function and the triggers are defined after the users and user_audit_logs tables:

```sql
-- Create "user_audit_logs" table
CREATE TABLE "user_audit_logs" ("id" bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY, "operation_type" character varying NOT NULL, "operation_time" character varying NOT NULL, "old_value" character varying NULL, "new_value" character varying NULL, PRIMARY KEY ("id"));
-- Create "users" table
CREATE TABLE "users" ("id" bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY, "name" character varying NOT NULL, PRIMARY KEY ("id"));
-- Create "audit_users_changes" function
CREATE FUNCTION "audit_users_changes" () RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    IF (TG_OP = 'INSERT') THEN
        INSERT INTO user_audit_logs(operation_type, operation_time, new_value)
        VALUES (TG_OP, CURRENT_TIMESTAMP, row_to_json(NEW));
        RETURN NEW;
    ELSIF (TG_OP = 'UPDATE') THEN
        INSERT INTO user_audit_logs(operation_type, operation_time, old_value, new_value)
        VALUES (TG_OP, CURRENT_TIMESTAMP, row_to_json(OLD), row_to_json(NEW));
        RETURN NEW;
    ELSIF (TG_OP = 'DELETE') THEN
        INSERT INTO user_audit_logs(operation_type, operation_time, old_value)
        VALUES (TG_OP, CURRENT_TIMESTAMP, row_to_json(OLD));
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$$;
-- Create trigger "users_delete_audit"
CREATE TRIGGER "users_delete_audit" AFTER DELETE ON "users" FOR EACH ROW EXECUTE FUNCTION "audit_users_changes"();
-- Create trigger "users_insert_audit"
CREATE TRIGGER "users_insert_audit" AFTER INSERT ON "users" FOR EACH ROW EXECUTE FUNCTION "audit_users_changes"();
-- Create trigger "users_update_audit"
CREATE TRIGGER "users_update_audit" AFTER UPDATE ON "users" FOR EACH ROW EXECUTE FUNCTION "audit_users_changes"();
```

Generate Migrations For the Schema

To generate a migration for the schema, run the following command:

```shell
atlas migrate diff \
  --env local
```

Note that a new migration file is created with the following content:

migrations/20240712090543.sql

```sql
-- Create "user_audit_logs" table
CREATE TABLE "user_audit_logs" ("id" bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY, "operation_type" character varying NOT NULL, "operation_time" character varying NOT NULL, "old_value" character varying NULL, "new_value" character varying NULL, PRIMARY KEY ("id"));
-- Create "users" table
CREATE TABLE "users" ("id" bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY, "name" character varying NOT NULL, PRIMARY KEY ("id"));
-- Create "audit_users_changes" function
CREATE FUNCTION "audit_users_changes" () RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    IF (TG_OP = 'INSERT') THEN
        INSERT INTO user_audit_logs(operation_type, operation_time, new_value)
        VALUES (TG_OP, CURRENT_TIMESTAMP, row_to_json(NEW));
        RETURN NEW;
    ELSIF (TG_OP = 'UPDATE') THEN
        INSERT INTO user_audit_logs(operation_type, operation_time, old_value, new_value)
        VALUES (TG_OP, CURRENT_TIMESTAMP, row_to_json(OLD), row_to_json(NEW));
        RETURN NEW;
    ELSIF (TG_OP = 'DELETE') THEN
        INSERT INTO user_audit_logs(operation_type, operation_time, old_value)
        VALUES (TG_OP, CURRENT_TIMESTAMP, row_to_json(OLD));
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$$;
-- Create trigger "users_delete_audit"
CREATE TRIGGER "users_delete_audit" AFTER DELETE ON "users" FOR EACH ROW EXECUTE FUNCTION "audit_users_changes"();
-- Create trigger "users_insert_audit"
CREATE TRIGGER "users_insert_audit" AFTER INSERT ON "users" FOR EACH ROW EXECUTE FUNCTION "audit_users_changes"();
-- Create trigger "users_update_audit"
CREATE TRIGGER "users_update_audit" AFTER UPDATE ON "users" FOR EACH ROW EXECUTE FUNCTION "audit_users_changes"();
```

Apply the Migrations

To apply the migration generated above to a database, run the following command:

```shell
atlas migrate apply \
  --env local \
  --url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable"
```

Apply the Schema Directly on the Database

Sometimes, there is a need to apply the schema directly to the database without generating a migration file. For example, when experimenting with schema changes, spinning up a database for testing, etc. In such cases, you can use the command below to apply the schema directly to the database:

```shell
atlas schema apply \
  --env local \
  --url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable"
```

Or, using the Atlas Go SDK:

```go
package main

import (
	"context"
	"log"

	"ariga.io/atlas-go-sdk/atlasexec"
)

func main() {
	ctx := context.Background()
	ac, err := atlasexec.NewClient(".", "atlas")
	if err != nil {
		log.Fatalf("failed to initialize client: %v", err)
	}
	// Automatically update the database with the desired schema.
	// Another option is to use 'migrate apply' or 'schema apply' manually.
	if _, err := ac.SchemaApply(ctx, &atlasexec.SchemaApplyParams{
		Env: "local",
		URL: "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable",
	}); err != nil {
		log.Fatalf("failed to apply schema changes: %v", err)
	}
}
```

The code for this guide can be found on GitHub.