Transactions
CockroachDB supports bundling multiple SQL statements into a single all-or-nothing transaction. Each transaction guarantees ACID semantics spanning arbitrary tables and rows, even when data is distributed. If a transaction succeeds, all mutations are applied together with virtual simultaneity. If any part of a transaction fails, the entire transaction is aborted, and the database is left unchanged. CockroachDB guarantees that while a transaction is pending, it is isolated from other concurrent transactions with serializable isolation.
Note:
For a detailed discussion of CockroachDB transaction semantics, see How CockroachDB Does Distributed Atomic Transactions and Serializable, Lockless, Distributed: Isolation in CockroachDB. Note that the explanation of the transaction model described in this blog post is slightly out of date. See the Transaction Retries section for more details.
SQL statements
Each of the following SQL statements controls transactions in some way.
Statement | Function |
---|---|
BEGIN | Initiate a transaction, as well as control its priority. |
SET TRANSACTION | Control a transaction's priority. |
SAVEPOINT | Declare the transaction as retryable. This lets you retry the transaction if it doesn't succeed because a higher priority transaction concurrently or recently accessed the same values. |
RELEASE SAVEPOINT | Commit a retryable transaction. |
COMMIT | Commit a non-retryable transaction or clear the connection after committing a retryable transaction. |
ROLLBACK TO SAVEPOINT | Handle retry errors by rolling back a transaction's changes and increasing its priority. |
ROLLBACK | Abort a transaction and roll the database back to its state before the transaction began. |
SHOW | Display the current transaction settings. |
Syntax
In CockroachDB, a transaction is set up by surrounding SQL statements with the BEGIN and COMMIT statements.
To use client-side transaction retries, you should also include the SAVEPOINT, ROLLBACK TO SAVEPOINT, and RELEASE SAVEPOINT statements.
> BEGIN;
> SAVEPOINT cockroach_restart;
<transaction statements>
> RELEASE SAVEPOINT cockroach_restart;
> COMMIT;
At any time before it's committed, you can abort the transaction by executing the ROLLBACK statement.
Clients using transactions must also include logic to handle retries.
Error handling
To handle errors in transactions, you should check for the following types of server-side errors:
Type | Description |
---|---|
Retry Errors | Errors with the code 40001 or string retry transaction, which indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the client. See client-side transaction retries for more details. |
Ambiguous Errors | Errors with the code 40003 that are returned in response to RELEASE SAVEPOINT (or COMMIT when not using SAVEPOINT), which indicate that the state of the transaction is ambiguous, i.e., you cannot assume it either committed or failed. How you handle these errors depends on how you want to resolve the ambiguity. See here for more about this kind of error. |
SQL Errors | All other errors, which indicate that a statement in the transaction failed. For example, violating the UNIQUE constraint generates a 23505 error. After encountering these errors, you can either issue a COMMIT or ROLLBACK to abort the transaction and revert the database to its state before the transaction began. If you want to attempt the same set of statements again, you must begin a completely new transaction. |
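As a concrete illustration of checking these codes, here is a minimal sketch in Go, assuming the lib/pq driver; the classify helper, table name, and connection string are illustrative assumptions rather than part of CockroachDB's API:
package main

import (
    "database/sql"
    "errors"
    "fmt"
    "log"

    "github.com/lib/pq"
)

// classify maps a server-side error to one of the three categories above.
func classify(err error) string {
    var pqErr *pq.Error
    if errors.As(err, &pqErr) {
        switch pqErr.Code {
        case "40001": // retry error: safe to retry the whole transaction
            return "retry"
        case "40003": // ambiguous result: the commit may or may not have been applied
            return "ambiguous"
        }
    }
    return "sql" // any other error: ROLLBACK and, if desired, start a new transaction
}

func main() {
    // Placeholder connection string.
    db, err := sql.Open("postgres", "postgresql://root@localhost:26257/bank?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    if _, err := db.Exec("INSERT INTO customers (id) VALUES (1)"); err != nil {
        fmt.Println("error class:", classify(err))
    }
}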
Transaction contention
Transactions in CockroachDB lock data resources that are written during their execution. When a pending write from one transaction conflicts with a write of a concurrent transaction, the concurrent transaction must wait for the earlier transaction to complete before proceeding. When a dependency cycle is detected between transactions, the transaction with the higher priority aborts the dependent transaction to avoid deadlock, and the aborted transaction must be retried.
For more details about transaction contention and best practices for avoiding contention, see Understanding and Avoiding Transaction Contention.
Transaction retries
Transactions may require retries if they experience deadlock or read/write contention with other concurrent transactions which cannot be resolved without allowing potential serializable anomalies. (However, it's possible to mitigate read-write conflicts by performing reads using AS OF SYSTEM TIME.)
There are two cases in which transaction retries occur:
- Automatic retries, which CockroachDB processes for you.
- Client-side intervention, which your application must handle.
Automatic retries
CockroachDB automatically retries individual statements (implicit transactions) and transactions sent from the client as a single batch, as long as the size of the results being produced for the client, including protocol overhead, is less than 16KiB by default. Once that buffer overflows, CockroachDB starts streaming results back to the client, at which point automatic retries cannot be performed any more. As long as the results of a single statement or batch of statements are known to stay clear of this limit, the client does not need to worry about transaction retries.
Tip:
You can change the results buffer size for all new sessions using the sql.defaults.results.buffer.size cluster setting, or for a specific session using the results_buffer_size session variable. Note, however, that decreasing the buffer size can increase the number of transaction retry errors a client receives, whereas increasing the buffer size can increase the delay until the client receives the first result row.
In future versions of CockroachDB, we plan on providing stronger guarantees for read-only queries that return at most one row, regardless of the size of that row.
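For reference, here is a hedged Go sketch of adjusting the cluster-wide default named in the tip above; the connection string is a placeholder, the value is arbitrary, and whether you use the cluster setting or the per-session variable depends on your deployment and version:
package main

import (
    "database/sql"
    "log"

    _ "github.com/lib/pq"
)

func main() {
    // Placeholder connection string; requires a user allowed to change cluster settings.
    db, err := sql.Open("postgres", "postgresql://root@localhost:26257/defaultdb?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Raise the default results buffer size for all new sessions.
    // (32 KiB here is an arbitrary example value.)
    if _, err := db.Exec("SET CLUSTER SETTING sql.defaults.results.buffer.size = '32 KiB'"); err != nil {
        log.Fatal(err)
    }
}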
Individual statements
Individual statements are treated as implicit transactions, and so they fall under the rules described above. If the results are small enough, they will be automatically retried. In particular, INSERT/UPDATE/DELETE statements without a RETURNING clause are guaranteed to have minuscule result sizes. For example, the following statement would be automatically retried by CockroachDB:
> DELETE FROM customers WHERE id = 1;
Batched statements
Transactions can be sent from the client as a single batch. Batching implies that CockroachDB receives multiple statements without being asked to return results in between them; instead, CockroachDB returns results after executing all of the statements, except when the accumulated results overflow the buffer mentioned above, in which case they are returned sooner and automatic retries can no longer be performed.
Batching is generally controlled by your driver or client's behavior. Technically, it can be achieved in two ways, both supporting automatic retries:
- When the client/driver is using the PostgreSQL Extended Query protocol, a batch is made up of all queries sent in between two Sync messages. Many drivers support such batches through explicit batching constructs (a sketch using one such construct appears after the Go example below). Auto-retrying of such batches is supported as of CockroachDB v2.0.
- When the client/driver is using the PostgreSQL Simple Query protocol, a batch is made up of semicolon-separated strings sent as a unit to CockroachDB. For example, in Go, this code would send a single batch (which would be automatically retried):
db.Exec(
  `BEGIN;
  DELETE FROM customers WHERE id = 1;
  DELETE FROM orders WHERE customer = 1;
  COMMIT;`,
)
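For the Extended Query protocol case, the explicit batching construct depends on the driver. The following is a hedged sketch, assuming the pgx v4 driver (its Batch type queues statements and sends them to the server as a single unit); the connection string and table names are placeholders:
package main

import (
    "context"
    "log"

    "github.com/jackc/pgx/v4"
)

func main() {
    ctx := context.Background()
    // Placeholder connection string.
    conn, err := pgx.Connect(ctx, "postgresql://root@localhost:26257/bank?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close(ctx)

    // Queue both statements and send them as one batch; the server sees them
    // together, so automatic retries remain possible.
    batch := &pgx.Batch{}
    batch.Queue("DELETE FROM customers WHERE id = $1", 1)
    batch.Queue("DELETE FROM orders WHERE customer = $1", 1)

    br := conn.SendBatch(ctx, batch)
    if err := br.Close(); err != nil {
        log.Fatal(err)
    }
}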
Note:
Within a batch of statements, CockroachDB infers that the statements are not conditional on the results of previous statements, so it can retry all of them. Of course, if the transaction relies on conditional logic (e.g., statement 2 is executed only for some results of statement 1), then the transaction cannot be all sent to CockroachDB as a single batch. In these common cases, CockroachDB cannot retry, say, statement 2 in isolation. Since results for statement 1 have already been delivered to the client by the time statement 2 is forcing the transaction to retry, the client needs to be involved in retrying the whole transaction and so you should write your transactions to use client-side intervention.
Client-side intervention
Your application should include client-side retry handling when the statements are sent individually, such as:
> BEGIN;
> UPDATE products SET inventory = 0 WHERE sku = '8675309';
> INSERT INTO orders (customer, status) VALUES (1, 'new');
> COMMIT;
To indicate that a transaction must be retried, CockroachDB signals an error with the code 40001 and an error message that begins with the string "retry transaction".
To handle these types of errors you have two options:
1. Strongly recommended: Use the SAVEPOINT statement to create retryable transactions. Retryable transactions can improve performance because their priority is increased each time they are retried, making them more likely to succeed the longer they're in your system. For instructions showing how to do this, see the Client-Side Transaction Retries section.
2. Abort the transaction using the ROLLBACK statement, and then reissue all of the statements in the transaction. This does not automatically increase the transaction's priority as with option #1, so it's possible in high-contention workloads for transactions to take an incredibly long time to succeed.
Client-side transaction retries
Overview
To improve the performance of transactions that fail due to contention, CockroachDB includes a set of statements (listed below) that let you retry those transactions. Retrying transactions using these statements has the following benefits:
- Transactions increase their priority each time they're retried, increasing the likelihood they will succeed.
- Retried transactions are more likely to read the freshest data. Because they are issued at a later timestamp, the transaction operates on a later snapshot of the database; therefore, the reads may return more recently updated data.
Implementing client-side retries requires using the following statements:
- SAVEPOINT declares the client's intent to retry the transaction if there are contention errors. It must be executed after BEGIN but before the first statement that manipulates a database.
- ROLLBACK TO SAVEPOINT is used when your application detects 40001 / "retry transaction" errors. It provides you a chance to "retry" the transaction by rolling the database's state back to the beginning of the transaction and increasing the transaction's priority. After issuing a ROLLBACK, you must issue any statements you want the transaction to contain. Typically, this means recalculating values and reissuing a similar set of statements to the previous attempt.
- RELEASE SAVEPOINT commits the transaction. At this point, CockroachDB checks to see if the transaction contends with others for access to the same values; the highest priority transaction succeeds, and the others return 40001 / "retry transaction" errors and must be retried. Finally, you must execute COMMIT after RELEASE SAVEPOINT to clear the connection for the next transaction.
For examples showing how to use these statements, see the following:
- The Syntax section of this page.
- Many of our Build an App with CockroachDB tutorials show code samples for issuing retries. For an example showing how to implement the retry logic, see the Java/JDBC tutorial.
Client library support
If you're building an application in the following languages, we have packages to make client-side retries simpler:
- Go developers can use the crdb package of the CockroachDB Go client. For more information, see Build a Go App with CockroachDB.
- Python developers can use the sqlalchemy package. For more information, see Build a Python App with CockroachDB.
- Java developers accessing the database with JDBC can re-use the example code implementing the retry logic shown in Build a Java app with CockroachDB.
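For Go applications, the following is a hedged sketch of using the crdb package mentioned above; the import path and ExecuteTx signature reflect the cockroach-go repository, and the table names and connection string are placeholders:
package main

import (
    "context"
    "database/sql"
    "log"

    "github.com/cockroachdb/cockroach-go/crdb"
    _ "github.com/lib/pq"
)

func main() {
    // Placeholder connection string.
    db, err := sql.Open("postgres", "postgresql://root@localhost:26257/bank?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // ExecuteTx wraps the SAVEPOINT cockroach_restart protocol and re-runs the
    // closure whenever the server returns a 40001 retry error.
    err = crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error {
        if _, err := tx.Exec("UPDATE products SET inventory = 0 WHERE sku = '8675309'"); err != nil {
            return err
        }
        _, err := tx.Exec("INSERT INTO orders (customer, status) VALUES (1, 'new')")
        return err
    })
    if err != nil {
        log.Fatal(err)
    }
}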
How transaction retries work
For greater detail, here's the process a retryable transaction goes through.
1. The transaction starts with the BEGIN statement.
2. The SAVEPOINT statement declares the intention to retry the transaction in the case of contention errors. Note that CockroachDB's savepoint implementation does not support all savepoint functionality, such as nested transactions.
3. The statements in the transaction are executed.
4. If a statement returns a retry error (identified via the 40001 error code or "retry transaction" string at the start of the error message), you can issue the ROLLBACK TO SAVEPOINT statement to restart the transaction. Alternately, the original SAVEPOINT statement can be reissued to restart the transaction.
   You must now issue the statements in the transaction again.
   In cases where you do not want the application to retry the transaction, you can simply issue ROLLBACK at this point. Any other statements will be rejected by the server, as is generally the case after an error has been encountered and the transaction has not been closed.
5. Once the transaction executes all statements without encountering contention errors, execute RELEASE SAVEPOINT to commit the changes. If this succeeds, all changes made by the transaction become visible to subsequent transactions and are guaranteed to be durable if a crash occurs.
   In some cases, the RELEASE SAVEPOINT statement itself can fail with a retry error, mainly because transactions in CockroachDB only realize that they need to be restarted when they attempt to commit. If this happens, the retry error is handled as described in step 4.
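The following is a minimal Go sketch of this protocol, assuming the lib/pq driver; the table names, statements, and connection string are placeholders:
package main

import (
    "database/sql"
    "errors"
    "log"

    "github.com/lib/pq"
)

func updateInventory(db *sql.DB) error {
    tx, err := db.Begin() // step 1: BEGIN
    if err != nil {
        return err
    }
    defer tx.Rollback() // no-op if the transaction was committed

    if _, err := tx.Exec("SAVEPOINT cockroach_restart"); err != nil { // step 2
        return err
    }
    for {
        // Step 3: run the transaction's statements.
        _, err := tx.Exec("UPDATE products SET inventory = 0 WHERE sku = '8675309'")
        if err == nil {
            _, err = tx.Exec("INSERT INTO orders (customer, status) VALUES (1, 'new')")
        }
        if err == nil {
            // Step 5: RELEASE SAVEPOINT commits; it can itself return a retry error.
            if _, err = tx.Exec("RELEASE SAVEPOINT cockroach_restart"); err == nil {
                return tx.Commit()
            }
        }
        // Step 4: on a 40001 retry error, roll back to the savepoint and try again.
        var pqErr *pq.Error
        if errors.As(err, &pqErr) && pqErr.Code == "40001" {
            if _, err := tx.Exec("ROLLBACK TO SAVEPOINT cockroach_restart"); err != nil {
                return err
            }
            continue
        }
        return err // any other error: the deferred ROLLBACK aborts the transaction
    }
}

func main() {
    // Placeholder connection string.
    db, err := sql.Open("postgres", "postgresql://root@localhost:26257/bank?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    if err := updateInventory(db); err != nil {
        log.Fatal(err)
    }
}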
Customizing the savepoint name
New in v19.1: Set the force_savepoint_restart session variable to true to enable using a custom name for the restart savepoint (for example, because you are using an ORM that wants to use its own names for savepoints).
Once this variable is set, the SAVEPOINT statement will accept any name for the savepoint, not just cockroach_restart. This allows compatibility with existing code that uses a single savepoint per transaction as long as that savepoint occurs before any statements that access data stored in non-virtual tables.
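For example, the following hedged Go sketch sets the variable and then uses a custom savepoint name; the SET statement form, the savepoint name, and the connection string are assumptions for illustration:
package main

import (
    "database/sql"
    "log"

    _ "github.com/lib/pq"
)

func main() {
    // Placeholder connection string.
    db, err := sql.Open("postgres", "postgresql://root@localhost:26257/bank?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    // Pin the pool to one connection so the SET below applies to the same
    // session that runs the transaction.
    db.SetMaxOpenConns(1)

    if _, err := db.Exec("SET force_savepoint_restart = true"); err != nil {
        log.Fatal(err)
    }

    tx, err := db.Begin()
    if err != nil {
        log.Fatal(err)
    }
    // With the variable set, any savepoint name is accepted, not just
    // cockroach_restart; "orm_restart_point" is an arbitrary example.
    for _, stmt := range []string{
        "SAVEPOINT orm_restart_point",
        "UPDATE products SET inventory = 0 WHERE sku = '8675309'",
        "RELEASE SAVEPOINT orm_restart_point",
    } {
        if _, err := tx.Exec(stmt); err != nil {
            tx.Rollback()
            log.Fatal(err)
        }
    }
    if err := tx.Commit(); err != nil {
        log.Fatal(err)
    }
}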
Warning:
The force_savepoint_restart variable changes the semantics of CockroachDB savepoints so that RELEASE SAVEPOINT <your-custom-name> functions as a real commit. Note that the existence of this variable and its behavior does not change the fact that CockroachDB savepoints can only be used as a part of the transaction retry protocol.
Transaction priorities
Every transaction in CockroachDB is assigned an initial priority. By default, that priority is NORMAL, but for transactions that should be given preference in high-contention scenarios, the client can set the priority within the BEGIN statement:
> BEGIN PRIORITY <LOW | NORMAL | HIGH>;
Alternately, the client can set the priority immediately after the transaction is started as follows:
> SET TRANSACTION PRIORITY <LOW | NORMAL | HIGH>;
The client can also display the current priority of the transaction with SHOW TRANSACTION PRIORITY.
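For example, the following hedged Go sketch raises a transaction's priority immediately after it starts; the statements and connection string are placeholders:
package main

import (
    "context"
    "database/sql"
    "log"

    _ "github.com/lib/pq"
)

func main() {
    // Placeholder connection string.
    db, err := sql.Open("postgres", "postgresql://root@localhost:26257/bank?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    tx, err := db.BeginTx(context.Background(), nil) // issues BEGIN
    if err != nil {
        log.Fatal(err)
    }
    // Raise the priority before running any other statements in the transaction.
    if _, err := tx.Exec("SET TRANSACTION PRIORITY HIGH"); err != nil {
        tx.Rollback()
        log.Fatal(err)
    }
    if _, err := tx.Exec("UPDATE products SET inventory = 0 WHERE sku = '8675309'"); err != nil {
        tx.Rollback()
        log.Fatal(err)
    }
    if err := tx.Commit(); err != nil {
        log.Fatal(err)
    }
}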
Note:
When two transactions contend for the same resources indirectly, they may create a dependency cycle leading to a deadlock situation, where both transactions are waiting on the other to finish. In these cases, CockroachDB allows the transaction with higher priority to abort the other, which must then retry. On retry, the transaction inherits the higher priority. This means that each retry makes a transaction more likely to succeed in the event it again experiences deadlock.
Isolation levels
CockroachDB executes all transactions at the strongest ANSI transaction isolation level: SERIALIZABLE. All other ANSI transaction isolation levels (e.g., SNAPSHOT, READ UNCOMMITTED, READ COMMITTED, and REPEATABLE READ) are automatically upgraded to SERIALIZABLE. Weaker isolation levels have historically been used to maximize transaction throughput. However, recent research has demonstrated that the use of weak isolation levels results in substantial vulnerability to concurrency-based attacks.
Note:
For a detailed discussion of isolation in CockroachDB transactions, see Serializable, Lockless, Distributed: Isolation in CockroachDB.
Serializable isolation
With SERIALIZABLE isolation, a transaction behaves as though it has the entire database all to itself for the duration of its execution. This means that no concurrent writers can affect the transaction unless they commit before it starts, and no concurrent readers can be affected by the transaction until it has successfully committed. This is the strongest level of isolation provided by CockroachDB and it's the default.
SERIALIZABLE isolation permits no anomalies. To prevent write skew anomalies, SERIALIZABLE isolation may require transaction restarts. For a demonstration of SERIALIZABLE preventing write skew, see Serializable Transactions.
Comparison to ANSI SQL isolation levels
CockroachDB uses slightly different isolation levels than ANSI SQL isolation levels.
Aliases
SNAPSHOT, READ UNCOMMITTED, READ COMMITTED, and REPEATABLE READ are aliases for SERIALIZABLE.
Comparison
The CockroachDB SERIALIZABLE level is stronger than the ANSI SQL READ UNCOMMITTED, READ COMMITTED, and REPEATABLE READ levels and equivalent to the ANSI SQL SERIALIZABLE level.
For more information about the relationship between these levels, see this paper.