CREATE CHANGEFEED
The CREATE CHANGEFEED statement creates a new changefeed, which provides row-level change subscriptions. Changefeeds target a whitelist of tables, called the "watched rows." Every change to a watched row is emitted as a record in a configurable format (JSON) to a configurable sink (Kafka or a cloud storage sink).
For more information, see Change Data Capture.
Note:
CREATE CHANGEFEED is an enterprise-only feature. For the core version, see EXPERIMENTAL CHANGEFEED FOR.
Required privileges
Changefeeds can only be created by superusers, i.e., members of the admin role. The admin role exists by default with root as the member.
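For example, a user who needs to create changefeeds can be added to the admin role by an existing admin; the user name below is only illustrative:
> GRANT admin TO max;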
Synopsis
Parameters
Parameter | Description |
---|---|
table_name | The name of the table (or tables in a comma-separated list) to create a changefeed for. |
sink | The location of the configurable sink. The scheme of the URI indicates the type. For more information, see Sink URI below. |
option / value | For a list of available options and their values, see Options below. |
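For example, a single changefeed can watch several tables by listing them after FOR TABLE; the table and sink names below are placeholders:
> CREATE CHANGEFEED FOR TABLE office_dogs, employees INTO 'kafka://host:port';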
### Sink URI
The sink URI follows the basic format of:
'[scheme]://[host]:[port]?[query_parameters]'
The scheme can be kafka or any cloud storage sink.
#### Kafka
Example of a Kafka sink URI:
'kafka://broker.address.com:9092?topic_prefix=bar&tls_enabled=true&ca_cert=LS0tLS1CRUdJTiBDRVJUSUZ&sasl_enabled=true&sasl_user=petee&sasl_password=bones'
Query parameters include:
Parameter | Value | Description |
---|---|---|
topic_prefix | STRING | Adds a prefix to all topic names. For example, CREATE CHANGEFEED FOR TABLE foo INTO 'kafka://…?topic_prefix=bar' would emit rows under the topic bar_foo instead of foo . |
tls_enabled=true | BOOL | If true , enable Transport Layer Security (TLS) on the connection to Kafka. This can be used with a ca_cert (see below). |
ca_cert | STRING | The base64-encoded ca_cert file. Note: To encode your ca.cert, run base64 -w 0 ca.cert. |
sasl_enabled | BOOL | If true , use SASL/PLAIN to authenticate. This requires a sasl_user and sasl_password (see below). |
sasl_user | STRING | Your SASL username. |
sasl_password | STRING | Your SASL password. |
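As a sketch, these parameters are combined into a single Kafka sink URI; the broker address, certificate, and SASL credentials below are placeholders:
> CREATE CHANGEFEED FOR TABLE office_dogs INTO 'kafka://broker.address.com:9092?tls_enabled=true&ca_cert=LS0tLS1CRUdJTiBDRVJUSUZ&sasl_enabled=true&sasl_user=petee&sasl_password=bones' WITH updated;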
Cloud storage sink
New in v19.1: Use a cloud storage sink to deliver changefeed data to OLAP or big data systems without requiring transport via Kafka.
Note:
Currently, cloud storage sinks only work with JSON and emit newline-delimited JSON files.
Example of a cloud storage sink (e.g., AWS S3) URI:
'experimental-s3://test-s3encryption/test?AWS_ACCESS_KEY_ID=ABCDEFGHIJKLMNOPQ&AWS_SECRET_ACCESS_KEY=LS0tLS1CRUdJTiBDRVJUSUZ'
Note:
The scheme for a cloud storage sink should be prepended with experimental-.
Any of the cloud storages below can be used as a sink:
[scheme]://[host]/[path]?[parameters]
Location | Scheme | Host | Parameters |
---|---|---|---|
Amazon S3 | s3 | Bucket name | AWS_ACCESS_KEY_ID , AWS_SECRET_ACCESS_KEY , AWS_SESSION_TOKEN |
Azure | azure | N/A (see Example file URLs) | AZURE_ACCOUNT_KEY , AZURE_ACCOUNT_NAME |
Google Cloud 1 | gs | Bucket name | AUTH (optional; can be default , implicit , or specified ), CREDENTIALS |
HTTP 2 | http | Remote host | N/A |
NFS/Local 3 | nodelocal | Empty or nodeID 4 (see Example file URLs) | N/A |
S3-compatible services 5 | s3 | Bucket name | AWS_ACCESS_KEY_ID , AWS_SECRET_ACCESS_KEY , AWS_SESSION_TOKEN , AWS_REGION 6 (optional), AWS_ENDPOINT |
Note:
The location parameters often contain special characters that need to be URI-encoded. Use Javascript's encodeURIComponent function or Go language's url.QueryEscape function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters.
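For example, if a (hypothetical) AWS secret access key contains characters such as / or +, the URI-encoded form must appear in the sink URI (secret/with+chars becomes secret%2Fwith%2Bchars):
> CREATE CHANGEFEED FOR TABLE office_dogs INTO 'experimental-s3://acme-co/changefeed?AWS_ACCESS_KEY_ID=ABCDEFGHIJKLMNOPQ&AWS_SECRET_ACCESS_KEY=secret%2Fwith%2Bchars' WITH updated;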
Note:
If your environment requires an HTTP or HTTPS proxy server for outgoing connections, you can set the standard HTTP_PROXY and HTTPS_PROXY environment variables when starting CockroachDB.
1 If the AUTH parameter is not specified, the cloudstorage.gs.default.key cluster setting will be used if it is non-empty, otherwise the implicit behavior is used. If the AUTH parameter is implicit, all GCS connections use Google's default authentication strategy. If the AUTH parameter is default, the cloudstorage.gs.default.key cluster setting must be set to the contents of a service account file, which will be used during authentication. New in v19.1: If the AUTH parameter is specified, GCS connections are authenticated on a per-statement basis, which allows the JSON key object to be sent in the CREDENTIALS parameter. The JSON key object should be base64-encoded (using the standard encoding in RFC 4648).
2 You can create your own HTTP server with Caddy or nginx. A custom root CA can be appended to the system's default CAs by setting the cloudstorage.http.custom_ca cluster setting, which will be used when verifying certificates from HTTPS URLs.
3 The file system backup location on the NFS drive is relative to the path specified by the --external-io-dir flag set while starting the node. If the flag is set to disabled, then imports from local directories and NFS drives are disabled.
4 New in v19.1: The host component of NFS/Local can either be empty or the nodeID. If the nodeID is specified, it is currently ignored (i.e., any node can be sent work and it will look in its local input/output directory); however, the nodeID will likely be required in the future.
5 A custom root CA can be appended to the system's default CAs by setting the cloudstorage.http.custom_ca cluster setting, which will be used when verifying certificates from an S3-compatible service.
6 The AWS_REGION parameter is optional since it is not a required parameter for most S3-compatible services. Specify the parameter only if your S3-compatible service requires it.
Example file URLs
Location | Example |
---|---|
Amazon S3 | s3://acme-co/employees.sql?AWS_ACCESS_KEY_ID=123&AWS_SECRET_ACCESS_KEY=456 |
Azure | azure://employees.sql?AZURE_ACCOUNT_KEY=123&AZURE_ACCOUNT_NAME=acme-co |
Google Cloud | gs://acme-co/employees.sql |
HTTP | http://localhost:8080/employees.sql |
NFS/Local | nodelocal:///employees.sql , nodelocal://2/employees.sql |
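To use one of these locations as a changefeed sink, prepend experimental- to the scheme; for example, with a placeholder bucket and placeholder credentials:
> CREATE CHANGEFEED FOR TABLE employees INTO 'experimental-s3://acme-co/employees?AWS_ACCESS_KEY_ID=123&AWS_SECRET_ACCESS_KEY=456' WITH updated;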
Options
Option | Value | Description |
---|---|---|
updated | N/A | Include updated timestamps with each row. If a cursor is provided, the "updated" timestamps will match the MVCC timestamps of the emitted rows, and there is no initial scan. If a cursor is not provided, the changefeed will perform an initial scan (as of the time the changefeed was created), and the "updated" timestamp for each change record emitted in the initial scan will be the timestamp of the initial scan. Similarly, when a backfill is performed for a schema change, the "updated" timestamp is set to the first timestamp for when the new schema is valid. |
resolved | INTERVAL | Periodically emit resolved timestamps to the changefeed. Optionally, set a minimum duration between emitting resolved timestamps. If unspecified, all resolved timestamps are emitted. Example: resolved='10s' |
envelope | key_only / wrapped | Use key_only to emit only the key and no value, which is faster if you only want to know when the key changes. Default: envelope=wrapped |
cursor | Timestamp | Emits any changes after the given timestamp, but does not output the current state of the table first. If cursor is not specified, the changefeed starts by doing an initial scan of all the watched rows and emits the current value, then moves to emitting any changes that happen after the scan. When starting a changefeed at a specific cursor, the cursor cannot be before the configured garbage collection window (see gc.ttlseconds) for the table you're trying to follow; otherwise, the changefeed will error. With default garbage collection settings, this means you cannot create a changefeed that starts more than 25 hours in the past. cursor can be used to start a new changefeed where a previous changefeed ended. Example: CURSOR=1536242855577149065.0000000000 |
format | json / experimental_avro | Format of the emitted record. Currently, support for Avro is limited and experimental. For mappings of CockroachDB types to Avro types, see the table below. Default: format=json . |
confluent_schema_registry | Schema Registry address | The Schema Registry address is required to use experimental_avro . |
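As a sketch, several of these options can be combined in one statement; the sink address is a placeholder:
> CREATE CHANGEFEED FOR TABLE office_dogs INTO 'kafka://host:port' WITH updated, resolved='10s', envelope=key_only;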
Avro limitations
Currently, support for Avro is limited and experimental. Below is a list of unsupported SQL types and values for Avro changefeeds:
- Decimals must have precision specified.
- Decimals with NaN or infinite values cannot be written in Avro.
Note:
To avoid NaN or infinite values, add a CHECK constraint to prevent these values from being inserted into decimal columns (see the sketch after this list).
- TIME, DATE, INTERVAL, UUID, INET, ARRAY, JSONB, BIT, and collated STRING are not supported in Avro yet.
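For example, a minimal sketch of such a CHECK constraint on a hypothetical prices table with a DECIMAL column named price:
> ALTER TABLE prices ADD CONSTRAINT price_is_finite CHECK (price != 'NaN'::DECIMAL AND price != 'Infinity'::DECIMAL AND price != '-Infinity'::DECIMAL);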
Avro types
Below is a mapping of CockroachDB types to Avro types:
Responses
The messages (i.e., keys and values) emitted to a Kafka topic are specific to the envelope
. The default format is wrapped
, and the output messages are composed of the following:
- Key: An array always composed of the row's
PRIMARY KEY
field(s) (e.g.,[1]
forJSON
or{"id":1}
for Avro). - Value:
- One of three possible top-level fields:
after
, which contains the state of the row after the update (ornull
' forDELETE
s).updated
, which contains the updated timestamp.resolved
, which is emitted for records representing resolved timestamps. These records do not include an "after" value since they only function as checkpoints.
- For
INSERT
andUPDATE
, the current state of the row inserted or updated. - For
DELETE
,null
.
For example:
Statement | Response |
---|---|
INSERT INTO office_dogs VALUES (1, 'Petee'); | JSON: [1] {"after": {"id": 1, "name": "Petee"}} Avro: {"id":1} {"id":1,"name":{"string":"Petee"}} |
DELETE FROM office_dogs WHERE name = 'Petee' | JSON: [1] {"after": null} Avro: {"id":1} {null} |
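When the resolved option is used, resolved timestamp records are also emitted periodically; an illustrative JSON payload (with a made-up timestamp) looks like {"resolved": "1537279405671006870.0000000000"}.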
Examples
Create a changefeed connected to Kafka
> CREATE CHANGEFEED FOR TABLE name INTO 'kafka://host:port' WITH updated, resolved;
+--------------------+
| job_id |
+--------------------+
| 360645287206223873 |
+--------------------+
(1 row)
For more information on how to create a changefeed connected to Kafka, see Change Data Capture.
Create a changefeed connected to Kafka using Avro
> CREATE CHANGEFEED FOR TABLE name INTO 'kafka://host:port' WITH format = experimental_avro, confluent_schema_registry = <schema_registry_address>;
+--------------------+
| job_id |
+--------------------+
| 360645287206223873 |
+--------------------+
(1 row)
For more information on how to create a changefeed that emits an Avro record, see Change Data Capture.
Create a changefeed connected to a cloud storage sink
Warning:
This is an experimental feature. The interface and output are subject to change.
> CREATE CHANGEFEED FOR TABLE name INTO 'experimental-scheme://host?parameters' WITH updated, resolved;
+--------------------+
| job_id |
+--------------------+
| 360645287206223873 |
+--------------------+
(1 row)
For more information on how to create a changefeed connected to a cloud storage sink, see Change Data Capture.
Manage a changefeed
Use the following SQL statements to pause, resume, and cancel a changefeed.
Note:
Changefeed-specific SQL statements (e.g., CANCEL CHANGEFEED) will be added in the future.
Pause a changefeed
> PAUSE JOB job_id;
For more information, see PAUSE JOB.
Resume a paused changefeed
> RESUME JOB job_id;
For more information, see RESUME JOB.
Cancel a changefeed
> CANCEL JOB job_id;
For more information, see CANCEL JOB.
Start a new changefeed where another ended
Find the high-water timestamp for the ended changefeed:
> SELECT * FROM crdb_internal.jobs WHERE job_id = <job_id>;
job_id | job_type | ... | high_water_timestamp | error | coordinator_id
+--------------------+------------+ ... +--------------------------------+-------+----------------+
383870400694353921 | CHANGEFEED | ... | 1537279405671006870.0000000000 | | 1
(1 row)
Use the high_water_timestamp to start the new changefeed:
> CREATE CHANGEFEED FOR TABLE name INTO 'kafka://host:port' WITH cursor = <high_water_timestamp>;
Note that because the cursor is provided, the initial scan is not performed.