Confluent Avro Format

Format: Serialization Schema | Format: Deserialization Schema

The Avro Schema Registry (avro-confluent) format allows you to read records that were serialized by the io.confluent.kafka.serializers.KafkaAvroSerializer and to write records that can in turn be read by the io.confluent.kafka.serializers.KafkaAvroDeserializer.

When reading (deserializing) a record with this format, the Avro writer schema is fetched from the configured Confluent Schema Registry based on the schema version id encoded in the record, while the reader schema is inferred from the table schema.

When writing (serializing) a record with this format, the Avro schema is inferred from the table schema and used to retrieve a schema id to be encoded with the data. The lookup is performed in the configured Confluent Schema Registry under the subject given in avro-confluent.subject.

The Avro Schema Registry format can only be used in conjunction with the Apache Kafka SQL connector or the Upsert Kafka SQL Connector.

Dependencies

In order to use the Avro Schema Registry format, the following dependencies are required both for projects using a build automation tool (such as Maven or SBT) and for the SQL Client with SQL JAR bundles.

Maven dependency:

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-avro-confluent-registry</artifactId>
        <version>1.20.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-avro</artifactId>
        <version>1.20.0</version>
    </dependency>

SQL Client: download the prebuilt SQL JAR bundle for this format and add it to the SQL Client's classpath.

For Maven, SBT, Gradle, or other build automation tools, please also ensure that Confluent’s Maven repository at https://packages.confluent.io/maven/ is configured in your project’s build files.
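
For reference, a minimal repository entry in a Maven pom.xml could look like the following sketch (the repository id chosen here is arbitrary):

    <!-- pom.xml: makes artifacts from Confluent's repository resolvable -->
    <repositories>
        <repository>
            <id>confluent</id>
            <url>https://packages.confluent.io/maven/</url>
        </repository>
    </repositories>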

How to create tables with Avro-Confluent format

Example of a table using a raw UTF-8 string as the Kafka key and Avro records registered in the Schema Registry as Kafka values:

    CREATE TABLE user_created (
        -- one column mapped to the Kafka raw UTF-8 key
        the_kafka_key STRING,
        -- a few columns mapped to the Avro fields of the Kafka value
        id STRING,
        name STRING,
        email STRING
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'user_events_example1',
        'properties.bootstrap.servers' = 'localhost:9092',
        -- UTF-8 string as Kafka keys, using the 'the_kafka_key' table column
        'key.format' = 'raw',
        'key.fields' = 'the_kafka_key',
        'value.format' = 'avro-confluent',
        'value.avro-confluent.url' = 'http://localhost:8082',
        'value.fields-include' = 'EXCEPT_KEY'
    )

We can write data into the Kafka table as follows:

    INSERT INTO user_created
    SELECT
        -- replicating the user id into a column mapped to the Kafka key
        id AS the_kafka_key,
        -- all values
        id, name, email
    FROM some_table
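
Reading from the table requires no additional configuration: during deserialization the writer schema is fetched from the registry using the schema id embedded in each record, while the columns declared above serve as the reader schema. A minimal query sketch:

    -- the raw key is exposed through 'the_kafka_key'; the Avro value fields map to id/name/email
    SELECT the_kafka_key, id, name, email FROM user_created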

Example of a table with both the Kafka key and value registered as Avro records in the Schema Registry:

    CREATE TABLE user_created (
        -- one column mapped to the 'id' Avro field of the Kafka key
        kafka_key_id STRING,
        -- a few columns mapped to the Avro fields of the Kafka value
        id STRING,
        name STRING,
        email STRING
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'user_events_example2',
        'properties.bootstrap.servers' = 'localhost:9092',
        -- Watch out: schema evolution in the context of a Kafka key is almost never backward nor
        -- forward compatible due to hash partitioning.
        'key.format' = 'avro-confluent',
        'key.avro-confluent.url' = 'http://localhost:8082',
        'key.fields' = 'kafka_key_id',
        -- In this example, we want the Avro types of both the Kafka key and value to contain the field 'id'
        -- => adding a prefix to the table column associated with the Kafka key field avoids clashes
        'key.fields-prefix' = 'kafka_key_',
        'value.format' = 'avro-confluent',
        'value.avro-confluent.url' = 'http://localhost:8082',
        'value.fields-include' = 'EXCEPT_KEY',
        -- subjects have a default value since Flink 1.13, though they can be overridden:
        'key.avro-confluent.subject' = 'user_events_example2-key2',
        'value.avro-confluent.subject' = 'user_events_example2-value2'
    )
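
Writing into this table works like in the first example; the only difference is that the key column carries the 'kafka_key_' prefix, which the key format strips again, so the Avro key record's field is named 'id'. A sketch, assuming the same hypothetical source table some_table:

    INSERT INTO user_created
    SELECT
        -- becomes the 'id' field of the Avro key record (the 'kafka_key_' prefix is removed by the key format)
        id AS kafka_key_id,
        -- all value fields
        id, name, email
    FROM some_table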

Example of a table using the upsert-kafka connector with the Kafka value registered as an Avro record in the Schema Registry:

    CREATE TABLE user_created (
        -- one column mapped to the Kafka raw UTF-8 key
        kafka_key_id STRING,
        -- a few columns mapped to the Avro fields of the Kafka value
        id STRING,
        name STRING,
        email STRING,
        -- the upsert-kafka connector requires a primary key to define the upsert behavior
        PRIMARY KEY (kafka_key_id) NOT ENFORCED
    ) WITH (
        'connector' = 'upsert-kafka',
        'topic' = 'user_events_example3',
        'properties.bootstrap.servers' = 'localhost:9092',
        -- UTF-8 string as Kafka keys
        -- We don't specify 'key.fields' in this case since it's dictated by the primary key of the table
        'key.format' = 'raw',
        -- In this example, we want the Avro types of both the Kafka key and value to contain the field 'id'
        -- => adding a prefix to the table column associated with the Kafka key field avoids clashes
        'key.fields-prefix' = 'kafka_key_',
        'value.format' = 'avro-confluent',
        'value.avro-confluent.url' = 'http://localhost:8082',
        'value.fields-include' = 'EXCEPT_KEY'
    )
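
With the upsert-kafka connector, writes are interpreted as upserts on the declared primary key: each row written for a given kafka_key_id replaces the previous one, and a record with a null value acts as a delete (tombstone) for that key. A write sketch, again assuming a hypothetical source table some_table:

    INSERT INTO user_created
    SELECT
        -- the primary key column also becomes the raw UTF-8 Kafka key
        id AS kafka_key_id,
        id, name, email
    FROM some_table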

Format Options

| Option | Required | Forwarded | Default | Type | Description |
|--------|----------|-----------|---------|------|-------------|
| format | required | no | (none) | String | Specify what format to use; here it should be 'avro-confluent'. |
| avro-confluent.basic-auth.credentials-source | optional | yes | (none) | String | Basic auth credentials source for Schema Registry. |
| avro-confluent.basic-auth.user-info | optional | yes | (none) | String | Basic auth user info for Schema Registry. |
| avro-confluent.bearer-auth.credentials-source | optional | yes | (none) | String | Bearer auth credentials source for Schema Registry. |
| avro-confluent.bearer-auth.token | optional | yes | (none) | String | Bearer auth token for Schema Registry. |
| avro-confluent.properties | optional | yes | (none) | Map | Properties map that is forwarded to the underlying Schema Registry. This is useful for options that are not officially exposed via Flink config options. Note that Flink options have higher precedence. |
| avro-confluent.ssl.keystore.location | optional | yes | (none) | String | Location/file of the SSL keystore. |
| avro-confluent.ssl.keystore.password | optional | yes | (none) | String | Password for the SSL keystore. |
| avro-confluent.ssl.truststore.location | optional | yes | (none) | String | Location/file of the SSL truststore. |
| avro-confluent.ssl.truststore.password | optional | yes | (none) | String | Password for the SSL truststore. |
| avro-confluent.schema | optional | no | (none) | String | The schema registered or to be registered in the Confluent Schema Registry. If no schema is provided, Flink converts the table schema to an Avro schema. The provided schema must match the table schema. |
| avro-confluent.subject | optional | yes | (none) | String | The Confluent Schema Registry subject under which to register the schema used by this format during serialization. By default, the 'kafka' and 'upsert-kafka' connectors use '<topic_name>-value' or '<topic_name>-key' as the default subject name when this format is used as the value or key format, respectively. For other connectors (e.g. 'filesystem'), the subject option is required when the format is used as a sink. |
| avro-confluent.url | required | yes | (none) | String | The URL of the Confluent Schema Registry to fetch/register schemas. |
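
To illustrate how these options combine, the sketch below extends the first example with basic authentication against the Schema Registry. The table name user_created_secured, the 'USER_INFO' credentials source, and the 'username:password' value are placeholder assumptions; adapt them to your registry setup:

    CREATE TABLE user_created_secured (
        the_kafka_key STRING,
        id STRING,
        name STRING,
        email STRING
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'user_events_example1',
        'properties.bootstrap.servers' = 'localhost:9092',
        'key.format' = 'raw',
        'key.fields' = 'the_kafka_key',
        'value.format' = 'avro-confluent',
        'value.avro-confluent.url' = 'http://localhost:8082',
        -- basic auth against the Schema Registry (both values are placeholders)
        'value.avro-confluent.basic-auth.credentials-source' = 'USER_INFO',
        'value.avro-confluent.basic-auth.user-info' = 'username:password',
        'value.fields-include' = 'EXCEPT_KEY'
    )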

Data Type Mapping

Currently, Apache Flink always uses the table schema to derive the Avro reader schema during deserialization and the Avro writer schema during serialization. Explicitly defining an Avro schema is not supported yet. See the Apache Avro Format for the mapping between Avro and Flink DataTypes.

In addition to the types listed there, Flink supports reading/writing nullable types. Flink maps nullable types to Avro union(something, null), where something is the Avro type converted from the Flink type.
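
For example, a NOT NULL column and a nullable column are mapped to different Avro field types. A small sketch (the table and topic names are hypothetical; the derived Avro field types are shown as comments, following the union(something, null) rule above):

    CREATE TABLE example_nullability (
        id STRING NOT NULL,   -- derived Avro field type: "string"
        email STRING          -- nullable, derived Avro field type: ["string", "null"]
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'example_topic',
        'properties.bootstrap.servers' = 'localhost:9092',
        'format' = 'avro-confluent',
        'avro-confluent.url' = 'http://localhost:8082'
    )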

You can refer to the Avro Specification for more information about Avro types.