Kafka Connect is a scalable and reliable tool for streaming data between Apache Kafka and other systems. Connectors can be defined to move large amounts of data in and out of Kafka.

Doris provides a Sink Connector plugin that writes data from Kafka topics into Doris.
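
The configuration examples below load data into a table test_db.test_kafka_tbl. As a minimal sketch (the schema and column names are illustrative; only the database and table names match the sample configuration), such a table could be created over Doris's MySQL protocol port:

    mysql -h 10.10.10.1 -P 9030 -uroot -e "
    CREATE DATABASE IF NOT EXISTS test_db;
    CREATE TABLE IF NOT EXISTS test_db.test_kafka_tbl (
        id BIGINT,
        name VARCHAR(64)
    )
    UNIQUE KEY(id)
    DISTRIBUTED BY HASH(id) BUCKETS 1
    PROPERTIES ('replication_num' = '1');
    "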

Using the Doris Kafka Connector

Standalone mode startup

Configure connect-standalone.properties

    # Modify the broker address
    bootstrap.servers=127.0.0.1:9092

Configure doris-connector-sink.properties

Create doris-connector-sink.properties in the config directory and configure the following content:

    name=test-doris-sink
    connector.class=org.apache.doris.kafka.connector.DorisSinkConnector
    topics=topic_test
    doris.topic2table.map=topic_test:test_kafka_tbl
    buffer.count.records=10000
    buffer.flush.time=120
    buffer.size.bytes=5000000
    doris.urls=10.10.10.1
    doris.http.port=8030
    doris.query.port=9030
    doris.user=root
    doris.password=
    doris.database=test_db
    key.converter=org.apache.kafka.connect.storage.StringConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable=false
    value.converter.schemas.enable=false
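
With value.converter set to JsonConverter and schemas disabled, each Kafka record value is expected to be a plain JSON object whose fields map to columns of the target table. A test record could be produced like this (field names follow the illustrative table sketch above):

    echo '{"id":1,"name":"doris"}' | $KAFKA_HOME/bin/kafka-console-producer.sh \
        --bootstrap-server 127.0.0.1:9092 --topic topic_test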

Start Standalone

    $KAFKA_HOME/bin/connect-standalone.sh -daemon $KAFKA_HOME/config/connect-standalone.properties $KAFKA_HOME/config/doris-connector-sink.properties
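
Once the worker is up, the connector should be visible through the Kafka Connect REST API (assuming the default REST port 8083):

    curl http://127.0.0.1:8083/connectors
    # Expected output: ["test-doris-sink"]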

Note: It is generally not recommended to use standalone mode in a production environment.

Distributed mode startup

Configure connect-distributed.properties

    # Modify the broker address
    bootstrap.servers=127.0.0.1:9092
    # Modify group.id; it must be the same across all workers in the cluster
    group.id=connect-cluster

Start Distributed

    $KAFKA_HOME/bin/connect-distributed.sh -daemon $KAFKA_HOME/config/connect-distributed.properties
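
Before adding the connector, you can confirm that the worker's REST endpoint is up (default port 8083); it responds with the worker version information:

    curl http://127.0.0.1:8083/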

Add Connector

    curl -i http://127.0.0.1:8083/connectors -H "Content-Type: application/json" -X POST -d '{
      "name":"test-doris-sink-cluster",
      "config":{
        "connector.class":"org.apache.doris.kafka.connector.DorisSinkConnector",
        "topics":"topic_test",
        "doris.topic2table.map":"topic_test:test_kafka_tbl",
        "buffer.count.records":"10000",
        "buffer.flush.time":"120",
        "buffer.size.bytes":"5000000",
        "doris.urls":"10.10.10.1",
        "doris.user":"root",
        "doris.password":"",
        "doris.http.port":"8030",
        "doris.query.port":"9030",
        "doris.database":"test_db",
        "key.converter":"org.apache.kafka.connect.storage.StringConverter",
        "value.converter":"org.apache.kafka.connect.json.JsonConverter",
        "key.converter.schemas.enable":"false",
        "value.converter.schemas.enable":"false"
      }
    }'
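
A successful request returns HTTP 201 Created and echoes the connector configuration as JSON; the status endpoint shown below can then be used to verify that the connector and its tasks are RUNNING.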

Operating the Connector

    # View connector status
    curl -i http://127.0.0.1:8083/connectors/test-doris-sink-cluster/status -X GET
    # Delete connector
    curl -i http://127.0.0.1:8083/connectors/test-doris-sink-cluster -X DELETE
    # Pause connector
    curl -i http://127.0.0.1:8083/connectors/test-doris-sink-cluster/pause -X PUT
    # Resume a paused connector
    curl -i http://127.0.0.1:8083/connectors/test-doris-sink-cluster/resume -X PUT
    # Restart a task within the connector
    curl -i http://127.0.0.1:8083/connectors/test-doris-sink-cluster/tasks/0/restart -X POST

Refer to: Connect REST Interface

Note that when Kafka Connect is started for the first time, it creates three topics in the Kafka cluster (config.storage.topic, offset.storage.topic, and status.storage.topic) to record Kafka Connect's shared connector configurations, offsets, and status updates. See: How to Use Kafka Connect - Get Started
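
The names of these three topics are controlled by the corresponding worker properties in connect-distributed.properties; a typical sketch (the topic names are illustrative):

    config.storage.topic=connect-configs
    offset.storage.topic=connect-offsets
    status.storage.topic=connect-status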

Access an SSL-authenticated Kafka cluster

Accessing an SSL-authenticated Kafka cluster through Kafka Connect requires the user to provide a truststore file (client.truststore.jks) used to verify the Kafka broker's public key certificate. Add the following configuration to the connect-distributed.properties file:

    # Connect worker
    security.protocol=SSL
    ssl.truststore.location=/var/ssl/private/client.truststore.jks
    ssl.truststore.password=test1234
    # Embedded consumer for sink connectors
    consumer.security.protocol=SSL
    consumer.ssl.truststore.location=/var/ssl/private/client.truststore.jks
    consumer.ssl.truststore.password=test1234
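
If no truststore exists yet, one can be created by importing the broker's CA certificate with keytool (the alias, paths, and password here are illustrative):

    keytool -importcert -noprompt \
        -alias kafka-ca \
        -file /var/ssl/private/ca-cert.pem \
        -keystore /var/ssl/private/client.truststore.jks \
        -storepass test1234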

For instructions on connecting Kafka Connect to an SSL-authenticated Kafka cluster, please refer to: Configure Kafka Connect

Dead letter queue

By default, any error encountered during message processing or conversion causes the connector to fail. Each connector configuration can instead tolerate such errors by skipping them, optionally writing the details of each error, the failed operation, and the problematic record (with varying levels of detail) to a dead letter queue for logging:

    errors.tolerance=all
    errors.deadletterqueue.topic.name=test_error_topic
    errors.deadletterqueue.context.headers.enable=true
    errors.deadletterqueue.topic.replication.factor=1
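
With context headers enabled, each record in the dead letter queue carries the error details in its Kafka headers, so failed records can be inspected with the console consumer:

    $KAFKA_HOME/bin/kafka-console-consumer.sh \
        --bootstrap-server 127.0.0.1:9092 \
        --topic test_error_topic \
        --from-beginning \
        --property print.headers=true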

Configuration items

| Key | Default Value | Required | Description |
| --- | --- | --- | --- |
| name | - | Y | Connector application name; must be unique within the Kafka Connect environment |
| connector.class | - | Y | org.apache.doris.kafka.connector.DorisSinkConnector |
| topics | - | Y | List of subscribed topics, separated by commas, e.g.: topic1,topic2 |
| doris.urls | - | Y | Doris FE connection addresses. Separate multiple addresses with commas, e.g.: 10.20.30.1,10.20.30.2,10.20.30.3 |
| doris.http.port | - | Y | Doris HTTP protocol port |
| doris.query.port | - | Y | Doris MySQL protocol port |
| doris.user | - | Y | Doris username |
| doris.password | - | Y | Doris password |
| doris.database | - | Y | The database to write to. May be empty when writing to multiple databases, in which case the database names must be specified in doris.topic2table.map |
| doris.topic2table.map | - | N | Mapping between topics and tables, e.g.: topic1:tb1,topic2:tb2. Empty by default, meaning topic and table names correspond one to one. For multiple databases, the format is topic1:db1.tbl1,topic2:db2.tbl2 (see the example after this table) |
| buffer.count.records | 10000 | N | Number of records buffered in memory per Kafka partition before flushing to Doris |
| buffer.flush.time | 120 | N | Buffer flush interval, in seconds |
| buffer.size.bytes | 5000000 (5MB) | N | Cumulative size of records buffered in memory per Kafka partition, in bytes |
| jmx | true | N | Whether to expose the connector's internal monitoring metrics through JMX. See: Doris-Connector-JMX |
| enable.delete | false | N | Whether to synchronize deletion of records |
| label.prefix | ${name} | N | Stream Load label prefix when importing data. Defaults to the connector application name |
| auto.redirect | true | N | Whether to redirect Stream Load requests. When enabled, Stream Load requests are redirected through the FE to the BE where the data is written, and BE information is no longer exposed |
| load.model | stream_load | N | How data is imported. Supports stream_load (write data directly into Doris) and copy_into (upload data to object storage first, then load it into Doris) |
| sink.properties.* | 'sink.properties.format':'json', 'sink.properties.read_json_by_line':'true' | N | Import parameters for Stream Load. For example, define a column separator with 'sink.properties.column_separator':','. Detailed parameter reference here |
| delivery.guarantee | at_least_once | N | How data consistency is guaranteed when Kafka data is imported into Doris. Supports at_least_once and exactly_once; default is at_least_once. Doris must be upgraded to 2.1.0 or above to guarantee exactly_once |
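
As an illustration of the multi-database mapping described above, doris.database can be left empty and the database specified per topic (names are illustrative):

    doris.database=
    doris.topic2table.map=topic1:db1.tbl1,topic2:db2.tbl2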

For other common Kafka Connect Sink configuration items, please refer to: Kafka Connect Sink Configuration Properties