MongoDB Connector

Flink provides a MongoDB connector for reading and writing data from and to MongoDB collections with at-least-once guarantees.

To use this connector, add the following dependency to your project.

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-mongodb</artifactId>
    <version>1.1.0-1.18</version>
</dependency>


MongoDB Source

The example below shows how to configure and create a source:

Java

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.configuration.MemorySize;
import org.apache.flink.connector.mongodb.source.MongoSource;
import org.apache.flink.connector.mongodb.source.enumerator.splitter.PartitionStrategy;
import org.apache.flink.connector.mongodb.source.reader.deserializer.MongoDeserializationSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.bson.BsonDocument;

MongoSource<String> source = MongoSource.<String>builder()
        .setUri("mongodb://user:password@127.0.0.1:27017")
        .setDatabase("my_db")
        .setCollection("my_coll")
        .setProjectedFields("_id", "f0", "f1")
        .setFetchSize(2048)
        .setLimit(10000)
        .setNoCursorTimeout(true)
        .setPartitionStrategy(PartitionStrategy.SAMPLE)
        .setPartitionSize(MemorySize.ofMebiBytes(64))
        .setSamplesPerPartition(10)
        .setDeserializationSchema(new MongoDeserializationSchema<String>() {
            @Override
            public String deserialize(BsonDocument document) {
                return document.toJson();
            }

            @Override
            public TypeInformation<String> getProducedType() {
                return BasicTypeInfo.STRING_TYPE_INFO;
            }
        })
        .build();

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

env.fromSource(source, WatermarkStrategy.noWatermarks(), "MongoDB-Source")
        .setParallelism(2)
        .print()
        .setParallelism(1);

Configurations

Flink’s MongoDB source is created by using the static builder MongoSource.<OutputType>builder().

  1. setUri(String uri)
    • Required.
    • The connection URI of MongoDB, e.g. mongodb://user:password@127.0.0.1:27017.
  2. setDatabase(String database)
    • Required.
    • Name of the database to read from.
  3. setCollection(String collection)
    • Required.
    • Name of the collection to read from.
  4. setFetchSize(int fetchSize)
    • Optional. Default: 2048.
    • Sets the number of documents that should be fetched per round trip when reading.
  5. setNoCursorTimeout(boolean noCursorTimeout)
    • Optional. Default: true.
    • The MongoDB server normally times out idle cursors after an inactivity period (10 minutes) to prevent excess memory use. Set this option to true to prevent that. Note that if a session is idle for longer than 30 minutes, the MongoDB server marks that session as expired and may close it at any time; when it does, it also kills any in-progress operations and open cursors associated with the session, including cursors configured with noCursorTimeout() or a maxTimeMS() greater than 30 minutes.
  6. setPartitionStrategy(PartitionStrategy partitionStrategy)
    • Optional. Default: PartitionStrategy.DEFAULT.
    • Sets the partition strategy. Available partition strategies are SINGLE, SAMPLE, SPLIT_VECTOR, SHARDED and DEFAULT. See the Partition Strategies section for details.
  7. setPartitionSize(MemorySize partitionSize)
    • Optional. Default: 64 MB.
    • Sets the partition size used to split a MongoDB collection into multiple partitions, which can then be read in parallel by multiple readers to speed up the overall read time.
  8. setSamplesPerPartition(int samplesPerPartition)
    • Optional. Default: 10.
    • Sets the number of samples to take per partition, used only by the SAMPLE partition strategy. The sample partitioner samples the collection, projects and sorts by the partition fields, and then uses every samplesPerPartition-th sample as a partition boundary. The total number of samples taken is samplesPerPartition * (document count / number of documents per partition).
  9. setLimit(int limit)
    • Optional. Default: -1.
    • Sets the limit of documents for each reader to read. If the limit is not set or set to -1, the entire collection is read. If the reading parallelism is greater than 1, the maximum number of documents read is parallelism * limit.
  10. setProjectedFields(String… projectedFields)
    • Optional.
    • Sets the projected fields of documents to read. If no projected fields are set, all fields of the collection are read.
  11. setDeserializationSchema(MongoDeserializationSchema deserializationSchema)
    • Required.
    • A MongoDeserializationSchema is required for parsing MongoDB BSON documents; a sketch of a custom schema follows this list.
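
The produced type does not have to be String. Below is a minimal sketch of a deserialization schema that maps each BsonDocument to a simple POJO; the Product class and its fields (_id, name, price) are assumptions made up for illustration:

Java

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.connector.mongodb.source.reader.deserializer.MongoDeserializationSchema;
import org.bson.BsonDocument;

// Hypothetical POJO used only for illustration.
class Product {
    public String id;
    public String name;
    public double price;
}

// A sketch of a schema that converts each BSON document into a Product.
public class ProductDeserializationSchema implements MongoDeserializationSchema<Product> {

    @Override
    public Product deserialize(BsonDocument document) {
        Product product = new Product();
        product.id = document.getObjectId("_id").getValue().toHexString();
        product.name = document.getString("name").getValue();
        product.price = document.getDouble("price").getValue();
        return product;
    }

    @Override
    public TypeInformation<Product> getProducedType() {
        return TypeInformation.of(Product.class);
    }
}

Such a schema would be passed via setDeserializationSchema(new ProductDeserializationSchema()) on a MongoSource.<Product>builder().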

Partition Strategies

Partitions can be read in parallel by multiple readers to speed up the overall read time. The following partition strategies are provided:

  • SINGLE: treats the entire collection as a single partition.
  • SAMPLE: samples the collection and generates partitions; fast but possibly uneven.
  • SPLIT_VECTOR: uses the splitVector command to generate partitions for non-sharded collections; fast and even. The splitVector permission is required.
  • SHARDED: reads config.chunks (MongoDB splits a sharded collection into chunks, and the ranges of the chunks are stored within that collection) directly as the partitions. The SHARDED strategy can only be used for sharded collections; it is fast and even. Read permission on the config database is required.
  • DEFAULT: uses the SHARDED strategy for sharded collections, otherwise the SPLIT_VECTOR strategy.

MongoDB Sink

The example below shows how to configure and create a sink:

Java

import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.mongodb.sink.MongoSink;
import org.apache.flink.streaming.api.datastream.DataStream;
import com.mongodb.client.model.InsertOneModel;
import org.bson.BsonDocument;

DataStream<String> stream = ...;

MongoSink<String> sink = MongoSink.<String>builder()
        .setUri("mongodb://user:password@127.0.0.1:27017")
        .setDatabase("my_db")
        .setCollection("my_coll")
        .setBatchSize(1000)
        .setBatchIntervalMs(1000)
        .setMaxRetries(3)
        .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
        .setSerializationSchema(
                (input, context) -> new InsertOneModel<>(BsonDocument.parse(input)))
        .build();

stream.sinkTo(sink);

Configurations

Flink’s MongoDB sink is created by using the static builder MongoSink.<InputType>builder().

  1. setUri(String uri)
    • Required.
    • The connection URI of MongoDB, e.g. mongodb://user:password@127.0.0.1:27017.
  2. setDatabase(String database)
    • Required.
    • Name of the database to sink to.
  3. setCollection(String collection)
    • Required.
    • Name of the collection to sink to.
  4. setBatchSize(int batchSize)
    • Optional. Default: 1000.
    • Sets the maximum number of write operations to buffer per batch request. You can pass -1 to disable batching.
  5. setBatchIntervalMs(long batchIntervalMs)
    • Optional. Default: 1000.
    • Sets the batch flush interval, in milliseconds. You can pass -1 to disable it.
  6. setMaxRetries(int maxRetries)
    • Optional. Default: 3.
    • Sets the maximum number of retries when writing records fails.
  7. setDeliveryGuarantee(DeliveryGuarantee deliveryGuarantee)
    • Optional. Default: DeliveryGuarantee.AT_LEAST_ONCE.
    • Sets the wanted DeliveryGuarantee. The EXACTLY_ONCE guarantee is not supported yet.
  8. setSerializationSchema(MongoSerializationSchema serializationSchema)
    • Required.
    • A MongoSerializationSchema is required to convert input records into MongoDB WriteModels; a sketch of a custom schema follows this list.
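
The serialization schema is not limited to inserts. Below is a sketch of a schema (written as a lambda, like in the example above) that issues partial updates via UpdateOneModel; the field names "_id" and "status" are assumptions made up for illustration:

Java

import com.mongodb.client.model.Filters;
import com.mongodb.client.model.UpdateOneModel;
import com.mongodb.client.model.UpdateOptions;
import org.apache.flink.connector.mongodb.sink.MongoSink;
import org.bson.BsonDocument;

// A sketch of a sink whose serialization schema turns each input record into a
// partial update ($set of the "status" field) matched by "_id".
MongoSink<String> updateSink = MongoSink.<String>builder()
        .setUri("mongodb://user:password@127.0.0.1:27017")
        .setDatabase("my_db")
        .setCollection("my_coll")
        .setSerializationSchema(
                (input, context) -> {
                    BsonDocument doc = BsonDocument.parse(input);
                    return new UpdateOneModel<BsonDocument>(
                            Filters.eq("_id", doc.get("_id")),
                            new BsonDocument("$set",
                                    new BsonDocument("status", doc.get("status"))),
                            new UpdateOptions().upsert(true));
                })
        .build();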

Fault Tolerance

With Flink’s checkpointing enabled, the Flink MongoDB Sink guarantees at-least-once delivery of write operations to MongoDB clusters. It does so by waiting for all pending write operations in the MongoWriter at checkpoint time. This effectively assures that all requests sent before the checkpoint was triggered have been successfully acknowledged by MongoDB before the sink proceeds to process more records.

More details on checkpoints and fault tolerance are in the fault tolerance docs.

To use fault tolerant MongoDB Sinks, checkpointing of the topology needs to be enabled at the execution environment:

Java

final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(5000); // checkpoint every 5000 msecs

Scala

val env = StreamExecutionEnvironment.getExecutionEnvironment()
env.enableCheckpointing(5000) // checkpoint every 5000 msecs

Python

env = StreamExecutionEnvironment.get_execution_environment()
# checkpoint every 5000 msecs
env.enable_checkpointing(5000)

IMPORTANT: Checkpointing is not enabled by default, but the default delivery guarantee is AT_LEAST_ONCE. This causes the sink to buffer requests until the job either finishes or the `MongoWriter` flushes automatically. By default, the `MongoWriter` flushes after 1000 added write operations. To configure the writer to flush more frequently, refer to the Configuring the Internal Mongo Writer section below.

Using WriteModels with deterministic ids and the upsert method, it is possible to achieve effectively exactly-once semantics in MongoDB even though AT_LEAST_ONCE delivery is configured for the connector, as sketched below.
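
A minimal sketch of such an idempotent sink: records carry a deterministic "_id" and each record is written as a ReplaceOneModel with upsert enabled, so replayed records after a failure overwrite the same document instead of producing duplicates. The "_id" field in the input JSON is an assumption made for illustration.

Java

import com.mongodb.client.model.Filters;
import com.mongodb.client.model.ReplaceOneModel;
import com.mongodb.client.model.ReplaceOptions;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.mongodb.sink.MongoSink;
import org.bson.BsonDocument;

// A sketch of an effectively exactly-once sink under AT_LEAST_ONCE delivery:
// the deterministic "_id" makes replayed writes idempotent.
MongoSink<String> idempotentSink = MongoSink.<String>builder()
        .setUri("mongodb://user:password@127.0.0.1:27017")
        .setDatabase("my_db")
        .setCollection("my_coll")
        .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
        .setSerializationSchema(
                (input, context) -> {
                    BsonDocument doc = BsonDocument.parse(input);
                    // Replace the document with the same "_id", inserting it if it does not exist yet.
                    return new ReplaceOneModel<>(
                            Filters.eq("_id", doc.get("_id")),
                            doc,
                            new ReplaceOptions().upsert(true));
                })
        .build();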

Configuring the Internal Mongo Writer

The internal MongoWriter can be further configured for how it flushes write operations by using the following methods of the MongoSinkBuilder:

  • setBatchSize(int batchSize): Maximum number of write operations to buffer before flushing. You can pass -1 to disable buffering.
  • setBatchIntervalMs(long batchIntervalMs): Interval at which to flush, regardless of the number of buffered write operations. You can pass -1 to disable interval-based flushing.

Depending on how batchSize and batchIntervalMs are set, the writer exhibits the following flush behaviour:

  • batchSize > 1 and batchIntervalMs > 0: flush when either the batch size or the flush interval is exceeded.
  • batchSize == -1 and batchIntervalMs == -1: flush only on checkpoint; see the sketch after this list.
  • batchSize == 1 or batchIntervalMs == 0: flush for every single write operation.
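
For example, a sketch of a sink configured to buffer writes and flush them only when a checkpoint is taken (both batching triggers disabled):

Java

import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.mongodb.sink.MongoSink;
import com.mongodb.client.model.InsertOneModel;
import org.bson.BsonDocument;

// A sketch of a sink that flushes only on checkpoint: size-based and interval-based
// flushing are both disabled, so pending writes are drained when a checkpoint is triggered.
MongoSink<String> checkpointOnlySink = MongoSink.<String>builder()
        .setUri("mongodb://user:password@127.0.0.1:27017")
        .setDatabase("my_db")
        .setCollection("my_coll")
        .setBatchSize(-1)        // do not flush based on the number of buffered write operations
        .setBatchIntervalMs(-1)  // do not flush on a timer
        .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
        .setSerializationSchema(
                (input, context) -> new InsertOneModel<>(BsonDocument.parse(input)))
        .build();

With this configuration, checkpointing must be enabled; otherwise the buffered writes are only flushed when the job finishes.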