Oracle-CDC

Overview

The Oracle Extract Node allows for reading snapshot data and incremental data from an Oracle database. This document describes how to set up the Oracle Extract Node to run SQL queries against Oracle databases.

Supported Version

| Extract Node | Version            | Driver                   |
|--------------|--------------------|--------------------------|
| Oracle-CDC   | Oracle: 11, 12, 19 | Oracle Driver: 19.3.0.0  |

Dependencies

To set up the Oracle Extract Node, the following table provides dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with Sort Connectors JAR bundles.

Maven dependency

```xml
<dependency>
    <groupId>org.apache.inlong</groupId>
    <artifactId>sort-connector-oracle-cdc</artifactId>
    <version>2.1.0-SNAPSHOT</version>
</dependency>
```

The Oracle driver dependency is also required to connect to the Oracle database. Please download ojdbc8-19.3.0.0.jar and put it into FLINK_HOME/lib/.

Setup Oracle

You have to enable log archiving for the Oracle database and define an Oracle user with appropriate permissions on all databases that the Debezium Oracle connector monitors.

For Non-CDB database

  • Enable log archiving

    (1.1). Connect to the database as DBA

```shell
ORACLE_SID=SID
export ORACLE_SID
sqlplus /nolog
CONNECT sys/password AS SYSDBA
```

    (1.2). Enable log archiving

```sql
alter system set db_recovery_file_dest_size = 10G;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
```

    Note:

    • Enabling log archiving requires a database restart, so take care when doing it

    • The archived logs occupy a large amount of disk space, so consider cleaning up expired logs periodically, as in the sketch below
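
A minimal monitoring/cleanup sketch (the v$recovery_area_usage view is a standard Oracle view; the 7-day retention in the RMAN command is an arbitrary example, not a recommendation from this document):

```sql
-- Check how much of the recovery area is consumed by archived logs.
SELECT file_type, percent_space_used, percent_space_reclaimable
FROM   v$recovery_area_usage;

-- Expired archived logs are typically removed with RMAN, e.g.:
-- RMAN> DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';
```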

    (1.3). Check whether log archiving is enabled

```sql
-- Should show "Database log mode: Archive Mode"
archive log list;
```

    Note:

    Supplemental logging must be enabled for the captured tables or for the database so that data changes capture the before state of changed database rows. The following illustrates how to configure this at the table or database level (to verify the setting, see the query sketch after this code block).

```sql
-- Enable supplemental logging for a specific table:
ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
-- Enable supplemental logging for the whole database:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
```
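
To confirm supplemental logging took effect, one hedged option is to query the standard v$database view (depending on configuration, the minimal setting may also report IMPLICIT):

```sql
-- SUPPLEMENTAL_LOG_DATA_MIN should report YES once minimal database-wide
-- supplemental logging is enabled; SUPPLEMENTAL_LOG_DATA_ALL reports YES
-- only when ALL COLUMNS logging is enabled at the database level.
SELECT supplemental_log_data_min, supplemental_log_data_all
FROM   v$database;
```
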
  • Create an Oracle user with permissions

    (2.1). Create Tablespace

```sql
sqlplus sys/password@host:port/SID AS SYSDBA;
CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/SID/logminer_tbs.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
```

    (2.2). Create a user and grant permissions

```sql
sqlplus sys/password@host:port/SID AS SYSDBA;
CREATE USER flinkuser IDENTIFIED BY flinkpw DEFAULT TABLESPACE LOGMINER_TBS QUOTA UNLIMITED ON LOGMINER_TBS;
GRANT CREATE SESSION TO flinkuser;
GRANT SET CONTAINER TO flinkuser;
GRANT SELECT ON V_$DATABASE to flinkuser;
GRANT FLASHBACK ANY TABLE TO flinkuser;
GRANT SELECT ANY TABLE TO flinkuser;
GRANT SELECT_CATALOG_ROLE TO flinkuser;
GRANT EXECUTE_CATALOG_ROLE TO flinkuser;
GRANT SELECT ANY TRANSACTION TO flinkuser;
GRANT LOGMINING TO flinkuser;
GRANT CREATE TABLE TO flinkuser;
GRANT LOCK ANY TABLE TO flinkuser;
GRANT ALTER ANY TABLE TO flinkuser;
GRANT CREATE SEQUENCE TO flinkuser;
GRANT EXECUTE ON DBMS_LOGMNR TO flinkuser;
GRANT EXECUTE ON DBMS_LOGMNR_D TO flinkuser;
GRANT SELECT ON V_$LOG TO flinkuser;
GRANT SELECT ON V_$LOG_HISTORY TO flinkuser;
GRANT SELECT ON V_$LOGMNR_LOGS TO flinkuser;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO flinkuser;
GRANT SELECT ON V_$LOGMNR_PARAMETERS TO flinkuser;
GRANT SELECT ON V_$LOGFILE TO flinkuser;
GRANT SELECT ON V_$ARCHIVED_LOG TO flinkuser;
GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO flinkuser;
exit;
```

For CDB database

Overall, the steps for configuring a CDB database are quite similar to those for a non-CDB database, but the commands may differ.

  • Enable log archiving

```sql
ORACLE_SID=ORCLCDB
export ORACLE_SID
sqlplus /nolog
CONNECT sys/password AS SYSDBA
alter system set db_recovery_file_dest_size = 10G;
-- the directory below should exist
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should show "Database log mode: Archive Mode"
archive log list
exit;
```

    Note: You can also use the following commands to enable supplemental logging:

```sql
-- Enable supplemental logging for a specific table:
ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
-- Enable supplemental logging for the whole database:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
```
  • Create an Oracle user with permissions

```sql
sqlplus sys/password@//localhost:1521/ORCLCDB as sysdba
CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/logminer_tbs.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit
```

```sql
sqlplus sys/password@//localhost:1521/ORCLPDB1 as sysdba
CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/logminer_tbs.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit
```

```sql
sqlplus sys/password@//localhost:1521/ORCLCDB as sysdba
CREATE USER flinkuser IDENTIFIED BY flinkpw DEFAULT TABLESPACE logminer_tbs QUOTA UNLIMITED ON logminer_tbs CONTAINER=ALL;
GRANT CREATE SESSION TO flinkuser CONTAINER=ALL;
GRANT SET CONTAINER TO flinkuser CONTAINER=ALL;
GRANT SELECT ON V_$DATABASE to flinkuser CONTAINER=ALL;
GRANT FLASHBACK ANY TABLE TO flinkuser CONTAINER=ALL;
GRANT SELECT ANY TABLE TO flinkuser CONTAINER=ALL;
GRANT SELECT_CATALOG_ROLE TO flinkuser CONTAINER=ALL;
GRANT EXECUTE_CATALOG_ROLE TO flinkuser CONTAINER=ALL;
GRANT SELECT ANY TRANSACTION TO flinkuser CONTAINER=ALL;
GRANT LOGMINING TO flinkuser CONTAINER=ALL;
GRANT CREATE TABLE TO flinkuser CONTAINER=ALL;
-- The next statement is not needed if 'scan.incremental.snapshot.enabled' = 'true' (the default):
GRANT LOCK ANY TABLE TO flinkuser CONTAINER=ALL;
GRANT CREATE SEQUENCE TO flinkuser CONTAINER=ALL;
GRANT EXECUTE ON DBMS_LOGMNR TO flinkuser CONTAINER=ALL;
GRANT EXECUTE ON DBMS_LOGMNR_D TO flinkuser CONTAINER=ALL;
GRANT SELECT ON V_$LOG TO flinkuser CONTAINER=ALL;
GRANT SELECT ON V_$LOG_HISTORY TO flinkuser CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_LOGS TO flinkuser CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO flinkuser CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_PARAMETERS TO flinkuser CONTAINER=ALL;
GRANT SELECT ON V_$LOGFILE TO flinkuser CONTAINER=ALL;
GRANT SELECT ON V_$ARCHIVED_LOG TO flinkuser CONTAINER=ALL;
GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO flinkuser CONTAINER=ALL;
exit
```

See more details in Setting up Oracle.

How to create an Oracle Extract Node

Usage for SQL API

The Oracle Extract Node can be defined as follows:

```sql
-- Create an Oracle Extract Node 'user' in Flink SQL
Flink SQL> CREATE TABLE oracle_extract_node (
    ID INT NOT NULL,
    NAME STRING,
    DESCRIPTION STRING,
    WEIGHT DECIMAL(10, 3),
    PRIMARY KEY(ID) NOT ENFORCED
) WITH (
    'connector' = 'oracle-cdc-inlong',
    'hostname' = 'localhost',
    'port' = '1521',
    'username' = 'flinkuser',
    'password' = 'flinkpw',
    'database-name' = 'XE',
    'schema-name' = 'inlong',
    'table-name' = 'user');

-- Read snapshot and redo logs from the user table
Flink SQL> SELECT * FROM oracle_extract_node;
```

Note: When working with the CDB + PDB model, you are expected to add an extra option 'debezium.database.pdb.name' = 'xxx' in the Flink DDL to specify the name of the PDB to connect to, as sketched below.
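
For example, a hedged sketch for the CDB + PDB model (ORCLCDB/ORCLPDB1 and the other connection values are placeholders taken from the setup examples above, not prescribed values):

```sql
CREATE TABLE oracle_extract_node (
    ID INT NOT NULL,
    NAME STRING,
    PRIMARY KEY(ID) NOT ENFORCED
) WITH (
    'connector' = 'oracle-cdc-inlong',
    'hostname' = 'localhost',
    'port' = '1521',
    'username' = 'flinkuser',
    'password' = 'flinkpw',
    -- connect to the CDB...
    'database-name' = 'ORCLCDB',
    'schema-name' = 'inlong',
    'table-name' = 'user',
    -- ...and name the PDB through the pass-through Debezium option
    'debezium.database.pdb.name' = 'ORCLPDB1'
);
```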

Usage for InLong Dashboard

TODO: It will be supported in the future.

Usage for InLong Manager Client

TODO: It will be supported in the future.

Oracle Extract Node Options

| Option | Required | Default | Type | Description |
|---|---|---|---|---|
| connector | required | (none) | String | Specify what connector to use; here it should be 'oracle-cdc-inlong'. |
| hostname | required | (none) | String | IP address or hostname of the Oracle database server. |
| username | required | (none) | String | Username to use when connecting to the Oracle database server. |
| password | required | (none) | String | Password to use when connecting to the Oracle database server. |
| database-name | required | (none) | String | Database name of the Oracle server to monitor. |
| schema-name | required | (none) | String | Schema name of the Oracle database to monitor. |
| table-name | required | (none) | String | Table name of the Oracle database to monitor. The value is of the form <schema_name>.<table_name>. |
| port | optional | 1521 | Integer | Integer port number of the Oracle database server. |
| scan.startup.mode | optional | initial | String | Startup mode for the Oracle CDC consumer; valid enumerations are "initial" and "latest-offset". Please see the Startup Reading Position section for more detailed information. |
| debezium.* | optional | (none) | String | Pass-through Debezium properties to the Debezium Embedded Engine, which is used to capture data changes from the Oracle server. For example: 'debezium.snapshot.mode' = 'never'. See more about the Debezium Oracle Connector properties. |
| inlong.metric.labels | optional | (none) | String | InLong metric label; the value format is groupId=[groupId]&streamId=[streamId]&nodeId=[nodeId]. |
| source.multiple.enable | optional | false | Boolean | Whether to enable multiple-schema and multiple-table migration. If 'true', the Oracle Extract Node compresses the physical fields of the table into a special meta field 'data_canal' in canal-json format. |
| scan.incremental.snapshot.enabled | optional | true | Boolean | Incremental snapshot is a new mechanism to read the snapshot of a table. Compared to the old snapshot mechanism, it has many advantages: (1) the source can read the snapshot in parallel, (2) the source can perform checkpoints at chunk granularity during snapshot reading, (3) the source does not need to acquire a ROW SHARE MODE lock before snapshot reading. |
| scan.incremental.snapshot.chunk.size | optional | 8096 | Integer | The chunk size (number of rows) of the table snapshot; captured tables are split into multiple chunks when reading the snapshot of a table. |
| scan.snapshot.fetch.size | optional | 1024 | Integer | The maximum fetch size per poll when reading the table snapshot. |
| connect.max-retries | optional | 3 | Integer | The maximum number of times the connector retries building an Oracle database server connection. |
| chunk-meta.group.size | optional | 1000 | Integer | The group size of chunk meta; if the meta size exceeds the group size, the meta is divided into multiple groups. |
| connect.timeout | optional | 30s | Duration | The maximum time the connector waits after trying to connect to the Oracle database server before timing out. |
| chunk-key.even-distribution.factor.lower-bound | optional | 0.05d | Double | The lower bound of the chunk key distribution factor. The distribution factor is used to determine whether the table is evenly distributed or not. Table chunks use an even-calculation optimization when the data distribution is even, and a splitting query is used when it is uneven. The distribution factor is calculated by (MAX(id) - MIN(id) + 1) / rowCount. |
| chunk-key.even-distribution.factor.upper-bound | optional | 1000.0d | Double | The upper bound of the chunk key distribution factor. The distribution factor is used to determine whether the table is evenly distributed or not. Table chunks use an even-calculation optimization when the data distribution is even, and a splitting query is used when it is uneven. The distribution factor is calculated by (MAX(id) - MIN(id) + 1) / rowCount. |
| connection.pool.size | optional | 20 | Integer | The connection pool size. |
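
To illustrate how these options fit together, here is a hedged sketch of a table definition that tunes the incremental snapshot and passes a property through to Debezium (all values are placeholders; 'debezium.log.mining.strategy' is a Debezium Oracle connector property, not an option defined by this document):

```sql
CREATE TABLE oracle_extract_node (
    ID INT NOT NULL,
    NAME STRING,
    PRIMARY KEY(ID) NOT ENFORCED
) WITH (
    'connector' = 'oracle-cdc-inlong',
    'hostname' = 'localhost',
    'port' = '1521',
    'username' = 'flinkuser',
    'password' = 'flinkpw',
    'database-name' = 'XE',
    'schema-name' = 'inlong',
    'table-name' = 'user',
    -- incremental snapshot tuning (the values shown are the defaults)
    'scan.incremental.snapshot.enabled' = 'true',
    'scan.incremental.snapshot.chunk.size' = '8096',
    'scan.snapshot.fetch.size' = '1024',
    -- pass-through property forwarded verbatim to the Debezium engine
    'debezium.log.mining.strategy' = 'online_catalog'
);
```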

Limitation

Can't perform checkpoints while scanning the snapshot of tables

While scanning the snapshot of database tables, there is no recoverable position, so checkpoints cannot be performed. To avoid performing checkpoints, the Oracle CDC source keeps any triggered checkpoint waiting until it times out. A timed-out checkpoint is recognized as a failed checkpoint and, by default, triggers a failover of the Flink job. So if the database table is large, it is recommended to add the following Flink configurations to avoid failovers caused by timed-out checkpoints:

```yaml
execution.checkpointing.interval: 10min
execution.checkpointing.tolerable-failed-checkpoints: 100
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 2147483647
```

Available Metadata

The following metadata can be exposed as read-only (VIRTUAL) columns in a table definition.

| Key | DataType | Description |
|---|---|---|
| table_name | STRING NOT NULL | Name of the table that contains the row. |
| schema_name | STRING NOT NULL | Name of the schema that contains the row. |
| database_name | STRING NOT NULL | Name of the database that contains the row. |
| op_ts | TIMESTAMP_LTZ(3) NOT NULL | The time when the change was made in the database. If the record is read from the snapshot of the table instead of the change stream, the value is always 0. |
| meta.table_name | STRING NOT NULL | Name of the table that contains the row. |
| meta.schema_name | STRING NOT NULL | Name of the schema that contains the row. |
| meta.database_name | STRING NOT NULL | Name of the database that contains the row. |
| meta.op_ts | TIMESTAMP_LTZ(3) NOT NULL | The time when the change was made in the database. If the record is read from the snapshot of the table instead of the change stream, the value is always 0. |
| meta.op_type | STRING | Type of the database operation, such as INSERT/DELETE, etc. |
| meta.data_canal | STRING/BYTES | Data for rows in canal-json format; only exists when the source.multiple.enable option is 'true'. |
| meta.is_ddl | BOOLEAN | Whether the record is a DDL statement. |
| meta.ts | TIMESTAMP_LTZ(3) NOT NULL | The current time when the row was received and processed. |
| meta.sql_type | MAP | Mapping of sql_type table fields to Java data type IDs. |
| meta.oracle_type | MAP | Structure of the table. |
| meta.pk_names | ARRAY | Primary key names of the table. |

The extended CREATE TABLE example demonstrates the syntax for exposing these metadata fields:

```sql
CREATE TABLE products (
    db_name STRING METADATA FROM 'database_name' VIRTUAL,
    schema_name STRING METADATA FROM 'schema_name' VIRTUAL,
    table_name STRING METADATA FROM 'table_name' VIRTUAL,
    op_ts TIMESTAMP_LTZ(3) METADATA FROM 'op_ts' VIRTUAL,
    meta_db_name STRING METADATA FROM 'meta.database_name' VIRTUAL,
    meta_schema_name STRING METADATA FROM 'meta.schema_name' VIRTUAL,
    meta_table_name STRING METADATA FROM 'meta.table_name' VIRTUAL,
    meta_op_ts TIMESTAMP_LTZ(3) METADATA FROM 'meta.op_ts' VIRTUAL,
    meta_op_type STRING METADATA FROM 'meta.op_type' VIRTUAL,
    meta_data_canal STRING METADATA FROM 'meta.data_canal' VIRTUAL,
    meta_is_ddl BOOLEAN METADATA FROM 'meta.is_ddl' VIRTUAL,
    meta_ts TIMESTAMP_LTZ(3) METADATA FROM 'meta.ts' VIRTUAL,
    meta_sql_type MAP<STRING, INT> METADATA FROM 'meta.sql_type' VIRTUAL,
    meta_oracle_type MAP<STRING, STRING> METADATA FROM 'meta.oracle_type' VIRTUAL,
    meta_pk_names ARRAY<STRING> METADATA FROM 'meta.pk_names' VIRTUAL,
    ID INT NOT NULL,
    NAME STRING,
    DESCRIPTION STRING,
    WEIGHT DECIMAL(10, 3),
    PRIMARY KEY(ID) NOT ENFORCED
) WITH (
    'connector' = 'oracle-cdc-inlong',
    'hostname' = 'localhost',
    'port' = '1521',
    'username' = 'flinkuser',
    'password' = 'flinkpw',
    'database-name' = 'XE',
    'schema-name' = 'inventory',
    'table-name' = 'inventory.products'
);
```

Note: The Oracle dialect is case-sensitive: Oracle converts a field name to uppercase if it is not quoted, while Flink SQL does not convert field names. Thus, for physical columns from the Oracle database, you should use the uppercase field name as converted by Oracle when defining an oracle-cdc table in Flink SQL, as illustrated below.
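
For instance (a hedged sketch; the Oracle table below is hypothetical), a column created without quotes in Oracle is stored in uppercase, so the Flink DDL must declare the uppercase name:

```sql
-- Hypothetical Oracle DDL: unquoted identifiers are stored in uppercase.
--   CREATE TABLE inventory.products (id NUMBER(9), name VARCHAR2(255));
-- Oracle stores the columns as ID and NAME.

-- The Flink SQL definition must therefore use the uppercase names:
CREATE TABLE products (
    ID INT NOT NULL,   -- declaring 'id' would not match any captured column
    NAME STRING,
    PRIMARY KEY(ID) NOT ENFORCED
) WITH (
    'connector' = 'oracle-cdc-inlong',
    'hostname' = 'localhost',
    'port' = '1521',
    'username' = 'flinkuser',
    'password' = 'flinkpw',
    'database-name' = 'XE',
    'schema-name' = 'inventory',
    'table-name' = 'products'
);
```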

Features

Exactly-Once Processing

The Oracle Extract Node is a Flink source connector that first reads the database snapshot and then continues to read change events, with exactly-once processing even when failures happen. Please read How the connector works.

Startup Reading Position

The config option scan.startup.mode specifies the startup mode for Oracle CDC consumer. The valid enumerations are:

  • initial (default): Performs an initial snapshot of the monitored database tables upon first startup, then continues to read the latest redo log.
  • latest-offset: Never performs a snapshot of the monitored database tables upon first startup; only reads changes made since the connector was started.

Note: The mechanism of the scan.startup.mode option relies on Debezium's snapshot.mode configuration, so please do not use them together. If you specify both the scan.startup.mode and debezium.snapshot.mode options in the table DDL, scan.startup.mode may not work. A sketch of setting the startup mode follows.
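
A hedged sketch of choosing the startup mode in the DDL (connection values are placeholders; per the note above, no debezium.snapshot.mode is set):

```sql
CREATE TABLE oracle_extract_node (
    ID INT NOT NULL,
    NAME STRING,
    PRIMARY KEY(ID) NOT ENFORCED
) WITH (
    'connector' = 'oracle-cdc-inlong',
    'hostname' = 'localhost',
    'port' = '1521',
    'username' = 'flinkuser',
    'password' = 'flinkpw',
    'database-name' = 'XE',
    'schema-name' = 'inlong',
    'table-name' = 'user',
    -- read only changes made after startup; omit this option
    -- (or use 'initial') to take a snapshot first
    'scan.startup.mode' = 'latest-offset'
);
```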

Single Thread Reading

The Oracle Extract Node cannot read in parallel, because only one task can receive change events.
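
As a hedged illustration (SET is the standard Flink SQL client syntax for configuration; pinning the parallelism is an assumption of this sketch, not a requirement stated here), you can make the single-task constraint explicit:

```sql
-- In the Flink SQL client: run with a single parallel task, since only
-- one source task can receive change events anyway.
SET 'parallelism.default' = '1';
```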

Whole Database, Multiple Schemas, Multiple Tables Migration

The Oracle Extract Node supports whole-database, multiple-schema, and multiple-table migration. When you enable this function, the Oracle Extract Node compresses the physical fields of the table into a special meta field 'data_canal' in canal-json format.

Config options:

| Option | Required | Default | Type | Description |
|---|---|---|---|---|
| source.multiple.enable | optional | false | Boolean | Specify 'source.multiple.enable' = 'true' to enable the whole-database, multiple-schema, multiple-table migration function. |
| schema-name | required | (none) | String | Schema name of the Oracle database to monitor. To capture multiple schemas, separate them with commas. For example: 'schema-name' = 'SCHEMA1,SCHEMA2'. |
| table-name | required | (none) | String | Table name of the Oracle database to monitor. To capture multiple tables, separate them with commas. For example: 'table-name' = 'SCHEMA1.TB.*, SCHEMA2.TB1'. |

The CREATE TABLE example demonstrates the syntax of this function:

```sql
CREATE TABLE node(
    data STRING METADATA FROM 'meta.data_canal' VIRTUAL)
WITH (
    'connector' = 'oracle-cdc-inlong',
    'hostname' = 'localhost',
    'port' = '1521',
    'username' = 'flinkuser',
    'password' = 'flinkpw',
    'database-name' = 'XE',
    'schema-name' = 'inventory',
    'table-name' = 'inventory..*',
    'source.multiple.enable' = 'true'
)
```

Data Type Mapping

| Oracle type | Flink SQL type |
|---|---|
| NUMBER(p, s <= 0), p - s < 3 | TINYINT |
| NUMBER(p, s <= 0), p - s < 5 | SMALLINT |
| NUMBER(p, s <= 0), p - s < 10 | INT |
| NUMBER(p, s <= 0), p - s < 19 | BIGINT |
| NUMBER(p, s <= 0), 19 <= p - s <= 38 | DECIMAL(p - s, 0) |
| NUMBER(p, s > 0) | DECIMAL(p, s) |
| NUMBER(p, s <= 0), p - s > 38 | STRING |
| FLOAT, BINARY_FLOAT | FLOAT |
| DOUBLE PRECISION, BINARY_DOUBLE | DOUBLE |
| NUMBER(1) | BOOLEAN |
| DATE, TIMESTAMP [(p)] | TIMESTAMP [(p)] [WITHOUT TIMEZONE] |
| TIMESTAMP [(p)] WITH TIME ZONE | TIMESTAMP [(p)] WITH TIME ZONE |
| TIMESTAMP [(p)] WITH LOCAL TIME ZONE | TIMESTAMP_LTZ [(p)] |
| CHAR(n), NCHAR(n), NVARCHAR2(n), VARCHAR(n), VARCHAR2(n), CLOB, NCLOB, XMLType | STRING |
| BLOB, ROWID | BYTES |
| INTERVAL DAY TO SECOND, INTERVAL YEAR TO MONTH | BIGINT |
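
As a hedged illustration of this mapping (the Oracle table below is made up for the example): NUMBER(9) maps to INT, NUMBER(10, 2) to DECIMAL(10, 2), NUMBER(1) to BOOLEAN, and VARCHAR2 to STRING:

```sql
-- Hypothetical Oracle table:
--   CREATE TABLE inventory.products (
--       ID       NUMBER(9),        -- p - s = 9 < 10  -> INT
--       PRICE    NUMBER(10, 2),    -- s > 0           -> DECIMAL(10, 2)
--       IN_STOCK NUMBER(1),        --                 -> BOOLEAN
--       NAME     VARCHAR2(255),    --                 -> STRING
--       UPDATED  TIMESTAMP(3)      --                 -> TIMESTAMP(3)
--   );

-- Corresponding Flink SQL declaration:
CREATE TABLE products (
    ID INT NOT NULL,
    PRICE DECIMAL(10, 2),
    IN_STOCK BOOLEAN,
    NAME STRING,
    UPDATED TIMESTAMP(3),
    PRIMARY KEY(ID) NOT ENFORCED
) WITH (
    'connector' = 'oracle-cdc-inlong',
    'hostname' = 'localhost',
    'port' = '1521',
    'username' = 'flinkuser',
    'password' = 'flinkpw',
    'database-name' = 'XE',
    'schema-name' = 'inventory',
    'table-name' = 'products'
);
```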