Data Types

Flink SQL has a rich set of native data types available to users.

Data Type

A data type describes the logical type of a value in the Table ecosystem. It can be used to declare input and/or output types of operations.

Flink's data type is similar to the SQL standard's data type terminology, but it also contains information about the nullability of a value for efficient handling of scalar expressions.

Examples of data types are:

  • INT
  • INT NOT NULL
  • INTERVAL DAY TO SECOND(3)
  • ROW<myField ARRAY<BOOLEAN>, myOtherField TIMESTAMP(3)>

A list of all pre-defined data types can be found below.

Data Types in the Table API

Java/Scala

Users of JVM-based APIs work with instances of org.apache.flink.table.types.DataType within the Table API when defining connectors, catalogs, or user-defined functions.

A DataType instance has two responsibilities:

  • It is a declaration of a logical type that defines the boundary between JVM-based languages or Python and the table ecosystem; it does not imply a concrete physical representation for transmission or storage.
  • Optional: It gives the planner hints about the physical representation of data, which is useful at the edges to other APIs.

For JVM-based languages, all pre-defined data types are available in org.apache.flink.table.api.DataTypes.

Python

Users of the Python API work with instances of pyflink.table.types.DataType within the Python Table API when defining user-defined functions.

A DataType instance has the following responsibility:

  • It is a declaration of a logical type that defines the boundary between JVM-based languages or Python and the table ecosystem; it does not imply a concrete physical representation for transmission or storage.

For the Python language, those types are available in pyflink.table.types.DataTypes.

Java

It is recommended to add a star import to your table programs for a fluent API:

  import static org.apache.flink.table.api.DataTypes.*;

  DataType t = INTERVAL(DAY(), SECOND(3));

Scala

It is recommended to add a star import to your table programs for a fluent API:

  import org.apache.flink.table.api.DataTypes._

  val t: DataType = INTERVAL(DAY(), SECOND(3))

Python

  from pyflink.table.types import DataTypes

  t = DataTypes.INTERVAL(DataTypes.DAY(), DataTypes.SECOND(3))

Physical Hints

At the edges of the table ecosystem, where the SQL-based type system ends and programming-language-specific data types begin, physical hints are required. Hints define which data format an implementation expects.

For example, a data source could declare that it produces values for a logical TIMESTAMP using the class java.sql.Timestamp instead of the default java.time.LocalDateTime. With this information, the Flink runtime can convert the produced class into its internal data format. In return, a data sink can declare the data format in which it consumes and converts data from the Flink runtime.

Here are some examples of how to declare a bridging conversion class:

Java

  // tell the runtime to use java.sql.Timestamp instead of java.time.LocalDateTime
  DataType t = DataTypes.TIMESTAMP(3).bridgedTo(java.sql.Timestamp.class);

  // tell the runtime to use a primitive int array instead of a boxed Integer array
  DataType t = DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(int[].class);

Scala

  // tell the runtime to use java.sql.Timestamp instead of java.time.LocalDateTime
  val t: DataType = DataTypes.TIMESTAMP(3).bridgedTo(classOf[java.sql.Timestamp])

  // tell the runtime to use a primitive int array instead of a boxed Integer array
  val t: DataType = DataTypes.ARRAY(DataTypes.INT().notNull()).bridgedTo(classOf[Array[Int]])

Attention Physical hints are only required when extending the API. Users of pre-defined sources, sinks, and Flink functions do not need to define such hints. Within a table program (e.g. field.cast(TIMESTAMP(3).bridgedTo(Timestamp.class))), hints are ignored.

List of Data Types

This section lists all pre-defined data types.

Java/Scala

For the JVM-based Table API those types are also available in org.apache.flink.table.api.DataTypes.

Python

For the Python Table API, those types are available in pyflink.table.types.DataTypes.

The default planner supports the following set of SQL types:

Data Type         | Remarks for Data Type
CHAR              |
VARCHAR           |
STRING            |
BOOLEAN           |
BINARY            |
VARBINARY         |
BYTES             |
DECIMAL           | Supports fixed precision and scale.
TINYINT           |
SMALLINT          |
INTEGER           |
BIGINT            |
FLOAT             |
DOUBLE            |
DATE              |
TIME              | Supports only a precision of 0.
TIMESTAMP         |
TIMESTAMP_LTZ     |
INTERVAL          | Supports only intervals of MONTH and SECOND(3).
ARRAY             |
MULTISET          |
MAP               |
ROW               |
RAW               |
Structured types  | Only exposed in user-defined functions yet.

Character Strings

CHAR

Data type of a fixed-length character string.

Declaration

SQL

  1. CHAR
  2. CHAR(n)

Java/Scala

  1. DataTypes.CHAR(n)

Bridging to JVM Types

Java Type                              | Input | Output | Remarks
java.lang.String                       | X     | X      | Default
byte[]                                 | X     | X      | Assumes UTF-8 encoding.
org.apache.flink.table.data.StringData | X     | X      | Internal data structure.

Python

  1. Not supported.

The type can be declared using CHAR(n) where n is the number of code points. n must have a value between 1 and 2,147,483,647 (both inclusive). If no length is specified, n is equal to 1.

VARCHAR / STRING

Data type of a variable-length character string.

Declaration

SQL

  1. VARCHAR
  2. VARCHAR(n)
  3. STRING

Java/Scala

  1. DataTypes.VARCHAR(n)
  2. DataTypes.STRING()

Bridging to JVM Types

Java Type                              | Input | Output | Remarks
java.lang.String                       | X     | X      | Default
byte[]                                 | X     | X      | Assumes UTF-8 encoding.
org.apache.flink.table.data.StringData | X     | X      | Internal data structure.

Python

  1. DataTypes.VARCHAR(n)
  2. DataTypes.STRING()

Attention The specified maximum number of code points n in DataTypes.VARCHAR(n) must be 2,147,483,647 currently.

The type can be declared using VARCHAR(n) where n is the maximum number of code points. n must have a value between 1 and 2,147,483,647 (both inclusive). If no length is specified, n is equal to 1.

STRING is a synonym for VARCHAR(2147483647).
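The length n counts code points, not bytes. As a stdlib-only illustration (not Flink API): Python's len() on a str also counts code points, while the UTF-8 byte length of the same string can differ:

```python
# n in CHAR(n)/VARCHAR(n) counts code points, not UTF-8 bytes.
s = "na\u00efve"  # "naïve": 5 code points
assert len(s) == 5                  # code points, matching the SQL length semantics
assert len(s.encode("utf-8")) == 6  # UTF-8 bytes differ: "ï" encodes to 2 bytes
```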

Binary Strings

BINARY

Data type of a fixed-length binary string (=a sequence of bytes).

Declaration

SQL

  1. BINARY
  2. BINARY(n)

Java/Scala

  1. DataTypes.BINARY(n)

Bridging to JVM Types

Java Type | Input | Output | Remarks
byte[]    | X     | X      | Default

Python

  1. Not supported.

The type can be declared using BINARY(n) where n is the number of bytes. n must have a value between 1 and 2,147,483,647 (both inclusive). If no length is specified, n is equal to 1.

VARBINARY / BYTES

Data type of a variable-length binary string (=a sequence of bytes).

Declaration

SQL

  1. VARBINARY
  2. VARBINARY(n)
  3. BYTES

Java/Scala

  1. DataTypes.VARBINARY(n)
  2. DataTypes.BYTES()

Bridging to JVM Types

Java Type | Input | Output | Remarks
byte[]    | X     | X      | Default

Python

  1. DataTypes.VARBINARY(n)
  2. DataTypes.BYTES()

Attention The specified maximum number of bytes n in DataTypes.VARBINARY(n) must be 2,147,483,647 currently.

The type can be declared using VARBINARY(n) where n is the maximum number of bytes. n must have a value between 1 and 2,147,483,647 (both inclusive). If no length is specified, n is equal to 1.

BYTES is a synonym for VARBINARY(2147483647).

Exact Numerics

DECIMAL

Data type of a decimal number with fixed precision and scale.

Declaration

SQL

  1. DECIMAL
  2. DECIMAL(p)
  3. DECIMAL(p, s)
  4. DEC
  5. DEC(p)
  6. DEC(p, s)
  7. NUMERIC
  8. NUMERIC(p)
  9. NUMERIC(p, s)

Java/Scala

  1. DataTypes.DECIMAL(p, s)

Bridging to JVM Types

Java Type                               | Input | Output | Remarks
java.math.BigDecimal                    | X     | X      | Default
org.apache.flink.table.data.DecimalData | X     | X      | Internal data structure.

Python

  1. DataTypes.DECIMAL(p, s)

Attention The precision and scale specified in DataTypes.DECIMAL(p, s) currently must be 38 and 18, respectively.

The type can be declared using DECIMAL(p, s) where p is the number of digits in a number (precision) and s is the number of digits to the right of the decimal point in a number (scale). p must have a value between 1 and 38 (both inclusive). s must have a value between 0 and p (both inclusive). The default value for p is 10. The default value for s is 0.

NUMERIC(p, s) and DEC(p, s) are synonyms for this type.
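The fixed-scale behavior of DECIMAL(p, s) can be sketched with Python's stdlib decimal module (this is a conceptual illustration, not Flink's implementation; the helper name is ours):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_decimal_s2(value: str) -> Decimal:
    # mimic a fixed scale of s = 2: always keep exactly two fractional digits
    return Decimal(value).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

assert str(to_decimal_s2("3.14159")) == "3.14"  # rounded to scale 2
assert str(to_decimal_s2("2")) == "2.00"        # padded to scale 2
```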

TINYINT

Data type of a 1-byte signed integer with values from -128 to 127.

Declaration

SQL

  1. TINYINT

Java/Scala

  1. DataTypes.TINYINT()

Bridging to JVM Types

Java Type      | Input | Output | Remarks
java.lang.Byte | X     | X      | Default
byte           | X     | (X)    | Output only if type is not nullable.

Python

  1. DataTypes.TINYINT()

SMALLINT

Data type of a 2-byte signed integer with values from -32,768 to 32,767.

Declaration

SQL

  1. SMALLINT

Java/Scala

  1. DataTypes.SMALLINT()

Bridging to JVM Types

Java Type       | Input | Output | Remarks
java.lang.Short | X     | X      | Default
short           | X     | (X)    | Output only if type is not nullable.

Python

  1. DataTypes.SMALLINT()

INT

Data type of a 4-byte signed integer with values from -2,147,483,648 to 2,147,483,647.

Declaration

SQL

  1. INT
  2. INTEGER

Java/Scala

  1. DataTypes.INT()

Bridging to JVM Types

Java Type         | Input | Output | Remarks
java.lang.Integer | X     | X      | Default
int               | X     | (X)    | Output only if type is not nullable.

Python

  1. DataTypes.INT()

INTEGER is a synonym for this type.

BIGINT

Data type of an 8-byte signed integer with values from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.

Declaration

SQL

  1. BIGINT

Java/Scala

  1. DataTypes.BIGINT()

Bridging to JVM Types

Java Type      | Input | Output | Remarks
java.lang.Long | X     | X      | Default
long           | X     | (X)    | Output only if type is not nullable.

Python

  1. DataTypes.BIGINT()

Approximate Numerics

FLOAT

Data type of a 4-byte single precision floating point number.

Compared to the SQL standard, the type does not take parameters.

Declaration

SQL

  1. FLOAT

Java/Scala

  1. DataTypes.FLOAT()

Bridging to JVM Types

Java Type       | Input | Output | Remarks
java.lang.Float | X     | X      | Default
float           | X     | (X)    | Output only if type is not nullable.

Python

  1. DataTypes.FLOAT()

DOUBLE

Data type of an 8-byte double precision floating point number.

Declaration

SQL

  1. DOUBLE
  2. DOUBLE PRECISION

Java/Scala

  1. DataTypes.DOUBLE()

Bridging to JVM Types

Java Type        | Input | Output | Remarks
java.lang.Double | X     | X      | Default
double           | X     | (X)    | Output only if type is not nullable.

Python

  1. DataTypes.DOUBLE()

DOUBLE PRECISION is a synonym for this type.

Date and Time

DATE

Data type of a date consisting of year-month-day with values ranging from 0000-01-01 to 9999-12-31.

Compared to the SQL standard, the range starts at year 0000.

Declaration

SQL

  1. DATE

Java/Scala

  1. DataTypes.DATE()

Bridging to JVM Types

Java Type           | Input | Output | Remarks
java.time.LocalDate | X     | X      | Default
java.sql.Date       | X     | X      |
java.lang.Integer   | X     | X      | Describes the number of days since epoch.
int                 | X     | (X)    | Describes the number of days since epoch. Output only if type is not nullable.

Python

  1. DataTypes.DATE()
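The "days since epoch" representation used by the Integer/int bridging classes can be sketched with the stdlib (illustrative only; the helper name is ours):

```python
from datetime import date

def days_since_epoch(d: date) -> int:
    # the int bridging form of DATE: whole days since 1970-01-01
    return (d - date(1970, 1, 1)).days

assert days_since_epoch(date(1970, 1, 1)) == 0
assert days_since_epoch(date(1970, 1, 2)) == 1
assert days_since_epoch(date(2000, 1, 1)) == 10957
```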

TIME

Data type of a time without time zone consisting of hour:minute:second[.fractional] with up to nanosecond precision and values ranging from 00:00:00.000000000 to 23:59:59.999999999.

SQL/Java/Scala

Compared to the SQL standard, leap seconds (23:59:60 and 23:59:61) are not supported as the semantics are closer to java.time.LocalTime. A time with time zone is not provided.

Python

Compared to the SQL standard, leap seconds (23:59:60 and 23:59:61) are not supported. A time with time zone is not provided.

Declaration

SQL

  1. TIME
  2. TIME(p)

Java/Scala

  1. DataTypes.TIME(p)

Bridging to JVM Types

Java Type           | Input | Output | Remarks
java.time.LocalTime | X     | X      | Default
java.sql.Time       | X     | X      |
java.lang.Integer   | X     | X      | Describes the number of milliseconds of the day.
int                 | X     | (X)    | Describes the number of milliseconds of the day. Output only if type is not nullable.
java.lang.Long      | X     | X      | Describes the number of nanoseconds of the day.
long                | X     | (X)    | Describes the number of nanoseconds of the day. Output only if type is not nullable.

Python

  1. DataTypes.TIME(p)

Attention The precision specified in DataTypes.TIME(p) must be 0 currently.

The type can be declared using TIME(p) where p is the number of digits of fractional seconds (precision). p must have a value between 0 and 9 (both inclusive). If no precision is specified, p is equal to 0.
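The "milliseconds of the day" representation used by the Integer/int bridging classes can be sketched with the stdlib (illustrative only; the helper name is ours):

```python
from datetime import time

def millis_of_day(t: time) -> int:
    # the int bridging form of TIME: milliseconds since midnight
    return ((t.hour * 60 + t.minute) * 60 + t.second) * 1000 + t.microsecond // 1000

assert millis_of_day(time(0, 0, 0)) == 0
assert millis_of_day(time(0, 1, 10)) == 70_000
assert millis_of_day(time(23, 59, 59, 999000)) == 86_399_999
```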

TIMESTAMP

Data type of a timestamp without time zone consisting of year-month-day hour:minute:second[.fractional] with up to nanosecond precision and values ranging from 0000-01-01 00:00:00.000000000 to 9999-12-31 23:59:59.999999999.

SQL/Java/Scala

Compared to the SQL standard, leap seconds (23:59:60 and 23:59:61) are not supported as the semantics are closer to java.time.LocalDateTime.

A conversion from and to BIGINT (a JVM long type) is not supported as this would imply a time zone. However, this type is time zone free. For more java.time.Instant-like semantics use TIMESTAMP_LTZ.

Python

Compared to the SQL standard, leap seconds (23:59:60 and 23:59:61) are not supported.

A conversion from and to BIGINT is not supported as this would imply a time zone. However, this type is time zone free. If you have such a requirement please use TIMESTAMP_LTZ.

Declaration

SQL

  1. TIMESTAMP
  2. TIMESTAMP(p)
  3. TIMESTAMP WITHOUT TIME ZONE
  4. TIMESTAMP(p) WITHOUT TIME ZONE

Java/Scala

  1. DataTypes.TIMESTAMP(p)

Bridging to JVM Types

Java Type                                 | Input | Output | Remarks
java.time.LocalDateTime                   | X     | X      | Default
java.sql.Timestamp                        | X     | X      |
org.apache.flink.table.data.TimestampData | X     | X      | Internal data structure.

Python

  1. DataTypes.TIMESTAMP(p)

Attention The precision specified in DataTypes.TIMESTAMP(p) must be 3 currently.

The type can be declared using TIMESTAMP(p) where p is the number of digits of fractional seconds (precision). p must have a value between 0 and 9 (both inclusive). If no precision is specified, p is equal to 6.

TIMESTAMP(p) WITHOUT TIME ZONE is a synonym for this type.

TIMESTAMP WITH TIME ZONE

Data type of a timestamp with time zone consisting of year-month-day hour:minute:second[.fractional] zone with up to nanosecond precision and values ranging from 0000-01-01 00:00:00.000000000 +14:59 to 9999-12-31 23:59:59.999999999 -14:59.

SQL/Java/Scala

Compared to the SQL standard, leap seconds (23:59:60 and 23:59:61) are not supported as the semantics are closer to java.time.OffsetDateTime.

Python

Compared to the SQL standard, leap seconds (23:59:60 and 23:59:61) are not supported.

Compared to TIMESTAMP_LTZ, the time zone offset information is physically stored in every datum. It is used individually for every computation, visualization, or communication to external systems.

Declaration

SQL

  1. TIMESTAMP WITH TIME ZONE
  2. TIMESTAMP(p) WITH TIME ZONE

Java/Scala

  1. DataTypes.TIMESTAMP_WITH_TIME_ZONE(p)

Bridging to JVM Types

Java Type                | Input | Output | Remarks
java.time.OffsetDateTime | X     | X      | Default
java.time.ZonedDateTime  | X     |        | Ignores the zone ID.

Python

  1. Not supported.

SQL/Java/Scala

The type can be declared using TIMESTAMP(p) WITH TIME ZONE where p is the number of digits of fractional seconds (precision). p must have a value between 0 and 9 (both inclusive). If no precision is specified, p is equal to 6.


TIMESTAMP_LTZ

Data type of a timestamp with local time zone consisting of year-month-day hour:minute:second[.fractional] zone with up to nanosecond precision and values ranging from 0000-01-01 00:00:00.000000000 +14:59 to 9999-12-31 23:59:59.999999999 -14:59.

SQL/Java/Scala

Leap seconds (23:59:60 and 23:59:61) are not supported as the semantics are closer to java.time.OffsetDateTime.

Compared to TIMESTAMP WITH TIME ZONE, the time zone offset information is not stored physically in every datum. Instead, the type assumes java.time.Instant semantics in UTC time zone at the edges of the table ecosystem. Every datum is interpreted in the local time zone configured in the current session for computation and visualization.

Python

Leap seconds (23:59:60 and 23:59:61) are not supported.

Compared to TIMESTAMP WITH TIME ZONE, the time zone offset information is not stored physically in every datum. Every datum is interpreted in the local time zone configured in the current session for computation and visualization.

This type fills the gap between time zone free and time zone mandatory timestamp types by allowing the interpretation of UTC timestamps according to the configured session time zone.

Declaration

SQL

  1. TIMESTAMP_LTZ
  2. TIMESTAMP_LTZ(p)
  3. TIMESTAMP WITH LOCAL TIME ZONE
  4. TIMESTAMP(p) WITH LOCAL TIME ZONE

Java/Scala

  1. DataTypes.TIMESTAMP_LTZ(p)
  2. DataTypes.TIMESTAMP_WITH_LOCAL_TIME_ZONE(p)

Bridging to JVM Types

Java Type                                 | Input | Output | Remarks
java.time.Instant                         | X     | X      | Default
java.lang.Integer                         | X     | X      | Describes the number of seconds since epoch.
int                                       | X     | (X)    | Describes the number of seconds since epoch. Output only if type is not nullable.
java.lang.Long                            | X     | X      | Describes the number of milliseconds since epoch.
long                                      | X     | (X)    | Describes the number of milliseconds since epoch. Output only if type is not nullable.
java.sql.Timestamp                        | X     | X      | Describes the number of milliseconds since epoch.
org.apache.flink.table.data.TimestampData | X     | X      | Internal data structure.

Python

  1. DataTypes.TIMESTAMP_LTZ(p)
  2. DataTypes.TIMESTAMP_WITH_LOCAL_TIME_ZONE(p)

Attention The precision specified in DataTypes.TIMESTAMP_LTZ(p) must be 3 currently.

The type can be declared using TIMESTAMP_LTZ(p) where p is the number of digits of fractional seconds (precision). p must have a value between 0 and 9 (both inclusive). If no precision is specified, p is equal to 6.

TIMESTAMP(p) WITH LOCAL TIME ZONE is a synonym for this type.
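The key property of TIMESTAMP_LTZ — one stored instant, rendered differently per session time zone — can be sketched with the stdlib using fixed offsets (illustrative only, not Flink API; the offsets stand in for configured session zones):

```python
from datetime import datetime, timedelta, timezone

instant = datetime.fromtimestamp(0, tz=timezone.utc)           # the epoch instant (UTC)
utc_plus_8 = instant.astimezone(timezone(timedelta(hours=8)))  # e.g. a UTC+8 session zone
utc_plus_1 = instant.astimezone(timezone(timedelta(hours=1)))  # e.g. a UTC+1 session zone

# the rendering depends on the session zone ...
assert utc_plus_8.isoformat() == "1970-01-01T08:00:00+08:00"
assert utc_plus_1.isoformat() == "1970-01-01T01:00:00+01:00"
# ... but the underlying instant is unchanged
assert utc_plus_8 == utc_plus_1
```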

INTERVAL YEAR TO MONTH

Data type for a group of year-month interval types.

The type must be parameterized to one of the following resolutions:

  • interval of years,
  • interval of years to months,
  • or interval of months.

An interval of year-month consists of +years-months with values ranging from -9999-11 to +9999-11.

The value representation is the same for all types of resolutions. For example, an interval of months of 50 is always represented in an interval-of-years-to-months format (with default year precision): +04-02.

Declaration

SQL

  1. INTERVAL YEAR
  2. INTERVAL YEAR(p)
  3. INTERVAL YEAR(p) TO MONTH
  4. INTERVAL MONTH

Java/Scala

  1. DataTypes.INTERVAL(DataTypes.YEAR())
  2. DataTypes.INTERVAL(DataTypes.YEAR(p))
  3. DataTypes.INTERVAL(DataTypes.YEAR(p), DataTypes.MONTH())
  4. DataTypes.INTERVAL(DataTypes.MONTH())

Bridging to JVM Types

Java Type         | Input | Output | Remarks
java.time.Period  | X     | X      | Ignores the days part. Default
java.lang.Integer | X     | X      | Describes the number of months.
int               | X     | (X)    | Describes the number of months. Output only if type is not nullable.

Python

  1. DataTypes.INTERVAL(DataTypes.YEAR())
  2. DataTypes.INTERVAL(DataTypes.YEAR(p))
  3. DataTypes.INTERVAL(DataTypes.YEAR(p), DataTypes.MONTH())
  4. DataTypes.INTERVAL(DataTypes.MONTH())

The type can be declared using the above combinations where p is the number of digits of years (year precision). p must have a value between 1 and 4 (both inclusive). If no year precision is specified, p is equal to 2.
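Since a year-month interval bridges to a plain number of months, the "+04-02" rendering of 50 months can be sketched in a few lines of stdlib Python (illustrative only; the helper name is ours, not Flink's formatter):

```python
def format_year_month(months: int, year_precision: int = 2) -> str:
    # render a month count in the interval-of-years-to-months format, e.g. +04-02
    sign = "-" if months < 0 else "+"
    months = abs(months)
    return f"{sign}{months // 12:0{year_precision}d}-{months % 12:02d}"

assert format_year_month(50) == "+04-02"   # the example from the text
assert format_year_month(-13) == "-01-01"
```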

INTERVAL DAY TO SECOND

Data type for a group of day-time interval types.

The type must be parameterized to one of the following resolutions with up to nanosecond precision:

  • interval of days,
  • interval of days to hours,
  • interval of days to minutes,
  • interval of days to seconds,
  • interval of hours,
  • interval of hours to minutes,
  • interval of hours to seconds,
  • interval of minutes,
  • interval of minutes to seconds,
  • or interval of seconds.

An interval of day-time consists of +days hours:minutes:seconds.fractional with values ranging from -999999 23:59:59.999999999 to +999999 23:59:59.999999999. The value representation is the same for all types of resolutions. For example, an interval of seconds of 70 is always represented in an interval-of-days-to-seconds format (with default precisions): +00 00:01:10.000000.

Declaration

SQL

  1. INTERVAL DAY
  2. INTERVAL DAY(p1)
  3. INTERVAL DAY(p1) TO HOUR
  4. INTERVAL DAY(p1) TO MINUTE
  5. INTERVAL DAY(p1) TO SECOND(p2)
  6. INTERVAL HOUR
  7. INTERVAL HOUR TO MINUTE
  8. INTERVAL HOUR TO SECOND(p2)
  9. INTERVAL MINUTE
  10. INTERVAL MINUTE TO SECOND(p2)
  11. INTERVAL SECOND
  12. INTERVAL SECOND(p2)

Java/Scala

  1. DataTypes.INTERVAL(DataTypes.DAY())
  2. DataTypes.INTERVAL(DataTypes.DAY(p1))
  3. DataTypes.INTERVAL(DataTypes.DAY(p1), DataTypes.HOUR())
  4. DataTypes.INTERVAL(DataTypes.DAY(p1), DataTypes.MINUTE())
  5. DataTypes.INTERVAL(DataTypes.DAY(p1), DataTypes.SECOND(p2))
  6. DataTypes.INTERVAL(DataTypes.HOUR())
  7. DataTypes.INTERVAL(DataTypes.HOUR(), DataTypes.MINUTE())
  8. DataTypes.INTERVAL(DataTypes.HOUR(), DataTypes.SECOND(p2))
  9. DataTypes.INTERVAL(DataTypes.MINUTE())
  10. DataTypes.INTERVAL(DataTypes.MINUTE(), DataTypes.SECOND(p2))
  11. DataTypes.INTERVAL(DataTypes.SECOND())
  12. DataTypes.INTERVAL(DataTypes.SECOND(p2))

Bridging to JVM Types

Java Type          | Input | Output | Remarks
java.time.Duration | X     | X      | Default
java.lang.Long     | X     | X      | Describes the number of milliseconds.
long               | X     | (X)    | Describes the number of milliseconds. Output only if type is not nullable.

Python

  1. DataTypes.INTERVAL(DataTypes.DAY())
  2. DataTypes.INTERVAL(DataTypes.DAY(p1))
  3. DataTypes.INTERVAL(DataTypes.DAY(p1), DataTypes.HOUR())
  4. DataTypes.INTERVAL(DataTypes.DAY(p1), DataTypes.MINUTE())
  5. DataTypes.INTERVAL(DataTypes.DAY(p1), DataTypes.SECOND(p2))
  6. DataTypes.INTERVAL(DataTypes.HOUR())
  7. DataTypes.INTERVAL(DataTypes.HOUR(), DataTypes.MINUTE())
  8. DataTypes.INTERVAL(DataTypes.HOUR(), DataTypes.SECOND(p2))
  9. DataTypes.INTERVAL(DataTypes.MINUTE())
  10. DataTypes.INTERVAL(DataTypes.MINUTE(), DataTypes.SECOND(p2))
  11. DataTypes.INTERVAL(DataTypes.SECOND())
  12. DataTypes.INTERVAL(DataTypes.SECOND(p2))

The type can be declared using the above combinations where p1 is the number of digits of days (day precision) and p2 is the number of digits of fractional seconds (fractional precision). p1 must have a value between 1 and 6 (both inclusive). p2 must have a value between 0 and 9 (both inclusive). If no p1 is specified, it is equal to 2 by default. If no p2 is specified, it is equal to 6 by default.
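The "+00 00:01:10.000000" rendering of a 70-second interval mentioned above can be sketched with the stdlib (illustrative only; the helper name is ours, not Flink's formatter, and it only handles whole seconds):

```python
def format_day_time(total_seconds: int) -> str:
    # render a duration in the interval-of-days-to-seconds format with p1=2, p2=6
    sign = "-" if total_seconds < 0 else "+"
    s = abs(total_seconds)
    days, rem = divmod(s, 86_400)
    hours, rem = divmod(rem, 3_600)
    minutes, seconds = divmod(rem, 60)
    return f"{sign}{days:02d} {hours:02d}:{minutes:02d}:{seconds:02d}.{0:06d}"

assert format_day_time(70) == "+00 00:01:10.000000"  # the example from the text
```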

Constructed Data Types

ARRAY

Data type of an array of elements with same subtype.

Compared to the SQL standard, the maximum cardinality of an array cannot be specified but is fixed at 2,147,483,647. Also, any valid type is supported as a subtype.

Declaration

SQL

  1. ARRAY<t>
  2. t ARRAY

Java/Scala

  1. DataTypes.ARRAY(t)

Bridging to JVM Types

Java Type                             | Input | Output | Remarks
t[]                                   | (X)   | (X)    | Depends on the subtype. Default
java.util.List<t>                     | X     | X      |
subclass of java.util.List<t>         | X     |        |
org.apache.flink.table.data.ArrayData | X     | X      | Internal data structure.

Python

  1. DataTypes.ARRAY(t)

The type can be declared using ARRAY<t> where t is the data type of the contained elements.

t ARRAY is a synonym for being closer to the SQL standard. For example, INT ARRAY is equivalent to ARRAY<INT>.

MAP

Data type of an associative array that maps keys (including NULL) to values (including NULL). A map cannot contain duplicate keys; each key can map to at most one value.

There is no restriction of element types; it is the responsibility of the user to ensure uniqueness.

The map type is an extension to the SQL standard.

Declaration

SQL

  1. MAP<kt, vt>

Java/Scala

  1. DataTypes.MAP(kt, vt)

Bridging to JVM Types

Java Type                           | Input | Output | Remarks
java.util.Map<kt, vt>               | X     | X      | Default
subclass of java.util.Map<kt, vt>   | X     |        |
org.apache.flink.table.data.MapData | X     | X      | Internal data structure.

Python

  1. DataTypes.MAP(kt, vt)

The type can be declared using MAP<kt, vt> where kt is the data type of the key elements and vt is the data type of the value elements.

MULTISET

Data type of a multiset (=bag). Unlike a set, it allows for multiple instances for each of its elements with a common subtype. Each unique value (including NULL) is mapped to some multiplicity.

There is no restriction of element types; it is the responsibility of the user to ensure uniqueness.

Declaration

SQL

  1. MULTISET<t>
  2. t MULTISET

Java/Scala

  1. DataTypes.MULTISET(t)

Bridging to JVM Types

Java Type                                       | Input | Output | Remarks
java.util.Map<t, java.lang.Integer>             | X     | X      | Assigns each value to an integer multiplicity. Default
subclass of java.util.Map<t, java.lang.Integer> | X     |        |
org.apache.flink.table.data.MapData             | X     | X      | Internal data structure.

Python

  1. DataTypes.MULTISET(t)

The type can be declared using MULTISET<t> where t is the data type of the contained elements.

t MULTISET is a synonym for being closer to the SQL standard. For example, INT MULTISET is equivalent to MULTISET<INT>.
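The bridging shape of a multiset — a map from each unique value to its integer multiplicity — is exactly what Python's collections.Counter models (stdlib illustration, not Flink API):

```python
from collections import Counter

# MULTISET<STRING> bridged as Map<String, Integer>: value -> multiplicity
bag = Counter(["a", "b", "a", "a"])
assert bag == {"a": 3, "b": 1}
```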

ROW

Data type of a sequence of fields.

A field consists of a field name, field type, and an optional description. The most specific type of a row of a table is a row type. In this case, each column of the row corresponds to the field of the row type that has the same ordinal position as the column.

Compared to the SQL standard, an optional field description simplifies the handling with complex structures.

A row type is similar to the STRUCT type known from other non-standard-compliant frameworks.

Declaration

SQL

  1. ROW<n0 t0, n1 t1, ...>
  2. ROW<n0 t0 'd0', n1 t1 'd1', ...>
  3. ROW(n0 t0, n1 t1, ...)
  4. ROW(n0 t0 'd0', n1 t1 'd1', ...)

Java/Scala

  1. DataTypes.ROW(DataTypes.FIELD(n0, t0), DataTypes.FIELD(n1, t1), ...)
  2. DataTypes.ROW(DataTypes.FIELD(n0, t0, d0), DataTypes.FIELD(n1, t1, d1), ...)

Bridging to JVM Types

Java Type                           | Input | Output | Remarks
org.apache.flink.types.Row          | X     | X      | Default
org.apache.flink.table.data.RowData | X     | X      | Internal data structure.

Python

  1. DataTypes.ROW([DataTypes.FIELD(n0, t0), DataTypes.FIELD(n1, t1), ...])
  2. DataTypes.ROW([DataTypes.FIELD(n0, t0, d0), DataTypes.FIELD(n1, t1, d1), ...])

The type can be declared using ROW<n0 t0 'd0', n1 t1 'd1', ...> where n is the unique name of a field, t is the logical type of a field, d is the description of a field.

ROW(...) is a synonym for being closer to the SQL standard. For example, ROW(myField INT, myOtherField BOOLEAN) is equivalent to ROW<myField INT, myOtherField BOOLEAN>.

User-Defined Data Types

Java/Scala

Attention User-defined data types are not fully supported yet. They are currently (as of Flink 1.11) only exposed as unregistered structured types in parameters and return types of functions.

A structured type is similar to an object in an object-oriented programming language. It contains zero, one or more attributes. Each attribute consists of a name and a type.

There are two kinds of structured types:

  • Types that are stored in a catalog and are identified by a catalog identifier (like cat.db.MyType). Those are equal to the SQL standard definition of structured types.

  • Anonymously defined, unregistered types (usually reflectively extracted) that are identified by an implementation class (like com.myorg.model.MyType). Those are useful when programmatically defining a table program. They enable reusing existing JVM classes without manually defining the schema of a data type again.

Registered Structured Types

Currently, registered structured types are not supported. Thus, they cannot be stored in a catalog or referenced in a CREATE TABLE DDL.

Unregistered Structured Types

Unregistered structured types can be created from regular POJOs (Plain Old Java Objects) using automatic reflective extraction.

The implementation class of a structured type must meet the following requirements:

  • The class must be globally accessible which means it must be declared public, static, and not abstract.
  • The class must offer a default constructor with zero arguments or a full constructor that assigns all fields.
  • All fields of the class must be readable by either public declaration or a getter that follows common coding style such as getField(), isField(), field().
  • All fields of the class must be writable by either public declaration, fully assigning constructor, or a setter that follows common coding style such as setField(...), field(...).
  • All fields must be mapped to a data type either implicitly via reflective extraction or explicitly using the @DataTypeHint annotations.
  • Fields that are declared static or transient are ignored.

The reflective extraction supports arbitrary nesting of fields as long as a field type does not (transitively) refer to itself.

The declared field class (e.g. public int age;) must be contained in the list of supported JVM bridging classes defined for every data type in this document (e.g. java.lang.Integer or int for INT).

For some classes an annotation is required in order to map the class to a data type (e.g. @DataTypeHint("DECIMAL(10, 2)") to assign a fixed precision and scale for java.math.BigDecimal).
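The idea of reflective extraction — reading a plain class's fields and mapping each field type to a logical type — can be sketched in stdlib Python with a dataclass (a hypothetical illustration only; the mapping table and function are ours, not Flink's extraction code):

```python
from dataclasses import dataclass, fields

# illustrative mapping of JVM-like field classes to logical types,
# mirroring the bridging tables in this document (int -> INT NOT NULL, etc.)
_TYPE_MAP = {int: "INT NOT NULL", str: "STRING", float: "DOUBLE NOT NULL"}

@dataclass
class User:
    age: int
    name: str

def extract_structured_type(cls) -> dict:
    # derive a field-name -> logical-type schema from the class declaration
    return {f.name: _TYPE_MAP[f.type] for f in fields(cls)}

assert extract_structured_type(User) == {"age": "INT NOT NULL", "name": "STRING"}
```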


Declaration

Java

  class User {

      // extract fields automatically
      public int age;
      public String name;

      // enrich the extraction with precision information
      public @DataTypeHint("DECIMAL(10, 2)") BigDecimal totalBalance;

      // enrich the extraction with forcing using RAW types
      public @DataTypeHint("RAW") Class<?> modelClass;
  }

  DataTypes.of(User.class);

Bridging to JVM Types

Java Type                           | Input | Output | Remarks
class                               | X     | X      | Originating class or subclasses (for input) or superclasses (for output). Default
org.apache.flink.types.Row          | X     | X      | Represent the structured type as a row.
org.apache.flink.table.data.RowData | X     | X      | Internal data structure.

Scala

  case class User(
      // extract fields automatically
      age: Int,
      name: String,

      // enrich the extraction with precision information
      @DataTypeHint("DECIMAL(10, 2)") totalBalance: java.math.BigDecimal,

      // enrich the extraction with forcing using a RAW type
      @DataTypeHint("RAW") modelClass: Class[_]
  )

  DataTypes.of(classOf[User])

Bridging to JVM Types

Java Type                           | Input | Output | Remarks
class                               | X     | X      | Originating class or subclasses (for input) or superclasses (for output). Default
org.apache.flink.types.Row          | X     | X      | Represent the structured type as a row.
org.apache.flink.table.data.RowData | X     | X      | Internal data structure.

Python

  1. Not supported.

Other Data Types

BOOLEAN

Data type of a boolean with a (possibly) three-valued logic of TRUE, FALSE, and UNKNOWN.

Declaration

SQL

  1. BOOLEAN

Java/Scala

  1. DataTypes.BOOLEAN()

Bridging to JVM Types

Java Type         | Input | Output | Remarks
java.lang.Boolean | X     | X      | Default
boolean           | X     | (X)    | Output only if type is not nullable.

Python

  1. DataTypes.BOOLEAN()

RAW

Data type of an arbitrary serialized type. This type is a black box within the table ecosystem and is only deserialized at the edges.

The raw type is an extension to the SQL standard.

Declaration

SQL

  1. RAW('class', 'snapshot')

Java/Scala

  1. DataTypes.RAW(class, serializer)
  2. DataTypes.RAW(class)

Bridging to JVM Types

Java Type                                | Input | Output | Remarks
class                                    | X     | X      | Originating class or subclasses (for input) or superclasses (for output). Default
byte[]                                   |       | X      |
org.apache.flink.table.data.RawValueData | X     | X      | Internal data structure.

Python

  1. Not supported.

SQL/Java/Scala

The type can be declared using RAW('class', 'snapshot') where class is the originating class and snapshot is the serialized TypeSerializerSnapshot in Base64 encoding. Usually, the type string is not declared directly but is generated while persisting the type.

In the API, the RAW type can be declared either by directly supplying a Class + TypeSerializer or by passing Class and letting the framework extract Class + TypeSerializer from there.


NULL

Data type for representing untyped NULL values.

The null type is an extension to the SQL standard. A null type has no other value except NULL, thus, it can be cast to any nullable type similar to JVM semantics.

This type helps in representing unknown types in API calls that use a NULL literal as well as bridging to formats such as JSON or Avro that define such a type as well.

This type is not very useful in practice and is just mentioned here for completeness.

Declaration

SQL

  1. NULL

Java/Scala

  1. DataTypes.NULL()

Bridging to JVM Types

Java Type        | Input | Output | Remarks
java.lang.Object | X     | X      | Default
any class        |       | (X)    | Any non-primitive type.

Python

  1. Not supported.

Casting

Flink Table API and SQL support casting an input data type to a target data type. Some casts always succeed regardless of the input value, while others can fail at runtime (i.e. when no value of the target data type can be created from the input value). For example, casting an INT value to STRING always succeeds, but casting a STRING value to INT cannot be guaranteed to succeed.

During planning, Flink's SQL validator rejects queries with type pairs that can never be cast and throws a ValidationException, e.g. when trying to cast a TIMESTAMP to an INTERVAL. However, some queries that pass validation can still fail at runtime, and the user must then handle those failures correctly.

In Flink Table API and SQL, casting can be performed with one of the following two built-in functions:

  • CAST: the regular cast function defined by the SQL standard. For fallible cast operations, it fails the job when the provided input is not valid. Type inference preserves the nullability of the input type.
  • TRY_CAST: an extension of the regular cast function that returns NULL when the cast fails. Its return type is always nullable.

For example:

  CAST('42' AS INT) --- returns 42 of type INT NOT NULL
  CAST(NULL AS VARCHAR) --- returns NULL of type VARCHAR
  CAST('non-number' AS INT) --- throws an exception and fails the job
  TRY_CAST('42' AS INT) --- returns 42 of type INT
  TRY_CAST(NULL AS VARCHAR) --- returns NULL of type VARCHAR
  TRY_CAST('non-number' AS INT) --- returns NULL of type INT
  COALESCE(TRY_CAST('non-number' AS INT), 0) --- returns 0 of type INT NOT NULL
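The difference between the two functions for the fallible string-to-int case can be sketched in stdlib Python (conceptual illustration only; the helper name is ours):

```python
def try_cast_int(value):
    # TRY_CAST semantics: NULL in -> NULL out; failed cast -> NULL instead of an error
    if value is None:
        return None
    try:
        return int(value)          # CAST would raise here on bad input and fail the job
    except ValueError:
        return None                # TRY_CAST swallows the failure

assert try_cast_int("42") == 42
assert try_cast_int(None) is None
assert try_cast_int("non-number") is None
```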

The matrix below describes the supported cast pairs, where "Y" means supported, "!" means fallible (the cast can fail at runtime), and "N" means unsupported:

Input type \ Target type | CHAR¹/VARCHAR¹/STRING | BINARY¹/VARBINARY¹/BYTES | BOOLEAN | DECIMAL | TINYINT | SMALLINT | INTEGER | BIGINT | FLOAT | DOUBLE | DATE | TIME | TIMESTAMP | TIMESTAMP_LTZ | INTERVAL | ARRAY | MULTISET | MAP | ROW | STRUCTURED | RAW
CHAR/VARCHAR/STRING      | Y | ! | ! | ! | ! | ! | ! | ! | ! | ! | ! | ! | !² | !² | N | N | N | N | N | N | N
BINARY/VARBINARY/BYTES   | Y | Y | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N
BOOLEAN                  | Y | N | Y | Y | Y | Y | Y | Y | Y | Y | N | N | N | N | N | N | N | N | N | N | N
DECIMAL                  | Y | N | N | Y | Y | Y | Y | Y | Y | Y | N | N | N | N | N | N | N | N | N | N | N
TINYINT                  | Y | N | Y | Y | Y | Y | Y | Y | Y | Y | N | N | N | N | N | N | N | N | N | N | N
SMALLINT                 | Y | N | Y | Y | Y | Y | Y | Y | Y | Y | N | N | N | N | N | N | N | N | N | N | N
INTEGER                  | Y | N | Y | Y | Y | Y | Y | Y | Y | Y | N | N | N | N | Y⁵ | N | N | N | N | N | N
BIGINT                   | Y | N | Y | Y | Y | Y | Y | Y | Y | Y | N | N | N | N | Y⁶ | N | N | N | N | N | N
FLOAT                    | Y | N | N | Y | Y | Y | Y | Y | Y | Y | N | N | N | N | N | N | N | N | N | N | N
DOUBLE                   | Y | N | N | Y | Y | Y | Y | Y | Y | Y | N | N | N | N | N | N | N | N | N | N | N
DATE                     | Y | N | N | N | N | N | N | N | N | N | Y | N | Y | Y | N | N | N | N | N | N | N
TIME                     | Y | N | N | N | N | N | N | N | N | N | N | Y | Y | Y | N | N | N | N | N | N | N
TIMESTAMP                | Y | N | N | N | N | N | N | N | N | N | Y | Y | Y | Y | N | N | N | N | N | N | N
TIMESTAMP_LTZ            | Y | N | N | N | N | N | N | N | N | N | Y | Y | Y | Y | N | N | N | N | N | N | N
INTERVAL                 | Y | N | N | N | N | N | Y⁵ | Y⁶ | N | N | N | N | N | N | Y | N | N | N | N | N | N
ARRAY                    | Y | N | N | N | N | N | N | N | N | N | N | N | N | N | N | !³ | N | N | N | N | N
MULTISET                 | Y | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | !³ | N | N | N | N
MAP                      | Y | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | !³ | N | N | N
ROW                      | Y | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | !³ | N | N
STRUCTURED               | Y | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | !³ | N
RAW                      | Y | ! | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | N | Y⁴

Notes:

  1. All casting to a type with fixed or variable length trims or pads the data according to the type definition.
  2. TO_TIMESTAMP and TO_TIMESTAMP_LTZ must be used instead of CAST/TRY_CAST.
  3. Supported iff the contained data structure also supports the cast. The cast is fallible iff the contained data structure's cast is fallible.
  4. Supported iff the RAW class and serializer are equal.
  5. Supported iff the INTERVAL is of a year-month resolution (MONTH to YEAR).
  6. Supported iff the INTERVAL is of a day-time resolution (DAY to SECOND).

Note that both CAST and TRY_CAST return NULL whenever the input is NULL.

Legacy Casting

The pre-1.15 CAST behavior can be enabled by setting the option table.exec.legacy-cast-behaviour to enabled. In Flink 1.15, this option is disabled by default.

If enabled, please be aware of the following issues:

  • Casting to CHAR/VARCHAR/BINARY/VARBINARY data types no longer trims or pads the data automatically.
  • CAST never fails the job on a failed cast; it returns NULL instead, but without inferring a nullable type as TRY_CAST does.
  • Casting to CHAR/VARCHAR/STRING produces slightly different results.

We discourage the use of this option and strongly suggest keeping it disabled in new projects in order to use the current casting behavior. The option will be removed in the next release.

Data Type Extraction

Java/Scala

At many locations in the API, Flink tries to automatically extract data types from class information using reflection to avoid repetitive manual schema work. However, extracting a data type reflectively is not always successful because logical information might be missing. Therefore, it might be necessary to add additional information close to a class or field declaration to support the extraction logic.

The following table lists classes that can be implicitly mapped to a data type without requiring further information.

If you intend to implement classes in Scala, it is recommended to use boxed types (e.g. java.lang.Integer) instead of Scala's primitives. As shown in the table below, Scala's primitives (e.g. Int or Double) are compiled to JVM primitives (e.g. int/double) and result in NOT NULL semantics. Furthermore, Scala primitives used in generics (e.g. java.util.Map[Int, Double]) are erased during compilation, leading to class information similar to java.util.Map[java.lang.Object, java.lang.Object].

Class                    | Data Type
java.lang.String         | STRING
java.lang.Boolean        | BOOLEAN
boolean                  | BOOLEAN NOT NULL
java.lang.Byte           | TINYINT
byte                     | TINYINT NOT NULL
java.lang.Short          | SMALLINT
short                    | SMALLINT NOT NULL
java.lang.Integer        | INT
int                      | INT NOT NULL
java.lang.Long           | BIGINT
long                     | BIGINT NOT NULL
java.lang.Float          | FLOAT
float                    | FLOAT NOT NULL
java.lang.Double         | DOUBLE
double                   | DOUBLE NOT NULL
java.sql.Date            | DATE
java.time.LocalDate      | DATE
java.sql.Time            | TIME(0)
java.time.LocalTime      | TIME(9)
java.sql.Timestamp       | TIMESTAMP(9)
java.time.LocalDateTime  | TIMESTAMP(9)
java.time.OffsetDateTime | TIMESTAMP(9) WITH TIME ZONE
java.time.Instant        | TIMESTAMP_LTZ(9)
java.time.Duration       | INTERVAL SECOND(9)
java.time.Period         | INTERVAL YEAR(4) TO MONTH
byte[]                   | BYTES
T[]                      | ARRAY<T>
java.util.Map<K, V>      | MAP<K, V>
structured type T        | anonymous structured type T

Other JVM bridging classes mentioned in this document require a @DataTypeHint annotation.

Data type hints can parameterize or replace the default extraction logic of individual function parameters and return types, structured classes, or fields of structured classes. An implementer can choose to what extent the default extraction logic should be modified by declaring a @DataTypeHint annotation.

The @DataTypeHint annotation provides a set of optional hint parameters. Some of those parameters are shown in the following examples. More information can be found in the documentation of the annotation class.

Java

  import org.apache.flink.table.annotation.DataTypeHint;

  class User {

      // defines an INT data type with a default conversion class `java.lang.Integer`
      public @DataTypeHint("INT") Object o1;

      // defines a TIMESTAMP data type of millisecond precision with an explicit conversion class
      public @DataTypeHint(value = "TIMESTAMP(3)", bridgedTo = java.sql.Timestamp.class) Object o2;

      // enrich the extraction with forcing using a RAW type
      public @DataTypeHint("RAW") Class<?> modelClass;

      // defines that all occurrences of java.math.BigDecimal (also in nested fields)
      // will be extracted as DECIMAL(12, 2)
      public @DataTypeHint(defaultDecimalPrecision = 12, defaultDecimalScale = 2) AccountStatement stmt;

      // defines that whenever a type cannot be mapped to a data type, instead of throwing
      // an exception, it is treated as a RAW type
      public @DataTypeHint(allowRawGlobally = HintFlag.TRUE) ComplexModel model;
  }

Scala

  import org.apache.flink.table.annotation.DataTypeHint

  class User {

      // defines an INT data type with a default conversion class `java.lang.Integer`
      @DataTypeHint("INT")
      var o1: AnyRef = _

      // defines a TIMESTAMP data type of millisecond precision with an explicit conversion class
      @DataTypeHint(value = "TIMESTAMP(3)", bridgedTo = classOf[java.sql.Timestamp])
      var o2: AnyRef = _

      // enrich the extraction with forcing using a RAW type
      @DataTypeHint("RAW")
      var modelClass: Class[_] = _

      // defines that all occurrences of java.math.BigDecimal (also in nested fields)
      // will be extracted as DECIMAL(12, 2)
      @DataTypeHint(defaultDecimalPrecision = 12, defaultDecimalScale = 2)
      var stmt: AccountStatement = _

      // defines that whenever a type cannot be mapped to a data type, instead of throwing
      // an exception, it is treated as a RAW type
      @DataTypeHint(allowRawGlobally = HintFlag.TRUE)
      var model: ComplexModel = _
  }

Python

  1. Not supported.