All Flink Scala APIs are deprecated and will be removed in a future Flink version. You can still build your application in Scala, but you should move to the Java version of either the DataStream or the Table API.

See FLIP-265: Deprecate and remove Scala API support.

Scala API Extensions

In order to keep the Scala and Java APIs broadly consistent, some of the features that allow a higher level of expressiveness in Scala have been left out of the standard APIs for both batch and streaming.

If you want to enjoy the full Scala experience, you can opt in to extensions that enhance the Scala API via implicit conversions.

To use all the available extensions, you only need to add a single import for the DataStream API:

```scala
import org.apache.flink.streaming.api.scala.extensions._
```

Alternatively, you can import individual extensions à la carte and use only the ones you prefer.

Accept partial functions

Normally, the DataStream API does not accept anonymous pattern matching functions to deconstruct tuples, case classes, or collections, like the following:

```scala
val data: DataStream[(Int, String, Double)] = // [...]
data.map {
  case (id, name, temperature) => // [...]
  // The previous line causes the following compilation error:
  // "The argument types of an anonymous function must be fully known. (SLS 8.5)"
}
```
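Without the extensions, the standard workaround is to bind the argument to a name and pattern-match on it inside the function body, since the parameter type can then be inferred from the signature of map:

```scala
// Standard workaround without the extensions: name the parameter so its
// type is fully known, then pattern-match on it explicitly.
data.map { tuple =>
  tuple match {
    case (id, name, temperature) => // [...]
  }
}
```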

This extension introduces new methods in the DataStream Scala API that have a one-to-one correspondence with methods in the standard API. These delegating methods do support anonymous pattern matching functions.

DataStream API

mapWith (delegates to map on DataStream):

```scala
data.mapWith {
  case (_, value) => value.toString
}
```

flatMapWith (delegates to flatMap on DataStream):

```scala
data.flatMapWith {
  case (_, name, visits) => visits.map(name -> _)
}
```

filterWith (delegates to filter on DataStream):

```scala
data.filterWith {
  case Train(_, isOnTime) => isOnTime
}
```

keyingBy (delegates to keyBy on DataStream):

```scala
data.keyingBy {
  case (id, _, _) => id
}
```

mapWith (delegates to map on ConnectedDataStream):

```scala
data.mapWith(
  map1 = { case (_, value) => value.toString },
  map2 = { case (_, _, value, _) => value + 1 }
)
```

flatMapWith (delegates to flatMap on ConnectedDataStream):

```scala
data.flatMapWith(
  flatMap1 = { case (_, json) => parse(json) },
  flatMap2 = { case (_, _, json, _) => parse(json) }
)
```

keyingBy (delegates to keyBy on ConnectedDataStream):

```scala
data.keyingBy(
  key1 = { case (_, timestamp) => timestamp },
  key2 = { case (id, _, _) => id }
)
```

reduceWith (delegates to reduce on KeyedStream and WindowedStream):

```scala
data.reduceWith {
  case ((_, sum1), (_, sum2)) => sum1 + sum2
}
```

projecting (delegates to apply on JoinedStream):

```scala
data1.join(data2)
  .whereClause { case (pk, _) => pk }
  .isEqualTo { case (_, fk) => fk }
  .projecting {
    case ((pk, tx), (products, fk)) => tx -> products
  }
```

For more information on the semantics of each method, please refer to the DataStream API documentation.
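As an illustrative sketch (the stream names and element types below are invented for this example, not taken from the documentation), the ConnectedDataStream variant of mapWith from the table above could be used like this:

```scala
// Illustrative sketch: connect two streams and map both sides to a common
// result type with pattern-matching functions (assumes the extensions import).
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.scala.extensions._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val commands: DataStream[(Int, String)] = env.fromElements(1 -> "start")
val readings: DataStream[(Int, Double)] = env.fromElements(1 -> 42.0)

commands.connect(readings).mapWith(
  map1 = { case (_, command) => command },      // left input: keep the command
  map2 = { case (_, value) => value.toString }  // right input: stringify
)
```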

To use this extension on its own, you can add the following import:

```scala
import org.apache.flink.api.scala.extensions.acceptPartialFunctions
```

for the DataSet extensions, and

```scala
import org.apache.flink.streaming.api.scala.extensions.acceptPartialFunctions
```

for the DataStream extensions.

The following snippet shows a minimal example of how to use these extension methods together (with the DataStream API):

```scala
object Main {
  import org.apache.flink.streaming.api.scala._
  import org.apache.flink.streaming.api.scala.extensions._

  case class Point(x: Double, y: Double)

  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val ds = env.fromElements(Point(1, 2), Point(3, 4), Point(5, 6))
    ds.filterWith {
      case Point(x, _) => x > 1
    }.reduceWith {
      case (Point(x1, y1), Point(x2, y2)) => Point(x1 + x2, y1 + y2)
    }.mapWith {
      case Point(x, y) => (x, y)
    }.flatMapWith {
      case (x, y) => Seq("x" -> x, "y" -> y)
    }.keyingBy {
      case (id, value) => id
    }
  }
}
```
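The minimal example above does not exercise projecting for joined streams. A hypothetical sketch, built from the projecting row of the table earlier (the stream names and element types are invented, and the window assigner a real windowed join requires is elided, as in the table), might look like this:

```scala
// Hypothetical sketch: join two streams on their keys and project the
// matched pairs with a pattern-matching function.
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.scala.extensions._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val transactions: DataStream[(Long, Double)] = env.fromElements(1L -> 9.99)
val products: DataStream[(String, Long)] = env.fromElements("book" -> 1L)

transactions.join(products)
  .whereClause { case (pk, _) => pk }  // key selector for the left stream
  .isEqualTo { case (_, fk) => fk }    // key selector for the right stream
  .projecting {
    case ((_, amount), (name, _)) => name -> amount
  }
```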