Spark-TsFile

About Spark-TsFile-Connector

Spark-TsFile-Connector implements Spark support for external data sources of the TsFile type, enabling users to read, write, and query TsFile with Spark.

With this connector, you can

  • load a single TsFile, from either the local file system or HDFS, into Spark
  • load all files in a specific directory, from either the local file system or HDFS, into Spark
  • write data from Spark into TsFile

System Requirements

| Spark Version | Scala Version | Java Version | TsFile |
| ------------- | ------------- | ------------ | ---------------- |
| 2.4.3 | 2.11.8 | 1.8 | 0.13.0-SNAPSHOT |

Note: For more information about how to download and use TsFile, please see the following link: https://github.com/apache/iotdb/tree/master/tsfile. Currently we only support Spark 2.4.3; there are known issues with 2.4.7, so do not use it.

Quick Start

Local Mode

Start Spark with TsFile-Spark-Connector in local mode:

  ./<spark-shell-path> --jars tsfile-spark-connector.jar,tsfile-{version}-jar-with-dependencies.jar,hadoop-tsfile-{version}-jar-with-dependencies.jar
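
For example, substituting the 0.13.0-SNAPSHOT version from the System Requirements table and assuming spark-shell is on your PATH and the jars sit in the current directory (both assumptions for illustration), the invocation might look like:

  # hypothetical invocation; adjust jar paths and versions to your environment
  spark-shell --jars tsfile-spark-connector.jar,tsfile-0.13.0-SNAPSHOT-jar-with-dependencies.jar,hadoop-tsfile-0.13.0-SNAPSHOT-jar-with-dependencies.jar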


Distributed Mode

Start Spark with TsFile-Spark-Connector in distributed mode (that is, with spark-shell connected to a Spark cluster):

  ./<spark-shell-path> --jars tsfile-spark-connector.jar,tsfile-{version}-jar-with-dependencies.jar,hadoop-tsfile-{version}-jar-with-dependencies.jar --master spark://ip:7077


Data Type Correspondence

| TsFile data type | SparkSQL data type |
| ---------------- | ------------------ |
| BOOLEAN | BooleanType |
| INT32 | IntegerType |
| INT64 | LongType |
| FLOAT | FloatType |
| DOUBLE | DoubleType |
| TEXT | StringType |
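
You can check this mapping by inspecting the schema of a loaded DataFrame. A minimal sketch, assuming a file test.tsfile exists (the path is hypothetical):

  import org.apache.iotdb.spark.tsfile._

  val df = spark.read.tsfile("test.tsfile")
  // Each measurement appears as a column whose SparkSQL type follows the
  // table above, e.g. INT32 -> IntegerType, TEXT -> StringType.
  df.printSchema()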

Schema Inference

How a TsFile is displayed depends on its schema. Take the following TsFile structure as an example: there are three measurements in the TsFile schema: status, temperature, and hardware. The basic information of these three measurements is listed below:

| Name | Type | Encode |
| ---- | ---- | ------ |
| status | Boolean | PLAIN |
| temperature | Float | RLE |
| hardware | Text | PLAIN |

The existing data in the TsFile are:

[Figure: sample time-series data stored in the TsFile]

The corresponding SparkSQL table is:

| time | root.ln.wf02.wt02.temperature | root.ln.wf02.wt02.status | root.ln.wf02.wt02.hardware | root.ln.wf01.wt01.temperature | root.ln.wf01.wt01.status | root.ln.wf01.wt01.hardware |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| 1 | null | true | null | 2.2 | true | null |
| 2 | null | false | aaa | 2.2 | null | null |
| 3 | null | null | null | 2.1 | true | null |
| 4 | null | true | bbb | null | null | null |
| 5 | null | null | null | null | false | null |
| 6 | null | null | ccc | null | null | null |
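
Because wide-form column names contain dots, they must be enclosed in backticks when referenced in Spark SQL. A minimal sketch against the table above (the file path is hypothetical):

  import org.apache.iotdb.spark.tsfile._

  val df = spark.read.tsfile("test.tsfile")
  df.createOrReplaceTempView("tsfile_table")
  // Dotted column names such as root.ln.wf01.wt01.status need backticks.
  spark.sql("select time, `root.ln.wf01.wt01.temperature` from tsfile_table " +
    "where `root.ln.wf01.wt01.status` = true").show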

You can also use the narrow table form, shown below (see the Scala API section for how to use the narrow form); a short aggregation example follows the table.

| time | device_name | status | hardware | temperature |
| ---- | ----------- | ------ | -------- | ----------- |
| 1 | root.ln.wf02.wt01 | true | null | 2.2 |
| 1 | root.ln.wf02.wt02 | true | null | null |
| 2 | root.ln.wf02.wt01 | null | null | 2.2 |
| 2 | root.ln.wf02.wt02 | false | aaa | null |
| 3 | root.ln.wf02.wt01 | true | null | 2.1 |
| 4 | root.ln.wf02.wt02 | true | bbb | null |
| 5 | root.ln.wf02.wt01 | false | null | null |
| 6 | root.ln.wf02.wt02 | null | ccc | null |
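
In the narrow form, device_name is an ordinary column, so per-device aggregation is straightforward. A minimal sketch, with the file path hypothetical:

  import org.apache.iotdb.spark.tsfile._

  // The second argument set to true requests the narrow form.
  val df = spark.read.tsfile("test.tsfile", true)
  df.createOrReplaceTempView("narrow_table")
  // Average temperature per device.
  spark.sql("select device_name, avg(temperature) from narrow_table group by device_name").show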

Scala API

NOTE: Remember to assign necessary read and write permissions in advance.

  • Example 1: read from the local file system

    import org.apache.iotdb.spark.tsfile._
    val wide_df = spark.read.tsfile("test.tsfile")
    wide_df.show
    val narrow_df = spark.read.tsfile("test.tsfile", true)
    narrow_df.show

  • Example 2: read from the Hadoop file system

    import org.apache.iotdb.spark.tsfile._
    val wide_df = spark.read.tsfile("hdfs://localhost:9000/test.tsfile")
    wide_df.show
    val narrow_df = spark.read.tsfile("hdfs://localhost:9000/test.tsfile", true)
    narrow_df.show

  • Example 3: read from a specific directory

    import org.apache.iotdb.spark.tsfile._
    val df = spark.read.tsfile("hdfs://localhost:9000/usr/hadoop")
    df.show

Note 1: Global time ordering of all TsFiles in a directory is not supported now.

Note 2: Measurements of the same name should have the same schema.

  • Example 4: query in wide form

    import org.apache.iotdb.spark.tsfile._
    val df = spark.read.tsfile("hdfs://localhost:9000/test.tsfile")
    df.createOrReplaceTempView("tsfile_table")
    val newDf = spark.sql("select * from tsfile_table where `device_1.sensor_1` > 0 and `device_1.sensor_2` < 22")
    newDf.show

    import org.apache.iotdb.spark.tsfile._
    val df = spark.read.tsfile("hdfs://localhost:9000/test.tsfile")
    df.createOrReplaceTempView("tsfile_table")
    val newDf = spark.sql("select count(*) from tsfile_table")
    newDf.show

  • Example 5: query in narrow form

    import org.apache.iotdb.spark.tsfile._
    val df = spark.read.tsfile("hdfs://localhost:9000/test.tsfile", true)
    df.createOrReplaceTempView("tsfile_table")
    val newDf = spark.sql("select * from tsfile_table where device_name = 'root.ln.wf02.wt02' and temperature > 5")
    newDf.show

    import org.apache.iotdb.spark.tsfile._
    val df = spark.read.tsfile("hdfs://localhost:9000/test.tsfile", true)
    df.createOrReplaceTempView("tsfile_table")
    val newDf = spark.sql("select count(*) from tsfile_table")
    newDf.show
  • Example 6: write in wide form

    // by default, write expects the wide-form table
    import org.apache.iotdb.spark.tsfile._
    val df = spark.read.tsfile("hdfs://localhost:9000/test.tsfile")
    df.show
    df.write.tsfile("hdfs://localhost:9000/output")
    val newDf = spark.read.tsfile("hdfs://localhost:9000/output")
    newDf.show

  • Example 7: write in narrow form

    // pass true to write from the narrow-form table
    import org.apache.iotdb.spark.tsfile._
    val df = spark.read.tsfile("hdfs://localhost:9000/test.tsfile", true)
    df.show
    df.write.tsfile("hdfs://localhost:9000/output", true)
    val newDf = spark.read.tsfile("hdfs://localhost:9000/output", true)
    newDf.show
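
The DataFrame written back need not come from a TsFile. A minimal sketch, assuming the wide-form layout shown earlier: one Long time column plus one column per measurement named by its full path (the measurement names and output path here are hypothetical):

  import org.apache.iotdb.spark.tsfile._
  import spark.implicits._

  // Build a small wide-form DataFrame: a time column plus one column
  // per measurement, named by its full path.
  val df = Seq(
    (1L, 2.2f, true),
    (2L, 2.1f, false)
  ).toDF("time", "root.ln.wf01.wt01.temperature", "root.ln.wf01.wt01.status")

  df.write.tsfile("hdfs://localhost:9000/output2")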

Appendix A: Old Design of Schema Inference

How a TsFile is displayed is related to the TsFile schema. Take the following TsFile structure as an example: there are three measurements in the TsFile schema: status, temperature, and hardware. The basic information of these three measurements is:

| Name | Type | Encode |
| ---- | ---- | ------ |
| status | Boolean | PLAIN |
| temperature | Float | RLE |
| hardware | Text | PLAIN |

The existing data in the file are:

[Figure: sample time-series data stored in the file]

A set of time-series data

There are two ways to show a set of time-series data:

  • the default way

Two columns are created: time (LongType) and delta_object (StringType), where delta_object stores the full path of the device.

  • time: timestamp, LongType
  • delta_object: delta_object ID, StringType

Next, a column is created for each measurement to store the specific data. The SparkSQL table structure is:

| time(LongType) | delta_object(StringType) | status(BooleanType) | temperature(FloatType) | hardware(StringType) |
| ---- | ---- | ---- | ---- | ---- |
| 1 | root.ln.wf01.wt01 | True | 2.2 | null |
| 1 | root.ln.wf02.wt02 | True | null | null |
| 2 | root.ln.wf01.wt01 | null | 2.2 | null |
| 2 | root.ln.wf02.wt02 | False | null | "aaa" |
| 2 | root.sgcc.wf03.wt01 | True | null | null |
| 3 | root.ln.wf01.wt01 | True | 2.1 | null |
| 3 | root.sgcc.wf03.wt01 | True | 3.3 | null |
| 4 | root.ln.wf01.wt01 | null | 2.0 | null |
| 4 | root.ln.wf02.wt02 | True | null | "bbb" |
| 4 | root.sgcc.wf03.wt01 | True | null | null |
| 5 | root.ln.wf01.wt01 | False | null | null |
| 5 | root.ln.wf02.wt02 | False | null | null |
| 5 | root.sgcc.wf03.wt01 | True | null | null |
| 6 | root.ln.wf02.wt02 | null | null | "ccc" |
| 6 | root.sgcc.wf03.wt01 | null | 6.6 | null |
| 7 | root.ln.wf01.wt01 | True | null | null |
| 8 | root.ln.wf02.wt02 | null | null | "ddd" |
| 8 | root.sgcc.wf03.wt01 | null | 8.8 | null |
| 9 | root.sgcc.wf03.wt01 | null | 9.9 | null |
  • unfold delta_object column

The device column is expanded by "." into multiple columns, ignoring the root directory "root"; this is convenient for richer aggregation operations. To use this display form, the parameter "delta_object_name" is set in the table creation statement (refer to Example 5 in Section 5.1 of this manual); in this example, the parameter "delta_object_name" is set to "root.device.turbine". The names in the parameter must correspond one-to-one with the layers of the device path. One column is then created for each layer of the device path except the "root" layer: the column name is the corresponding name in the parameter, and the value is the name of the corresponding layer of the device path. Finally, one column is created for each measurement to store the specific data.

Then SparkSQL Table Structure is as follows:

| time(LongType) | group(StringType) | field(StringType) | device(StringType) | status(BooleanType) | temperature(FloatType) | hardware(StringType) |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| 1 | ln | wf01 | wt01 | True | 2.2 | null |
| 1 | ln | wf02 | wt02 | True | null | null |
| 2 | ln | wf01 | wt01 | null | 2.2 | null |
| 2 | ln | wf02 | wt02 | False | null | "aaa" |
| 2 | sgcc | wf03 | wt01 | True | null | null |
| 3 | ln | wf01 | wt01 | True | 2.1 | null |
| 3 | sgcc | wf03 | wt01 | True | 3.3 | null |
| 4 | ln | wf01 | wt01 | null | 2.0 | null |
| 4 | ln | wf02 | wt02 | True | null | "bbb" |
| 4 | sgcc | wf03 | wt01 | True | null | null |
| 5 | ln | wf01 | wt01 | False | null | null |
| 5 | ln | wf02 | wt02 | False | null | null |
| 5 | sgcc | wf03 | wt01 | True | null | null |
| 6 | ln | wf02 | wt02 | null | null | "ccc" |
| 6 | sgcc | wf03 | wt01 | null | 6.6 | null |
| 7 | ln | wf01 | wt01 | True | null | null |
| 8 | ln | wf02 | wt02 | null | null | "ddd" |
| 8 | sgcc | wf03 | wt01 | null | 8.8 | null |
| 9 | sgcc | wf03 | wt01 | null | 9.9 | null |

TsFile-Spark-Connector displays one or more TsFiles as a table in SparkSQL. It also allows users to specify a single directory or use wildcards to match multiple directories. If there are multiple TsFiles, the union of the measurements in all TsFiles is retained in the table, and measurements with the same name are assumed to have the same data type by default. Note that if measurements with the same name but different data types exist, TsFile-Spark-Connector does not guarantee the correctness of the results.

The writing process writes a DataFrame as one or more TsFiles. By default, two columns need to be included: time and delta_object. The rest of the columns are used as measurements. If the user wants to write the second table structure back to a TsFile, the "delta_object_name" parameter can be set (refer to Section 5.1 of this manual).

Appendix B: Old Note

NOTE: Check the jar packages in the root directory of your Spark and replace libthrift-0.9.2.jar and libfb303-0.9.2.jar with libthrift-0.9.1.jar and libfb303-0.9.1.jar respectively.