Writing Data to HDFS

The PXF HDFS plug-in supports writable external tables using the HdfsTextSimple and SequenceWritable profiles. You might create a writable table to export data from a HAWQ internal table to binary or text HDFS files.

Use the HdfsTextSimple profile when writing delimited text data. Use the SequenceWritable profile when writing binary (SequenceFile format) data.

This section describes how to use these PXF profiles to create writable external tables.

Note: Tables that you create with writable profiles can only be used for INSERT operations. If you want to query inserted data, you must define a separate external readable table that references the new HDFS file using the equivalent readable profile.

Prerequisites

Before working with HDFS file data using HAWQ and PXF, ensure that:

  • The HDFS plug-in is installed on all cluster nodes. See Installing PXF Plug-ins for PXF plug-in installation information.
  • All HDFS users have read permissions to HDFS services.
  • HDFS write permissions are provided to a restricted set of users.

Writing to PXF External Tables

The PXF HDFS plug-in supports two writable profiles: HdfsTextSimple and SequenceWritable.

Use the following syntax to create a HAWQ external writable table representing HDFS data:

  CREATE WRITABLE EXTERNAL TABLE <table_name>
      ( <column_name> <data_type> [, ...] | LIKE <other_table> )
  LOCATION ('pxf://<host>[:<port>]/<path-to-hdfs-file>
  ?PROFILE=HdfsTextSimple|SequenceWritable[&<custom-option>=<value>[...]]')
  FORMAT '[TEXT|CSV|CUSTOM]' (<formatting-properties>);

HDFS-plug-in-specific keywords and values used in the CREATE EXTERNAL TABLE call are described in the table below.

Keyword | Value
------- | -----
<host> | The PXF host. While <host> may identify any PXF agent node, use the HDFS NameNode as it is guaranteed to be available in a running HDFS cluster. If HDFS High Availability is enabled, <host> must identify the HDFS NameService.
<port> | The PXF port. If <port> is omitted, PXF assumes <host> identifies a High Availability HDFS Nameservice and connects to the port number designated by the pxf_service_port server configuration parameter value. Default is 51200.
<path-to-hdfs-file> | The path to the file in the HDFS data store.
PROFILE | The PROFILE keyword must specify one of the values HdfsTextSimple or SequenceWritable.
<custom-option> | <custom-option> is profile-specific. These options are discussed in the next topic.
FORMAT 'TEXT' | Use FORMAT 'TEXT' with the HdfsTextSimple profile to create a plain-text-delimited file at the location specified by <path-to-hdfs-file>. The HdfsTextSimple 'TEXT' FORMAT supports only the built-in (delimiter=<delim>) <formatting-property>.
FORMAT 'CSV' | Use FORMAT 'CSV' with the HdfsTextSimple profile to create a comma-separated-value file at the location specified by <path-to-hdfs-file>.
FORMAT 'CUSTOM' | Use FORMAT 'CUSTOM' with the SequenceWritable profile. The SequenceWritable 'CUSTOM' FORMAT supports only the built-in (formatter='pxfwritable_export') (write) and (formatter='pxfwritable_import') (read) <formatting-properties>.

Note: When creating PXF external tables, you cannot use the HEADER option in your FORMAT specification.
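
For example, when HDFS High Availability is enabled you omit <port> and identify the NameService in <host>. The following is a minimal sketch, not taken from the examples later in this topic; the NameService name nameservice1, the table name, and the HDFS path are hypothetical:

  -- hypothetical names shown only to illustrate an HA NameService LOCATION
  gpadmin=# CREATE WRITABLE EXTERNAL TABLE pxf_ha_example (location text, month text, num_orders int, total_sales float8)
  LOCATION ('pxf://nameservice1/data/pxf_examples/ha_example?PROFILE=HdfsTextSimple')
  FORMAT 'TEXT' (delimiter=E',');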

Custom Options

The HdfsTextSimple and SequenceWritable profiles support the following custom options:

Option | Value Description | Profile
------ | ----------------- | -------
COMPRESSION_CODEC | The compression codec Java class name. If this option is not provided, no data compression is performed. Supported compression codecs include org.apache.hadoop.io.compress.DefaultCodec and org.apache.hadoop.io.compress.BZip2Codec. | HdfsTextSimple, SequenceWritable
COMPRESSION_CODEC | org.apache.hadoop.io.compress.GzipCodec | HdfsTextSimple
COMPRESSION_TYPE | The compression type to employ; supported values are RECORD (the default) or BLOCK. | HdfsTextSimple, SequenceWritable
DATA-SCHEMA | The name of the writer serialization/deserialization class. The jar file in which this class resides must be in the PXF classpath. This option is required for the SequenceWritable profile and has no default value. | SequenceWritable
THREAD-SAFE | Boolean value determining if a table query can run in multi-threaded mode. The default value is TRUE. Set this option to FALSE to handle all requests in a single thread for operations that are not thread-safe (for example, compression). | HdfsTextSimple, SequenceWritable
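
For example, the LOCATION clause below combines several of these custom options to write BZip2-compressed data using BLOCK compression in a single thread (because compression may not be thread-safe). This is a minimal sketch; the table name and HDFS path are hypothetical:

  -- hypothetical table name and path shown only to illustrate the custom options
  gpadmin=# CREATE WRITABLE EXTERNAL TABLE pxf_compressed_example (location text, month text, num_orders int, total_sales float8)
  LOCATION ('pxf://namenode:51200/data/pxf_examples/compressed_example?PROFILE=HdfsTextSimple&COMPRESSION_CODEC=org.apache.hadoop.io.compress.BZip2Codec&COMPRESSION_TYPE=BLOCK&THREAD-SAFE=FALSE')
  FORMAT 'TEXT' (delimiter=E',');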

HdfsTextSimple Profile

Use the HdfsTextSimple profile when writing delimited data to a plain text file where each line is a single record.

Writable tables created using the HdfsTextSimple profile can optionally use record or block compression. The following compression codecs are supported:

  • org.apache.hadoop.io.compress.DefaultCodec
  • org.apache.hadoop.io.compress.GzipCodec
  • org.apache.hadoop.io.compress.BZip2Codec

The HdfsTextSimple profile supports the following <formatting-properties>:

Keyword | Value
------- | -----
delimiter | The delimiter character to use when writing the file. Default value is a comma (,).
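
For example, to write pipe-delimited rather than comma-delimited records, you might specify the following. This is a minimal sketch; the table name and HDFS path are hypothetical:

  -- hypothetical table name and path shown only to illustrate the delimiter property
  gpadmin=# CREATE WRITABLE EXTERNAL TABLE pxf_pipe_example (location text, month text, num_orders int, total_sales float8)
  LOCATION ('pxf://namenode:51200/data/pxf_examples/pipe_example?PROFILE=HdfsTextSimple')
  FORMAT 'TEXT' (delimiter=E'|');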

Example: Writing Data Using the HdfsTextSimple Profile

This example uses the data schema introduced in Example: Using the HdfsTextSimple Profile:

Field Name | Data Type
---------- | ---------
location | text
month | text
number_of_orders | int
total_sales | float8

This example also uses the HAWQ table pxf_hdfs_textsimple created in that exercise and expects it to exist.

Perform the following operations to use the PXF HdfsTextSimple profile to create a HAWQ writable external table with the same data schema as defined above. You will also create a separate external readable table to read the associated HDFS file.

  1. Create a writable HAWQ external table with the data schema described above. Write to the HDFS file /data/pxf_examples/pxfwritable_hdfs_textsimple1. Create the table specifying a comma (,) as the delimiter:

    gpadmin=# CREATE WRITABLE EXTERNAL TABLE pxf_hdfs_writabletbl_1(location text, month text, num_orders int, total_sales float8)
    LOCATION ('pxf://namenode:51200/data/pxf_examples/pxfwritable_hdfs_textsimple1?PROFILE=HdfsTextSimple')
    FORMAT 'TEXT' (delimiter=E',');

    The FORMAT subclause specifies the delimiter as the single ASCII comma character (,); the E prefix identifies the value as an escape string literal.

  2. Write a few records to the pxfwritable_hdfs_textsimple1 HDFS file by invoking the SQL INSERT command on pxf_hdfs_writabletbl_1:

    gpadmin=# INSERT INTO pxf_hdfs_writabletbl_1 VALUES ( 'Frankfurt', 'Mar', 777, 3956.98 );
    gpadmin=# INSERT INTO pxf_hdfs_writabletbl_1 VALUES ( 'Cleveland', 'Oct', 3812, 96645.37 );
  3. Insert the contents of the pxf_hdfs_textsimple table created in Example: Using the HdfsTextSimple Profile into pxf_hdfs_writabletbl_1:

    gpadmin=# INSERT INTO pxf_hdfs_writabletbl_1 SELECT * FROM pxf_hdfs_textsimple;
  4. View the file contents in HDFS:

    $ hdfs dfs -cat /data/pxf_examples/pxfwritable_hdfs_textsimple1/*
    Frankfurt,Mar,777,3956.98
    Cleveland,Oct,3812,96645.37
    Prague,Jan,101,4875.33
    Rome,Mar,87,1557.39
    Bangalore,May,317,8936.99
    Beijing,Jul,411,11600.67

    Because you specified a comma (,) as the delimiter, this character is the field separator used in each record of the HDFS file.

  5. Querying an external writable table is not supported in HAWQ. To query data from the newly-created HDFS file, create a readable external HAWQ table referencing the HDFS file:

    gpadmin=# CREATE EXTERNAL TABLE pxf_hdfs_textsimple_r1(location text, month text, num_orders int, total_sales float8)
    LOCATION ('pxf://namenode:51200/data/pxf_examples/pxfwritable_hdfs_textsimple1?PROFILE=HdfsTextSimple')
    FORMAT 'CSV';

    Specify the 'CSV' FORMAT for the readable table, because you created the writable table with a comma (,) as the delimiter character.

  6. Query the readable external table pxf_hdfs_textsimple_r1:

    gpadmin=# SELECT * FROM pxf_hdfs_textsimple_r1;
     location  | month | num_orders | total_sales
    -----------+-------+------------+-------------
     Frankfurt | Mar   |        777 |     3956.98
     Cleveland | Oct   |       3812 |    96645.37
     Prague    | Jan   |        101 |     4875.33
     Rome      | Mar   |         87 |     1557.39
     Bangalore | May   |        317 |     8936.99
     Beijing   | Jul   |        411 |    11600.67
    (6 rows)

    The table includes the records you individually inserted, as well as the full contents of the pxf_hdfs_textsimple table.

  7. Create a second HAWQ external writable table, this time using Gzip compression and employing a colon (:) as the delimiter:

    gpadmin=# CREATE WRITABLE EXTERNAL TABLE pxf_hdfs_writabletbl_2 (location text, month text, num_orders int, total_sales float8)
    LOCATION ('pxf://namenode:51200/data/pxf_examples/pxfwritable_hdfs_textsimple2?PROFILE=HdfsTextSimple&COMPRESSION_CODEC=org.apache.hadoop.io.compress.GzipCodec')
    FORMAT 'TEXT' (delimiter=E':');
  8. Write a few records to the pxfwritable_hdfs_textsimple2 HDFS file by inserting into the pxf_hdfs_writabletbl_2 table:

    gpadmin=# INSERT INTO pxf_hdfs_writabletbl_2 VALUES ( 'Frankfurt', 'Mar', 777, 3956.98 );
    gpadmin=# INSERT INTO pxf_hdfs_writabletbl_2 VALUES ( 'Cleveland', 'Oct', 3812, 96645.37 );
  9. View the file contents in HDFS; use the -text option to hdfs dfs to view the compressed data as text:

    $ hdfs dfs -text /data/pxf_examples/pxfwritable_hdfs_textsimple2/*
    Frankfurt:Mar:777:3956.98
    Cleveland:Oct:3812:96645.3

    Notice that the colon (:) is the field separator in the HDFS file.

    As described in Step 5 above, to query data from the newly-created HDFS file named pxfwritable_hdfs_textsimple2, you can create a readable external HAWQ table referencing this HDFS file.
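
    A minimal sketch of such a readable table follows; the table name pxf_hdfs_textsimple_r2 is hypothetical, and the sketch assumes the readable HdfsTextSimple profile decompresses the Gzip data transparently on read. Because the file is colon-delimited, specify FORMAT 'TEXT' with the matching delimiter rather than 'CSV':

    -- hypothetical table name; assumes Gzip data is decompressed transparently on read
    gpadmin=# CREATE EXTERNAL TABLE pxf_hdfs_textsimple_r2 (location text, month text, num_orders int, total_sales float8)
    LOCATION ('pxf://namenode:51200/data/pxf_examples/pxfwritable_hdfs_textsimple2?PROFILE=HdfsTextSimple')
    FORMAT 'TEXT' (delimiter=E':');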

SequenceWritable Profile

Use the HDFS plug-in SequenceWritable profile when writing SequenceFile format files. Files of this type consist of binary key/value pairs. Sequence files are a common data transfer format between MapReduce jobs.

SequenceFile format files can optionally use record or block compression. The following compression codecs are supported:

  • org.apache.hadoop.io.compress.DefaultCodec
  • org.apache.hadoop.io.compress.BZip2Codec

When using the SequenceWritable profile to write a SequenceFile format file, you must provide the name of the Java class to use for serializing/deserializing the data. This class must provide read and write methods for the fields in the schema associated with the data.

Example: Writing Data Using the SequenceWritable Profile

In this example, you will create a Java class named PxfExample_CustomWritable that will serialize/deserialize the fields in the sample schema used in previous examples. You will then use this class to access a writable external table created with the SequenceWritable profile.

Perform the following steps to create the Java class and writable table.

  1. Prepare to create the sample Java class:

    $ mkdir -p pxfex/com/hawq/example/pxf/hdfs/writable/dataschema
    $ cd pxfex/com/hawq/example/pxf/hdfs/writable/dataschema
    $ vi PxfExample_CustomWritable.java
  2. Copy and paste the following text into the PxfExample_CustomWritable.java file:

    package com.hawq.example.pxf.hdfs.writable.dataschema;

    import org.apache.hadoop.io.*;
    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.lang.reflect.Field;

    /**
     * PxfExample_CustomWritable class - used to serialize and deserialize data with
     * text, int, and float data types
     */
    public class PxfExample_CustomWritable implements Writable {

        public String st1, st2;
        public int int1;
        public float ft;

        public PxfExample_CustomWritable() {
            st1 = new String("");
            st2 = new String("");
            int1 = 0;
            ft = 0.f;
        }

        public PxfExample_CustomWritable(int i1, int i2, int i3) {
            st1 = new String("short_string___" + i1);
            st2 = new String("short_string___" + i1);
            int1 = i2;
            ft = i1 * 10.f * 2.3f;
        }

        String GetSt1() {
            return st1;
        }

        String GetSt2() {
            return st2;
        }

        int GetInt1() {
            return int1;
        }

        float GetFt() {
            return ft;
        }

        @Override
        public void write(DataOutput out) throws IOException {
            Text txt = new Text();
            txt.set(st1);
            txt.write(out);
            txt.set(st2);
            txt.write(out);
            IntWritable intw = new IntWritable();
            intw.set(int1);
            intw.write(out);
            FloatWritable fw = new FloatWritable();
            fw.set(ft);
            fw.write(out);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            Text txt = new Text();
            txt.readFields(in);
            st1 = txt.toString();
            txt.readFields(in);
            st2 = txt.toString();
            IntWritable intw = new IntWritable();
            intw.readFields(in);
            int1 = intw.get();
            FloatWritable fw = new FloatWritable();
            fw.readFields(in);
            ft = fw.get();
        }

        public void printFieldTypes() {
            Class myClass = this.getClass();
            Field[] fields = myClass.getDeclaredFields();
            for (int i = 0; i < fields.length; i++) {
                System.out.println(fields[i].getType().getName());
            }
        }
    }
  3. Compile and create a Java class JAR file for PxfExample_CustomWritable:

    $ javac -classpath /usr/hdp/2.5.3.0-37/hadoop/hadoop-common.jar PxfExample_CustomWritable.java
    $ cd ../../../../../../../
    $ jar cf pxfex-customwritable.jar com
    $ cp pxfex-customwritable.jar /tmp/

    (Your Hadoop library classpath may differ.)

  4. Include the new jar file in the PXF Agent classpath by updating the pxf-public.classpath file. If you use Ambari to manage your cluster, add the following line via the Ambari UI and restart the PXF Agent:

    /tmp/pxfex-customwritable.jar

    If you have a command-line-managed HAWQ cluster, perform the following steps on each node in your HAWQ cluster:

    • Directly edit /etc/pxf/conf/pxf-public.classpath and add the line above.
    • Restart the PXF Agent:

      $ sudo service pxf-service restart
  5. Use the PXF SequenceWritable profile to create a writable HAWQ external table. Identify the serialization/deserialization Java class you created above in the DATA-SCHEMA <custom-option>. Use BLOCK mode compression with BZip2 when creating the writable table.

    gpadmin=# CREATE WRITABLE EXTERNAL TABLE pxf_tbl_seqwrit (location text, month text, number_of_orders integer, total_sales real)
    LOCATION ('pxf://namenode:51200/data/pxf_examples/pxf_seqwrit_file?PROFILE=SequenceWritable&DATA-SCHEMA=com.hawq.example.pxf.hdfs.writable.dataschema.PxfExample_CustomWritable&COMPRESSION_TYPE=BLOCK&COMPRESSION_CODEC=org.apache.hadoop.io.compress.BZip2Codec')
    FORMAT 'CUSTOM' (formatter='pxfwritable_export');

    Notice that the 'CUSTOM' FORMAT <formatting-properties> specify the built-in pxfwritable_export formatter.

  6. Insert some data into pxf_tbl_seqwrit:

    gpadmin=# INSERT INTO pxf_tbl_seqwrit VALUES ( 'Frankfurt', 'Mar', 777, 3956.98 );
    gpadmin=# INSERT INTO pxf_tbl_seqwrit VALUES ( 'Cleveland', 'Oct', 3812, 96645.37 );
  7. Recall that querying an external writable table is not supported in HAWQ. To read the newly-created writable table, create a HAWQ readable external table referencing the writable table’s HDFS file:

    gpadmin=# CREATE EXTERNAL TABLE read_pxf_tbl_seqwrit (location text, month text, number_of_orders integer, total_sales real)
    LOCATION ('pxf://namenode:51200/data/pxf_examples/pxf_seqwrit_file?PROFILE=SequenceWritable&DATA-SCHEMA=com.hawq.example.pxf.hdfs.writable.dataschema.PxfExample_CustomWritable')
    FORMAT 'CUSTOM' (formatter='pxfwritable_import');

    The DATA-SCHEMA <custom-option> must be specified when reading an HDFS file via the SequenceWritable profile. Compression-related options need not be provided.

  8. Query the readable external table read_pxf_tbl_seqwrit:

    gpadmin=# SELECT * FROM read_pxf_tbl_seqwrit;
     location  | month | number_of_orders | total_sales
    -----------+-------+------------------+-------------
     Frankfurt | Mar   |              777 |     3956.98
     Cleveland | Oct   |             3812 |     96645.4
    (2 rows)
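
    The readable external table can be used like any other HAWQ table in queries. For example, this minimal sketch of an aggregate query sums the sales stored in the SequenceFile:

    gpadmin=# SELECT location, sum(total_sales) FROM read_pxf_tbl_seqwrit GROUP BY location;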

Reading the Record Key

When a HAWQ external table references a SequenceFile or another file format that stores rows in a key-value format, you can access the key values in HAWQ queries by using the recordkey keyword as a field name.

The field type of recordkey must correspond to the key type, much as the other fields must match the HDFS data.

recordkey can be any of the following Hadoop types:

  • BooleanWritable
  • ByteWritable
  • DoubleWritable
  • FloatWritable
  • IntWritable
  • LongWritable
  • Text

Example: Using Record Keys

Create an external readable table to access the record keys from the writable table pxf_tbl_seqwrit that you created in Example: Writing Data Using the SequenceWritable Profile. The recordkey is of type int8.

  gpadmin=# CREATE EXTERNAL TABLE read_pxf_tbl_seqwrit_RECKEY (recordkey int8, location text, month text, number_of_orders integer, total_sales real)
  LOCATION ('pxf://namenode:51200/data/pxf_examples/pxf_seqwrit_file?PROFILE=SequenceWritable&DATA-SCHEMA=com.hawq.example.pxf.hdfs.writable.dataschema.PxfExample_CustomWritable')
  FORMAT 'CUSTOM' (formatter='pxfwritable_import');
  gpadmin=# SELECT * FROM read_pxf_tbl_seqwrit_RECKEY;
   recordkey | location  | month | number_of_orders | total_sales
  -----------+-----------+-------+------------------+-------------
           0 | Frankfurt | Mar   |              777 |     3956.98
           0 | Cleveland | Oct   |             3812 |     96645.4
  (2 rows)

The recordkey is 0 because you did not identify a record key when you inserted entries into the writable table.